Tuesday, April 30, 2013
All month, I've been talking about software development.
I've talked about a lot of things that I believe are important to writing good software.
If you've been following any of these posts, you may have noticed one peculiar omission. Nowhere, in any of these posts, do I make anything more than a passing reference to programming.
I said at the start that this was not going to be a geek-fest. In fact, it has been something of an anti-geek rant. I'm especially grateful for the comments mentioning that the posts were easy to understand, even for the non-IT person. That was the whole idea. I was on a mission to make things accessible to folks with no IT background whatsoever.
Because the theme of this month has really been about putting you, the non-technical folks who just happen to use software for work and for leisure, front and center of the development process.
Software development is about far more than writing code.
The paradox is, to write good software, you have to start by forgetting about writing software altogether.
Monday, April 29, 2013
Y is for Yes, Minister
In an ideal world, a project sponsor would commission a new piece of software, or a significant enhancement to something in place, and let the experts thrash out the details. The business experts would say what was needed and provide the business case for it, while the development experts (i.e. you!) would say what was feasible and estimate what it would cost. Between them, they would hopefully arrive at a workable proposal.
Sometimes the sponsor will have to make hard choices if there is a mismatch between cost and funding, or time and deadlines, but the result should always be a project that is scoped to be realistic and achievable within the constraints.
This is not an ideal world
The reality is that your project will often be subject to political interference of some sort, whether it be big "P" or little "p".
This often shows itself as a drive to achieve a given objective with arbitrary and unrealistic constraints.
By "arbitrary", I mean set by some outside authority on the basis of factors that have nothing to do with the realities of the project. The constraints will be anything but arbitrary to the person setting them, but they are not based on any consideration of factors that you can manage within the project.
"Unrealistic" speaks for itself.
Sometimes you can negotiate more realistic expectations. Sometimes not.
This is where I would argue that good habits of thought, the theme of this series of posts, come into their own.
When it comes to negotiating changes in budget or deadline constraints, clear thinking based on experience will help build a case. Moreover, if you've already established a track record for accurate estimates and delivering on promises (both of which are made easier with sound discipline) then your assessment will carry more weight.
If the answer is still "no" then these good habits become even more important.
People often view quality as a cost. I see it as an investment: expend more effort now, and reap the rewards in better outcomes and lower support and maintenance costs over the life of the system.
But I've also found that good habits of thought and retaining a focus on appropriate quality lead to shortened development times and lower costs at the outset. Maybe it really isn't possible to deliver everything in the time available, in which case you want to be sure to recognize the most important bits and to understand the implications of leaving some aspects out.
Now is the time to home in hard on what's really important to the business - and to the person calling the shots. Now, more than ever, is the time to design for the future: to separate out elements so they can be developed more quickly, or even added in later, and to make your system robust so you aren't bogged down in support and rework and can move swiftly on to delivering the rest of the package.
When your back's to the wall, you really want the best tools possible at your disposal, because sometimes all you can do is grit your teeth and say, "Yes, Minister."
Saturday, April 27, 2013
X is (kinda) for eXperts' eXodus
Sorry, I'm resorting to cheating a bit for X. But I know I'll be in good company...
Back in the early days, any business that invested in a computer had little choice but to create an IT department to look after it, program it, and craft a network around it.
As the industry has matured, various aspects of IT have transformed from dark art to predictable engineering, and from niche specialty to wholesale commodity.
And as soon as something becomes a commodity you can buy off the peg, why would you waste time doing it for yourself? You want to be free to do what you do best.
Taxi companies don't keep a car manufacturing plant in the back yard. They buy their vehicles from people whose business it is to make them. That leaves them free to deal with their business of moving passengers around.
This makes sense for a lot of things in the IT world too - servers, workstations, whole networks, and many mundane business applications can be bought in where needed and scaled up easily.
But, you can have too much of a good thing
The trend now is to see everything IT as a commodity, something best shipped out the door to the experts, because IT is not your core business. As a result, many of those in-house IT shops have been entirely disbanded and outsourced.
And with them often goes a lot of competitive expertise.
Why does that matter, if your business is not IT?
IT is all about enabling your business processes, about making them smoother, more efficient, or smarter. Even enabling you to do things you could never have done in the past. The thing is, some of those processes are what give you a competitive edge. They are what distinguish you from all the other folks in the same business as you.
Suppose your business is manufacturing - I don't really care what you manufacture - but suppose your competitive edge comes from outstanding client relationships. You have long-lasting, trusting relationships and you can anticipate what your clients will need next, what is important to them, and can suggest new ways you can meet their needs. This fruitful partnership is what keeps them coming back to you rather than to your competitors.
In that case, you probably want a top-class client relationship management system - one that intimately supports how you have chosen to do business.
Sure, there are lots of such systems out there, but how are you going to rise above your competition if you are using the same software that they are? This is an area where you need to be doing something that none of them are doing.
And how can you keep doing what you do best if the software insists you bend your process to fit its limitations? IT itself may not be your core business, but you'd better be sure that IT is in very close harmony with those parts of your business that are core.
The only way to achieve that is to keep the development and evolution of that key enabling software very close to your heart. The best people to do that are people who have skin in the game, who understand your business and care about its success, in other words ... your own employees.
Friday, April 26, 2013
W is for Why write about this topic?
This year, for the A to Z Blogging Challenge I'm posting alphabetically on topics related to software development...
I've had a fascination with computers since they first started emerging from sprawling corporate basements and took their early halting steps into people's homes back in the '70s.
As a hobbyist, my fascination was with the idea of lining up a set of instructions and then seeing them run to produce a result, a bit like watching a train running on a track.
Yes, trains fascinated me too.
As a professional, I had to leave my hobbyist anarchy behind and develop serious disciplines in my craft. I also learned for myself something a friend at university told me, though it took many years for the message to become real to me: the technology isn't really important, it's people that matter.
This series of posts contains lessons I've learned, mostly the hard way, over the years. But I don't expect to impart wisdom or change the world in a handful of blog posts. If I'm honest, my motivation is less noble and more self-indulgent than that. This is a thinly-disguised month-long rant.
The trouble is that we don't seem to have progressed much as an industry. And it frustrates me.
Sure, we have smaller, faster, more powerful devices than we could have imagined twenty years ago. I am boggled by tablets and smart phones. These would have seemed like magic in the eighties.
I can still remember the thrill of pleasure when a Star Trek game I was writing first painted a crude star map on the screen in green characters. How far we've come since then. Games have progressed, and computer animation brings undreamed of cinematic possibilities.
But somehow, the corporate IT world is still mentally stuck in the seventies. Most web applications are honestly no better than tarted-up versions of the green screen systems they replaced.
Most businesses of any size couldn't survive now without some large business systems to manage their records, but the idea of people being in charge, and freed up to be more creative, is largely a sick joke. Take a poll of office workers, and I bet you'll find in many cases that the systems are in charge, and people are relegated to little more than priestly acolytes invoking ill-understood rituals to placate the beast squatting on their desks.
There is a criminal waste of human time and missed opportunity in practically every office across the world, as people grind their teeth in frustration at the digital crap they are given to work with.
Business IT has failed to keep up with the times, and that angers me.
Rant over.
Thursday, April 25, 2013
V is for Validation
No, not the kind where everyone tells you what a wonderful writer you are, though that would be nice...
Validation, in software terms, is where your software checks - or validates - the data it is being given to make sure it looks reasonable.
At its simplest, validation takes place when the user types something into a screen. It makes sure, for example, that you don't type in a word where a number is wanted, or put in a date of February 31.
Validation is a vital part of defensive design. It is one of your system's first lines of defense - keep the garbage out in the first place.
It is also an act of kindness. It is kinda childish to let someone spend ages filling in a screen of data only to blow a raspberry at them and point out a mistake right on the first line that you could have warned them about at the time.
It can also save your users from embarrassing, or even dangerous, mistakes. There have been recent examples of people losing out because of negligent (IMHO) design in online banking systems: money sent to a stranger's account because of a single-digit mistake in the account number, when a simple check digit algorithm would have caught the typo before the payment went out. Or hitting a zero instead of a decimal point by accident - considerate validation would at least warn if a payment amount looked suspiciously large.
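For the technically curious, here is a minimal sketch of how a check digit works, assuming a Luhn-style mod-10 scheme (the one behind most payment card numbers; a given bank might use a different algorithm):

```python
def luhn_valid(account_number: str) -> bool:
    """Luhn (mod-10) check digit test.

    Any single mistyped digit changes the checksum, so a typo like the
    one in the banking story above gets caught before the payment goes out.
    """
    if not account_number.isdigit():
        return False  # a word where a number is wanted: reject outright
    total = 0
    for position, digit in enumerate(reversed(account_number)):
        d = int(digit)
        if position % 2 == 1:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9         # same as adding the two digits of the product
        total += d
    return total % 10 == 0

# A number with a valid check digit passes; change any one digit and it fails.
assert luhn_valid("79927398713")
assert not luhn_valid("79927398714")
```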
And many security loopholes in websites stem from a shocking lack of validation - basic measures like limiting the length of data input or preventing embedded text from being executed as code.
Overly strict validation can also be a problem. Witness the centenarian who is told her date of birth cannot be accepted by the system.
Validation is like salt. Both too much and too little can spoil the dish.
Wednesday, April 24, 2013
U is for Unhappy paths
The purchase order system you need to build should be easy.
All the user has to do is select a supplier from the list of registered suppliers that you deal with, then enter a series of parts, descriptions, and quantities, and you're done. Print it off, mail it, wait for the order to be delivered.
You write the code in a week, and you sit back, waiting for praise and thanks to shower on you for the wonderful job you did.
What showers down on you, instead, is fifty shades of batshit
Users get half-way through entering an order, then notice a mistake on one of the earlier lines. But you gave them no way to go back up the screen so they have to scrap it and start again.
Users start, then realize they need to look up a product code, but they can't leave an order half-finished. They have to scrap it and start again.
It gets worse if you do reach the end of your order, because there's no opportunity to review it before it gets automatically printed off, or electronically faxed if the supplier is set up that way. Because you were being quick and efficient and decided to save them precious time. Orders are flying out of the building with mistakes in them, and your company is being invoiced for those mistakes by your suppliers.
And just because an order was correct at the time doesn't mean it will stay that way. One supplier is out of stock so you need to re-order either an alternate part or from a different supplier. But your system is expecting all goods ordered to be delivered. There's no means to cancel an order once placed.
Then the supplier only delivers five of the eight items ordered. The rest will follow next week, but you assumed it would all be delivered together and didn't think of part shipments.
All this adds up to one simple lesson...
Coding the happy path is typically less than 10% of the work in developing a system. If you only work on the happy path to save time, you'll be in a world of hurt later on.
Dealing with the unhappy paths should be the lion's share of your effort, and doing so gracefully marks the professional from the amateur.
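To make that concrete, here is a hypothetical sketch of an order model that at least admits cancellation and part shipments; all the names and statuses are illustrative, not taken from any real system:

```python
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    DRAFT = "draft"                    # still editable; can be parked half-finished
    PLACED = "placed"                  # printed or faxed to the supplier
    PART_DELIVERED = "part delivered"  # some lines received, the rest outstanding
    DELIVERED = "delivered"            # every line received in full
    CANCELLED = "cancelled"            # withdrawn after placing, e.g. supplier out of stock

@dataclass
class OrderLine:
    part: str
    ordered: int
    received: int = 0

@dataclass
class PurchaseOrder:
    lines: list = field(default_factory=list)
    status: Status = Status.DRAFT

    def record_delivery(self, delivered: dict) -> None:
        """Apply a (possibly partial) delivery and work out the new status."""
        for line in self.lines:
            line.received += delivered.get(line.part, 0)
        if all(l.received >= l.ordered for l in self.lines):
            self.status = Status.DELIVERED
        elif any(l.received > 0 for l in self.lines):
            self.status = Status.PART_DELIVERED

# Five of eight items arrive; the order correctly shows as part delivered.
order = PurchaseOrder([OrderLine("widget", 8)], Status.PLACED)
order.record_delivery({"widget": 5})
assert order.status == Status.PART_DELIVERED
```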
Tuesday, April 23, 2013
T is for Training
Today's software is intuitive! Anyone can use it without any kind of training.
Can't they?
Bullshit!
Why does this myth persist?
Well, the cynic in me says that companies like Microsoft have done a superlative sales job in convincing us that their interfaces are intuitive. And company executives happily buy into it because it means they can get away without training their staff. But I contend that this is nothing more than vacuous sales talk with no real substance.
But, they argue, user interfaces these days are graphical. Pictures are easy to understand. So the interface is easy.
Oh, really?
As I type the draft of this post in MS Word, the toolbar at the top of the screen contains two icons that are almost identical. Each consists of what looks like a sheet of paper with a magnifying glass on top of it.
Any guesses? And no cheating! If the symbol really is intuitive you should be able to tell me right away what it represents.
OK. If pressed, I'd have said they were something to do with zooming in or magnifying part of the page. But no. One is "Print preview". The other is "Navigation pane". Two very different meanings for almost identical icons. Neither of which has anything to do with a magnifying glass.
The truth is that most icons are little better than arbitrary pictures with an assigned meaning. The icon means this because I say it does.
The reason the "intuitive" myth persists is not that it's in any way true, but that many of the common symbols in use have become largely standardized and have entered our common lexicon as a computer-literate population.
That doesn't make them intuitive. They've been learned!
The reason I am dwelling on this is that it might be reasonable to depend on learned meaning for widespread software, but when you write your business applications you'd better plan on training your users. No matter what you may think, what you put in front of them will not be intuitive, it will need to be learned.
On top of this, if you are doing anything worthwhile for your business sponsor, it ought to involve a serious re-think of the way the business works, and a significant expansion of the capabilities of the business. You will need to train people not just in the technicalities of the user interface, but also in the ways to get the best business value from the software.
If you fail to do that, all the investment on your part in building the best darned asset management system in the universe will be precisely for naught, because they won't use it to its full potential.
And, after all your hard work, wouldn't that be a shame?
Monday, April 22, 2013
S is for Separation of concepts
Many years ago, when I was a fresh-faced programmer, I worked on a replacement for an old payroll system.
In this old system, employees were all flagged with a "pay type", which was either "M" or "W".
"M" type employees were paid monthly on a salary. Each month they received one twelfth of the annual amount for their pay grade, and their salary was paid straight into their bank account.
"W" type employees, by contrast, submitted timesheets each week from which their pay was calculated on an hourly rate, and each Friday they got an envelope with cash in it.
This innocuous "pay type" rolled three entirely distinct and separate concepts into one field: How often you get paid, how your pay is calculated, and how it gets paid out.
At the time, this probably seemed like an obvious and direct translation of the business need into computer code. Clean and simple.
In more enlightened times, this approach is a Very Bad Thing.
Sadly, though, this is a very easy trap for business users to fall into. They are used to expressing "what is", and in their minds, two or three independent concepts easily get intertwined through unvarying usage. They might even say, "But weekly paid means time sheets, that's just how it is." Or the habit of thought might be so ingrained that they don't even articulate it. It's an unspoken, unwritten rule.
Worse, as a software developer it's easy to get intimidated into accepting this state of affairs. But your job, as a software developer, is to get your surgical scalpel out and separate these conjoined twins.
In this example, the "pay type" in the new payroll system became a "pay cycle" field whose only job was to determine when people got paid. The method of calculating pay was determined by which pay scale you were assigned to, and how you got paid was an entirely separate "pay method" field.
The concepts got separated.
The benefits were immediate.
Manual workers could now be paid into their bank accounts, which used to be the preserve of white collar staff. We could easily accommodate new agreements with different pay cycles, like different groups being paid on different days of the month, or paid every two or four weeks rather than weekly. And it became possible to mix payment calculations, so that salaried employees could, on occasion, submit timesheets for overtime, and weekly paid employees could have a fixed element on top of their timesheet hours.
Separating concepts like this gives you enormous flexibility. When you go into Subway for a sandwich, they don't say you can only have chicken on toasted whole wheat, or ham on white. They allow you to mix and match filling, topping, bread etc. to suit your taste.
And the killer argument for separating out concepts like this: divide and conquer! It's almost always easier to program that way!
Who can possibly say "no" to that?
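For the programmers in the audience, here is a minimal sketch of the separated design; the field and value names are illustrative, not the ones from the actual payroll system:

```python
from enum import Enum

class PayCycle(Enum):          # how often you get paid
    WEEKLY = "weekly"
    BIWEEKLY = "every two weeks"
    MONTHLY = "monthly"

class PayMethod(Enum):         # how the money reaches you
    BANK_TRANSFER = "bank transfer"
    CASH = "cash envelope"

class Employee:
    def __init__(self, name: str, pay_cycle: PayCycle, pay_scale: str, pay_method: PayMethod):
        self.name = name
        self.pay_cycle = pay_cycle    # when you are paid
        self.pay_scale = pay_scale    # how pay is calculated (salary grade or hourly rate)
        self.pay_method = pay_method  # how it is paid out

# The old design collapsed all three concepts into one flag:
#   pay_type "M" meant monthly AND salaried AND bank transfer
#   pay_type "W" meant weekly AND timesheets AND cash
# With the concepts separated, any mix is possible, such as a weekly
# timesheet worker paid straight into the bank:
fitter = Employee("A. Fitter", PayCycle.WEEKLY, "hourly-grade-3", PayMethod.BANK_TRANSFER)
```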
Saturday, April 20, 2013
R is for Robustness
Robustness is a quality that saves your application from crashing and burning every time the unexpected happens.
And, believe me, in business software you should expect the unexpected.
Frequently.
Many of the things I have covered, or will cover, in this series naturally lead to more robust applications. Things like defensive design stop your code being tripped up by unexpected data, and validation helps stop bad data appearing in the first place. Insulation and separation help break down a monster meal into bite-sized chunks. Small chunks are easier to code and test, and it's easier to be confident they are working properly.
Even with all the above, your code will choke sometimes. It might not even be your fault. Servers crash. Network connections fail. What happens next depends on you.
Suppose your user is entering the final line of a fifty-line purchase order when the system crashes. Good robust design will ensure that the user has lost as little work as possible if the worst happens. This means keeping track of what's been done up to the most recent point possible - in a way that it can be resumed later. In this example, you don't want to have to start off again at line one. You really want to find everything held safe at least up to line forty-nine, leaving you to finish off what you were doing before the gremlins struck.
This is not rocket science. In fact, why only cater for system-generated gremlins? Users sometimes need to "crash out" of long and complicated tasks too. Forgot some of the details you needed to enter? Need to go and check something out half-way through? Phone rings and you really need to deal with it, which involves abandoning your transaction to do something else? Urgent need for the washroom? Allowing your users to leave a complicated task half finished, do something else, and come back later is a darned useful design feature. One that adds inherent robustness to your system.
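Here is a minimal sketch of the draft-saving idea, assuming a simple file-based store (a real system would more likely use a database table keyed by user and task):

```python
import json
from pathlib import Path

DRAFT_DIR = Path("drafts")  # illustrative location, not a real convention

def save_draft(user_id: str, order_lines: list) -> None:
    """Persist everything entered so far, called after every line, so a
    crash (or a phone call, or a washroom break) loses almost nothing."""
    DRAFT_DIR.mkdir(exist_ok=True)
    (DRAFT_DIR / f"{user_id}.json").write_text(json.dumps(order_lines))

def load_draft(user_id: str) -> list:
    """Pick up where the user left off, or start fresh if there is no draft."""
    path = DRAFT_DIR / f"{user_id}.json"
    return json.loads(path.read_text()) if path.exists() else []

# After a crash at line fifty, the first forty-nine lines are still there.
save_draft("clerk7", [{"part": "widget", "qty": 8}])
assert load_draft("clerk7") == [{"part": "widget", "qty": 8}]
```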
This is all about bullet-proofing your application. I've more to say about bullet-proofing your business function too, which will come in a later post about Unhappy Paths.
Friday, April 19, 2013
Q is for Quality
This post is a bit of a grab bag of brief thoughts on Quality.
What does it mean?
When you mention quality, people usually think immediately of high-end superlatives: The Ritz, filet mignon, Rolls Royce.
That is only a narrow concept of quality. I find it more useful to think of quality as being the best fit for purpose. After all, how useful do you think a Rolls Royce would be for delivering two tons of concrete to a muddy building site?
What is good quality in one context might be appalling in another. The trick is to understand what constitutes quality for the application you are building. Not all applications need high-end security, or hot failover redundancy, or real time transaction responses.
When is it important?
When you mention quality, most developers think of testing, and stop there. But quality should be a mindset throughout the development process.
Yes, testing code is an important part, but so is confirming the quality of other products along the way. You don't build a house without good foundations. Equally, what good is building quality code if your requirements aren't clear, or if the design is dodgy? Every step in your development process should have its own quality checks and quality assurance methods applied.
What is the purpose of testing?
This is a question I like to pose to developers when I need them to up their game.
Most will tell me that the purpose of testing is to find and correct any bugs in their code.
Wrong!
The purpose of testing, in my opinion, is to demonstrate that the code is working correctly.
It's a subtle difference in emphasis, but if you write code with that perspective in mind your attention is far more focused on getting it right than expecting others to pick up your mistakes.
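To illustrate the mindset, here is a tiny example using Python's built-in unittest module; the pay calculation is invented for the purpose:

```python
import unittest

def monthly_pay(annual_salary: float) -> float:
    """One twelfth of the annual amount, rounded to the cent."""
    return round(annual_salary / 12, 2)

class TestMonthlyPay(unittest.TestCase):
    """Each test states an expectation the code is supposed to meet,
    demonstrating correct behaviour rather than merely hunting bugs."""

    def test_divides_annual_salary_by_twelve(self):
        self.assertEqual(monthly_pay(60000), 5000.00)

    def test_rounds_to_the_cent(self):
        self.assertEqual(monthly_pay(50000), 4166.67)

if __name__ == "__main__":
    unittest.main()
```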
Thursday, April 18, 2013
P is for Presentation
Sorry to appear so shallow and superficial, but, ya know, presentation matters!
You can go a long way by paying attention to little things, like sticking to a consistent presentation style - fonts, colors, spatial layout, lining up labels and edges of fields (only up to a point, though; you can get too pedantic and force things to line up that really shouldn't).
If you think that's just unimportant nit-picking, think about this. Give users two applications, one with a clean and professional interface, and one that looks like it's been thrown together by a five-year-old, and guess which one they'll prefer? More importantly, see how they talk about it. One will be blamed for everything that goes wrong, and will be seen as less reliable and more error prone - even if the two applications are actually identical under the covers.
It's just as easy to do it right as to make a mess of it, but doing it right (as with most things) takes a bit of thought up front.
OK, so presentation matters, but that doesn't give you license to go overboard. Flashy graphics might lighten things up and enhance the user experience, but they can also go horribly wrong.
No user is going to forgive you if your over-eager presentation style slows them down. If the system is slow to respond, it had better not be because it's taking time to load your irrelevant images or paint that pie chart in 3-D with digitally-rendered pie crust. If it is, I suggest you pack your bags and leave town before anyone notices.
And don't - ever - let needless presentation gimmicks actually intrude on your user's work, and especially don't force them into unnecessary clicking or typing.
Remember Clippy, the never-lamented MS Office assistant? Enough said.
Wednesday, April 17, 2013
O is for Open ears, eyes, mind
This is the other end of the spectrum from my last post on developers' egos.
A business system is all about the end users, and from behind your desk you have no idea what their world is like. To get beyond itemized requirements listed in a dry document and become truly useful, nothing beats getting out from behind your desk and walking a mile in their shoes.
Literally.
Walk with the stores clerk taking a stock count, and see what the real working world looks like. When he says he sometimes wants to count an item out of sequence he's not being awkward. The stores are forever getting rearranged, and the walk order recorded in the system can take weeks to catch up with reality.
Sit with the cashier taking payment from a confused octogenarian, and you appreciate the need for flexible payment options, and why that thirty second security timeout is driving them nuts.
Go out into the field with a forester and try reading your data entry screen through the scratches and mud splatters on his ruggedized tablet.
Do that with an open mind, think like an end user, not like a developer, and some odd-sounding requirements suddenly become clear.
And here's a little carrot to sell you on the benefits of getting to know their world. Once you can see things from your users' perspective, you are in an enviable position to let your creativity loose once more. This is no longer with a view to showing off digital coolness to fellow geeks. You can now suggest solutions that the business community would never have dreamed of. Cool solutions that they really will thank you for. That's got to be worth a few field trips away from your cozy cubicle.
Tuesday, April 16, 2013
N is for No to Ego
While we're on a motivational thread, here is, to my mind, a big no-no when it comes to business software ... the developer's ego.
It often seems to me that the only possible reason for some software features is to show off how darned clever the programmer is. I don't have any proof of this, it just seems that way.
OK, maybe a hint of proof in the way so many new features are touted on the basis of how cool they are, rather than any real benefit to the end user.
OK, mea culpa, I used to think that way too
As a hobbyist, it was absolutely the done thing to embellish games with gimmicks that would wow geeky onlookers. After all, why go for a simple drop down list when you could make a Town Crier character walk onto the screen and, with a theatrical flourish, unroll the list of options on a scroll?
While this kind of thing might cause a moment of entertainment first time around, believe me the novelty soon wears off when you are using the software day after day, and your focus is on completing a stack of work to a deadline.
Under those conditions, "cute" and "cool" quickly become grounds for keyboard-through-screen syndrome.
Monday, April 15, 2013
M is for Motivation
Last time, I talked about laziness as a personal motivation for doing a good job. Now I'm looking at another motivation for promoting good software, one that businesses should, but often don't, pay enough attention to.
Poor software costs money!
My department was building an enterprise system, module by module. One of the business departments needed a system to manage electrical contracting work. We gave them a rough estimate of $50k for a simple system, which was basically our staff time at cost. This was in the days when $50k would get you a substantial amount of work.
"Too expensive," they said. A vendor dangled a baited hook in front of their eyes. They bit. "We can get a system that does everything we need for just $20k."
Long story short, the $20k soon ballooned into $100k once they took all the hardware into account, plus contract services to add in the bits they needed that the system didn't cover. And we spent the original $50k of our own time anyway, on interfaces to link them into the customer data and corporate billing system.
And the system still didn't work.
They spent months, and many sleepless nights in sheer frustration trying to get a problematic, bug-ridden system to do even the basics. They couldn't track work or invoice accurately. They lost business.
Eventually we had a window in our development schedule to help them out of this nightmare. Three months (and the promised $50k) later, they had a system that worked, integrated seamlessly, and did what they needed.
The difference?
We were motivated to tell the truth in the first place, and to do a good job.
The original vendor had no motivation to do either. Being economical with the truth got them the work in the first place, and they were only interested in screwing what they could out of the one-off installation, not in a long term business relationship.
Motivation, for good or for ill, is a powerful beast.
Saturday, April 13, 2013
L is for Laziness
Laziness is a good thing.
Yes, you heard me.
And I stand by it, even after decrying laziness in other posts. The laziness I object to is lazy thinking, and short-term focus. The laziness that I support is long term laziness. And that takes a bit of effort up front.
I believe it's a good thing, because it motivates good design and quality work. I like building things. Once I finish something, I want to move on to something new. I don't want to be bogged down by old stuff. The best way to achieve that is to make sure whatever I build doesn't break five minutes later.
I guess this has been ingrained in me because I've spent my professional career in various in-house IT departments. Whatever I put out there is being used by my work colleagues, people I meet every day in the office. If I do a poor job, it comes back to haunt me. I can't escape it.
So this comes out as a strong will to make sure my work is useable, does the job it's meant to do, and will run as trouble-free as possible for as long as possible.
What's more, when the inevitable change requests come, they should be driven by new business needs rather than the need to fix something that isn't working quite right, and the code should be a pleasure to unwrap and modify.
Do it once, do it right.
Yes, the right kind of laziness is a good thing.
Friday, April 12, 2013
K is for Kindness
Whereas joy (yesterday's post) was an outcome, an effect rather than a cause, kindness is most emphatically a state of mind to be actively cultivated.
Be kind to your users
What does that mean?
More specifically, what does that mean in concrete terms in software design?
Let's start by eliminating a few suspects from our inquiries. For a start, I am assuming that the software does what it is supposed to do, i.e. it functions correctly. Also, I'm assuming that it performs well, and is secure and robust. That is not kindness, that is just doing your job. Take it as given.
But if it does all those things, what's left?
Consistency: When you are driving somewhere you've never been before, and you approach an intersection, do you panic, wondering how traffic is going to behave? No, you should know what to expect from the layout and the traffic signs. Road signs are consistent and traffic follows standard rules. All you have to worry about is which turn to take. The same should be true when a user comes to a screen or starts typing data. Completing a transaction should not mean clicking "OK" on one screen, hitting a "Save" icon on another, and a drop-down menu on a third. Clicking an "Exit" option should not save your work on three screens, but discard unsaved changes on the fourth. Be kind. Be consistent.
Comprehensibility: I wouldn't expect to drive around Victoria and find road signs written in Russian, or speed limits posted as fractions of the speed of light. Nor should your users be presented with instructions requiring a degree in astrophysics to understand. This is a business application you're writing. The only language that has any right to be shown is the business language that the user talks.
Clarity: Cluttered screens, scrolling windows within scrolling windows, nested layers of menu bars and ribbons ... space shuttle pilots might be able to cope. Not all of us are space shuttle pilots.
Tolerance: Kindness to your users means being tolerant of mistakes. A person should be able to hit a wrong key or click the wrong button, and be able to recover from the error. Many applications have an "Undo" feature, a good example of tolerance (see the sketch after this list). But if you've finished a transaction and there's nothing to "undo", there should still be easy ways to correct mistakes, and to cancel or reverse entries.
Escape routes: Would you get away with designing a building without any means to escape a fire? Every step of the way through your application, there should be a way out. And, yes, it should be consistent.
Enterprise applications I've worked on in the past had a control or function key that unfailingly returned you to a menu.
The same key.
On every. Single. Damned. Screen.
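Coming back to tolerance for a moment: behind most "Undo" features sits nothing more exotic than a history stack. For the programmers in the audience, here's a toy sketch of the idea (the form-and-fields model is invented purely for illustration):

```python
class Form:
    """A toy data-entry form where every change can be backed out."""

    def __init__(self):
        self.fields = {}
        self._history = []                  # stack of (field, previous value)

    def set_field(self, name, value):
        self._history.append((name, self.fields.get(name)))
        self.fields[name] = value

    def undo(self):
        if not self._history:
            return                          # nothing to undo - that's fine, not an error
        name, previous = self._history.pop()
        if previous is None:
            del self.fields[name]           # the field didn't exist before
        else:
            self.fields[name] = previous

form = Form()
form.set_field("quantity", 10)
form.set_field("quantity", 100)             # oops - fat-fingered an extra zero
form.undo()
print(form.fields["quantity"])              # back to 10
```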
Thursday, April 11, 2013
J is for Joy
This year, for the A to Z Blogging Challenge I'm posting alphabetically on topics related to software development...
Most of these posts are about habits of thought. This one is different.
Joy in this context isn't meant to be a habit (although it helps to take joy in your work). I am suggesting it is a result to be aware of. A temperature gauge. A way to recognize success.
When you are doing a good job, a sense of joy should be a natural outcome. And when you have produced something truly useful and useable, something that helps people accomplish their goals - whether it is to produce that killer presentation, or get a customer's order delivered on time, or detect a million-dollar fraud in progress - people should enjoy using your software.
When you enjoy working on a product, and people enjoy using it, you know you are doing something right.
The converse is also true. If you find yourself reluctant to open up that code, if people groan every time they need to use your product, it's a warning sign to stop and take a hard look at what you are doing.
Wednesday, April 10, 2013
I is for Insulation
Old office buildings used to be designed much like homes: stone walls, plastered ceilings, wooden floorboards. Spaces to fill with furniture and people. Flexibility and change were not part of the plan.
That changed in the middle of the last century, with things like raised floors and suspended ceilings becoming the norm. This allows lighting, electrical, and telecomms services to be moved around to where they're needed. Office buildings also tend to be designed such that the internal walls are non-structural, so they can also be taken down and rebuilt easily without affecting the overall integrity of the building.
In this example, different parts of a building's function are insulated from each other, so that changes can be made to one element without affecting others.
The same thinking is useful in software. Different functions in an application should be insulated from each other.
This kind of insulation shows up in a number of techniques. Subroutines and modular programming were early examples, followed by message queues, client-server, and object orientation. These are not mutually exclusive, just different tools to divide systems up in different ways.
To be effective, this kind of thinking has to be introduced early on in the design process, and like any tool you have to know how to use it. A poorly-designed object-oriented system can be just as much of a maintenance nightmare as early sixties spaghetti code.
Whatever the technique, the objectives of good insulation are pretty much the same. Each part of the system has its own special job to do and doesn't need to worry about how any other part works. This breaks the coding work down into bite-sized chunks, each with clear objectives. This not only makes the coding easier, it makes everything easier to test. It also means you can make changes to a function and be confident that your changes will not have unintended consequences elsewhere.
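For the programmers reading along, here's a toy sketch of that insulation (the billing-and-tax split is an example I've invented for the purpose): the billing function knows only that something can answer the tax question, never how.

```python
from typing import Protocol

class TaxRules(Protocol):
    """Everything billing knows about tax: something can answer this question."""
    def tax_on(self, amount: float) -> float: ...

class FlatRateTax:
    """One implementation, living in its own corner. Swap it for a table-driven
    or web-service version and the billing code never notices."""
    def __init__(self, rate: float):
        self.rate = rate

    def tax_on(self, amount: float) -> float:
        return amount * self.rate

def invoice_total(subtotal: float, tax: TaxRules) -> float:
    # Billing's own special job is totals; how tax works is not its problem.
    return subtotal + tax.tax_on(subtotal)

print(invoice_total(100.0, FlatRateTax(0.25)))   # 125.0
```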
Imagine the chaos at home if the simple act of plugging in the vacuum cleaner caused the toilets to stop flushing. Laughable? Maybe, but that is pretty much the state many poorly-insulated software applications find themselves in.
Tuesday, April 9, 2013
H is for Help
An important businessman visiting Seattle was being whisked to important business meetings by private helicopter.
Mid-flight, thick fog descended suddenly on the city. The pilot swore. Nothing was visible outside and his navigation instruments chose that moment to malfunction. He admitted that they were lost. He inched forward in the fog, until at last the lights of a tall building loomed ahead. Approaching closer, they could see people working in the building.
The businessman rummaged in his briefcase for paper and pen, and held up a sign in the helicopter window which read, "Where are we?"
Quick as a flash, a man in the nearby building held up a sign which said, "You are in a helicopter."
The businessman swore, but the pilot smiled, wheeled the craft around, and ten minutes later landed at their destination.
"We were lost," the businessman said. "How did you know where we were?"
"When I saw their sign," the pilot replied, "I knew we were at the Microsoft building. Their help was technically accurate, but completely useless."
Many help systems are really lists of technical reference topics. They assume you already know about the topics being discussed and just want some esoteric command or bit of syntax that you'd forgotten. In other words, they are no help unless you already know an awful lot about what you are trying to do.
Help needs to address many audiences, at different levels of expertise, and needs to answer more than the bread-and-butter "How to" type questions.
Sadly, the less-than-helpful variety proliferates because it is the easiest to write.
Make sure your help is helpful.
Monday, April 8, 2013
G is for Generalization
This year, for the A to Z Blogging Challenge I'm posting alphabetically on topics related to software development...
Last time, I talked about future-proofing.
Generalization is a useful form of future-proofing. It involves thinking beyond what the business folks are telling you, and looking for ways to abstract ideas that might get stretched in future.
Opportunities for generalization often show up when your business users start listing off sets of possible values for data entities. Examples: "We've just got the head office here, and the store downtown." "We only sell this in red, white, and black."
Those are giveaway clues that you should have a table for those kinds of things, and not make assumptions that would limit future additions to the table.
Some opportunities are a bit more obscure, because they seem to be wedded to the way things work. Suppose a payroll system has a field on the employee record with a "W" for weekly-paid employees, and an "M" for monthly. A short-sighted approach would end up with programs littered with decisions along the lines of 'If it's a "W" do this, if it's an "M" do that...'
Instead, a good generalization is to set up a table of pay cycles with the code (e.g. "W") along with some additional data to describe how employees of that type should be treated (e.g. "pay every 7 days, on a Friday").
The aim of this is to make sure that nowhere in your programming do you directly encode facts like 'If it's a "W" then today must be a pay day'. That question is always answered by looking up the pay cycle record and doing a calculation based on the associated data. And the aim is to make that calculation as generally-useful as possible. In the case of pay cycles, you might for example allow concepts such as "every x days" or "every x months" and some way of showing when the cycle ends. This might sound like a lot of effort, but the point is it is only going to be done once, in one place, so you can afford some extravagance here.
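For the technically minded, here's a minimal sketch of the pay-cycle idea (the table layout and field names are my own invention for illustration; in a real system the table would live in the database, not in the code):

```python
from datetime import date

# Pay-cycle table: the meaning of each code is data, not logic in the programs.
PAY_CYCLES = {
    "W": {"label": "Weekly",      "every_days": 7},
    "F": {"label": "Fortnightly", "every_days": 14},    # added later - no code changes
    "M": {"label": "Monthly",     "every_days": None},  # month-based, not day-based
}

def is_pay_day(cycle_code: str, cycle_start: date, today: date) -> bool:
    """The one place in the system that answers 'is today a pay day?'."""
    cycle = PAY_CYCLES[cycle_code]
    if cycle["every_days"] is not None:
        return (today - cycle_start).days % cycle["every_days"] == 0
    return today.day == cycle_start.day     # monthly: same day of month as the start

# A weekly cycle starting Friday 5 April 2013: the following Friday is a pay day.
print(is_pay_day("W", date(2013, 4, 5), date(2013, 4, 12)))   # True
```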
Why is that a good thing?
Imagine your business owner comes into your office one day and says, "Look, I know I said this would never happen (**), but management has just agreed to a new grade of staff, and - get this - they will be paid every two weeks!"
Do you want to be heading for coronary country imagining all those places where you encoded that M/W decision?
Or do you want to smile, add a third pay cycle to the table, and say "Done"?
(**) This is another giveaway phrase: The more strenuously a business user insists it won't happen, the more likely it will.
Saturday, April 6, 2013
F is for Future-Proofing
The world changes. Any application you build has to cope with change. Future-proofing is a frame of mind that equips your application with the flexibility to cope.
Good, clean design, modularity, separation of concepts, and good quality code, all make applications easy to change when the need arises. But future-proofing aims to reduce the need to change the application in the first place.
The biggest no-no I see, and have campaigned against endlessly, is where data is embedded directly in the code.
Some examples are obvious and clearly the result of short-sighted laziness. A programmer writes a routine to calculate tax, and sticks the tax rate right in the middle of the calculation. Why do that? You know taxes will change. The rate should be stored somewhere where it can be changed without having to delve into the code.
Better still, it should be something that a business user can change without involving application support at all.
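As a quick sketch of the difference (the file name, key, and rate below are made up for illustration):

```python
import json

# Short-sighted: the rate is welded into the calculation. Changing it means
# delving into the code and making a release.
def calculate_tax_shortsighted(amount):
    return amount * 0.07

# Future-proofed: the rate lives outside the code, somewhere a support person
# (or, better still, a business user through an admin screen) can change it.
def calculate_tax(amount, config_path="tax_rates.json"):
    with open(config_path) as f:
        rate = json.load(f)["sales_tax_rate"]
    return amount * rate
```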
Less obvious examples are more difficult to spot. The company name at the top of every screen and report, for example. Honestly, have companies never been known to change names? Make it a variable. It's just as easy. In fact, it's probably easier than typing the same thing over and over.
The real killers, though, are the constants that are so ingrained you don't even mention them in your design at all! For instance, what about all those monetary values with $ signs in front - are you sure that is fixed and forever? Try asking all those European businesses what fun they had when Francs and Deutschmarks and Lira gave way to the Euro.
My advice is to challenge any hard-coded value. Is it really fixed? Can you ever envisage a situation where it could change?
This is easily summed up as ... Constants are best regarded as variables.
Friday, April 5, 2013
E is for Error Messages
You know the dialog: a box full of cryptic gibberish and a code number. Translation: I've screwed up and you've just lost your afternoon's work.
Usually followed by that smug little "OK".
No, it's flippin' well not OK!
As an application developer, the best thing to do is to make sure these little beauties don't happen in the first place, of course. But, equally of course, computers will screw up from time to time, and when they do, the least you can do is make sure they do so gracefully.
So it's...
NO - to the cryptic techno-babble that the average end user won't even begin to comprehend.
NO - the end user will not want to debug the code. Don't even go there.
NO - the end user should not be left in limbo wondering what they should do next, or panicking that they've single-handedly brought the company system to its knees.
Things you can do
If possible, notify the help desk - behind the scenes - and log the technical details of the error somewhere a technical support person can get at them. The previous company I worked for used these measures very successfully. Often, by the time the user reported trouble, we were already working on a solution.
Keep the mumbo jumbo out of the visible message and instead tell the user what they should be doing next. Don't just leave them hanging. If you need them to stop work and report it, say so. If it's OK to log back in again and resume work, say so.
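For those who like to see code, here's a minimal sketch of the pattern (the function names and message wording are placeholders of my own, not any real help-desk API):

```python
import logging

log = logging.getLogger("orders")

def notify_help_desk(order_id):
    """Placeholder: in a real system this would raise a ticket automatically."""
    log.warning("Help desk notified about order %s", order_id)

def save_order(order_id, write_to_database):
    """On failure: full detail for the technicians, plain instructions for the user."""
    try:
        write_to_database()                     # the real persistence call
    except Exception:
        log.exception("Save failed for order %s", order_id)  # mumbo jumbo -> the log
        notify_help_desk(order_id)                            # behind the scenes
        return ("Sorry - your order could not be saved. The help desk has been "
                "notified. Please note your order number and carry on with other "
                "work; someone will contact you shortly.")    # what to do next
    return "Order saved."
```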
All of the above deals with errors where the application chokes out of control. Then there's a whole class of errors that the end user will trip over in the normal course of their work. Typing the wrong thing into a field, for example. These things should be detected by the application and the user informed in a way that is helpful and meaningful. Don't just spit out "Invalid input"; say why it's not valid and what is expected instead.
Going the extra mile in handling errors will pay dividends in user satisfaction, and the extra effort in design and programming will repay you handsomely in the workload on your help desk.
Thursday, April 4, 2013
D is for Defensive design
This year, for the A to Z Blogging Challenge I'm posting alphabetically on topics related to software development...
So far, we've been pretty high up in the stratosphere, talking about architecture, business requirements, and core concepts. In contrast, this post can apply at all levels of design and development, but it is most at home down in the weeds of the code.
You reach a point in your program where you have a decision to make. A fork in the road. If the answer to a certain question is "Yes" you do one thing, if it's "No" you do something else.
The average programmer will likely code something that says: If "Yes" do one thing, otherwise do something else.
Notice the difference?
Where did "No" vanish to?
"What's the difference?" Average Joe-Coder protests. "It can only be yes or no, so if it isn't yes, it must be no. I'm just being efficient."
Sure, but you're not being defensive.
A defensive programmer will write code along the lines of: If "yes" do one thing, if "No" do something else, otherwise tell a bloody human that someone's f****d up!
"That's mad," Average Joe-Coder says. "Why code for something that can never happen?"
Fair point, if that were a true statement. But, despite Joe's insistence, it will happen ... some day. Trust me. You're relying on logic to ensure the only possible answers are yes or no, and that logic has been written by human beings - with all that implies. And computers are physical machines. They glitch sometimes. What went onto the disk as a nice clean "Yes" or "No" might overnight morph into "Whatever".
And, you might remind Joe, the more unlikely it is to happen, the more important it is to tell someone when it does.
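In code, the defensive version costs exactly one extra branch. A toy sketch (the yes/no values and the error wording are mine):

```python
def fork_in_the_road(answer: str) -> str:
    """Handle yes, handle no - and refuse to guess about anything else."""
    if answer == "yes":
        return "do one thing"
    if answer == "no":
        return "do something else"
    # "It can only be yes or no" - until the day it isn't. Don't quietly treat
    # garbage as a no; tell a human that something upstream has gone wrong.
    raise ValueError(f"Expected 'yes' or 'no', got {answer!r} - investigate!")
```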
If a stranger wanders into a public library, nobody cares. It's expected behavior, not a problem.
If a stranger wanders unchecked into the loading dock of the local supermarket, it's not expected, but plausible. There are no locked doors to stop it happening. A member of staff will likely ask, "Can I help you?" and it's not a big deal.
If a stranger walks unchallenged into the Queen's bedroom at Buckingham Palace, you can bet it will be a big deal and the authorities will want to know about it. Because that should be impossible.
But it has happened.
Defensive design makes no assumptions.
Face the unthinkable. Decide how to handle it, and act accordingly.
Wednesday, April 3, 2013
C is for Core concepts
Yesterday, I talked about business needs. They are important, but to get a feel for the shape of an application, I find it helpful to distil the core concepts out of the mass of detail.
Core concepts get to the heart of what is most important. It's essential to identify them because they will start shaping your solution very early on.
A good starting point is a high level data model. This highlights the most important things that the application deals with.
In an accounting application, for example, these might be accounts, transactions, and accounting periods. In a customer ordering system, core concepts might include customer, order, and inventory item.
At this point, you don't want to itemize every piece of data. The core concepts are the mountain peaks in the data landscape.
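If it helps to see it, here's the flavor of such a first cut for the ordering example, deliberately sparse (the few fields shown are illustrative guesses, not a design):

```python
from dataclasses import dataclass, field
from typing import List

# First-pass data model: just the mountain peaks, not every piece of data.

@dataclass
class Customer:
    customer_id: str
    name: str

@dataclass
class InventoryItem:
    sku: str
    description: str

@dataclass
class Order:
    order_id: str
    customer: Customer
    items: List[InventoryItem] = field(default_factory=list)
```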
Another group of core concepts relates to function. What do you do with the things in your application? Again, you want to identify the really important things here, so maybe ask your business expert to describe in one sentence what it is the application does.
In a help desk system, the core of the system might be workflow - moving a call through different people's hands to fix an issue for the customer. A stock control system might revolve around movements in stock levels and forecasting future needs.
More subtle, and what people often overlook, are other qualities that might be important enough to affect your early architectural decisions. In a global application, maybe time zones, multi-lingual presentation and multi-currency accounting are important, whereas in an otherwise identical application for a local business these wouldn't matter. A booking system for a large airline or hotel chain might bring volume, performance, and high availability to the fore.
Core concepts may also arise from design choices. For example, security and privacy might be a primary business requirement. In response to this, you might decide that all data requests will go through an information access layer that enforces security policy. This design choice isn't a business requirement, but it is a crucial design concept that will fundamentally affect how the application is built.
So, picking out a handful of core concepts helps visualize what is most important, and helps de-clutter an unmanageable mass of information. You can then build the rest of your design around these concepts.
Tuesday, April 2, 2013
B is for Business needs
The purpose of software is to be useful to the people who use it.
Right?
So why is there so much crap out there?
All too often, software is designed with only minimal reference to the people actually using it. Worse still, a lot of software seems designed more for the aggrandizement of the writers than the use of the users - look how cool this feature is!
The point of software is not to show off how technically gifted you are. You can show me that by giving me something that does what I need it to do, reliably, without fuss, and without expecting me to know how it works.
And how can you possibly know what I need, unless you ask me?
The traditional software development lifecycle kicks off with Business Requirements. Good start! But gathering requirements isn't just about ticking off boxes full of dry jargon. To make software really useful, you need to dig beyond requirements. Requirements are what ivory tower dwellers say they need. A whole world apart from what the business, and the people in it, really need.
You need to dig. Keep probing. Keep asking "Why?" Ask at all levels, not just the bosses. They don't have a clue. Sit with the people doing the job. Walk a mile in their shoes. See the world through their eyes.
For example, "The Requirements" might state that an order entry clerk needs access to a list of inventory. Great. Got enough to design the system? Off you go, then. You give them a spanking new screen that lays out the entire company's inventory any which way you can think of. By product category, by warehouse, search by name, by price, by color ... You spent days on it. You are proud of your job.
And the order entry clerks hate it.
"Why?" you ask them, "how unreasonable can you get? I put all this blood, sweat, and tears into this screen and you ungrateful chimps just don't appreciate the heartbreaking genius that went into it, with its predictive auto-completion and intelligent history-based searches."
"But," they say, "I already know the product number, I just need to see which is the nearest warehouse that has it in stock ... and I need to do that from within the order screen where I've already filled out half the order."
Oops. You didn't know that because you didn't talk to them about what they need.
The moment you catch yourself thinking, "Wow, everyone will love this new feature!" ... beware!
Monday, April 1, 2013
A is for Architecture
Before we start this year's A to Z Challenge, participants have been asked to say a big (and surprise) Thank You to Arlee Bird at Tossing It Out for starting this Challenge.
This year, for the A to Z Blogging Challenge I'm posting alphabetically on topics related to software development...
To kick things off, I want to be clear about one thing. This is not a geek series about programming. It doesn't even say much about technology. It is about the process of software development from a human perspective, and some habits of thought that I believe lead to good, robust, and useful software.
This first post is about architecture, which I think encapsulates the whole series.
When you say "architect", most people think of buildings.
No problem. What I'm talking about here is the essence of architectural thinking. That essence holds true in any field where your aim is to build something, be it a house, a three-course meal, or a piece of software. This essence, the thing that distinguishes an architect from a builder, is the use of deliberate planning and thinking skills to achieve a well-built end result that serves its purpose, is a pleasure to use, and which you are proud to be associated with.
In traditional building architecture, you may start with a blank slate, the big picture, and work things out from the top down. An empty field. What do you need to incorporate? Lay out the major elements and gradually fill in finer and finer detail.
Or you may start with one key idea and build out from there. The entrance hall. An imposing atrium. A courtyard with fountains. A shape.
Or you may start with some peculiar constraints. An unusually long and narrow plot. A steep hillside. The use of a particular material.
Wherever you start from, one thing that distinguishes good from poor architecture is that, although the dominant idea may be visible or even deliberately conspicuous, it won't dominate to the exclusion of all else. A breathtaking atrium may wow visitors to your headquarters, but the office space around it still needs to be useable. You don't get to skimp on ventilation, or disabled access, or fire safety, just because those pesky elements interfere with your grand vision.
Whatever your starting point, good architecture will arrive at a sound and coherent whole, where everything works and clearly belongs together.
The same applies in software development. You need to balance function (what it does) with a whole host of other constraints like useability, performance, security, auditability, availability, scalability, maintainability, and cost, to name a few.
A beautiful building doesn't happen by chance, and neither does beautiful software. In both arenas, there are deliberate and expert disciplines that need to be applied. These disciplines are your craft. You need to master them in order to produce good results reliably.
In other words, architecture is the art of doing things right by conscious design rather than purely by accident.