Eleven Tech Firms Honoured for Advancing Innovation

Eleven local technology companies were honoured Tuesday for the roles they have played in advancing innovation and entrepreneurship in Alberta.

The companies – which work in fields such as cloud computing, oil and gas, telecommunications and more – were declared winners at the 2014 TechRev Innovators awards night.

An initiative of Innovate Calgary, TechRev aims to increase awareness of and investment activity in the technology sector in the Calgary region. More than 600 companies were nominated this year, and the winners were selected based on factors like financial performance, operational growth, and market viability.

Peter Garrett, Innovate Calgary interim president, said the honourees represent the ingenuity and “can-do” spirit that has become synonymous with Calgary.

“The passion for innovation and perseverance to succeed is inspiring and we are proud to celebrate the achievements of the technology startup community,” he said.

Garrett added that while Calgary may be known as an oil and gas city, its technology community is not to be overlooked. Out of 165 technology companies recently surveyed by Innovate Calgary, 82 per cent released at least one new product or new version in the past year, 30 per cent expanded their international reach in the past year, and 64 per cent created new jobs in the past year.

All of the winning companies are using their technological know-how to bring unique solutions to their clients. In the case of Irricana-based Decisive Farming, which has grown its employee base from 17 people to 27 in the past year alone, those clients are agricultural producers. The company designs software and services that help farmers make decisions about precision agronomics, soil fertility, crop marketing, risk management, data management, and more.

CEO Remi Schmaltz said his advice to prospective entrepreneurs is to keep their finger on the pulse of change. In his case, that meant recognizing the way the arrival of high-speed Internet access in rural areas, combined with the popularity of smartphones, would revolutionize farming.

“Now that this technology’s cheap and it’s affordable, relatively speaking, farmers are adopting it quickly,” he said. “Farmers are busy, they wear many hats, and they need to be able to access their information.”

Bryan de Lottinville, CEO of charitable giving software company Benevity, said another key is getting core people on board and excited about your vision. He said this is critically important in Alberta, where it can be difficult to compete with the oil and gas sector on wages.

“The reality is most of the people who have come to us in the past, came to work at a discount to what they could be getting at one of the oil and gas companies,” de Lottinville said, adding his company now counts 20 Fortune 500 companies among its clients and has 43 employees – up from just 12 two years ago.

“They did it because they wanted to work at an early-stage opportunity that not only has prospects, but also has a bit of meaning for them. So you need to find key people who share your passion and values at the outset.”

Other winners at Tuesday’s event were medical technology company Calgary Scientific; Code Excellence, whose software aims to help eliminate potential system-crashing code defects; Lumiant Corporation, which has developed a patent-pending material called TitanMade that has the strength of steel alloys, but is less than half the weight; MRF Geosystems Corporation, a geographic information systems company with software products licensed to 6,000 customers; Nanalysis Corporation, which is developing a family of portable Nuclear Magnetic Resonance devices for the laboratory instrumentation space; Packers Plus, which develops technologies for oil and gas applications; Tactalis, which has created a tactile computer interface and online content store for people who are blind or visually impaired; TEKTELIC Communications, which designs telecom solutions to address the rapidly growing data requirements faced by service providers; and TetraSeis, a developer of seismic data processing technologies.

Original Article:
http://www.calgaryherald.com/business/Eleven+tech+firms+honoured+advancing+innovation/9496885/story.html

6 Opportunities and Risks of Application Portfolio Rationalization

Chris Crawford and Bob Kress of Accenture recently published an interesting interview in which they discussed the benefits, requirements and challenges associated with rationalizing an organization’s application portfolio. That got me thinking about my experiences in this area over the years, and some of the lessons learned. These may seem like they should go without saying, but given the number of failed, late, over-budget, and/or under-delivered IT projects that still occur, that’s apparently not the case.

Look at Opportunities to Rationalize Infrastructure, not Just Applications
This was a great point raised in the above interview. Application rationalization is frequently an important and worthwhile task to tackle, but it’s invariably very difficult. Typically, most applications become deeply embedded in an organization’s processes, often in subtle and difficult-to-identify ways. Infrastructure rationalization isn’t easy either, but it does tend to be a little more transparent in its risks and potential impacts.

Have Executive-Level Buy-In, Understanding, Sponsorship and Trust
This is really critical. It’s imperative that IT have the backing and sponsorship of the business, at an executive level, before engaging in any large-scale rationalization project. In my experience, rationalization projects of any kind are difficult, high-impact and high-visibility. To succeed, the business, at an executive level, has to understand not only the benefits but also the risks, so it can provide the necessary input and support to deal with the inevitable conflicts and issues that arise over the course of these projects.

Start Small to Establish A Track Record
This is another important point. If your IT organization has a good and trusting relationship with the business, and there’s a commonly understood mission and set of objectives, then you can tackle the big projects. If not, try to find something small, with a high probability of success, and work to establish a track record of delivering value to the business (this may not be a rationalization project, but if it is, make it a small one). By way of analogy, if you’ve been sitting on the couch for 10 years and decide you’d like to run a marathon, then rather than putting on the shoes and hitting the start line, maybe start with some brisk walking and work up to the longer distances.

Be Pragmatic When Establishing Scope and Objectives
This point was also alluded to in the podcast – it’s important to be pragmatic when determining the scope and end objectives of your plan. It might be nice to have a single system for accomplishing task ‘X’, but if the business currently has 20 systems and you’re able to reduce that to 3, not 1, that is still a significant improvement. Be willing to compromise and achieve real successes rather than aiming for perfection and failing to achieve anything.

Demonstrate Progress, and Early Wins
Wherever possible, avoid big-bang style projects that require extended periods without visible, tangible progress and results. Try to identify milestones that are visible and meaningful to the business, and make sure that at least some of these milestones are delivered early in the project. Keep the business engaged in the process throughout. This is essential to maintaining the morale and commitment of both the business and IT project members over the course of what are often long and difficult projects.

Be Able to Demonstrate and Document the Value of the Project
As with the other items, this may seem obvious, but make sure there is shared agreement among all stakeholders and involved parties (business and IT) about the benefits that are expected from the rationalization project. Then make sure that you establish some quantifiable measurements of the ‘before’ state, so you’ll have something to compare the ‘after’ state to. This is very valuable in terms of both demonstrating the value achieved when the project is completed, as well as providing input for subsequent projects and increasing the organizational discipline in terms of being able to establish and measure the return on investment in IT projects. I know this is often difficult, but demonstrating an understanding of cost-benefit to the business, and a commitment to delivering tangible, quantifiable value wherever possible goes a long way toward establishing and maintaining a high-trust relationship with the business.

Rationalization projects – application and technical infrastructure alike – are almost never easy, but they can often generate significant immediate and ongoing improvements to the organizational bottom line. And since technology and the business environment are constantly, and ever more rapidly, changing, these types of projects are likely to be with us for the foreseeable future. So make sure you know how to tackle them successfully, and maximize your chances of delivering positive benefit to the business bottom line.

Are there any other opportunities or risks that you see for application portfolio rationalization? Share your thoughts in the comments below.

Image Credit: Justin Ornellas

How Managing your Technical Debt will ensure your Software Quality

Given the ever-increasing attention to technical debt, it is no surprise that it is on the minds of many top-level executives these days. Technical debt is a wonderful metaphor developed by Ward Cunningham to help managers, executives and non-technical folk understand that development may be building up software liabilities along with software assets. There are a wide variety of opinions on what technical debt really means for the business’s bottom line; however, it is definitely worth measuring, and like any metric, it needs to be put in context as it applies to the business.

What does ‘technical debt’ actually mean?
Let’s start at the beginning. Technical debt can be thought of roughly as ‘work that needs to be done before a particular piece of software can be considered complete’. In software this is typically cleanup work that should be done to improve simplicity, robustness, or consistency in a given piece of software. Explicit technical debt can be incurred consciously and intentionally (e.g. deliberately taking shortcuts that reduce maintainability because it’s known that the component will have to be rewritten in the near future) or accidentally (a new developer isn’t familiar with the correct or accepted method for solving a particular problem and implements a clumsy or brittle solution). Basically, having a lot of technical debt means that the system starts to deteriorate and becomes more difficult to maintain over time. If you think of it like financial debt, you have three strategies for dealing with it (a rough cost comparison follows the list below).

  1. You can choose to continue to pay the interest (cope with the technical debt, having to deal with the extra effort in future development).
  2. You can pay down the technical debt in one lump sum by replacing the system. This is a risky and expensive choice.
  3. You can pay down the principal, thereby reducing the technical debt by incremental refactoring and redesign. Although it will cost you to pay down the ‘debt’ in the first place, there may be an opportunity to generate savings for the company in the long run.
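
To make the trade-off concrete, here is a rough back-of-the-envelope sketch in Python. All of the figures (the hours of ‘interest’ paid each release, the cost of a rewrite, the cost of incremental refactoring) are invented purely for illustration; the point is only that the three strategies can be compared on the same axis of effort over time.

```python
# Hypothetical comparison of the three strategies over a number of releases.
# Every figure below is made up for illustration; plug in your own estimates.

RELEASES = 10
INTEREST_PER_RELEASE = 40   # extra hours each release costs because of the debt
REWRITE_COST = 600          # hours to replace the system outright (plus its risk)
REFACTOR_COST = 150         # hours of incremental refactoring up front
RESIDUAL_INTEREST = 10      # hours per release still paid after refactoring

pay_interest_only = INTEREST_PER_RELEASE * RELEASES
replace_system = REWRITE_COST
pay_down_principal = REFACTOR_COST + RESIDUAL_INTEREST * RELEASES

print(f"1. Keep paying interest: {pay_interest_only} hours")
print(f"2. Replace the system:   {replace_system} hours")
print(f"3. Pay down principal:   {pay_down_principal} hours")
```

With these made-up numbers the incremental option wins over ten releases, but the balance shifts with the size of the debt and the planned lifetime of the system, which is exactly why the decision has to be made in a business context.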

Why Technical Debt is unavoidable.
The fact is all companies will accrue technical debt as it is unavoidable on any development project. There is relentless pressure on development teams to deliver new innovation to the market faster than ever before and this tends to lead to a continuing debt accumulation. After releasing the software, there is usually not enough time to ‘pay back’ the technical debt that the business accrued. There is only time to fix the critical defects (pay back the interest) as there is a new business requirement to be met.

Moreover, technical debt comes in two forms: explicit (described earlier) and implicit. Just like compound interest, the implicit form means technical debt will accrue over time. A company will accrue additional debt unless it finds the time and resources to deal with increasing complexity and areas of poor structural quality in its software. The longer you defer repayment of the debt by delaying fixes, or the more you increase the debt by adding further workarounds, the harder it will be to pay it off. As the chart below shows, it becomes ever harder to find the time to pay back the “principal” all at once, which is what makes continuous refactoring the more practical route.

Figure 1 — Accumulation of technical debt over time. (Source: Jim Highsmith.)

The most important aspect of this metaphor is that you will have to pay down the technical debt at some point. Why? Because, as Steve McConnell suggests, “If the debt grows large enough, eventually the company will spend more on servicing its debt than it invests in increasing the value of its other assets”.

Good versus Bad Technical Debt
This might surprise you – not all technical debt is bad. Just as in the real world, taking on debt isn’t always a bad thing and is sometimes justified. For example, a business can benefit from it by shipping software that enables it to meet its customers’ needs, gain market share and sharpen its competitive edge.

However, you will want to make sure that your technical debt is ‘good debt’, which means that repaying it has to become part of your development process. The only way to do this is to have visibility into the quality of your code, and the mechanisms to identify, measure, and control your technical debt.

Managing your Technical Debt
Does managing your accumulation of technical debt make sense? You bet it does. Unmanaged technical debt is a liability for those responsible for governing the costs and risks of the software portfolio.

So how do you start? I would suggest understanding and reducing the existing technical debt in an agile way (iterative, prioritized, and focused).

1. Measure Technical Debt
You will need to take inventory of the debt you have acquired on an application, and it is best to measure it using a quality model. By applying a quality model to your software, you can calculate the amount of debt you have and align it to your business needs. It’s about developing a common understanding of the existing technical problems across the various levels of the team.
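
As a minimal sketch of what such a calculation can look like, the snippet below expresses debt as the estimated effort to fix known violations. The violation categories, counts, and remediation times are hypothetical; a real quality model (a SQALE-style model, for example) would derive them from your own coding standards and analysis tooling.

```python
# Hypothetical quality-model calculation: technical debt as estimated fix effort.

remediation_minutes = {              # assumed average effort to fix one violation
    "missing_exception_handling": 60,
    "deeply_nested_conditional": 45,
    "duplicated_block": 30,
    "undocumented_public_method": 10,
}

violation_counts = {                 # counts reported by your analysis tooling
    "missing_exception_handling": 120,
    "deeply_nested_conditional": 35,
    "duplicated_block": 210,
    "undocumented_public_method": 540,
}

total_minutes = sum(
    violation_counts[kind] * minutes
    for kind, minutes in remediation_minutes.items()
)
debt_days = total_minutes / 60 / 8   # convert minutes into eight-hour days

print(f"Estimated technical debt: {debt_days:.1f} person-days")
```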

2. Establish Prioritization and Plans
Once you have established the amount of ‘technical debt’ your application has, the next step is to prioritize the amount of work that is needed to reduce the debt. Paying back debt needs to be treated as a standard component of your agile development. Debt repayment should be based on your business priorities. Application managers will have to allocate time and budget for both new development and ongoing quality improvement. However, this will be the biggest challenge, and requires support from the business at an executive level.

Attention: when you are prioritizing and planning the repayment of your debt, it needs to be put into context. Ask yourself whether the debt actually needs to be paid off. For example, if the debt sits in a component that is slated for a rewrite, it is smarter to leave it as is and address it in the redesign.
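
One simple, hypothetical way to turn the measured debt into a prioritized backlog is to rank each item by business impact relative to remediation cost, as in the sketch below. The items, scores and weighting are invented; in practice the ranking is a judgment call made together with the business.

```python
# Hypothetical prioritization: rank debt items by impact per day of remediation.

debt_items = [
    # (description, business impact on a 1-10 scale, remediation cost in days)
    ("Missing exception handling in billing module", 9, 5),
    ("Duplicated validation logic across services", 6, 8),
    ("Undocumented public APIs", 3, 4),
    ("Deeply nested conditionals in reporting job", 5, 2),
]

ranked = sorted(debt_items, key=lambda item: item[1] / item[2], reverse=True)

for description, impact, cost in ranked:
    score = impact / cost
    print(f"{score:4.1f}  {description} (impact {impact}, {cost} days)")
```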

3. Track your Results
As with any development, you want to make sure that your remediation efforts reduce your technical debt. As mentioned above, repayment of the debt is continuous and should be done in sprints. IT Managers will want to periodically review their progress and set new targets.

4. Collaborate with other Stakeholders
It is important to share the results with stakeholders across the IT organization and the business, because strong intelligence will improve your capacity to contain and minimize the debt. It will also give the business a clear understanding of the cost and risk it faces if the debt is not controlled.

It is also imperative to create an awareness initiative on why managing technical debt is in everyone’s best interest. Managing technical debt starts with socializing the idea and then ingraining it into the corporate culture. This will help ensure you have quality software.

Technical Debt is just a metaphor for the quality of your software portfolio
But at the end of the day, it’s important not to get lost in the ‘numbers’ and to remember that technical debt is simply a metaphor for the quality of your software. If you ignore your technical debt, it will have disastrous effects on your business.

If you would like to explore technical debt as it relates to software quality, I recommend listening to Thomas Cagley’s podcast with Ted Theodoropoulos from Acrowire.

*Image Credit: robanhk

Want Quality software? Follow a Software Quality Assurance Process.

In previous posts, I have written about why the quality of your software is so important to your business, from making software easier to maintain to lowering your business risks. I do realize that I keep harping on the importance of software quality, but it seems that some businesses still view it as a ‘nice to have’ – something that can be sacrificed for faster time to market or lower costs – and are then shocked when they have to pay a heavy price for those sacrifices.

So how do you go about ensuring that your software is being developed and managed with high quality? Simple, really: bring a process and discipline mindset to your software development activities.

Quality assurance is about processes.
Firstly, let me start off by explaining what the software quality assurance process is. Its sole objective is to ensure that the software being developed is an asset to the company and that it generates a positive return on investment over time. Typically this is done by establishing processes that ensure the software meets identified requirements and adheres to corporate guidelines (including code quality policies), and then managing, monitoring, and reporting on adherence throughout the software development lifecycle.

The whole point of having a software quality process in place is to manage the quality of your software and measure your software development, and like any good process, you need a “plan” of attack – a Software Quality Assurance Plan, to be exact. To ensure quality, it is extremely important that a Software Quality Assurance Plan is in place and followed on all software development projects, across in-house development teams, outsourced development teams, and software provided by third-party suppliers.

Based on the IEEE Standard for Software Quality Assurance Plans, the plans should have the following elements:

  1. Purpose;
  2. Reference documents;
  3. Management;
  4. Documentation;
  5. Standards, practices, conventions, and metrics;
  6. Reviews and audits;
  7. Test;
  8. Problem reporting and corrective action;
  9. Tools, techniques, and methodologies;
  10. Code control;
  11. Media control;
  12. Supplier control;
  13. Records collection, maintenance, and retention;
  14. Training;
  15. Risk management.

Testing is not quality assurance.
A bit of advice: you will need to start worrying if people start using “quality assurance” and “testing” as interchangeable terms. Testing is a method used within the quality assurance process, and every change should be tested. As explained before, a quality control process measures all products to verify they meet a standard of quality, meaning that everything that goes out must be tested.

Achieving the Software Quality Assurance Process.
The importance of having a Software Quality Assurance Process in the development lifecycle cannot be emphasized enough. The process requires discipline and execution. These two simple characteristics will result in reliable, maintainable software. Yes, continuously ensuring quality does require buy-in and direction driven from the top, but it is ultimately up to the software development organization to adopt and execute on that mindset. I’m not talking about doing this just once, but time and again.

Establishing a quality-oriented mindset and process enables you to continuously measure and increase software quality, and therefore to raise your software’s reliability, stability and usability. The business will reap the extensive benefits of better software quality:

  • Pay less for endless bug fixing sessions and warranty issues;
  • Identify issues before they become critical;
  • Reduce risk and eliminate rework;
  • Increase stability of the software, and raise customer satisfaction;
  • Gain greater levels of productivity, motivation and innovation across software delivery teams by enabling them to produce high quality software rather than fixing defects.

It is important to mention here that software quality improvement is an iterative process. You are not going to accomplish everything right away, so start small. Every small change will make a difference; include quality-oriented activities in all phases of your software development lifecycle. Remember, continuously ensuring quality will always cost you less in the long run.

*Image Credit: James Hupp

What We Can Learn From Amazon About the Importance of Software Quality

I recently read an interesting article describing how outages in Amazon’s cloud service, triggered by violent storms in the Eastern United States, were significantly aggravated and prolonged as a result of hidden bugs in their software that were only exposed when they tried to recover from the initial outages caused by the storms. I’m not going to take this as an opportunity to get up on a soapbox and preach about how ‘better testing could have prevented these problems’.

First off, I have no detailed knowledge of Amazon Web Services’ current testing and QA environments. However, given the mission criticality of this arm of their business, their leadership position in cloud service provision and the reputation risk associated with service failures, I’m inclined to give them the benefit of the doubt and assume that they probably do some fairly rigorous testing of their software, and of their disaster recovery procedures and environments.

Given that, the take-away for me, the thought-provoking element of the article, is just how DIFFICULT it is to build highly robust, fault-tolerant software in today’s increasingly complex and interdependent technology environment. The circumstances that exposed the bugs in Amazon’s failover and recovery management software appear to have been the result of an extraordinary series of coincidental events that would have been extremely difficult to anticipate and test for. However, the end result of the outages was significant cost to Amazon and its customers, and likely a significant reduction in Amazon customers’ faith in the reliability and robustness of their cloud services.

So what are we to take away from this?
If we accept the hypothesis that Amazon is in fact diligent in stress-testing its cloud services, and that in spite of this the likelihood of testing the actual series of events that eventually caused the recent problems was astronomically small, are we to conclude that the situation is hopeless? That no matter what we do, software will always be failure-prone? And if that’s the case, is there any point to it all?

I think it’s fair, and completely realistic, to expect that software will always have bugs and that there will always be ways to make it fail. The increasing complexity of the software we use, and the diversity of the environments in which it’s required to operate, all but guarantee that unforeseen circumstances will arise that expose flaws and bugs in even the most rigorously tested software. That being said, the enormous potential costs associated with these failures demonstrate that it is certainly worth our while to invest heavily in quality assurance and monitoring programs to ensure that such events are as rare as possible.

This includes standard items such as having a continuously monitored and robust testing program, and taking advantage of the many automation tools that provide analysis and detection of potential problems early in the development process, before they are exposed in production situations.

As our reliance on software and technology increases, and the interdependency of our technological infrastructure increases, the global, systemic importance of the quality of those components also increases. The more companies understand this, and invest in appropriate up-front risk-mitigation procedures to minimize software defects, the fewer painful object lessons we will have to learn in real-time to drive this point home.

How do your own testing and QA practices measure up? Are you confident in the quality of software your team/company develops? Are you sure it won’t be you who’s on the firing line, diagnosing critical issues and trying to come up with emergency solutions after the next big storm or power outage?

I Don’t Get It – Why is Custom Code not better Monitored?

The Royal Bank of Scotland Group’s (RBS) recent software glitch is yet another addition to the ERP failures hall of shame. Let us not forget the Victorian Order of Nurses, or the many other calamities in ERP implementation adventures. This particular glitch has led to people not getting paid, deposits not showing up in bank accounts, thousands of invoices having to be generated manually, clogged QA cycles, and more consequences of software failure after go-live, even in tightly controlled development environments.

Why is this still happening?
I know that we can empathize with the RBS team and their customers, but this software glitch seems to reveal much about Britain’s banks and the worldwide consumer banking industry.

For example, consider a brand new 15,000-line procedure in a recent source code scan we did for a significant enhancement to an ERP system. (That it was 15,000 lines long is, in itself, a problem.) This procedure has at least one conditional statement nested eight levels deep, and around 755 high-severity coding violations: case statements without the WHEN OTHERS condition, missing exception handling, unchecked return codes. Moreover, the procedure is poorly documented (14% comment lines) and has a very high cyclomatic complexity value. Difficult to test, difficult to maintain, prone to error.
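
As a rough illustration of how cheaply this kind of measurement can be automated, here is a minimal sketch that reports maximum nesting depth and comment ratio for a source file. It analyzes Python rather than the PL/SQL the procedure above was presumably written in, and it is only meant to show the idea, not to stand in for a real analysis tool.

```python
# Minimal metric sketch: maximum nesting depth and comment ratio for a Python file.

import ast
import sys
import tokenize


def max_nesting(tree: ast.AST) -> int:
    """Deepest nesting of if/for/while/try blocks in the module."""
    def depth(node: ast.AST, current: int) -> int:
        current += isinstance(node, (ast.If, ast.For, ast.While, ast.Try))
        child_depths = [depth(child, current) for child in ast.iter_child_nodes(node)]
        return max([current] + child_depths)
    return depth(tree, 0)


def comment_ratio(path: str) -> float:
    """Fraction of physical lines that carry a comment."""
    with open(path, "rb") as handle:
        tokens = list(tokenize.tokenize(handle.readline))
    comment_lines = {tok.start[0] for tok in tokens if tok.type == tokenize.COMMENT}
    total_lines = max(tok.end[0] for tok in tokens)
    return len(comment_lines) / total_lines


if __name__ == "__main__":
    source_path = sys.argv[1]
    with open(source_path, encoding="utf-8") as handle:
        module = ast.parse(handle.read())
    print(f"max nesting depth: {max_nesting(module)}")
    print(f"comment ratio:     {comment_ratio(source_path):.0%}")
```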

Probability of the developer leaving for greener pastures at exactly the wrong time: high. Probability of this procedure clogging up QA and UAT: also high. Cost of dealing with the issues after unit testing is “done”: probably dramatic. Chances of pushing this implementation towards the hall of shame: not insignificant (unfortunately).

This sloppiness does not need to exist in our industry today. Simple, organization-wide application of automatic source code quality surveillance, coupled with IDE aids that help developers identify and remove issues, goes a long way. Manual code reviews are ineffective and expensive. I have not met any programmer who does not want to do a quality job. All they need is direction and tools. Code quality monitoring tools like CodeExcellence, combined with developer-oriented aids, will help introduce a consistent application of best practices across the board.

Why do you think companies still compromise on their code quality?

*Image credit: StewC via photo pin cc

101 Awesome Computer Programming Quotes [New Ebook]

Have you ever thought about how much software has changed the world in the last 30 years? I know that I am constantly affected by it in my daily life, as I am sure you are as well. To celebrate the significance of software and the amazing thought leaders’ insight and words of wisdom, we put together a new Ebook with 101 Awesome Computer Programming Quotes.

The thing that I love about these 101 Awesome Computer Programming Quotes, especially those from great minds such as Jeff Atwood, Bill Gates, and Edsger W. Dijkstra, is that each of them has said something interesting, in an interesting way, as they ponder the significance of computers and software on our world. Each quote will grab your attention, evoke images in your mind and convey a sense of the speaker’s personality.

To whet your appetite, here is a preview of some of the featured quotes:

  1. “Controlling complexity is the essence of computer programming.” – BRIAN KERNIGHAN (Tweet This Quote)
  2. “Deleted code is debugged code.” – JEFF SICKEL (Tweet This Quote)
  3. “Normal people believe that if it ain’t broke, don’t fix it. Engineers believe that if it ain’t broke, it doesn’t have enough features yet.” – SCOTT ADAMS (Tweet This Quote)
  4. “Software: do you write it like a book, grow it like a plant, accrete it like a pearl, or construct it like a building?” – JEFF ATWOOD (Tweet This Quote)
  5. “I think it’s a new feature. Don’t tell anyone it was an accident.” – LARRY WALL (Tweet This Quote)
  6. “When done well, software is invisible.” – BJARNE STROUSTRUP (Tweet This Quote)
  7. “A documented bug is not a bug; it is a feature.” – JAMES P. MACLENNAN (Tweet This Quote)
  8. “A computer lets you make more mistakes faster than any invention in human history–with the possible exceptions of handguns and tequila.” – MITCH RATCLIFFE (Tweet This Quote)
  9. “Much to the surprise of the builders of the first digital computers, programs written for them usually did not work.” – RODNEY BROOKS (Tweet This Quote)
  10. “Measuring programming progress by lines of code is like measuring aircraft building progress by weight.” – BILL GATES (Tweet This Quote)

You can read the rest of these awesome quotes by downloading the Ebook!

What quotes would you add to the list? Feel free to let me know in the comments section below.

6 Ways to Slice & Dice Software for Better Quality

We’ve already made the case for software quality, and we sure hope you’ve bought in. Still in need of a little refresher on what software quality is? According to the IEEE, software quality is ‘the degree to which a system, component, or process meets specified requirements’ or ‘the degree to which a system, component, or process meets customer or user needs or expectations’. It is that simple.

Alright, now that you undoubtedly know what software quality is, you’re probably asking yourself, “Is there some framework that I can apply to my company’s needs?” The good news is that there is already a resource for you to use. The ISO 9126 standard slices and dices software quality into six characteristics. Fulfilling each of these characteristics improves the level of software quality; all you need to do is address each one as it relates to your software and business needs.

6 Characteristics of Software Quality:

  • Functionality is probably one of the most important characteristics, as it is the very reason for the software to be written: to provide a service or an output. The functions are those that satisfy the stated requirements or customer needs and expectations.
  • Reliability takes into account the maturity (in terms of failures in the system), fault tolerance and recoverability of the software. For example, “How often does the application freeze or crash?” It also covers the software’s ability to tolerate extreme conditions, such as system failures or limited network resources, and maintain its level of performance.
  • Efficiency is a set of variables (speed, space, and network usage) that bears on the relationship between the level of performance of the software and the amount of resources used, under stated conditions. E.g. “How quickly does the application provide the service or how much RAM is taken by the application?”
  • Maintainability is a very important characteristic, as it reflects the software’s ability to be easily maintained, and it is probably the hardest to quantify. Maintainability is a set of attributes that affect the effort needed to make a specific change, such as providing a fix, debugging code, and how quickly a developer can understand the code.
  • Portability is the set of variables that affect the software’s ability to be transferred from one environment to another, such as running on multiple operating systems or platforms (Windows, Linux, Tomcat).
  • Usability is an important factor of software quality and probably the most subjective characteristic. The major attributes that are associated with usability are:
    • how easily the user understands the concept of the software,
    • how easily, efficiently and without making errors does the user use the software,
    • how easily the user learns how to work with the software and how well this knowledge is retained.

All of these characteristics contribute to your overall software quality, but it is about knowing your software and fulfilling the quality characteristics relevant to your application.

In pursuit of software quality
I am sorry to tell you this, but there is no magic wand to wave to increase the quality of your software. You will have to apply a software quality management process to ensure that your software has the quality your business needs. Some of the quality system activities include:

  • Auditing of the projects
  • Review of the quality system
  • Development of standards
  • Quality Application reports for top management

The pursuit of software quality is an ongoing process that requires discipline to ensure “quality” standards are incorporated into your daily activities. By adopting a quality assurance mindset, the impact and benefits to your organization can be dramatic.

Get ready for reduced costs, greater efficiency, and better performance, just to name a few.

5 Popular ‘What People Think I Do’ Memes About Software Programming

It’s Friday, or TGIF, as some call it. I am sure that you’re just counting the hours to the weekend at this point, so why not start your weekend earlier with some “What People Think I Do/What I Really Do” memes? If you are like me, you have seen the “What People Think I Do/What I Really Do” meme series come through your Twitter or Facebook channels, relating to the computer programming or developing profession.

But unlike me, you probably don’t print them off and use them for your cubicle wall art. I realize that I just raised my ‘geek flag’, but Internet memes are super funny! So to help you make it through the rest of the day, I offer you a montage of some of my favorite “What People Think I Do/What I Really Do” memes that made me smile. I hope you get a chuckle or two out of them.

Have you seen any great examples that I haven’t included? Link us to them in the comments below and we might add them to the gallery.

I have credited where I could find the original source. If you know the source for any un-credited images, shout it out in the comments section and I’ll happily add it in.

Notable Examples

Programmer

From: familygeeks.com

Computer Programmer

From: Know Your Meme

Female Programmer

From: Know Your Meme

Web Developer

From: Know Your Meme

IT Software Tester

From: Know Your Meme

Software Quality – The Romantic and The Classicist

Some people think that Zen and the Art of Motorcycle Maintenance (ZAMM) is rubbish; others think it is a life-changing work with profound implications. I read the book eons ago but was reminded by my son that it is about the nature of quality among other things (like taking a 10-year-old on an ill-advised motorcycle trip across the country, for one). While I am most certain that when he wrote ZAMM, Robert Pirsig did not have software quality in mind, it is nevertheless interesting to contemplate his notion of a classical as opposed to a romantic view of quality. Of course, I am not the first, nor will I be the last to quote from this work, and at the risk of completely misrepresenting Pirsig, neither am I the only person to refer to his romantic/classical references to quality as it applies to software.

A romantic, forever living in the moment, will not be concerned about his motorcycle if it is working right now. If the software works, nothing else matters. “Let the user find the defects”. The classical view, however, is concerned about the future as well as the present (and the past). The classicist in a software team will be obsessed with technical debt, because he knows that not checking the oil every time you fuel up is just asking for trouble. The software must be pristine before getting shipped. No technical debt. Period.

We have all known programmers on this spectrum. The purist who insists on documenting everything, testing everything, and following coding standards to the letter. And the “practical” guy who is more concerned about hitting the deadlines and doing whatever he can to make the software work. In some cases, the hero! Obviously, and as always, the practical middle ground rules the day.

This is naturally a caricature. Pirsig challenges the reader about the nature of quality. Is it objective? Not really – there are no instruments to measure quality. Or is it subjective? In which case quality is in the eye of the beholder and maybe even meaningless.

In software development, fortunately, we don’t have to gaze at the navel quite so deeply. With visibility and governance tools, static code analysis, test coverage, and most importantly, an understanding of the real impact of technical debt in an organization, all personality types on this spectrum can be happy and well served. We can all agree that as far as the quality of a piece of software is concerned, the minimum standard is adherence to at least the most critical of quality best practices. Adherence to naming conventions, for instance, may not matter as much as making sure that database statements are well written.

We can help the classicist – who has a deep desire to adhere to all coding standards – by focusing on the most important standards when a project schedule is challenged, and documenting the rest. At least he can rest assured that the small landmines are documented and will be dealt with somehow: intentional technical debt is sometimes necessary to hit market windows, but it is never a good idea if not taken on with eyes wide open. The romantic, on the other hand, can look at the same standards and rest assured that his practical bent is valued, but not at the risk of explosions on go-live.

No matter what you think of Pirsig’s view of quality as a metaphysical concept, when we roll up our sleeves and build complex software, we don’t have to spend too much time navel-gazing and wondering about the ultimate nature of quality.

Sometimes the first step is the most important. Just putting a ‘quality fence’ around the source code, so the team adheres to basic, well-known best practices, will eliminate most of your headaches. It will prevent the romantics from straying too far from best practices while still helping the schedules; it will save the obsessed classicists from their relentless pursuit of an ideal. It will get the job done, eliminate wasted cycles in QA and make the customer happy.
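
As a minimal sketch of what a ‘quality fence’ can look like in practice, here is a small Python check that could run as a pre-commit hook or CI step and fail the build when basic rules are broken. The two rules shown (bare except clauses and over-long lines) are placeholders for whatever standards your team has actually agreed on; a real fence would more likely lean on an established analysis tool than on hand-rolled checks.

```python
# Hypothetical 'quality fence': fail the commit or build on basic rule violations.

import re
import sys

MAX_LINE_LENGTH = 120
BARE_EXCEPT = re.compile(r"^\s*except\s*:")


def check(path: str) -> list[str]:
    """Return a list of rule violations found in one source file."""
    problems = []
    with open(path, encoding="utf-8") as handle:
        for number, line in enumerate(handle, start=1):
            if BARE_EXCEPT.match(line):
                problems.append(f"{path}:{number}: bare 'except:' swallows errors")
            if len(line.rstrip("\n")) > MAX_LINE_LENGTH:
                problems.append(f"{path}:{number}: line exceeds {MAX_LINE_LENGTH} characters")
    return problems


if __name__ == "__main__":
    findings = [problem for path in sys.argv[1:] for problem in check(path)]
    for finding in findings:
        print(finding)
    sys.exit(1 if findings else 0)   # a non-zero exit blocks the commit or build
```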

What do you think? Are you a classicist? A romantic? Or somewhere in the middle?

*Featured image courtesy of Amazon