A recent talk on Big Data illustrated one of the conundrums currently plaguing the industry: it addressed HOW to deal with Big Data, assuming it had already been gathered, but never WHY it was gathered in the first place. Fortunately, an Economist article from last week, “Off the Map”, provided an excellent answer to why data might be collected for a particular purpose, though not why collecting it should be a general best practice.

This questionable practice resembles some of the past fads that have swept through the IT industry. We spend a lot of time stating how to do something without really asking why it is being done. Automated testing tools are one example, as are some of the testing methodologies that arrive from time to time, each promising to do testing better or faster without first asking whether the testing is worth doing at all.

The worst (but simplest) example of over-gathering data was a QA Manager at an insurance company who insisted on something on the order of 50 mandatory fields being filled out for each defect raised. Few of these had default values, and most had to be selected from lists. Certainly the mass of generated data could be mined in many different ways, but the impact on defect creation was substantial and led to the usual attempts to bypass the system.
In the case of Big Data, the attitude is often to collect it ‘because we can’, with no particular end in view. Quality Assurance always takes a more sustainable view and confirms the reasons for any particular action before embedding it into general use. Several weeks ago we mentioned how Quality Assurance supports Green Initiatives; this is directly in line with that sentiment. We do not pile up statistics or data without a reason for needing them and a planned use.