Subject matter experts (SMEs) serve important roles on a project and are especially pivotal during the testing phase. In this week's column, Dion Johnson explores how SMEs positively and negatively affect testing and what you can do to make sure you have the right number of SMEs on your testing team.
To SME or not to SME, that is the question. This line, borrowed and altered from a famous Shakespearean work, is as poetic and perplexing as its original incarnation from centuries ago. The original question, "To be or not to be," was about human existence, and it hardly seemed to need asking. Why wouldn't someone want to 'be'? Why wouldn't someone want to exist?
Our software testing incarnation of Shakespeare's query, "To SME or not to SME," refers to the existence of subject matter experts (SMEs). This question, too, seems unnecessary, because it's pretty apparent that being a SME is a good thing. Why wouldn't someone want to be a SME on the application under test (AUT) or on the tools used to implement and test that application?
To SME
Effectively testing an application typically requires a certain degree of application and environmental expertise. For example, I worked on a project that combined Web, Web services, and Java to create a front-end application that utilized a database sitting on a UNIX server. This system also interfaced with several applications not directly accessible during testing; therefore, the test team had to use and help maintain Java-based simulators to test the primary system's interfaces and outputs. The testers clearly needed a wide range of skills.

By the time I joined the team, there were already a few SMEs in various areas who were also responsible for test development and execution. I began my tenure on the team by reading the vague requirements documents, then ramping up on the simulators, tools, and technology pertinent to the system, as well as the existing test bed. What I found interesting about the existing test bed was that, in contrast to the high degree of environmental complexity, the tests themselves were remarkably simple and mainly exercised the most basic scenarios. The team was full of very bright people, but they seemed to spend so much time and effort on being SMEs on the application and its associated tools, technologies, and systems that little effort went into truly probing the various application components. This is a classic example of not being able to see the forest for the "SMEs."
Not to SME
Too much focus on becoming a SME has its problems, but too little focus on gathering knowledge can be equally, if not more, problematic. I worked on a project on which testers, in order to gain greater insight into the system, sought out members of the development and database teams to help fill in gaps left by the existing project documentation. The project managers concluded, however, that such information gathering improperly influenced the development and implementation of the application tests, so they completely cut off direct communication between the test team and the development and database teams. This "brilliant" move, while well intentioned, brought testing to a crawl. Given the vague documentation and the lack of detailed information from the user on how the system should operate, there were few avenues for sufficiently increasing system knowledge and expertise other than the teams that had built the system. With direct communication severed, test team requests for information regarding specific system rules were often ignored, overlooked, misunderstood, or seriously delayed. Activities that should have taken minutes often took days or even weeks.

The Happy SME-dium
Since too much focus on being a SME is bad and too little effort is just as bad, the answer obviously falls somewhere in between. A happy medium is to introduce a greater separation of duties. In general, I find it is a bad idea to try to stifle a resource's quest for knowledge and expertise. At the same time, it is good to rein in rogue resources who either unintentionally veer off course or secretly wish to perform a completely different job function from the one they've been assigned.

One way to provide a greater separation of duties is to assign one or more resources whose sole job is to function as a SME in one or more areas. For example, have a resource who is solely responsible for being the AUT SME. The test manager is often best in this role, because he can answer questions about overall system functionality in between all of the meetings he has to attend.

Another way to provide separation of duties is to form an independent automated test team. This is an ideal solution, but teams often struggle to make that leap because they find it difficult to justify not having all of their testers actively testing new functionality. In complex environments, however, this move would be invaluable. Made up of one or more members, the automated test team would be solely responsible for tool support and administration. All questions related to the development, configuration, and implementation of tools used for testing (including simulators, functional test tools, test management tools, terminal emulators, database tools, etc.) would be the responsibility of the automated test team. If everyone in the test group is interested in being a SME, perhaps the team can rotate resources in and out of the SME position.
Greater separation of duties ensures that no one is punished for attempting to gain greater knowledge and expertise, while each testing activity still yields more effective results.