Mobile testers typically use the term “Mobile Testing” to mean testing the functionality of a mobile application on a phone or tablet. But have you ever thought about what happens when you perform a function while the battery is charging? Let’s stop and think about “Mobile Testing” in a different way. Mobile Testing encompasses mobile application testing, mobile device testing, mobile phone testing, and mobile system testing. So let’s make some distinctions about what each means.
Mobile Application Testing is testing the application on a mobile device. What kind of testing does this entail? Is it strictly GUI testing, making sure the application is functional according to the requirements? Or is the expectation to test the application’s functional performance on a mobile phone? Too often, which testing is to be done is never clarified between you and your stakeholders. Software testers need guidance from stakeholders about expectations. Requirements help, but requirements for mobile applications usually consider only the functionality of the software application, without considering how the application works within an entire system.
Mobile Device Testing is testing the hardware and operating system. Does the operating system install? Does the device power on? Do the LED lights work as expected? Does the battery charge when the AC adapter is plugged into the phone? It helps to distinguish the type of mobile device: a completely independent device communicating via a cell network or even radio waves. Examples include tablets, mobile phones, PDMs, monitors (like what you see in hospital rooms), etc.
Mobile Phone Testing is doing any testing on a mobile phone. This kind of testing could mean performance testing, network communication testing, GUI functional testing, or downloading/installation tests for applications. When people discuss Mobile Testing, oftentimes the testing refers to a mobile application that works on a mobile phone, and other types of mobile devices are disregarded.
Mobile System Testing is a combination of all the testing types described above. Testing a mobile application on a mobile device could entail tests such as verifying the application retains clock information after the device’s battery was drained and recharged. Here, the test case covers operating system functionality (the clock, the charger) and then whether the software receives the correct information based on what the operating system reports.
Ever do any testing of software behavior while the mobile device is charging? Ever thought to do so? Why would changing a hardware condition affect the behavior of the software? Functional application testing is important, but the application is part of an entire system, not a singular entity. The application is completely dependent on the entire system in order to function within it. Temperature, memory, CPU speed, and database searches are all affected when a device is charging, yet these tests are routinely overlooked.
If you thought testing software behavior while charging the device was not part of a software tester’s job, well, surprise: anything which could affect software behavior is a test to be considered. Examples of the kinds of results you are looking for: uninterrupted network communication, optimal internal database searches, and memory performance, among others.
To understand the connection between charging the device and software behavior, the software tester should be educated in more than the functionality of the GUI belonging to the application under test. What I mean is: understand what kinds of connections your software application makes to your operating system and to other software applications. Does your software application utilize a clock or record time? If so, then your application communicates with the operating system. Basically, think of it as two applications now being under test. Your application’s functions and your application’s interactions with another application are two different software test types and two different thought processes for test case generation. If your application stores data, manipulates data, repackages data, and then sends data out to another device or server, you’re dealing with more than one application, so your testing must be more like system integration testing than singular application testing.
Testing functionality while charging the device and observing the effects on software behavior is often overlooked. For me, discovering that engaging functionality with a low battery level changed the way the software behaved was the turning point. What evolved was a series of tests based on various battery levels and conditions of software application engagement.
One particular test began when I noticed how hot my device felt to the touch after charging from a completely drained battery. I also noticed that, with the heat, my software started to behave erratically. The data my device had analyzed and repackaged for network travel appeared unrecognizable once it reached its destination. Sometimes data would never arrive at all, because the device itself powered down completely and did not recover.
My battery level variables were 0%, 10%, 20%, 50%, 60%, 75%, and 90%. Each of these charge levels also had to be tested under several conditions: software application installed but not running; software application installed and running, not engaging directly with the GUI, with CPU speed low; software application installed and running with the GUI engaged and CPU speed at its highest rate; and no software application installed but the operating system installed and Windows Explorer engaged.
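As an illustration only, a test matrix like this can be enumerated by crossing the battery levels with the device conditions. The condition labels below are my paraphrase of the conditions above, not official test-case names:

```python
from itertools import product

# Battery charge levels (percent) taken from the test plan above
battery_levels = [0, 10, 20, 50, 60, 75, 90]

# Device/application conditions to pair with each charge level
conditions = [
    "app installed, not running",
    "app running, GUI idle, CPU speed low",
    "app running, GUI engaged, CPU speed at maximum",
    "no app installed, OS only, Windows Explorer engaged",
]

# Every (level, condition) pair becomes one test case to execute and log
test_matrix = list(product(battery_levels, conditions))
print(len(test_matrix))  # 7 levels x 4 conditions = 28 test cases
```

Enumerating the pairs this way makes it harder to silently skip a combination, which is exactly where the interesting defects tend to hide.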
I began tests by checking the clock and the LED lights, making sure all worked as expected, and would use the application to create communication both with and without the GUI being engaged. What I found was that when the application was engaged, the CPU speed increased and created even more heat. When the battery heats up to 70°C, or even 65°C, the cell modem can be damaged, as can other components contained within the device itself. It’s a small, contained space, and there is very little opportunity for the heat to dissipate.
With these tests, we see the direct effect hardware has on software and, in turn, how software affects hardware behavior. This is why it’s critical for a software tester to understand how hardware engages with its operating system, which in turn affects software behavior, and vice versa. Software testers must create test cases that go beyond the software application itself.
In our software we put in a check to report the battery temperature, as well as the CPU speed, every 5 minutes. Between my operating system’s log and the application’s log, I was able to determine what was happening and when. I could use varying temperatures to conduct tests and see what happened with the network communication. I could also see how fast the CPU was running when storing or retrieving data, all the while keeping watch on the temperature.
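A minimal sketch of that kind of periodic check, assuming hypothetical sensor hooks (`read_battery_temp_c` and `read_cpu_speed_mhz` are stand-ins; the real hooks are device-specific and not described in this article):

```python
import time

def read_battery_temp_c():
    """Hypothetical hook into the device's battery temperature sensor."""
    return 42.0  # placeholder value for illustration only

def read_cpu_speed_mhz():
    """Hypothetical hook into the CPU frequency driver."""
    return 600.0  # placeholder value for illustration only

def log_device_stats(interval_s=300, samples=3, sleep=time.sleep):
    """Record (timestamp, battery temp, CPU speed) at a fixed interval.

    interval_s=300 matches the 5-minute reporting cadence above.
    The sleep function is injectable so the loop can be tested instantly.
    """
    log = []
    for _ in range(samples):
        log.append((time.time(), read_battery_temp_c(), read_cpu_speed_mhz()))
        sleep(interval_s)
    return log
```

Writing both readings into one log with a shared timestamp is what makes it possible to correlate temperature spikes with CPU activity after the fact, instead of reconciling two separate logs by hand.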
The way we controlled the temperature was to take the recorded temperature and apply an If/Then condition: if the recorded temperature exceeded 60°C, then the cell modem would shut down and not generate further heat by trying to transmit data. Of course, the software application needed a way to turn the cell modem back on, so an additional If/Then condition was added: if the cell modem is off AND the recorded temperature is below 55°C, then turn the cell modem back on so communication can continue. These conditions evolved into actual requirements, which of course became part of my regression testing.
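Those two If/Then conditions form a simple hysteresis loop: shut off above 60°C, turn back on only below 55°C, and hold the current state in the 55–60°C band so the modem doesn’t rapidly flip on and off. A minimal sketch of that logic (the function name and polling style are my own, not from the actual product):

```python
MODEM_OFF_ABOVE_C = 60.0  # shut the cell modem down above this temperature
MODEM_ON_BELOW_C = 55.0   # only turn it back on once cooled below this

def modem_should_be_on(temp_c, modem_on):
    """Apply the two If/Then rules to one temperature reading.

    Returns the modem's next state given the current reading and state.
    """
    if modem_on and temp_c > MODEM_OFF_ABOVE_C:
        return False  # too hot: stop transmitting so no further heat is generated
    if not modem_on and temp_c < MODEM_ON_BELOW_C:
        return True   # cooled down: resume communication
    return modem_on   # between thresholds: keep current state (hysteresis)
```

The 5°C gap between the two thresholds is the important design choice: with a single cutoff, a reading hovering near it would toggle the modem constantly, and each power-up attempt would itself generate heat.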
Through these experiences, can you see how important it is to conduct tests where charging a mobile device affects software behavior? But how do you develop such tests? How do you decide which tests are appropriate to include in your testing? Your original requirements contain nothing specific about software behavior while charging the mobile device. The only expectation is that charging the device does not interrupt the software application’s functional behavior.
Ah, there’s your requirement: charging the mobile device (a hardware condition) shall not interrupt the software application’s functional behavior. That requirement is written very generically, and further, more specific requirements should be written. But this hardware condition, and others like it, are often overlooked.
The tests you develop can generate new, specific requirements. A bit backwards, yes, but again, those who develop requirements often do not realize how hardware conditions can affect software behavior. Remember, mobile system testing is more than just testing the functions of a software application. Requirements writing usually focuses on the software application itself and not the entire system.
When you don’t have specific requirements for how the software should function while charging, why not take all the functional requirements and add “while the device is charging” at the end? Of course, you’ve got to add the test cases and make sure you add them to the regression test suite; with every new release of software, these tests will need to be re-run. You confirm sending data will continue even if a phone call occurs … while charging. What about searching for data when a phone call comes in and you’re charging your device? What happens? What should happen?
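As a sketch of that idea, the “while charging” variants can be derived mechanically from a list of functional requirements. The requirement texts below are paraphrased from the examples in this article, not from any real requirements document:

```python
# Functional requirements, paraphrased from the scenarios discussed above
functional_requirements = [
    "Sending data continues when a phone call occurs",
    "Searching for data succeeds when a phone call comes in",
    "The application retains clock information after a battery drain",
]

# Derive one charging variant per requirement; each becomes a regression test
charging_tests = [f"{req} while the device is charging"
                  for req in functional_requirements]

for name in charging_tests:
    print(name)
```

Generating the variants from the requirement list, rather than writing them by hand, guarantees that a new functional requirement automatically picks up its charging counterpart.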
Know how your application interacts with other applications on your phone and when connecting to another device or server. This knowledge and your testing give you the ability to establish benchmarks of expected functional behavior. These benchmarks can then be compared against the results when charging the device and performing the same functions. Should you only consider charging the device and performing functions from the application? Oh, but there’s more…
You might want to consider tests where you have an almost fully charged, but still charging, battery and engage the software application. Review the behavior: does it meet expectations? Now change the variable of how much charge your device’s battery has. Try engaging your application with a 50% charged battery, and also make sure to check how the software application behaves on a low battery charge. And let’s throw another test in there: what happens to your application while charging from a dead battery? Do not let this test go by, as you will find some very interesting results. Be sure to try various functions under this condition.
Have you considered the rate of charge, or how long it takes to charge your device from a dead battery? Why is this important? You’re not testing the device itself, right? So why even consider this test? Generally, charging a device from a dead battery to a complete charge should take about four hours. If your device charges faster and you engage the applications, you speed up the CPU. When you speed up the CPU or processor, you create more heat. And as I discussed, heat can cause physical damage as well as unexpected or bad behavior in your application. If the battery charges too fast, say under two hours, the heat generated can reach temperatures dangerous for the software to be engaged. Data may be lost, or never received or sent; the software could start generating exception errors. Data could be compromised, and modules working with other modules could be damaged in ways where the loss of functionality isn’t immediately apparent.
If you’re testing a software application which uses various drivers for the LED lights, the clock, Bluetooth, and more, you should consider adding these variables to the hardware condition of the charging tests. Does your application utilize LED lights for notification purposes? Does your application record timing and therefore access the device’s clock? Does the Bluetooth driver stop working at any point while charging? The functionality of each of these drivers needs to be tested with the software application while charging the device.
Know what to expect of your software’s behavior. Know whether your application accesses drivers, and understand what the behavior should be while accessing those drivers. Include the hardware condition of charging the device in all your functional tests. These types of tests and results are very different from testing applications on the desktop, which is why we mobile software testers must consider test cases going beyond GUI and functional tests.
About the Author
JeanAnn Harrison: Formerly a Lead Quality Assurance Engineer at CardioNet, Inc., providing ambulatory cardiac monitoring service for physicians’ patients. Jean Ann was the software quality assurance lead on the next-generation mobile heart monitor device and has been the lead on all embedded software testing at CardioNet. Jean Ann’s background also includes a variety of projects involving large, multi-configured applications for client/server, web, Unix, and mainframe systems. Her experience is primarily manual testing with occasional automation and a strong focus on building quality into design. Constantly working to perfect her craft, Jean Ann attends and presents at conferences, takes various courses, networks, and actively participates in software testing forums. She believes software testing takes daily practice to contribute to a project’s success. Jean Ann currently works as a Project Manager for Project Realms, Inc.