Jennifer Bonine: We are back with our next interview. We have Jason Arbon with us. Jason, I'm so glad you're here.
Jason Arbon: I'm glad I'm here.
Jennifer Bonine: I know. I love talking to Jason. Jason, you are my "futuristic, where everyone's going, what we all should be thinking about" person.
Jason Arbon: Slash crazy person, I guess.
Jennifer Bonine: Yeah. I love it. But I love that. That's what makes me happy about talking to you every time we do this. I'm glad you are here.
Let's talk about AI—for those out there, artificial intelligence, what's happening, machine learning. Some of the stuff you're doing. We can't forget to talk about Appdiff, because when I talk to organizations and companies, and I talk to a lot of them, I always say if you haven't checked this out yet, you need to check this out. Right?
Jason Arbon: Cool. Yeah, yeah.
Jennifer Bonine: It's something people should be looking at. If you've never heard of it, go check it out. Take a look at it. There's a lot of buzz here at the conference from people around what it does and what it is. Wherever you want to start, either AI, what you're talking about, or Appdiff and what that does.
Jason Arbon: Sure. Do you want to talk about AI and software testing?
Jennifer Bonine: Yeah.
Jason Arbon: I think it's the most important thing going on. Also, I'm glad you don't hold me accountable to my predictions.
Jennifer Bonine: I know, right?
Jason Arbon: In the past. Don't watch the old ones.
Jennifer Bonine: I know, right? Go watch those now.
Jason Arbon: AI and software testing. I guess the biggest thing is that people think robots are going to be walking around and bossing us around soon. I don't think that's what's going to happen. The thing testers don't realize is that AI is perfectly suited to replace testing activities. The reason is that, fundamentally, AI is just a way to train software, or to let software train itself. If you have a bunch of input data and a bunch of output data, examples, that's all you need: the input and the output. If you have those things, guess what you can do? Train a machine to do it. That's literally the fundamental thing about machine learning. So what do testers do? Quick quiz. They come up with test inputs.
Jennifer Bonine: And expected results, yeah. Absolutely.
Jason Arbon: What are they documenting all day? They're actually documenting, in their test case databases, the data for how to replace themselves: here are my test inputs and here are the expected outputs. The only question, really, is how much of that data you need to train a bot to test an application. Of all the professions, testing is the one most in need of help from automation, and it's also the ripest for automation with AI. AI is not just this mysterious thing; it's a tool. It's perfectly suited, I think, for software testing, and people are waking up to that idea generally.
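To make that idea concrete, here is a minimal sketch in Python, assuming scikit-learn and using invented example data: a tester's inputs and expected outputs are exactly the labeled examples a supervised model needs. It illustrates the principle Jason is describing, not how Appdiff itself works.

```python
# A minimal sketch (not Appdiff's implementation): treating a tester's
# input/expected-output pairs as supervised training data.
# Assumes scikit-learn; the data below is invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# "Test inputs" and their "expected outputs" -- the kind of data a tester
# already records in a test case database.
test_inputs = [
    "jane@example.com", "bob@test.org", "no-at-sign.com",
    "@missing-local.com", "alice@company.co", "plain text",
]
expected_outputs = ["valid", "valid", "invalid", "invalid", "valid", "invalid"]

# Train a tiny model on those input/output pairs.
model = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(1, 3)),
    LogisticRegression(max_iter=1000),
)
model.fit(test_inputs, expected_outputs)

# The trained model now plays the role of the "bot": given new inputs,
# it predicts what the expected outcome should be.
print(model.predict(["carol@example.net", "still no at sign"]))
```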
Jennifer Bonine: In that space, around Appdiff and what it does, can we talk a little bit about how that works and what it does around those bots?
Jason Arbon: Yeah. With Appdiff, the fundamental idea is that you just upload your app, your binary. You upload it to the cloud and then our robots wake up. They look at it and go, "I'm going to test it this way." They figure out how to test it and then spin up a thousand copies of themselves, or more, in parallel in the cloud. They go through and walk through the application like an end-user would; we've trained it how to walk through the app like a tester and like an end-user. It explores the entire app. It'll find bugs, find crashes, find error dialogs, find all that kind of stuff, and build a quick report. It gets all that data back to you within an hour, on an app it's never seen before, which is pretty awesome from a test automation perspective.
If I can pick on people: you can read Dorothy [Graham]'s website for two hours and then pick a tool. I was just joking with her that I should build a testing website called Antipatterns. Testing antipatterns, just automation antipatterns. Listening to her talk, I think I've realized that the reason I'm doing this AI robot stuff is because I'm such a horrible test automation engineer.
Jennifer Bonine: That you're like, "I just need these robots to do it."
Jason Arbon: I need a machine to do it for me.
Jennifer Bonine: Just go do this for me and you guys will figure it out.
Jason Arbon: Exactly.
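As a rough illustration of what those bots are doing, exploring an app the way a user would and recording what breaks, here is a hypothetical sketch in Python. The toy app model and the random-walk strategy are invented for illustration; this is not Appdiff's actual implementation.

```python
# A rough, hypothetical sketch of exploratory app crawling: walk the app's
# screens like a user, tap things, and record any failures.
import random

# A toy "app": each screen lists the actions it offers and where they lead.
# A None destination represents a crash.
TOY_APP = {
    "home":     {"open_menu": "menu", "search": "results"},
    "menu":     {"settings": "settings", "back": "home"},
    "settings": {"toggle_dark_mode": "settings", "back": "menu"},
    "results":  {"open_item": None, "back": "home"},  # crashes on open_item
}

def explore(app, start="home", steps=50, seed=0):
    """Randomly walk the app, recording visited screens and crashes."""
    rng = random.Random(seed)
    screen, visited, crashes = start, {start}, []
    for _ in range(steps):
        action, nxt = rng.choice(list(app[screen].items()))
        if nxt is None:
            crashes.append((screen, action))
            screen = start          # "relaunch" after a crash
        else:
            screen = nxt
            visited.add(screen)
    return visited, crashes

visited, crashes = explore(TOY_APP)
print("Screens covered:", sorted(visited))
print("Crashes found at:", crashes)
```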
Jennifer Bonine: That is interesting, because I did see something where they've used Watson in the medical field for predicting diagnoses and things like that, where there are patterns, and they've said that in almost all cases the robot, Watson, the AI, does better than the human.
Jason Arbon: It does better than the individual. Exactly. You always get me all worked up.
Jennifer Bonine: I know, I get you all jazzed up and excited.
Jason Arbon: The cool thing that people don't realize is that you have an app team today, and you hope you make a great testing hire. You hope you get a great testing team member, and they are out there.
Jennifer Bonine: Right? I know.
Jason Arbon: The problem is that that person wakes up and they think about test cases. They start thinking, I'm going to add ... like, there's an email field. They add their own email address. Then they add their mom's email address. Then they add an email address without the @ sign. That's a crazy test case. But once that's been done once, that pattern is repeated on every test team in the world. Everyone thinks these are original ideas and original content. Just like Watson trains on the inputs and outputs of hundreds or thousands of radiologists evaluating an X-ray or an MRI, what we're trying to build at Appdiff is a super brain where the test case is written only once. Every time we find something new, we add it, and it can run on every single app.
Jennifer Bonine: Then leverage that knowledge.
Jason Arbon: The beautiful thing is that this virtual brain that we're building right now will be smarter than me. It'll be smarter than any app team that's out there just by definition of having a lot of data. Just like the Watson stuff.
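Here is a minimal sketch of that "write the test case once, run it on every app" idea: a shared catalog of generic test patterns keyed by input type, applied to whatever fields a given app happens to have. The names and data are hypothetical, invented only to illustrate the concept.

```python
# A shared catalog of generic test patterns, written once and reused
# across every app. Entirely hypothetical data, for illustration only.
SHARED_TEST_PATTERNS = {
    "email": [
        ("jane@example.com", "accepted"),
        ("no-at-sign.com", "rejected"),
        ("", "rejected"),
    ],
    "phone": [
        ("+1 555 0100", "accepted"),
        ("not-a-number", "rejected"),
    ],
}

def tests_for_screen(field_types):
    """Given the field types detected on any app's screen, return the
    shared test cases that apply -- the same catalog serves every app."""
    return {field: SHARED_TEST_PATTERNS.get(field, []) for field in field_types}

# Two different apps reuse the exact same catalog.
print(tests_for_screen(["email"]))            # a signup screen
print(tests_for_screen(["email", "phone"]))   # a checkout screen
```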
Jennifer Bonine: Right. That's what they've seen. If you look at the facts and results, it is smarter. I think sometimes we get nervous, though. Do you ever see people who, when you demo this for companies and show it to them, go, "But I want to tell it what to do. I want to be in charge"?
Jason Arbon: I should've talked to you before we did the soft launch. I was naive, by the way. I just thought, you build this thing and it does all these tests, and we do sixty thousand test cases for you. And they go, "What about the fifteen that are in my spreadsheet?" They absolutely want control over that. That's why what we did over the holidays was add the ability to do exactly that: if you have your magical set of app-specific test cases, you can go in and just click on the screenshot of whatever page you want in your application, then type in the input value you want. If you want to say, "I want to verify this value," you just type it in over here. Super easy. The funny thing is that testers don't realize they can't get around it. The bots will generate thousands or tens of thousands of test cases, which is a lot more than what the average tester delivers, but they still want their twenty or thirty or hundred test cases in there first.
That's actually what we're focusing on a lot now: almost going backwards, which is exhaustively testing the application and then letting people come in and manually add their specific test cases to the bot.
Jennifer Bonine: Which is funny. We see a pattern with technology today, in general: technology moves faster than we're able to adapt. The holdback is not that we can't do it. It's not that it's not possible with the technology, with the intelligence, with the architecture. With everything that we have, we have the capability, but we're not ready.
Jason Arbon: Right. People are intimidated by it, especially because we call it AI. It's intimidating, and it's not traditional software programming. A lot of testers are intimidated by coding, and AI is even more intimidating. Ironically, though, testers shouldn't be, because what they do for a profession is come up with a bunch of inputs and a bunch of outputs. Guess what you do? You put them into this black box called a machine learning tool and you train it. Then out comes a function that does what your testing job is.
There was a guy in my class yesterday talking about doing AI for performance. He just looks at all the performance data for all the machines, the servers, in his firm. He puts that into the tool and runs k-means clustering. It sounds intimidating, but all it does is group things. If you look at all the performance of all the servers ...
Jennifer Bonine: Which makes sense, clustering.
Jason Arbon: Exactly. Exactly.
Jennifer Bonine: I get it.
Jason Arbon: Like See's Candy or something. When you cluster them, when you divide them into two groups, it'll automatically sort out the servers that are behaving similarly, which are as expected. Then the couple of nutty ones that are flaky or super slow just stand out. You don't have to look through the whole list to see which ones are bad; it can automatically partition that. There are a lot of examples, I think, of applying AI to quality and testing. They're starting to emerge this year, and I think next year there will be a lot more.
It's a lot of hard work. There's nothing really magic with AI. It's just you have to sit down, learn it, and apply it like any other tool.
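Here is a minimal sketch of the k-means approach described above, assuming scikit-learn and NumPy and using invented server metrics: cluster the fleet into two groups and flag the smaller, outlying one.

```python
# Cluster server performance samples into two groups so the slow or flaky
# machines stand out. Numbers are invented; assumes scikit-learn and NumPy.
import numpy as np
from sklearn.cluster import KMeans

# Average response time (ms) and error rate (%) per server.
servers = ["web-1", "web-2", "web-3", "web-4", "web-5", "web-6"]
metrics = np.array([
    [120, 0.1], [135, 0.2], [128, 0.1], [118, 0.3],   # behaving similarly
    [940, 4.5], [870, 6.0],                            # the outliers
])

# Divide the fleet into two groups; the smaller group is the one to inspect.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(metrics)
outlier_label = np.argmin(np.bincount(labels))
flagged = [name for name, label in zip(servers, labels) if label == outlier_label]
print("Servers worth a closer look:", flagged)
```

In practice you would normalize the metrics first so one feature doesn't dominate the distance calculation, but the grouping idea is the same.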
Jennifer Bonine: Maybe it's one of those terms for folks out there who are intimidated by it and maybe feel like they're not ready for it, or think, "Oh my gosh, I'm just going to get out of this field before it comes."
Jason Arbon: People were asking me that, by the way. Literally people in the hallways like, "What skills should I learn when I retrain?" I'm like, not yet. Relax.
Jennifer Bonine: It's not happening tomorrow, so no panic. We want no math panic out there right now, anyway. Where should they go just to get started, for resources, so they're not overwhelmed by it?
Jason Arbon: The best way to do it is to go to Appdiff and then run your credit card.
Jennifer Bonine: Start using it.
Jason Arbon: I'm not very commercial at all. The best thing is that there's actually a series by a guy named Andrew [Ng], whose last name I never say right, at Stanford. If you type in "Stanford Andrew AI," it's a beautiful series that takes you from the basics. There's some math he throws on the whiteboard; don't be intimidated by that, just listen to what he's saying. It's beautifully approachable. As a tester, without knowing how to implement it, you'll understand how to use it. The best one of all is a talk by him called "AI Is the New Electricity," I think. It's a beautiful hour-and-a-half video. Just sit down and watch that and you'll understand how approachable the space is. You'll really understand what's going on with AI and what it can do.
Don't listen to me. Who am I to talk about AI? I just use it as a tool. He's actually a Stanford prof that started the Google Brain project. I believe he started that and he worked on their AI stuff. He knows what he's talking about. I don't.
Jennifer Bonine: Amazing.
Jason Arbon: I just use it as a tool.
Jennifer Bonine: Hopefully we've ignited at least some interest in going and checking it out, and not just being shy and wary of AI as a term. Go take a look at it. Check out some of the things Jason said. If they want to get in touch with you ... we already ran out of time. Can you believe it?
Jason Arbon: Just Appdiff.
Jennifer Bonine: Appdiff.
Jason Arbon: Go there and I'm happy to chat with anyone. I'm going to be really crazy here: if you want to email me, the best way to get hold of me is [email protected]. I'm happy to jump on a Skype call with anybody who's even half interested in this stuff and explain it, talk to them about it.
Jennifer Bonine: Awesome.
Jason Arbon: The last thing I want to say is that during the AI class yesterday, there were actually a few empty seats in the back. I was out in the foyer trying to get people to come in. To the first woman I talked to, I said, "How are you going to miss out on the future of testing? What other class could be more interesting? What could actually be better than that?" She giggled a little bit and then said, "Are you giving it?" I said yeah. Then she laughed more and kept walking.
Jennifer Bonine: Oh, no.
Jason Arbon: I don't know. Maybe the problem is mine. I think that AI is going to be a real topic and a real tool for testers moving forward.
Jennifer Bonine: Oh, no. Yeah, I agree. Check that out, Jason's talk. Jason, thank you for being here with me.
Jason Arbon: Thanks, Jennifer. I will see you again.
Jennifer Bonine: I hope we see you again.
Jason Arbon: Yeah. Maybe we'll have dinner or something.
Jennifer Bonine: Yeah. Sounds perfect.
Jason Arbon: All right. Cool. My wife is going to see that. Thanks, Jennifer.
Jason Arbon is the CEO of Appdiff, which is redefining how enterprises develop, test, and ship mobile apps with zero code and zero setup required. He was formerly the director of engineering and product at Applause.com/uTest.com, where he led product strategy to deliver crowdsourced testing via more than 250,000 community members and created the app store data analytics service. Jason previously held engineering leadership roles at Google and Microsoft and coauthored How Google Tests Software and App Quality: Secrets for Agile App Teams.