Foreword: Introduction to Artificial Intelligence for Security Professionals



Available at Barnes & Noble
Foreword by Stuart McClure
My first exposure to applying a science to computers came at the University of Colorado, Boulder, where, from 1987–1991, I studied Psychology, Philosophy, and Computer Science Applications. As part of the Computer Science program, we studied Statistics and how to program a computer to do what we as humans wanted it to do. I remember the pure euphoria of controlling the machine with programming languages, and I was in love.
In those computer science classes we were exposed to Alan Turing and the quintessential “Turing Test.” The test is simple: ask two “people” (one being a computer) a set of written questions, and use the responses to make a determination. If the computer is indistinguishable from the human, then it has “passed” the test. This concept intrigued me. Could a computer be just as natural as a human in its answers, actions, and thoughts? I always thought, Why not?
Flash forward to 2010, two years after rejoining a tier 1 antivirus company. I was put on the road helping to explain our roadmap and vision for the future. Unfortunately, every conversation was the same one I had been having for over twenty years: we need to get faster at detecting malware and cyberattacks. Faster, we kept saying. So instead of monthly signature updates, we would strive for weekly updates. And instead of weekly, we would fantasize about daily signature updates. But despite millions of dollars driving toward faster, we realized that there is no such thing as fast enough. The bad guys will always be faster. So what if we could leapfrog them? What if we could actually predict what they would do before they did it?
Since 2004, I had been asked quite regularly on the road, “Stuart, what do you run on your computer to protect yourself?” Because I spent much of my 2000s as a senior executive inside a global antivirus company, people always expected me to say, “Well of course, I use the products from the company I work for.” Instead, I couldn’t lie. I didn’t use any of their products. Why? Because I didn’t trust them. I was old school. I only trusted my own decision making on what was bad and good.
So when I finally left that antivirus company, I asked myself, “Why couldn’t I train a computer to think like me—just like a security professional who knows what is bad and good? Rather than rely on humans to build signatures of the past, couldn’t we learn from the past so well that we could eliminate the need for signatures, finally predicting attacks and preventing them in real time?”
And so Cylance was born.
My Chief Scientist, Ryan Permeh, and I set off on this crazy and formidable journey to completely usurp the powers that be and rock the boat of the establishment—to apply math and science to a field that had largely failed to adopt them in any meaningful way. So with the outstanding and brilliant Cylance Data Science team, we achieved our goal: protect every computer, user, and thing under the sun with artificial intelligence to predict and prevent cyberattacks.
While many books have been written about artificial intelligence and machine learning over the years, very few have offered a down-to-earth, practical guide from a purely cybersecurity perspective. What the Cylance Data Science Team offers in these pages is real-world, approachable instruction in how anyone in cybersecurity can apply machine learning to the problem they struggle with every day: hackers.
So begin your journey, and always remember: trust yourself, and test for yourself.