ITSPmagazine coverage, podcasts, webcasts, articles, and all our happenings during RSA Conference 2019 will be made possible by the generosity of our sponsors. We are ever so grateful for your support.
Have a story to share and want to join us for the journey? We invite you to discover the benefit of the full coverage sponsorship and let us know if you are interested in joining us for our adventures. We look forward to another exciting conference.
By Fred Wilmot
I enjoy finding gems at conferences: talented new voices and innovative, if unpolished, ideas. RSA Conference 2019 is a dissemination point for executives, policy makers, and practitioners collecting ideas, best practices, and validated methods to take home and use in their own houses.
Amid the deluge of ML/AI offerings in the marketplace, it's hard to tell which best practices are valid, and where to start if you are any of the three personas mentioned above.
Whether you want to understand how ML/AI is used successfully in the market, learn techniques and methods that amplify your ability to respond or gain insight, or justify a spend that doesn't seem to fit the budget, the following sessions can improve your 2019 capabilities and advance your organization's maturity, wherever you stand in the street fight we are all in.
From security leadership and GRC/policy focus on model governance and validation, to the street fighters trying to make a difference, have a look at my itinerary picks for RSAC 2019:
THEME: Security Leadership
How ML/AI may make a difference, jump the tracks, or serve as a force multiplier.
Tuesday, Mar 05 | 11:00 A.M. - 11:50 A.M.
While CISOs have great opportunities to share stories 'from the trenches', much of the interest here for leaders lies in hedged ML/AI bets and 'what works/what does not'. Bring some use cases of your own as the show starts, to validate in your mind what is relevant or important.
Speaker: Grant Bourzikas, Chief Information Security Officer (CISO) & Vice President of McAfee Labs Operations, McAfee
Tuesday, Mar 05 | 01:00 P.M. - 01:50 P.M.
Robots will leverage AI-bot swarming intelligence to team with virtual AI-bots and humans. A robot here is a mobile pod of IoT sensors and video, enabling roaming telepresence and monitoring as a member of your security team. The relevance to control systems, unmanaged environments, and compute capability for signals analysis could be useful to extend to physical threats and OT.
Speaker: Thomas Caldwell, President, CTO, League of AI
Wednesday, Mar 06 | 01:30 P.M. - 02:20 P.M.
As part of the semi-annual Information Security Risk Reviews at Microsoft, the team has devised unique use cases of machine learning utilization within the cybersecurity risk assessment domain. This talk will focus on these machine learning models and techniques, with an emphasis on the practical scenarios and case studies.
Speaker: Bugra Karabey, Senior Risk Manager, Microsoft
THEME: Street Fighters
Models, methodologies, applications and extensions to what I am doing; innovation and revelation.
Tuesday, Mar 05 | 01:00 P.M. - 01:50 P.M.
There is a popular, and accurate, narrative that AI and ML are better suited to offensive than defensive applications, though hunting for examples can feel like an Ahab story: say, finding the smallest number of systems needed to compromise specific application or system data. As we say, offense informs defense. Check out what skills show up here.
Speaker: Etienne Greeff, CTO, SecureData Europe
Speaker: Wicus Ross, Lead Researcher, SecureData Labs
Friday, Mar 08 | 09:50 A.M. - 10:40 A.M.
Blackbox interpretability is an emerging field of study in AI and ML. ML often incorporates millions of features, making model decision interpretation nearly impossible. Interpretability promises answers to those questions, but the bad news is that attackers can also use it to identify weak spots in your defenses. Learn how you can use it to identify when attackers have discovered your secret sauce.
Speaker: Greg Ellison, Data Scientist, Microsoft
Speaker: Holly Stewart, Principal Research Manager, Microsoft
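To make "black-box interpretability" a little more concrete, here is a minimal sketch of one classic probe, permutation importance: shuffle one feature at a time and measure how much the model's accuracy drops. This is my own illustration under toy assumptions, not material from the session.

```python
import random

def permutation_importance(model_fn, X, y, n_repeats=5, seed=0):
    """Black-box probe: shuffle one feature column at a time and
    measure the drop in accuracy; large drops mark important features."""
    rng = random.Random(seed)

    def acc(rows):
        preds = model_fn(rows)
        return sum(p == t for p, t in zip(preds, y)) / len(y)

    base = acc(X)
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)                      # destroy feature j's signal
            shuffled = [row[:j] + [col[i]] + row[j + 1:]
                        for i, row in enumerate(X)]
            drops.append(base - acc(shuffled))
        importances.append(sum(drops) / n_repeats)
    return importances

# Toy "model": decides on feature 0 only and ignores feature 1.
model = lambda rows: [1 if row[0] > 0 else 0 for row in rows]
data_rng = random.Random(1)
X = [[data_rng.gauss(0, 1), data_rng.gauss(0, 1)] for _ in range(500)]
y = model(X)
imp = permutation_importance(model, X, y)   # imp[0] large, imp[1] near zero
```

The same probe works against a model you don't own, which is exactly the double edge the session describes: defenders and attackers alike can map which features drive the decisions.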
Wednesday, Mar 06 | 08:00 A.M. - 08:50 A.M.
Many organizations have adopted machine learning and data analytics to help them identify security anomalies. However, mere identification isn’t good enough in a world where Petya and other modern attacks can take down 15,000 servers in a single organization in under two minutes. Organizations need to look at model-driven security to respond to emerging threats and attacks.
Speaker: Kurt Lieber, VP, CISO IT Infrastructure, Aetna
Tuesday, Mar 05 | 03:40 P.M. - 04:30 P.M.
We must accept that we don’t know how AI systems make decisions, and that embedded Big Data bias is often the root of error in security systems. Machine learning relies on datasets fed into algorithms to execute the system’s learning. This talk will trace the various paths that sources of bias take through AI-powered systems and examine a proposed framework to measure and eliminate bias in data and algorithms.
Speaker: Clarence Chio, Co-founder, CTO, Unit21
Speaker: Winn Schwartau, Founder, Winn Schwartau, LLC
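As a flavor of what "measuring bias in data" can look like, here is a hypothetical sketch of one common check, the disparate impact ratio between two groups. This is my own illustration, not the framework the speakers propose; the data and the 0.8 "four-fifths" threshold are standard textbook conventions.

```python
def disparate_impact(labels, groups, favorable=1):
    """Ratio of favorable-outcome rates, group 1 vs. group 0.
    Ratios well below 1.0 suggest the dataset (or a model trained
    on it) is skewed against group 1; the 'four-fifths rule'
    conventionally flags ratios under 0.8."""
    def rate(g):
        in_group = [y for y, grp in zip(labels, groups) if grp == g]
        return sum(1 for y in in_group if y == favorable) / len(in_group)
    return rate(1) / rate(0)

# Toy data: group 1 receives the favorable label half as often.
y = [1] * 80 + [0] * 20 + [1] * 40 + [0] * 60
g = [0] * 100 + [1] * 100
ratio = disparate_impact(y, g)   # 0.40 / 0.80 = 0.5, below the 0.8 flag
```

Checks like this are cheap to run on training data before any model exists, which is why measuring bias upstream of the algorithm matters.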
Friday, Mar 08 | 08:30 A.M. - 09:20 A.M.
Machine learning has been widely discussed in many areas. However, there is not much discussion of using it for intrusion detection in large-scale enterprise networks. This talk will propose a method based on statistical learning. The main idea is to identify unknown threats by modeling behaviors at different attack stages, along with some tricks for pre-filtering data and post-correlating alarms.
Speaker: Tao Zhou, Senior Staff Algorithm Engineer, Alibaba Group
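The behavioral-modeling idea can be sketched in a few lines. The following is my own toy illustration, not Zhou's method: score each host's current activity in standard deviations from its own historical baseline, and flag large outliers.

```python
import statistics

def anomaly_scores(history, current):
    """Score each entity's current value in standard deviations from
    its historical baseline (a toy statistical-learning probe)."""
    scores = {}
    for entity, values in history.items():
        mu = statistics.mean(values)
        sd = statistics.pstdev(values) or 1.0   # avoid division by zero
        scores[entity] = (current.get(entity, 0) - mu) / sd
    return scores

# Toy baseline: daily outbound connections per host over a week.
history = {"host-a": [10, 12, 11, 9, 10, 11, 12],
           "host-b": [100, 98, 103, 101, 99, 102, 100]}
current = {"host-a": 95, "host-b": 101}

scores = anomaly_scores(history, current)
flagged = [h for h, s in scores.items() if s > 3]   # host-a only
```

Real deployments layer the pre-filtering and alarm correlation the abstract mentions on top of a per-entity score like this one, so a single noisy host doesn't drown the queue.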
Analytics, Intelligence & Response | Machine Learning | Classroom | Sponsor Special Topics
As your workloads move to the cloud and your perimeter expands, cybercriminals are ready to exploit the new vulnerabilities that these trends expose. How can your overloaded security teams keep pace with these innovations in cybercrime? This session will explore how to effectively leverage machine learning and automation to accelerate and simplify incident response.
Speaker: Nick Bilogorskiy, Cybersecurity Strategist, Juniper Networks
Tuesday, Mar 05 | 02:20 P.M. - 03:10 P.M.
Artificial intelligence (AI) models are built with a type of machine learning called deep neural networks (DNNs), loosely modeled on neurons in the human brain. DNNs make machines capable of mimicking human behaviors like decision making, reasoning and problem solving. This presentation will discuss the security, ethical and privacy concerns surrounding this technology.
Speaker: Anthony J. Ferrante, Global Head of Cybersecurity and Senior Managing Director, FTI Consulting, Inc.
About Fred Wilmot
Fred brings more than 20 years of cybersecurity expertise to the fight. Currently, he leads security engineering at Devo, integrating today’s threats and countermeasures into Devo’s platform. Most recently, he acted as both CEO and CTO at PacketSled, building cloud and on-premises products for breach responders and service providers to automate risk mitigation and investigation. Previously, he served as Vice President, Solutions Engineering at Context Relevant, where he implemented a real-time transaction fraud platform for financial markets, weaponizing security use cases with data science automation and machine learning.
During Fred’s tenure at Splunk, he was responsible for the company’s rise to market leadership in the security industry, placing the company in the Gartner SIEM Magic Quadrant. As the founder and director of the global security practice, Fred prototyped innovation in the field and built platform applications that were used in responding to some of the largest breaches in Internet history.
Prior to Splunk, Fred held numerous security positions with major brands including Symantec, Disney, T-Mobile, and IBM. Fred attended the US Naval Academy and holds a BS in Mathematics and History from FSU.