Machine learning and deep learning are growing at an exponential pace. Businesses and academia alike are eager to apply deep learning to previously unsolved problems. But like any other technology, AI brings impressive applications along with some serious security and privacy implications.
Testing these applications for security and privacy issues requires domain knowledge and a unique approach. In this webinar, we will discuss a proven approach to performing a security assessment on AI applications.
About the Speaker:
Nikhil Joshi is an AI Security Researcher at Payatu. He has orchestrated methodologies to pen-test Machine Learning applications against ML-specific vulnerabilities and loves to explore new ways to hack ML-powered applications.
In parallel, Nikhil's research focuses on security implications in deep learning applications, such as adversarial learning, model-stealing attacks, and data poisoning.
Nikhil is a speaker/trainer at multiple conferences, including Nullcon, Troopers, HITB, PHDays, and IEEE events. He is an active member of local data science and security groups and has delivered multiple talks and workshops.
As an applied mathematics enthusiast, Nikhil takes a keen interest in recent advances in machine learning and its applications in security, behavioural science, and telecom.