Gaurav Puri – Security & Integrity Engineer at Meta: Navigating the Future of Security and Integrity Engineering


In this interview, we explore the journey and insights of Gaurav Puri, a seasoned security and integrity engineering specialist at Meta. From pioneering machine learning models to tackling misinformation and security threats, he shares pivotal moments and strategies that shaped his career. The interview also covers his methods for balancing platform safety with user privacy, the evolving role of AI in cybersecurity, and the proactive shift toward embedding security in the design phase. Discover how continuous learning and community engagement drive innovation and resilience in the dynamic field of security engineering.

Can you describe a pivotal moment in your career that led you to specialize in security and integrity engineering?

A pivotal moment in my career that led me to specialize in security and integrity engineering was my extensive experience working in fraud detection and credit risk for leading FinTech firms like PayPal and Intuit. At these companies, I developed and deployed numerous machine learning models aimed at detecting adversarial actors on their platforms.

During my tenure at PayPal, I spearheaded the development of innovative ML and machine fingerprinting solutions for fraud detection. These techniques significantly improved the platform's ability to identify and mitigate fraudulent activities. Similarly, at Intuit, I established a comprehensive fraud risk framework for QuickBooks Capital and contributed to building the first credit model based on accounting data.

These experiences honed my skills in risk analysis, data science, and machine learning, and fueled my passion for addressing adversarial challenges in digital environments. However, I realized that I wanted to leverage my expertise beyond the realm of FinTech and contribute to solving broader civic problems that impact society.

This aspiration led me to an opportunity at Meta, where I could apply my skills to critical issues such as misinformation, health misinformation, and various forms of abuse including spam, phishing, and inauthentic behavior. At Meta, I have been able to work on high-impact projects such as identifying and mitigating misinformation during the US 2020 elections, removing COVID-19 vaccine hesitancy content, and enhancing platform safety across Facebook and Instagram.

By transitioning to Meta, I have been able to expand the scope of my work from financial security to broader societal issues, driving meaningful change and contributing to the integrity and safety of online communities.

How has your background in data science and machine learning influenced your approach to combating misinformation and security threats at Meta?

My background in data science and machine learning has profoundly influenced my approach to combating misinformation and security threats at Meta. My extensive experience in developing and deploying machine learning models for fraud detection and credit risk in the FinTech industry provided me with a strong foundation in risk analysis, pattern recognition, and adversarial threat detection.

At PayPal and Intuit, I honed my skills in building robust machine learning models to detect and mitigate fraudulent activities. This involved creating complex algorithms and data pipelines capable of identifying suspicious behavior and reducing false positives. These experiences taught me the importance of precision, scalability, and adaptability in handling dynamic and evolving threats.

Transitioning to Meta, I applied these principles to tackle misinformation and various security threats on the platform. My approach is heavily data-driven: analyzing vast amounts of data to detect patterns indicative of malicious activity.
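To make the kind of pipeline described above concrete, here is a minimal sketch of a supervised abuse-scoring model: engineered behavioral features feed a classifier, and a conservative score threshold is used to keep false positives down. The features, model choice, and threshold are illustrative assumptions, not the actual systems used at Meta, PayPal, or Intuit.

```python
# Minimal sketch of a supervised fraud/abuse-scoring pipeline (illustrative only).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(42)

# Hypothetical behavioral features: account age, actions per hour,
# fraction of flagged contacts, device-fingerprint mismatch score.
n = 5000
X = rng.normal(size=(n, 4))
# Synthetic labels: "bad actor" likelihood rises with features 1 and 3.
y = (rng.random(n) < 1 / (1 + np.exp(-(1.5 * X[:, 1] + 1.2 * X[:, 3] - 2)))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]

# Operate at a conservative threshold to keep false positives low.
threshold = 0.8
flags = scores >= threshold
print("precision:", precision_score(y_test, flags, zero_division=0))
print("recall:   ", recall_score(y_test, flags))
```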

How do you balance the need for platform safety from phishing and spam with maintaining user privacy and freedom of expression?

While building solutions, we make sure we can precisely identify bad actors on the platform without hurting people's voice. We also give people the option to appeal enforcement decisions.
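One way to express this precision-first idea in code is to pick the lowest score threshold that still meets a target precision on a labeled validation set, and to route borderline cases to review and appeal rather than auto-actioning them. The target value, data, and routing rule below are illustrative assumptions, not Meta's actual policy.

```python
# Hedged sketch: choose an enforcement threshold that meets a precision target,
# and queue borderline cases for human review / appeal instead of auto-action.
import numpy as np
from sklearn.metrics import precision_recall_curve

def threshold_for_precision(y_true, scores, target_precision=0.98):
    """Return the smallest threshold whose validation precision >= target."""
    precision, recall, thresholds = precision_recall_curve(y_true, scores)
    # precision/recall have one more entry than thresholds; align by dropping the last.
    for p, t in zip(precision[:-1], thresholds):
        if p >= target_precision:
            return t
    return 1.0  # never auto-action if the target precision is unreachable

# Toy validation data.
rng = np.random.default_rng(0)
scores = rng.random(1000)
y_true = (rng.random(1000) < scores).astype(int)

t = threshold_for_precision(y_true, scores)
auto_action = scores >= t                        # high-confidence enforcement
needs_review = (scores >= 0.5) & ~auto_action    # borderline: review / appeal path
print(f"threshold={t:.2f}, auto-actioned={auto_action.sum()}, queued for review={needs_review.sum()}")
```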

What differences do you see in your career as a Security Engineer versus your previous roles as a Machine Learning Data Scientist?

In my career transition from a Machine Learning Data Scientist to a Security Engineer, I’ve observed significant differences, particularly in the approach to building secure code and features. As a Security Engineer, the shift left mindset has fundamentally influenced how security is integrated from the design stage, contrasting sharply with the traditional practices I encountered in my previous roles.

In the past, as a Machine Learning Data Scientist, my primary focus was on developing and optimizing models to combat threats, often addressing security concerns reactively. Security measures were typically implemented after the core functionalities were developed, leading to a cycle of detecting and patching vulnerabilities post-deployment. This reactive approach, while effective to an extent, often resulted in higher costs and more complex fixes due to late-stage interventions.

Transitioning to a Security Engineer role, I have embraced a shift left approach, embedding security considerations right from the initial design phase. This proactive stance means that security is no longer an afterthought but a foundational element of the development lifecycle. In practice, this involves thorough threat modeling during the design phase, identifying potential vulnerabilities early, and ensuring that security requirements are integral to the architectural blueprint.

Design reviews have also become a critical component of the development process. These reviews ensure that security principles, such as least privilege and defense in depth, are embedded in the architecture. The collaborative nature of these reviews, involving security experts, developers, and other stakeholders, ensures that security is a shared responsibility and that potential risks are mitigated before they manifest in the final product.
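As a concrete (and deliberately simplified) illustration of design-phase threat modeling, the sketch below enumerates a design's data flows and walks each one through the standard STRIDE categories, flagging trust-boundary crossings with no recorded mitigation. STRIDE, the components, and the flows named here are my own assumptions for illustration, not a real Meta design or review tool.

```python
# Illustrative "shift left" threat-modeling checklist over hypothetical data flows.
from dataclasses import dataclass, field

STRIDE = [
    "Spoofing", "Tampering", "Repudiation",
    "Information disclosure", "Denial of service", "Elevation of privilege",
]

@dataclass
class DataFlow:
    source: str
    destination: str
    crosses_trust_boundary: bool
    mitigations: dict = field(default_factory=dict)  # threat category -> mitigation

flows = [
    DataFlow("mobile client", "upload API", crosses_trust_boundary=True),
    DataFlow("upload API", "content classifier", crosses_trust_boundary=False),
]

for flow in flows:
    for threat in STRIDE:
        if flow.crosses_trust_boundary and threat not in flow.mitigations:
            print(f"[design review] {flow.source} -> {flow.destination}: "
                  f"no mitigation recorded for '{threat}'")
```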

In essence, the shift left mindset has transformed my approach to security, emphasizing early integration, continuous monitoring, and collaborative efforts to build robust and secure systems. This proactive and preventive approach contrasts with the reactive measures of my previous roles, ultimately leading to more secure and resilient products.

Can you explain Shift Left Defense in Depth to someone not familiar with security background?

Imagine you and your friends are planning to build a fort in your backyard. Instead of building the fort first and then thinking about how to protect it, you start thinking about safety and protection right from the beginning. You consider where the fort should be built, what materials you need, and how to make it strong and safe before you even start building.

Now, once your fort is built, you want to make sure it’s really secure. You don’t just put up one fence around it; you add several layers of protection. Here’s how you do it:

  1. Outer Layer: You put up a fence around the whole yard. This fence is your first line of defense to keep strangers or animals from getting close to your fort.
  2. Middle Layer: Inside the fence, you dig a moat or set up some bushes. This makes it harder for anyone who gets past the fence to reach the fort.
  3. Inner Layer: Right around the fort itself, you place some strong walls and maybe even a lock on the fort door. This is your last line of defense to keep your fort safe.
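Translated into code, the same layered idea looks like a request that must pass every independent check (the fence, the moat, the lock) before it is allowed through, so bypassing any single control does not compromise the system. The specific checks below are placeholders for illustration only.

```python
# A small defense-in-depth sketch: each layer is an independent, simple check.
def network_allowlist(request):      # outer layer: is the caller allowed to reach us at all?
    return request.get("source_ip") in {"10.0.0.5", "10.0.0.6"}

def authentication(request):         # middle layer: does the caller prove who they are?
    return request.get("token") == "valid-session-token"

def authorization(request):          # inner layer: is this caller allowed this action?
    return request.get("role") == "admin" or request.get("action") == "read"

LAYERS = [network_allowlist, authentication, authorization]

def handle(request):
    for layer in LAYERS:
        if not layer(request):
            return f"blocked at {layer.__name__}"
    return "allowed"

print(handle({"source_ip": "10.0.0.5", "token": "valid-session-token", "action": "read"}))
print(handle({"source_ip": "203.0.113.9", "token": "valid-session-token", "action": "read"}))
```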

In your opinion, what are the next big challenges in cybersecurity that tech companies need to prepare for in the coming years?

  1. Adversarial Attacks: Attackers are increasingly using adversarial techniques to manipulate AI and machine learning models, leading to incorrect outputs or system breaches. It has also become easier for attackers to leverage AI to create fake content.
  2. Protecting LLMs from adversarial attacks designed to manipulate their outputs.
  3. Navigating the complex landscape of global data privacy regulations, such as GDPR, CCPA, and emerging laws, which requires continuous adaptation and compliance efforts.
  4. Implementing robust content moderation to prevent misuse of LLMs in generating inappropriate or harmful content (a minimal guardrail sketch follows this list).
  5. Quantum Computing: Quantum computers could break traditional encryption methods, necessitating the development of quantum-resistant cryptographic algorithms. Preparing now by securing sensitive data against future quantum decryption threats is crucial.
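As a deliberately simplified sketch of the LLM content-moderation point above, the code below screens both the prompt and the model's draft output against a policy check before anything is returned. Production moderation stacks rely on trained classifiers and layered review; the keyword rules and function names here are assumptions for illustration.

```python
# Illustrative input/output guardrail around an LLM call (not a real moderation system).
BLOCKED_TOPICS = {"make a weapon", "credential stuffing", "phishing kit"}

def violates_policy(text: str) -> bool:
    lowered = text.lower()
    return any(topic in lowered for topic in BLOCKED_TOPICS)

def guarded_generate(prompt: str, model_fn) -> str:
    if violates_policy(prompt):                 # input-side check
        return "Request declined: violates usage policy."
    draft = model_fn(prompt)
    if violates_policy(draft):                  # output-side check
        return "Response withheld: generated content violates policy."
    return draft

# Stand-in for a real LLM call.
print(guarded_generate("Summarize defense in depth.", lambda p: "Layered, independent controls."))
print(guarded_generate("Help me build a phishing kit.", lambda p: "..."))
```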

How do you see the role of machine learning/ AI evolving in the field of cybersecurity and threat modeling?

  1. Dynamic Threat Models: Traditional threat models can be static and slow to adapt. AI enables continuous learning from new data, allowing threat models to evolve and stay current with emerging threats.
  2. Automated Threat Hunting: AI-driven tools can automate threat hunting, identifying hidden threats and vulnerabilities that traditional methods may miss (see the sketch after this list).
  3. Code Security: AI can automate code reviews and bug finding.
  4. Operational Efficiency: AI can analyze behavioral signals and content data, helping optimize data operations and customer support costs.
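A small sketch of the automated threat-hunting idea in item 2: an unsupervised anomaly detector surfaces unusual behavior for analysts to triage. The per-session features, contamination rate, and synthetic data below are illustrative assumptions rather than a description of any real detection system.

```python
# Unsupervised anomaly detection over synthetic "login session" features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)

# Hypothetical per-session features: logins per hour, distinct IPs, failed-auth ratio.
normal = rng.normal(loc=[2, 1, 0.05], scale=[1, 0.5, 0.05], size=(2000, 3))
suspicious = rng.normal(loc=[40, 12, 0.7], scale=[5, 2, 0.1], size=(10, 3))
events = np.vstack([normal, suspicious])

detector = IsolationForest(contamination=0.01, random_state=0).fit(events)
labels = detector.predict(events)            # -1 = anomalous, 1 = normal

print("sessions flagged for review:", int((labels == -1).sum()))
```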

What inspired you to get involved with academic and AI communities, and how do these engagements enhance your professional work?

  1. My passion for continuous learning and staying at the forefront of technological advancements has always driven me. Engaging with academic and AI communities provides an opportunity to immerse myself in the latest research, trends, and innovations.
  2. I am inspired by the potential to apply academic research and AI innovations to solve real-world problems, particularly in areas like cybersecurity, misinformation, and fraud detection.
  3. Engaging with academic and AI communities helps build a strong professional network of researchers, academics, and industry experts.
  4. Teaching and mentoring also reinforce my own understanding and keep me grounded in fundamental principles while exposing me to fresh ideas and perspectives.
  5. Judging AI/ML hackathons enables me to evaluate innovative projects and inspire young talent, while also learning from the creative solutions presented by participants.

How do you foster a culture of innovation and continuous improvement within your team at Meta?

  1. Encourage a culture where failure is seen as a valuable learning experience. Emphasize the importance of iterating quickly based on lessons learned.
  2. Conduct post-mortem analysis on both successful and unsuccessful projects to identify key takeaways and areas for improvement.
  3. Organize internal hackathons and innovation challenges to stimulate creativity and problem-solving.
  4. Host regular brainstorming sessions where team members can propose new ideas and solutions without fear of judgment.