⚠️ Disclaimer: This video is for educational and awareness purposes only. We do not encourage or promote any unethical activities. All demonstrations and discussions take place in a controlled lab environment to spread knowledge and security best practices.
________________________________________
🎙️ Podcast Episode Overview:
🚨 AI Dataset Poisoning in Cybersecurity | How Hackers Attack Machine Learning Models | Featuring SOC IR Expert Mohammad Ahmad | MITRE ATLAS AML.T0020
In this explosive episode of Cyber Mind Space, we uncover one of the most dangerous and underexplored threats at the intersection of cybersecurity and artificial intelligence: AI dataset poisoning.
Joined by Mohammad Ahmad, a leading SOC & Incident Response expert at Trustwave, we expose how attackers manipulate training datasets to poison machine learning models and bypass modern AI-driven phishing detection systems.
📉 Poisoned data = compromised AI.
🧠 Corrupt inputs = broken trust in ML systems.
🎯 In this episode, you'll learn:
What dataset poisoning is in machine learning and how it impacts AI security
How attackers inject backdoors, model bias, and hard-to-detect vulnerabilities
Step-by-step analysis of a real Kaggle phishing detection dataset
Tactics used in adversarial machine learning, model evasion, and data manipulation
Direct mapping to MITRE ATLAS technique AML.T0020 (Poison Training Data)
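To make the attack concrete, here is a minimal sketch of the label-flipping flavor of AML.T0020. It uses synthetic data as a stand-in for the Kaggle phishing dataset discussed in the episode; the flip rate, split sizes, and model choice are illustrative assumptions, not details taken from the episode itself.

```python
# Sketch of label-flipping data poisoning (MITRE ATLAS AML.T0020).
# Synthetic stand-in data: label 1 = phishing, label 0 = benign.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

X, y = make_classification(n_samples=2000, n_features=20,
                           class_sep=1.5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# Baseline: model trained on clean labels.
clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
clean_recall = recall_score(y_te, clean.predict(X_te))

# Attack: flip 40% of "phishing" training labels to "benign"
# before the model is trained (assumed flip rate for the demo).
rng = np.random.default_rng(0)
y_poison = y_tr.copy()
phish_idx = np.where(y_poison == 1)[0]
flip = rng.choice(phish_idx, size=int(0.4 * len(phish_idx)), replace=False)
y_poison[flip] = 0

poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_poison)
poisoned_recall = recall_score(y_te, poisoned.predict(X_te))

print(f"clean phishing recall:    {clean_recall:.3f}")
print(f"poisoned phishing recall: {poisoned_recall:.3f}")
```

The poisoned model typically misses far more phishing pages at test time, even though nothing about the model code changed: only the training labels were tampered with, which is what makes this class of attack hard to spot downstream.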
💡 This episode is ideal for:
Cybersecurity professionals
AI/ML engineers & researchers
Threat hunters & red teamers
SOC analysts & blue teams
Ethical hackers & penetration testers
🧠 Stay ahead of the curve in:
AI in cybersecurity, data poisoning attacks, machine learning model security, adversarial machine learning, SOC operations, phishing detection and bypass techniques, AI threat modeling, MITRE ATLAS AML.T0020, poisoned datasets, red teaming AI, training data attacks, model evasion, and AI security risks
🔔 Subscribe now for expert-led breakdowns on AI, threat intelligence, red teaming, and the dark side of machine learning.
________________________________________
📚 Study Material & Dataset Used:
We used the Web Page Phishing Detection Dataset from Kaggle to demonstrate how even public datasets can be potential attack vectors in ML pipelines.
🔗 Dataset URL:
👉 www.kaggle.com/datasets/shash...
📌 MITRE ATLAS Technique Referenced:
AML.T0020: Poison Training Data
Learn more: atlas.mitre.org/techniques/AM...
________________________________________
👨‍💻 About the Guest: Mohammad Ahmad
🎓 Master's in Cybersecurity
🛡️ SOC & IR Expert at Trustwave
🔗 LinkedIn: / m-ahmad95
💬 He drives security operations excellence and threat detection for enterprises worldwide.
________________________________________
🚀 Stay Connected with Cyber Mind Space: Learn, Discuss & Dominate Cybersecurity!
📢 Telegram Channel (Updates & Resources):
t.me/cybermindspace
💬 Telegram Group (Ask & Network):
t.me/+LJvMwjAE6yA5YWQ1
📸 Instagram (Reels & Daily Tips):
/ cyber_mind_space
🎥 YouTube (Podcasts, Lives & Tutorials):
/ @cybermindspace
💼 LinkedIn (Professional Profile):
/ almadadali
💻 GitHub (Tools & Scripts):
github.com/ALMADADALI
📲 WhatsApp Channel (Cyber Alerts):
whatsapp.com/channel/0029VbAz...
🗣️ Discord Server (Voice & Community Chat):
/ discord
👻 Snapchat (Cyber Moments):
/ cybermindspace
🐦 Twitter/X (Cybersecurity Thoughts):
x.com/cybermindspace?s=21
🌐 Official Website: cybermindspace.com/
#CyberMindSpace #AIpoisoning #PhishingDetection #DatasetPoisoning #MITREATLAS #BugBounty #EthicalHacking #AIhacking #Cybersecurity #CyberThreats #MachineLearningSecurity #SOC #Malware #AIattack #MLPoisoning #DataPoisoning #MLSecurity #MohammadAhmad #computer #computerscience #computersecurity