The International Conference on Secure and Trustworthy Artificial Intelligence Systems (ICSTAIS 2026) invites original research contributions that advance the state of the art in secure, trustworthy, and resilient AI systems. As artificial intelligence becomes increasingly pervasive in critical applications, ensuring the security, reliability, and trustworthiness of these systems has emerged as a paramount concern for researchers, practitioners, and policymakers worldwide.
ICSTAIS 2026 provides a premier international platform for researchers, academics, industry professionals, and graduate students to present cutting-edge research, share innovative solutions, and foster collaborative discussion at the convergence of artificial intelligence and cybersecurity. The conference emphasizes both theoretical foundations and practical applications, encouraging submissions that bridge the gap between research and real-world deployment.
Scope and Topics of Interest
We solicit high-quality original research papers, survey papers, and position papers addressing various aspects of secure and trustworthy AI systems. Topics of interest include, but are not limited to:
Track 1: Foundations of Secure and Trustworthy AI
- AI model verification, validation, and formal methods
- Secure AI lifecycle management and DevSecOps for ML
- Ethical frameworks, AI governance, and regulatory compliance
- Trust metrics and assessment methodologies for AI systems
- Certification and standardization of AI security practices
Track 2: AI for Cybersecurity
- Machine learning and deep learning for intrusion detection and prevention
- AI-driven threat intelligence and automated threat hunting
- Intelligent security incident response and digital forensics
- Anomaly detection in network traffic and system behavior
- AI-enhanced malware detection and classification
Track 3: Cybersecurity for AI Systems
- Adversarial machine learning and defense mechanisms
- Model theft, inversion, and membership inference attacks
- Data poisoning and backdoor attacks on AI systems
- Secure deployment of AI in cloud, edge, and IoT environments
- Privacy-preserving machine learning architectures
Track 4: Detection and Mitigation of Deepfakes and Synthetic Media
- Advanced deepfake generation and detection techniques
- Forensic analysis of manipulated multimedia content
- AI-powered disinformation detection and mitigation
- Social engineering threats amplified by synthetic media
- Legal and ethical implications of deepfake technology
Track 5: Adversarial AI and Robust Machine Learning
- Adversarial example generation and defense strategies
- Certified defenses and provable robustness guarantees
- Robustness testing and red-teaming methodologies
- Secure transfer learning and domain adaptation
- Federated learning security and privacy
Track 6: Privacy-Preserving AI
- Differential privacy in machine learning systems
- Secure multiparty computation for collaborative AI
- Homomorphic encryption for privacy-preserving inference
- Federated learning with privacy guarantees
- Synthetic data generation for privacy protection
Track 7: Trust, Explainability, and Accountability in AI
- Explainable AI for security-critical applications
- Interpretability techniques for black-box AI models
- Bias detection, fairness, and algorithmic accountability
- Human-AI interaction in security contexts
- Trust calibration and uncertainty quantification
Track 8: AI in Critical Infrastructure Protection
- AI applications in securing smart grids, healthcare, transport, and finance
- Cyber-physical security and AI-based industrial control protection
- AI resilience in emergency and disaster scenarios
Track 9: AI and Post-Quantum Cybersecurity
- Quantum-resistant cryptography for AI communications
- Quantum computing threats to current AI security
- Quantum machine learning and its security implications
- Quantum-enhanced cybersecurity protocols
- Post-quantum cryptographic implementations in AI systems
Track 10: Emerging Trends and Future Research in Secure AI
- AI for cyber risk assessment and management
- Bio-inspired and neuromorphic security models
- Sustainable and green AI security frameworks
- AI ethics and responsible disclosure practices
- Future challenges in AI security and privacy
Submission Guidelines
Papers must be submitted through the Microsoft CMT system and must adhere to the following guidelines:
- Submissions must be original and must not be published or under review elsewhere
- Papers should be formatted according to the Springer SIST style
- Full papers: 11-15 pages
- Short papers: 5-8 pages
- Survey papers: 10-12 pages
- All submissions will undergo rigorous peer review by at least three experts
Important Dates
| Milestone | Date |
| --- | --- |
| Paper Submission Deadline | November 15, 2025 |
| Notification of Acceptance | December 1, 2025 |
| Camera-Ready Submission | December 16, 2025 |