About ACAI

We live in an exciting world of artificial intelligence (AI) techniques and developments. Indeed, recent results with deep learning would have looked like magic only a few years ago. For instance, ChatGPT showcases the amazing potential of deep learning and has captured the general public's attention. As such, the statement that AI is the new electricity does not sound like an exaggeration.
The improvements in AI also bring consequent improvements to all domains that utilize AI. AI has become an emerging technology for assessing security and is becoming more relevant in cryptography. Initially, AI was mostly used in topics like implementation attacks, physically unclonable functions, and hardware Trojans. More recently, we have seen increased interest in topics like machine learning-based cryptanalysis, the security of machine learning, and how concepts from cryptography can be used to improve the security of machine learning.
While the applications and improvements observed so far are noteworthy, the AI community is progressing even faster. Due to the need for faster and less error-prone solutions, we expect the interplay between AI and cryptography only to increase. Such improvements can come from new developments in AI, but also from recognizing which already developed techniques can be applied in cryptography. Moreover, since AI has become pervasive, the security of AI opens new challenges. It would be interesting to understand how well-established cryptographic techniques can be used to make AI more secure.
The goal of the workshop is to gather researchers from academia and industry who work on various aspects of cryptography and AI to share their experiences and discuss how to strengthen the collaboration.

Call for Abstract Submission

Talks can be about recent unpublished results and work in progress, as well as results recently published in other venues.

Submissions are welcome on all technical aspects of AI, cryptography, and security, including but not limited to:

  • Deep learning-based cryptanalysis (e.g., neural net distinguishers)
  • AI techniques for implementation attacks
  • AI-assisted design of cryptographic primitives and protocols
  • AI-driven attacks on cryptographic protocols (e.g., searchable symmetric encryption)
  • Cryptographic countermeasures for security and privacy of AI systems
  • Security of machine learning models
  • AI and cryptography in the industry
  • Explainability and interpretability of machine learning models

Submissions must include the name of the speaker, a title, and an extended abstract (up to 2 pages). Contributors can send their proposals to lejla@cs.ru.nl and stjepan.picek@ru.nl.

Submission

We encourage researchers working on all aspects of AI and cryptography to take this opportunity to share their work at ACAI and participate in the discussions. Contributors are invited to submit their proposals by sending an email to lejla@cs.ru.nl and stjepan.picek@ru.nl. All submitted proposals must follow the original LNCS format with a limit of up to 2 pages (excluding references) and should be submitted in PDF format. Since no formal proceedings are planned for this workshop, contributed talks may also be about works that have been recently published or that the authors intend to submit to other conferences.

Every accepted submission must have at least one author registered for the workshop.

Important dates (AoE)

Abstract submission deadline: June 15, 2023

Notification to authors: July 1, 2023

Workshop date: Aug 20, 2023

Registration

Workshop registration goes through the Crypto registration process. Check the Crypto registration page for further information.

Keynotes

SALSA and PICANTE: Machine learning-based attacks on LWE with sparse binary secrets

Kristin Lauter, Meta AI, USA

Learning with Errors (LWE) is a hard math problem underpinning many proposed post-quantum cryptographic (PQC) systems. The only PQC key exchange standardized by NIST is based on module LWE, and current publicly available PQ Homomorphic Encryption (HE) libraries are based on ring LWE. The security of LWE-based PQ cryptosystems is critical, but certain implementation choices could weaken them. One such choice is sparse binary secrets, desirable for PQ HE schemes for efficiency reasons.
This talk will discuss our efforts to develop machine learning-based attacks against LWE schemes with sparse binary secrets. Our initial work, SALSA, demonstrated a proof-of-concept machine learning-based attack on LWE with sparse binary secrets in small dimensions (n < 129) and low Hamming weights (h < 5). Our more recent work, PICANTE, recovers secrets in much larger dimensions (up to n = 350) and with larger Hamming weights (roughly n/10, and up to h = 60 for n = 350). We achieve this dramatic improvement via a novel preprocessing step, which allows us to generate training data from a linear number of eavesdropped LWE samples (4n) and changes the distribution of the data to improve transformer training. We also improve the secret recovery methods of SALSA and introduce a novel cross-attention recovery mechanism that allows us to read off the secret directly from the trained models. While PICANTE does not threaten NIST's proposed LWE standards, it demonstrates significant improvement over SALSA and could scale further, highlighting the need for future investigation.
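
To make the problem setting concrete, here is a minimal sketch (not the SALSA/PICANTE code) of how LWE samples with a sparse binary secret can be generated. The dimension n, Hamming weight h, and 4n sample count follow the abstract, while the modulus q and the Gaussian error width sigma are illustrative placeholders.

    import numpy as np

    def gen_lwe_samples(n=350, q=2**20, h=60, m=None, sigma=3.0, seed=0):
        """Generate m LWE samples b = A s + e (mod q) with a sparse binary secret.

        The secret s has exactly h ones (Hamming weight h); the error e is
        small rounded Gaussian noise. q and sigma are placeholder values.
        """
        rng = np.random.default_rng(seed)
        m = m if m is not None else 4 * n     # linear number of samples, as in the abstract
        s = np.zeros(n, dtype=np.int64)
        s[rng.choice(n, size=h, replace=False)] = 1
        A = rng.integers(0, q, size=(m, n), dtype=np.int64)
        e = np.rint(rng.normal(0.0, sigma, size=m)).astype(np.int64)
        b = (A @ s + e) % q
        return A, b, s

    A, b, s = gen_lwe_samples()               # A: (1400, 350), b: (1400,), sum(s) == 60

The attacker sees only the pairs (A, b); the hardness of LWE is that recovering s from them is believed to be intractable for well-chosen parameters, which is exactly what the sparse-secret attacks in the talk probe.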

Kristin Estella Lauter is an American mathematician and cryptographer whose research areas are number theory, algebraic geometry, cryptography, coding theory, and machine learning. She is particularly known for her work on Private AI and Homomorphic Encryption, for standardizing and deploying Elliptic Curve Cryptography, and for introducing Supersingular Isogeny Graphs as a foundational hard problem for Post Quantum Cryptography. She is currently the Director of Research Science for Meta AI Research (FAIR) North America, directing FAIR Labs in Seattle, Menlo Park, Pittsburgh, New York, and Montreal. She is also an Affiliate Professor at the University of Washington. She served as President of the Association for Women in Mathematics (AWM) from 2015–2017, and as the Pólya Lecturer for the Mathematical Association of America (MAA) for 2018–2020.
Lauter received her BA, MS, and PhD degrees in mathematics from the University of Chicago in 1990, 1991, and 1996, respectively. She was a T.H. Hildebrandt Research Assistant Professor at the University of Michigan (1996–1999), a Visiting Scholar at the Max-Planck-Institut für Mathematik in Bonn, Germany (1997), and a Visiting Researcher at the Institut de Mathématiques de Luminy in France (1999). From 1999 to 2021, she was a researcher at Microsoft Research, leading the Cryptography and Privacy research group as Partner Research Manager from 2008 to 2021.
She is a co-founder of the Women in Numbers (WIN) network, a research collaboration community for women in number theory, and she was the lead PI for the AWM NSF Advance Grant (2015-2020) to create and sustain research networks for women in all areas of mathematics. She serves on the National Academies Committee on Applied and Theoretical Statistics (CATS), and on the Scientific Board for the Isaac Newton Institute in Cambridge, UK. She has served on the Advisory Board of the Banff International Research Station, the Council of the American Mathematical Society (2014-2017), and the Executive Committee of the Conference Board of Mathematical Sciences (CBMS).
In 2008 Lauter and her coauthors were awarded the Selfridge Prize in Computational Number Theory. She was elected to the 2015 Class of Fellows of the American Mathematical Society (AMS) "for contributions to arithmetic geometry and cryptography as well as service to the community." She was an Inaugural Fellow of the AWM (2017). In 2020, Lauter was elected as a Fellow of both the Society for Industrial and Applied Mathematics (SIAM) and the American Association for the Advancement of Science (AAAS). In 2021, she was elected as an Honorary member of the Real Sociedad Matemática Española (RSME).

Automated Cryptographically-Secure Private Computing: From Logic and Mixed-Protocol Optimization to Centralized and Federated ML Customization

Farinaz Koushanfar, University of California San Diego, USA

Over the last four decades, much research effort has been dedicated to designing cryptographically secure methods for computing on encrypted data. However, despite the great progress in research, adoption of sophisticated crypto methodologies has been rather slow and limited in practical settings. Presently used heuristic and trusted-third-party solutions fall short of guaranteeing the privacy requirements of contemporary massive datasets, complex AI algorithms, and emerging collaborative/distributed computing scenarios such as blockchains.
In this talk, we outline the challenges in the state-of-the-art protocols for computing on encrypted data, with an emphasis on the emerging centralized, federated, and distributed learning scenarios. We discuss how, in recent years, giant strides have been made in this field by leveraging optimization and design automation methods, including logic synthesis, protocol selection, and automated co-design/co-optimization of cryptographic protocols, learning algorithms, software, and hardware. Proof of concept is demonstrated in the design of the present state-of-the-art frameworks for cryptographically secure deep learning on encrypted data. We conclude by discussing the practical challenges in the emerging private robust learning and distributed/federated computing scenarios, as well as the opportunities ahead.

Farinaz Koushanfar is the Henry Booker Scholar Professor of ECE at the University of California San Diego (UCSD), where she is also the founding co-director of the UCSD Center for Machine-Intelligence, Computing & Security (MICS). Her research addresses several aspects of secure and efficient computing, with a focus on hardware and system security, robust machine learning under resource constraints, intellectual property (IP) protection, as well as practical privacy-preserving computing. Dr. Koushanfar is a fellow of the Kavli Frontiers of the National Academy of Sciences and a fellow of IEEE/ACM. She has received a number of awards and honors including the Presidential Early Career Award for Scientists and Engineers (PECASE) from President Obama, the ACM SIGDA Outstanding New Faculty Award, Cisco IoT Security Grand Challenge Award, MIT Technology Review TR-35, Qualcomm Innovation Awards, Intel Collaborative Awards, Young Faculty/CAREER Awards from NSF, DARPA, ONR and ARO, as well as several best paper awards.

Don’t ChatGPT Me!: Towards Unmasking the Wordsmith

Ahmad-Reza Sadeghi, TU Darmstadt, Germany

The hype surrounding Large Language Models (LLMs) has captivated countless individuals, fostering the belief that these models possess an almost magical ability to solve diverse problems. While LLMs, such as ChatGPT, offer numerous benefits, they also raise significant concerns regarding misinformation and plagiarism. Consequently, identifying AI-generated content has become an appealing area of research. However, current text detection methods face limitations in accurately discerning ChatGPT content. Indeed, our assessment of the efficacy of existing language detectors in distinguishing ChatGPT-generated texts reveals that none of the evaluated detectors consistently achieves high detection rates, with the highest accuracy reaching only 47%.
In this talk, we present our research work to develop a robust ChatGPT detector, which aims to capture distinctive biases in text composition present in human- and AI-generated content, as well as human adaptations made to elude detection. Drawing inspiration from the multifaceted nature of human communication, which starkly contrasts with the standardized interaction patterns of machines, we employ various techniques, including physical phenomena such as the Doppler effect, to address these challenges. To evaluate our detector, we use a benchmark dataset encompassing mixed prompts from ChatGPT and humans, spanning diverse domains. Lastly, we discuss open problems that are currently engaging our attention.

Ahmad-Reza Sadeghi is a professor of Computer Science and the head of the System Security Lab at Technical University of Darmstadt, Germany. He has been leading several Collaborative Research Labs with Intel since 2012, and with Huawei since 2019.
He studied both Mechanical and Electrical Engineering and holds a PhD in Computer Science from Saarland University, Germany. Prior to academia, he worked in R&D at IT enterprises, including Ericsson Telecommunications. He has been continuously contributing to the security and privacy research field. He was Editor-in-Chief of IEEE Security & Privacy magazine and has been serving on a variety of editorial boards, such as ACM TODAES, ACM TIOT, and ACM DTRAP.
For his influential research on Trusted and Trustworthy Computing, he received the renowned German "Karl Heinz Beckurts" Award, which honors excellent scientific achievements with high impact on industrial innovation in Germany. In 2018, he received the ACM SIGSAC Outstanding Contributions Award for dedicated research, education, and management leadership in the security community and for pioneering contributions in content protection, mobile security, and hardware-assisted security. In 2021, he was honored with the Intel Academic Leadership Award at the USENIX Security conference for his influential research on cybersecurity, in particular on hardware-assisted security.

Invited Talks

CryptOpt: Verified Compilation with Randomized Program Search for Cryptographic Primitives

Chitchanok Chuengsatiansup, The University of Melbourne, Australia

Cryptography has been extensively used to protect digital information on a wide range of devices. Therefore, the correctness, efficiency, and portability of cryptographic software are of utmost importance. While relying on compiler-based code generation achieves portability, the efficiency of the produced code usually underperforms compared to code written directly in assembly. On the other hand, writing code manually achieves high performance while costing experts' time, particularly when the target platform changes. Regardless, either approach may still produce incorrect code.

This talk presents CryptOpt, a verified-compilation code generator that produces efficient code tailored to the architecture it runs on. On the optimization side, CryptOpt applies randomized search through the space of assembly programs. On the formal-verification side, CryptOpt connects to the Fiat Cryptography framework and extends it with a new formally verified program-equivalence checker. Benchmarks show that CryptOpt produces the fastest-known implementations of finite-field arithmetic for both Curve25519 and the Bitcoin elliptic curve secp256k1 on the relatively new Intel 12th and 13th generations.
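
The abstract does not spell out the search loop, but the underlying idea of randomized program search can be sketched as a simple hill climb. In this sketch, `mutate` and `measure` are hypothetical placeholders for CryptOpt's mutation operators and runtime measurement, not its actual API.

    def random_search(program, mutate, measure, iterations=10_000):
        """Hill-climbing randomized search over candidate assembly programs.

        `mutate` returns a semantically equivalent variant of a candidate and
        `measure` returns its measured runtime; both are placeholders here.
        """
        best, best_cost = program, measure(program)
        for _ in range(iterations):
            candidate = mutate(best)
            cost = measure(candidate)
            if cost < best_cost:                 # keep only strict improvements
                best, best_cost = candidate, cost
        return best, best_cost

In CryptOpt itself, the resulting program is additionally proven functionally equivalent to the Fiat Cryptography specification by the formally verified equivalence checker mentioned above, so speed never comes at the cost of correctness.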

Chitchanok Chuengsatiansup is a Senior Lecturer at the School of Computing and Information Systems, The University of Melbourne. Her research covers cryptographic optimization, efficient implementation, and side-channel analysis. She was among the finalists of the Google Hash Code competition, the winners of the global iDASH Healthcare Privacy Protection Challenge, and the contributors to the lattice-based key encapsulation mechanism NTRU Prime submitted to the NIST Post-Quantum Cryptography Standardization Project. As an early-career researcher, she has been awarded competitive research funding such as the Google Research Scholar award and the Defence Innovation Partnership Collaborative Research Fund.

Prior to joining The University of Melbourne, she was a Lecturer at The University of Adelaide, Australia, and a postdoctoral researcher at Inria and ENS de Lyon, France. Before that, she conducted her PhD study at Eindhoven University of Technology, The Netherlands. She was awarded a prestigious Japanese Government Scholarship (Monbukagakusho) for her Master's study and obtained her Master's degree in Computer Science from the Graduate School of Information Science and Technology, The University of Tokyo. She completed her undergraduate study at Chulalongkorn University, Thailand, where she received her Bachelor of Engineering degree in Computer Engineering with first-class honors.

Peek Into Black Box: Demystifying Side Channel Leakage with Explainable Deep Learning

Shivam Bhasin, Nanyang Technological University, Singapore

Deep neural networks (DNNs) have recently emerged as a powerful technique to evaluate cryptographic implementations against side-channel analysis (SCA). DNNs have shown especially promising performance against protected implementations, forcing designers to rethink countermeasures. However, the black-box nature of DNNs prevents explainability and interpretability of the results.
In this talk, we take a look at the explainability/interpretability of DNNs. Firstly, we highlight that explainability/interpretability can mean different things in different settings. We briefly show how ablation can be used to understand how different layers of a DNN deal with countermeasures. Next, we approach the explainability/interpretability of DNNs for SCA from a feature-selection viewpoint. We propose the use of an interpretable neural network called the Truth Table Deep Convolutional Neural Network (TT-DCNN), obtaining the rules and decisions that the neural network learned when retrieving the secret key from the cryptographic primitive (i.e., an exact formula). As a result, we can pinpoint the critical rules that the neural network uses to locate the exact Points of Interest (PoIs). We show that TT-DCNN is able to learn the exact masking countermeasure in a best-case setting. Finally, we target generic black-box models through a novel technique called Key Guessing Occlusion (KGO), which acquires a minimal set of sample points required by the DNN for key recovery, enabling evaluators to know where to refine their cryptographic implementation.
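
As a rough illustration of the occlusion idea (a generic sketch, not the KGO algorithm from the talk), one can blank out one trace sample point at a time and rank points by how much the model's score for the correct key drops. The `model.predict` interface below is an assumption introduced for illustration.

    import numpy as np

    def occlusion_importance(model, traces, baseline=0.0):
        """Rank side-channel trace sample points by occlusion impact.

        `model.predict(traces)` is assumed to return a per-trace score for
        the correct key guess (higher = better recovery); this interface is
        hypothetical. Points whose occlusion hurts the score most are the
        ones the DNN relies on for key recovery.
        """
        base = model.predict(traces).mean()
        importance = np.zeros(traces.shape[1])
        for t in range(traces.shape[1]):
            occluded = traces.copy()
            occluded[:, t] = baseline            # blank out one sample point
            importance[t] = base - model.predict(occluded).mean()
        return importance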

Dr. Shivam Bhasin is a Principal Research Scientist and Programme Manager (Cryptographic Engineering) at the Centre for Hardware Assurance, Temasek Laboratories, Nanyang Technological University, Singapore. He received his PhD in Electronics & Communication from Telecom Paristech in 2011 and an Advanced Master in Security of Integrated Systems & Applications from Mines Saint-Etienne, France, in 2008. Before NTU, Shivam held the position of Research Engineer at Institut Mines-Telecom, France. He was also a visiting researcher at UCL, Belgium (2011) and Kobe University (2013). His research interests include embedded security, trusted computing, and secure designs. He has co-authored several publications in recognized journals and conferences. Some of his research now also forms part of the ISO/IEC 17825 standard.

Contributed Talks

A Cipher-Agnostic Neural Training Pipeline with Automated Finding of Good Input Differences

Emanuele Bellini, David Gerault, Anna Hambitzer, and Matteo Rossi

On Evaluation of Artificial Intelligence Methods for Laser Fault Injection Parameter Search

Marina Krček

Improve Dependable Security in Blockchain Consensus: An Interdisciplinary Approach of Reinforcement Learning and Mechanism Design

Xinyu Tian, Zesen Zhuang, and Luyao Zhang

Program

The program starts at 09:10 AM PST (Pacific Standard Time, UTC-8h).

TIME, PST (UTC-8h)    SESSION/TITLE
09:10 - 09:15 Introductory remarks
09:15 - 10:15 Keynote talk 1: Don’t ChatGPT Me!: Towards Unmasking the Wordsmith
Ahmad-Reza Sadeghi, TU Darmstadt, Germany
10:15 - 10:45 Coffee break
10:45 - 11:30 Invited talk 1: CryptOpt: Verified Compilation with Randomized Program Search for Cryptographic Primitives
Chitchanok Chuengsatiansup, The University of Melbourne, Australia
11:30 - 12:15 Invited talk 2: Peek Into Black Box: Demystifying Side Channel Leakage with Explainable Deep Learning
Shivam Bhasin, Nanyang Technological University, Singapore
12:15 - 12:45 On Evaluation of Artificial Intelligence Methods for Laser Fault Injection Parameter Search
Marina Krček
12:45 - 14:00 Lunch break
14:00 - 14:45 Keynote talk 2: SALSA and PICANTE: Machine learning-based attacks on LWE with sparse binary secrets
[with WAC6 workshop]
Kristin Lauter, Meta AI, USA
15:00 - 16:00 Keynote talk 3: Automated Cryptographically-Secure Private Computing: From Logic and Mixed-Protocol Optimization to Centralized and Federated ML Customization
Farinaz Koushanfar, University of California San Diego, USA
15:25 - 15:55 Coffee break (Overlap with Keynote 3)
16:00 - 16:30 A Cipher-Agnostic Neural Training Pipeline with Automated Finding of Good Input Differences
Emanuele Bellini, David Gerault, Anna Hambitzer, and Matteo Rossi
16:30 - 17:00 Improve Dependable Security in Blockchain Consensus: An Interdisciplinary Approach of Reinforcement Learning and Mechanism Design
Xinyu Tian, Zesen Zhuang, and Luyao Zhang
17:00 - 17:05 Farewell

Organizers

Moti Yung

Principal Research Scientist

Google