Papers updated in the last 31 days (Page 2 of 455 results)

Last updated:  2025-06-02
Using the Schur Product to Solve the Code Equivalence Problem
Michele Battagliola, Rocco Mora, and Paolo Santini
Given two linear codes, the Code Equivalence Problem asks to find (if it exists) an isometry between them. A special case is the Permutation Equivalence Problem (PEP), where the isometry is a permutation. The hardness of PEP depends crucially on the hull of a code, that is, the intersection between a code and its dual. For random codes, with large probability the hull is trivial, i.e., has dimension $h = 0$: this allows for efficient attacks. However, when the hull is large enough, all known attacks take exponential time and PEP is deemed hard. In this paper we study how the so-called Schur product between linear codes can be employed to solve PEP. The main idea is to transform a given PEP instance by computing powers of the given codes. We show that, while squaring a pair of equivalent codes preserves the equivalence, the new pair of codes has a trivial hull with high probability. This allows us to identify many new weak instances of PEP, namely, whenever $h<\sqrt{2n}$. With some technical caveats, our solver runs in average polynomial time. As a concrete application, we consider the updatable encryption scheme proposed by Albrecht, Benčina and Lai at Eurocrypt 2025. All the recommended instances fall into the range of weak PEP instances we identify in this paper, and hence are susceptible to our attack. We successfully recover the secret permutation for one of the instances claiming 128 bits of security. As a fix, instances with hull dimension $h>\sqrt{2n}$ should be employed.
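To make the two central objects concrete, the following toy sketch (not the paper's solver; function names are ours) computes the hull dimension and the Schur square of a small random binary code with plain GF(2) linear algebra. For a random code the hull is typically already trivial; the paper's interest is in codes with large hulls.

```python
# Toy GF(2) linear algebra: hull dimension and Schur square of a binary code.
# Illustrative only -- not the paper's solver.
import numpy as np
from itertools import combinations_with_replacement

def rref_gf2(M):
    """Row-reduce a binary matrix over GF(2); returns (reduced matrix, rank)."""
    M = M.copy() % 2
    rank, rows, cols = 0, M.shape[0], M.shape[1]
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]
        rank += 1
    return M, rank

def hull_dimension(G):
    """dim(C ∩ C^perp) = dim(C) - rank(G G^T) over GF(2)."""
    k = rref_gf2(G)[1]
    return k - rref_gf2((G @ G.T) % 2)[1]

def schur_square(G):
    """Generator matrix of C*C, spanned by componentwise products of rows."""
    prods = [G[i] * G[j] for i, j in combinations_with_replacement(range(len(G)), 2)]
    R, rank = rref_gf2(np.array(prods))
    return R[:rank]

rng = np.random.default_rng(1)
G = rng.integers(0, 2, size=(6, 20))            # a random [20, <=6] binary code
print("hull dim of C:  ", hull_dimension(G))    # typically 0 for a random code
print("hull dim of C*C:", hull_dimension(schur_square(G)))
```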
Last updated:  2025-06-02
Improved differential cryptanalysis of SPEEDY
Tim Beyne and Addie Neyt
SPEEDY is a family of lightweight block ciphers designed by Leander et al. Several differential attacks have been reported on the SPEEDY variants. However, nearly all of these attacks are based on differential characteristics whose probabilities differ from their reported values. These discrepancies arise from incorrect calculations of the (key-averaged) probability, particularly for consecutive steps within one round without an intermediate key addition. In this paper, we revisit all reported differential characteristics and accurately calculate their key-averaged probabilities using quasidifferential trails. We extend this to also estimate the fixed-key probability. Our analysis reveals several characteristics with zero or significantly altered probability, invalidating several proposed attacks. We further implement a search algorithm and find a 5.5-round differential distinguisher that can be used to mount a full-round key-recovery attack with a data complexity of $2^{183}$ and a time complexity of $2^{185}$. The memory complexity varies: in the chosen-plaintext setting, it is $2^{156}$, whereas in the chosen-ciphertext setting, it is $2^{36}$.
Last updated:  2025-06-02
Leader Election with Poly-logarithmic Communication Per Party
Amey Bhangale, Chen-Da Liu-Zhang, Julian Loss, Kartik Nayak, and Sravya Yandamuri
The leader election problem requires a set of $n$ parties, out of which up to $t$ can be Byzantine, to elect a leader uniformly at random such that no two parties disagree on the elected leader and an honest leader is elected with constant probability. The Scalable Leader Election protocol published at SODA 2006 is an important breakthrough, solving this problem efficiently for all but a $o(1)$ fraction of the parties. Its authors achieve a protocol for $t < (\frac{1}{3} - \epsilon)n$ (for $\epsilon = o(1)$) in the full-information setting such that every party sends only $\mathrm{polylog}(n)$ bits. In this paper, we revisit their work and show that there are subtleties in the protocol that are not dealt with in the analysis. In particular, two mechanisms related to ``silencing'' parties and dealing with ``bad nodes'' are at odds with each other, which is why the existing analysis is insufficient. We present these concerns in detail and subsequently present a modification to their protocol with a corresponding analysis that solves leader election with the desired metrics.
Last updated:  2025-06-02
Efficient and Generic Methods to Achieve Active Security in Private Information Retrieval and More Advanced Database Search
Reo Eriguchi, Kaoru Kurosawa, and Koji Nuida
Motivated by secure database search, we present secure computation protocols for a function $f$ in the client-servers setting, where a client can obtain $f(x)$ on a private input $x$ by communicating with multiple servers each holding $f$. Specifically, we propose generic compilers from passively secure protocols, which only guarantee security against servers that follow the protocol, to actively secure protocols, which guarantee privacy and correctness even against malicious servers. Our compilers apply to protocols computing any class of functions, and are efficient in that the overheads in communication and computational complexity are only polynomial in the number of servers, independent of the complexity of the functions. We then apply our compilers to obtain concrete actively secure protocols for various functions including private information retrieval (PIR), bounded-degree multivariate polynomials and constant-depth circuits. For example, our actively secure PIR protocols achieve exponentially better computational complexity in the number of servers than the currently best-known protocols. Furthermore, our protocols for polynomials and constant-depth circuits reduce the required number of servers compared to the previous actively secure protocols. In particular, our protocol instantiated from the sparse Learning Parity with Noise (LPN) assumption is the first actively secure protocol for multivariate polynomials which achieves the minimum number of servers, without assuming fully homomorphic encryption.
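As background on the client-servers setting, here is the classic two-server XOR-based PIR scheme in a few lines of Python; it is only passively secure and is not the authors' compiler or any of their protocols.

```python
# Classic two-server XOR-based PIR (passively secure; textbook, not the
# authors' protocols). Both servers hold the same database of bits.
import secrets

def client_queries(n, i):
    """Secret-share the point query: two vectors differing only at index i."""
    q0 = [secrets.randbits(1) for _ in range(n)]
    q1 = q0.copy()
    q1[i] ^= 1
    return q0, q1

def server_answer(db, q):
    """XOR of the database bits selected by the query vector."""
    acc = 0
    for bit, sel in zip(db, q):
        acc ^= bit & sel
    return acc

db = [1, 0, 1, 1, 0, 0, 1, 0]
i = 5
q0, q1 = client_queries(len(db), i)
a0, a1 = server_answer(db, q0), server_answer(db, q1)
assert a0 ^ a1 == db[i]   # each query alone is uniformly random, hiding i
```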
Last updated:  2025-06-02
List Decoding in Private Information Retrieval: Formal Definition and Efficient Constructions
Reo Eriguchi, Kaoru Kurosawa, and Koji Nuida
Multi-server Private Information Retrieval (PIR) is a cryptographic primitive that allows a client to retrieve an item of a database shared by multiple servers without revealing the index. This paper addresses the problem of error correction in multi-server PIR, enabling the client to obtain the item even when some servers provide incorrect responses. In a dishonest-majority setting where the majority of servers may introduce errors, it is known that the client can no longer uniquely determine the correct value. Previous approaches in this setting have typically settled for relaxed guarantees that the client can only reject incorrect values. However, these guarantees are substantially weaker, as they only indicate the presence of errors without providing any information about the desired item. In this paper, we explore a more natural alternative called list-decodable PIR, which ensures that the client receives a small list of candidate values, one of which is correct. We provide the first formal definition of list-decodable PIR and study its basic properties, including a fundamental lower bound on the number of servers and the difficulty of simulation-based security definitions. We propose generic constructions of list-decodable PIR from any semi-honest PIR protocols, each offering different trade-offs. Our constructions improve upon the communication complexity of the only previously known protocol satisfying our definition. Furthermore, they achieve communication complexity comparable to that of the currently best known semi-honest PIR protocols.
Last updated:  2025-06-02
Zero-Knowledge Polynomial Commitment in Binary Fields
Benjamin E. Diamond
In recent work, Diamond and Posen ('24) introduce a polynomial commitment scheme for large binary fields, adapting BaseFold (CRYPTO '24). In this note, we devise a zero-knowledge variant of Diamond and Posen's scheme. Our construction reprises a few ideas from Aurora (EUROCRYPT '19). We carry those ideas through in characteristic 2, and moreover show that they are compatible with BaseFold.
Last updated:  2025-06-02
Unveiling Privacy Risks in Quantum Optimization Services
Mateusz Leśniak, Michał Wroński, Ewa Syta, and Mirosław Kutyłowski
As cloud-based quantum computing services, such as those offered by D-Wave, become more popular for practical applications, privacy-preserving methods (such as obfuscation) are essential to address data security, privacy, and legal compliance concerns. Several efficient obfuscation methods for quantum optimization problems have been proposed that do not increase the time complexity of solving the obfuscated problem. These include {\em sign reversing}, {\em variable permutation}, and the combination of both methods, assumed to provide greater protection. Unfortunately, sign reversing has already been shown to be insecure. We present two attacks, on variable permutation and on the combined method, that efficiently recover the deobfuscated problem, particularly when given access to the obfuscated problem and its obfuscated solution, as a cloud-based quantum provider would have. Our attacks are set in the context of an optimization problem of cryptanalysis of the Trivium cipher family, but our approach generalizes to other similarly structured problems. Our attacks are efficient and practical. Deobfuscating an optimization problem with $n$ variables obfuscated with the combined method has a complexity of $O(n^2)$, compared to the complexity of $O\left( n \cdot n! \cdot 2^n \right)$ of a brute-force attack. We provide an implementation of our attack; using a commodity laptop, our attack on the full Trivium cipher takes less than two minutes if optimized. We also present possible countermeasures to mitigate our attacks and bring attention to the need for further development in this area.
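The sketch below illustrates the two obfuscation transforms named above (sign reversing and variable permutation) on a toy Ising instance and checks that a solution of the obfuscated problem maps back to the original; the encoding is our own and this is not the paper's attack code.

```python
# Toy illustration of the two obfuscation transforms for an Ising problem
#   H(s) = sum_{i<j} J[i,j] s_i s_j + sum_i h[i] s_i,  s_i in {-1,+1}.
# Our own encoding, for illustration; not the paper's attack code.
import numpy as np

def sign_reverse(J, h, flips):
    """Relabel s_i -> f_i s_i: J'_{ij} = f_i f_j J_{ij}, h'_i = f_i h_i."""
    f = np.where(flips, -1, 1)
    return J * np.outer(f, f), h * f

def permute(J, h, perm):
    """Relabel the variables by the permutation perm."""
    P = np.eye(len(h), dtype=int)[perm]
    return P @ J @ P.T, h[perm]

def energy(J, h, s):
    return s @ np.triu(J, 1) @ s + h @ s

rng = np.random.default_rng(0)
n = 6
J = np.triu(rng.integers(-3, 4, (n, n)), 1)
J = J + J.T                                   # symmetric, zero diagonal
h = rng.integers(-3, 4, n)

flips, perm = rng.integers(0, 2, n).astype(bool), rng.permutation(n)
J_obf, h_obf = permute(*sign_reverse(J, h, flips), perm)

# Solutions of the obfuscated problem map back to the original one:
s = rng.choice([-1, 1], n)
s_obf = (np.where(flips, -1, 1) * s)[perm]
assert energy(J, h, s) == energy(J_obf, h_obf, s_obf)
```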
Last updated:  2025-06-02
How to Make Any Computational Secret Sharing Scheme Adaptively Secure
George Lu and Brent Waters
Secret sharing (SS) is a foundational cryptographic primitive with diverse applications, including secure multiparty computation and conditional disclosure of secrets. While traditional schemes have primarily emphasized information-theoretic security, recent advancements have increasingly leveraged computational assumptions to achieve more efficient constructions and support broader access policies. Despite these successes, most existing computational secret sharing (CSS) schemes are limited to a static security model, where adversaries must commit to their choice of corrupted participants at the outset. A critical challenge in CSS lies in achieving adaptive security, where adversaries can dynamically select participants to corrupt, better reflecting real-world threat models. In this paper, we present a novel transformation that converts any statically secure CSS scheme into an adaptively secure one while preserving the original access policy and computational assumptions, providing a framework for bridging the gap between static and adaptive security. Our construction introduces a multiplicative share size overhead of $O(n^2)$ where $n$ is the number of parties. Additionally, we explore trade-offs in efficiency and security, offering more efficient adaptive CSS constructions for specific, restricted policy classes. This work addresses key limitations in the current landscape of CSS and paves the way for broader adoption of adaptively secure secret sharing in cryptographic applications.
Last updated:  2025-06-02
Registered ABE and Adaptively-Secure Broadcast Encryption from Succinct LWE
Jeffrey Champion, Yao-Ching Hsieh, and David J. Wu
Registered attribute-based encryption (ABE) is a generalization of public-key encryption that enables fine-grained access control to encrypted data (like standard ABE), but without needing a central trusted authority. In a key-policy registered ABE scheme, users choose their own public and private keys and then register their public keys together with a decryption policy with an (untrusted) key curator. The key curator aggregates all of the individual public keys into a short master public key which serves as the public key for an ABE scheme. Currently, we can build registered ABE for restricted policies (e.g., Boolean formulas) from pairing-based assumptions and for general policies using witness encryption or indistinguishability obfuscation. In this work, we construct a key-policy registered ABE for general policies (specifically, bounded-depth Boolean circuits) from the $\ell$-succinct learning with errors (LWE) assumption in the random oracle model. The ciphertext size in our registered ABE scheme is $\mathsf{poly}(\lambda, d)$, where $\lambda$ is a security parameter and $d$ is the depth of the policy circuit $C$. Notably, this is independent of the length of the attribute $\mathbf{x}$ and is optimal up to the $\mathsf{poly}(d)$ factor. Previously, the only lattice-based instantiation of registered ABE used witness encryption, which relies on private-coin evasive LWE, a stronger assumption than $\ell$-succinct LWE. Moreover, the ciphertext size in previous registered ABE schemes that support general policies (i.e., from obfuscation or witness encryption) scales with $\mathsf{poly}(\lambda, |\mathbf{x}|, |C|)$. The ciphertext size in our scheme depends only on the depth of the circuit (and not the length of the attribute or the size of the policy). This enables new applications to identity-based distributed broadcast encryption. Our techniques are also useful for constructing adaptively-secure (distributed) broadcast encryption, and we give the first scheme from the $\ell$-succinct LWE assumption in the random oracle model. Previously, the only lattice-based broadcast encryption scheme with adaptive security relied on witness encryption in the random oracle model. All other lattice-based broadcast encryption schemes only achieved selective security.
Last updated:  2025-06-02
Improved Private Simultaneous Messages Protocols for Symmetric Functions with Universal Reconstruction
Koji Nuida
Private Simultaneous Messages (PSM) is a kind of secure multiparty computation with a minimal interaction pattern and minimal security requirements. A PSM protocol is said to have universal reconstruction for a given function family if the algorithm of the referee (the output party) is independent of the function to be computed and the referee cannot infer the function from a protocol execution. In recent work, Eriguchi and Shinagawa (EUROCRYPT 2025) proposed a compiler to obtain a PSM protocol for symmetric functions from PSM protocols with universal reconstruction for symmetric functions with smaller domains. They also constructed the latter PSM protocols with universal reconstruction, by which the former PSM protocol achieves communication complexity better than previously known protocols. In this paper, we construct the latter PSM protocols with universal reconstruction for symmetric functions more efficiently; the communication complexity is exponentially (in the input range) smaller than that of the protocols by Eriguchi and Shinagawa. As a consequence, we also obtain a PSM protocol (and also an ad-hoc PSM protocol and a robust PSM protocol) for symmetric functions that is more efficient than their protocol. Technically, a main ingredient of their protocols is a linear and injective encoding of histograms of the input elements, and our improvement is realized by finding a more efficient encoding of the histograms.
Last updated:  2025-06-01
Attribute-Based Encryption for Circuits of Unbounded Depth from Lattices: Garbled Circuits of Optimal Size, Laconic Functional Evaluation, and More
Yao-Ching Hsieh, Huijia Lin, and Ji Luo
Although we have known about fully homomorphic encryption (FHE) from circular security assumptions for over a decade [Gentry, STOC ’09; Brakerski–Vaikuntanathan, FOCS ’11], there is still a significant gap in understanding related homomorphic primitives supporting all *unrestricted* polynomial-size computations. One prominent example is attribute-based encryption (ABE). The state-of-the-art constructions, relying on the hardness of learning with errors (LWE) [Gorbunov–Vaikuntanathan–Wee, STOC ’13; Boneh et al., Eurocrypt ’14], only accommodate circuits up to a *predetermined* depth, akin to leveled homomorphic encryption. In addition, their components (master public key, secret keys, and ciphertexts) have sizes polynomial in the maximum circuit depth. Even in the simpler setting where a single key is published (or a single circuit is involved), the depth dependency persists, showing up in constructions of 1-key ABE and related primitives, including laconic function evaluation (LFE), 1-key functional encryption (FE), and reusable garbling schemes. So far, the only approach to eliminating depth dependency relies on indistinguishability obfuscation. An interesting question that has remained open for over a decade is whether the circular security assumptions enabling FHE can similarly benefit ABE. In this work, we introduce new lattice-based techniques to overcome the depth-dependency limitations. - Relying on a circular security assumption, we construct LFE, 1-key FE, 1-key ABE, and reusable garbling schemes capable of evaluating circuits of unbounded depth and size. - Based on the *evasive circular* LWE assumption, a stronger variant of the recently proposed *evasive* LWE assumption [Wee, Eurocrypt ’22; Tsabary, Crypto ’22], we construct full-fledged ABE and predicate encryption (PE) schemes for circuits of unbounded depth and size. Our LFE, 1-key FE, and reusable garbling schemes achieve almost optimal succinctness (up to polynomial factors in the security parameter). Their ciphertexts and input encodings have sizes linear in the input length, while function digests, secret keys, and garbled circuits have constant sizes independent of circuit parameters (for Boolean outputs). In fact, this gives the first constant-size garbled circuits without relying on indistinguishability obfuscation. Our ABE and PE schemes offer short components, with master public key and ciphertext sizes linear in the attribute length and constant-size secret keys.
Last updated:  2025-06-01
Silentium: Implementation of a Pseudorandom Correlation Generator for Beaver Triples
Vincent Rieder
Secure Multi-Party Computation is a privacy-enhancing technology that allows several parties to securely compute on distributed private data. In the line of the well-established SPDZ protocol, by far the most expensive task is the generation of Beaver triples in the so-called offline phase. Silentium is our implementation of an actively secure offline phase in the form of a Pseudorandom Correlation Generator for Beaver triples (Bt-PCG, Boyle et al. CRYPTO 2020), which, like any PCG, is designed to have low communication. Compared to previous offline phases, their Bt-PCG reduces the communication costs by three orders of magnitude. However, so far its efficiency had only been estimated. With Silentium, we demonstrate that their Bt-PCG can achieve even better running times than state-of-the-art offline-phase implementations in the MP-SPDZ library. To actually achieve such performance, Silentium comprises a systematic parallelization strategy and implementation-friendly decompositions of the Bt-PCG into structured modules. Looking ahead to large-scale applications on the cloud, Silentium is designed to be versatile enough to support hardware acceleration in the future.
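For context on what the offline phase produces, here is a minimal sketch of how a Beaver triple is consumed in the SPDZ-style online phase to multiply two secret-shared values; the two-party additive-sharing encoding is our own simplification, not Silentium itself.

```python
# How a Beaver triple (a, b, c = a*b) from the offline phase is consumed to
# multiply two additively shared values in the online phase. Two parties,
# shares mod a prime; a toy simplification, not Silentium.
import secrets

P = 2**61 - 1

def share(v):
    r = secrets.randbelow(P)
    return (r, (v - r) % P)

def open_(sh):
    return (sh[0] + sh[1]) % P

def mul_with_triple(x_sh, y_sh, a_sh, b_sh, c_sh):
    d = (x_sh[0] - a_sh[0] + x_sh[1] - a_sh[1]) % P   # parties open d = x - a
    e = (y_sh[0] - b_sh[0] + y_sh[1] - b_sh[1]) % P   # parties open e = y - b
    z0 = (c_sh[0] + d * b_sh[0] + e * a_sh[0] + d * e) % P  # party 0 adds d*e
    z1 = (c_sh[1] + d * b_sh[1] + e * a_sh[1]) % P
    return (z0, z1)

a, b = secrets.randbelow(P), secrets.randbelow(P)
x, y = 1234, 5678
z_sh = mul_with_triple(share(x), share(y), share(a), share(b), share(a * b % P))
assert open_(z_sh) == x * y % P   # z = c + d*b + e*a + d*e = x*y
```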
Last updated:  2025-06-01
Nearly Optimal Parallel Broadcast in the Plain Public Key Model
Ran Gelles, Christoph Lenzen, Julian Loss, and Sravya Yandamuri
Parallel Byzantine broadcast (PBC), also known as Interactive Consistency, is a fundamental problem in distributed computing and cryptography which asks that all parties reliably distribute a message to all other parties. We give the first communication-efficient protocol for PBC in the model with plain public keys (i.e., no trusted dealer) which achieves security against an adaptive adversary that can corrupt up to $t<n/2$ parties. Our protocol runs in total communication complexity $O(n^2\ell\log(n)+n\kappa^2\log^4(n))$ bits to succeed with probability $1-2^{-\kappa}$, where $\ell$ is the length of a message. All prior protocols either rely on a trusted setup or require at least $O(n^3)$ communication complexity. As a stepping stone, we present a binary consensus protocol with the same resilience and success probability that sends $O(n^2\kappa\log(n)+n\kappa^2\log^3(n))$ bits. We achieve these results based on a highly efficient gossip procedure that implements echo operations at low cost, and might prove useful in deriving further efficient protocols relying on simple cryptographic tools.
Last updated:  2025-06-01
Two-Round 2PC ECDSA at the Cost of 1 OLE
Michael Adjedj, Constantin Blokh, Geoffroy Couteau, Arik Galansky, Antoine Joux, and Nikolaos Makriyannis
We present a novel protocol for two-party ECDSA that achieves two rounds (a single back-and-forth communication) at the cost of a single oblivious linear function evaluation (OLE). In comparison, the previous work of Boneh et al.~(EUROCRYPT 2025) achieves two rounds but requires expensive zero-knowledge proofs on top of the OLE. We prove the security of our protocol by showing that, in the generic group model, any adversary capable of generating forgeries for our protocol can be transformed into an adversary that finds preimages for the ECDSA message digest function (e.g., the SHA family). Interestingly, our analysis is closely related to, and has ramifications for, the `presignatures' mode of operation---Canetti et al.~(CCS 2020), Groth and Shoup (EUROCRYPT 2022). Motivated by applications to embedded cryptocurrency wallets, where a single server maintains distinct, shared public keys with separate clients (i.e., a star-shaped topology), and with the goal of minimizing communication, we instantiate our protocol using Paillier encryption and suitable zero-knowledge proofs. To reduce computational overhead, we thoroughly optimize all components of our protocol under sound cryptographic assumptions, specifically small-exponent variants of RSA-style assumptions. Finally, we implement our protocol and provide benchmarks. At the 128-bit security level, the signing phase requires approximately 50ms of computation time on a standard Linux machine, and 2KB of bandwidth.
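As textbook background (not the paper's protocol), here are the standard ECDSA signing equation and the OLE functionality referred to above:

```latex
% Standard ECDSA signing, for secret key sk, nonce k, digest H(m), and a
% group of order q with generator G:
\[
  R = k \cdot G, \qquad r = R_x \bmod q, \qquad
  s = k^{-1}\bigl(H(m) + r \cdot \mathrm{sk}\bigr) \bmod q.
\]
% An OLE evaluates f_{a,b}(x) = a x + b: the receiver, holding x, learns
% a x + b and nothing else about (a, b). When k and sk are secret-shared
% between two parties, it is exactly such cross terms in s that an OLE can
% compute without revealing either party's input.
```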
Last updated:  2025-06-01
Adaptive TDFs from Injective TDFs
Xinyu Mao and Hongxu Yi
Adaptive trapdoor functions (ATDFs) and tag-based ATDFs (TB-ATDFs) are variants of trapdoor functions proposed by Kiltz, Mohassel, and O’Neill (EUROCRYPT 2010). They are both sufficient for constructing chosen-ciphertext secure public-key encryption (CCA-secure PKE), and their definitions are closely related to CCA-secure PKE. Hohenberger, Koppula, and Waters (CRYPTO 2020) showed that CCA-secure PKE can be constructed from injective TDFs; however, the relations among TDF, ATDF, and TB-ATDF remain unclear. We provide black-box constructions of ATDFs and TB-ATDFs from injective TDFs, answering the question posed by Kiltz, Mohassel, and O’Neill (EUROCRYPT 2010). Our results indicate that ATDF, TB-ATDF, and TDF are equivalent under mild restrictions.
Last updated:  2025-05-31
How Does Satoshi Set His Clock? Full Analysis of Nakamoto Consensus
Juan A. Garay, Aggelos Kiayias, and Nikos Leonardos
Nakamoto consensus, arguably the most exciting development in decentralized computation in the last few years, is in a sense a recasting of the traditional state machine replication problem in an unauthenticated setting, where furthermore parties come and go without warning. The protocol relies on a cryptographic primitive known as proof of work (PoW) which is used to throttle message passing. Importantly, the PoW difficulty level is appropriately adjusted throughout the course of the protocol execution relying on the blockchain’s timekeeping ability. While the original formulation was only accompanied by rudimentary analysis, significant and steady progress has been made in abstracting the protocol’s properties and providing a formal analysis under various restrictions and protocol simplifications. Still, a full analysis of the protocol that includes its PoW target value recalculation and, notably, its timestamp adjustment mechanism, which equip it to operate in its intended setting of bounded communication delays, imperfect clocks and dynamic participation, has remained open. (Specifically, the protocol allows incoming block timestamps in the near future, as determined by a protocol parameter, and rejects blocks whose timestamp precedes the median time of a specific number of the most recent on-chain blocks [namely, 11].) The gap is that Nakamoto’s protocol fundamentally depends on the blockchain itself being a consistent timekeeper that advances roughly on par with real time. In order to tackle this question we introduce a new analytical tool we call `hot-hand executions,' which captures the regular occurrence of high concentrations of honestly generated blocks, and correspondingly put forth and prove a new blockchain property called `concentrated chain quality,' which may be of independent interest. Utilizing these tools and techniques we demonstrate that Nakamoto’s protocol achieves, under suitable conditions, consistency, liveness, as well as (consistent) timekeeping.
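The parenthetical above describes timestamp rules matching Bitcoin's; the small sketch below uses Bitcoin's usual parameters (median of the last 11 block timestamps, a bounded allowance into the future) and is illustrative only.

```python
# Sketch of the two timestamp rules, with Bitcoin's usual parameters
# (illustrative only): a new block's timestamp must exceed the median of
# the last 11 block timestamps and may run only boundedly into the future.
from statistics import median

MEDIAN_WINDOW = 11
MAX_FUTURE_DRIFT = 2 * 60 * 60   # seconds a timestamp may lie in the future

def timestamp_valid(new_ts, ancestor_timestamps, local_time):
    """ancestor_timestamps: timestamps of the chain so far, most recent last."""
    past_ok = new_ts > median(ancestor_timestamps[-MEDIAN_WINDOW:])
    future_ok = new_ts <= local_time + MAX_FUTURE_DRIFT
    return past_ok and future_ok

chain = [100, 160, 220, 275, 340, 400, 460, 520, 580, 640, 700]
print(timestamp_valid(660, chain, local_time=710))   # True: above median (400)
print(timestamp_valid(350, chain, local_time=710))   # False: behind the median
```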
Last updated:  2025-05-31
UPKE and UKEM Schemes from Supersingular Isogenies
Pratima Jana and Ratna Dutta
Forward-secure public key encryption (FS-PKE) is a key-evolving public-key paradigm that ensures the confidentiality of past encryptions even if the secret key is compromised at some later point in time. However, existing FS-PKE schemes are considerably complex and less efficient than standard public-key encryption. Updatable public-key encryption (UPKE), introduced by Jost et al. (Eurocrypt 2019), was designed to achieve forward security in secure group messaging while maintaining efficiency. However, existing UPKE constructions either lack post-quantum security or do not support an unbounded number of updates. We focus on isogeny-based cryptosystems due to their suitability for handling an unbounded number of updates in long-term secure messaging. Existing isogeny-based UPKE schemes lack strong security guarantees and formal security proofs; they do not support asynchronous key updates and require sender-receiver coordination. In this work, we present two isogeny-based UPKE schemes. The first scheme, UhPKE, extends Moriya et al.’s hash-based public key encryption scheme hPKE to support key updates, while the second scheme, USimS, is an updatable version of Fouotsa et al.’s public key encryption scheme simplified SiGamal (SimS). The scheme UhPKE relies on the commutative supersingular isogeny Diffie-Hellman (CSSIDH) assumption and achieves indistinguishability under chosen randomness and chosen plaintext attack (IND-CR-CPA). The scheme USimS derives its security from the hardness of the CSSIDH problem and the commutative supersingular isogeny knowledge of exponent (CSSIKoE) problem. It is the first isogeny-based UPKE scheme that achieves indistinguishability under chosen randomness and chosen ciphertext attack (IND-CR-CCA). The security of UhPKE and USimS is established by proving that their underlying schemes, hPKE and SimS, are circular secure and leakage resilient (CS+LR). We emphasize that our constructions support an unlimited number of key updates while retaining the efficiency of their underlying public key encryption schemes. Moreover, the proposed UPKEs enable asynchronous key updates, allowing senders to update the public key independently. In addition, UhPKE and USimS offer improved storage, computation and communication efficiency compared to existing UPKE schemes. Furthermore, we extend and refine the security notion of the updatable key encapsulation mechanism (UKEM), introduced by Haidar et al. (Asiacrypt 2023), from a bounded to an unbounded number of updates. We present the first post-quantum secure UKEM that does not rely on zero-knowledge proofs. More precisely, we introduce two UKEM schemes which are the first of their kind in the isogeny setting. Our first scheme, UKEM1, is derived from our UhPKE and achieves IND-CR-CPA security. Our second construction, UKEM2, is based on our USimS scheme and achieves IND-CR-CCA security. We prove the security of our UKEMs in our proposed enhanced framework that supports an unbounded number of key updates. Like our UPKEs, our UKEMs not only support unlimited key updates but also enable independent encapsulation and decapsulation key updates without requiring sender-receiver synchronization.
Both UKEM1 and UKEM2 exhibit compact storage and communication costs with minimal-size ciphertexts, while their computational efficiency differs in decapsulation and key updates: UKEM2 incurs an additional discrete logarithm computation in the decapsulation phase, but offers stronger IND-CR-CCA security, in contrast to UKEM1, which is IND-CR-CPA secure.
Last updated:  2025-05-31
Adaptively Secure Three-Round Threshold Schnorr Signatures from DDH
Renas Bacho, Sourav Das, Julian Loss, and Ling Ren
Threshold signatures are one of the most important cryptographic primitives in distributed systems. Of particular interest is the threshold Schnorr signature, a pairing-free signature with efficient verification, compatible with the standardized (non-threshold) EdDSA signature scheme. However, most threshold Schnorr signatures have only been proven secure against a static adversary, which has to declare its corruptions before the protocol execution. Many existing adaptively secure constructions require either secure erasures or non-standard assumptions, such as the algebraic group model or the hardness of the algebraic one-more discrete logarithm problem. The latest adaptively secure threshold Schnorr signature schemes under standard assumptions require five rounds of communication to create a single signature, limiting their practicality. In this work, we present Gargos, a three-round, adaptively secure threshold Schnorr signature scheme based on the hardness of the decisional Diffie-Hellman (DDH) problem in the random oracle model (ROM). Our protocol supports the full corruption threshold $t < n$, where $t$ is the signing threshold and $n$ is the total number of signers. We achieve our result with an enhanced proof technique that enables us to eliminate two rounds of communication from the recent Glacius scheme (Eurocrypt 2025). We believe our techniques are of independent interest to further research on improving the round complexity of multi-party signing protocols.
Last updated:  2025-05-31
Reviving a Grover based Quantum Secret Sharing Scheme
Debajyoti Bera and Santanu Majhi
Secret-sharing schemes allow a dealer to split a secret into multiple “shares” and distribute them individually among many parties while mandating certain constraints on its reconstruction. Such protocols are usually executed over a secure communication channel, since an eavesdropper who intercepts all the shares is expected to be able to reconstruct the secret. Leveraging the unique properties of quantum channels, several quantum protocols have been designed for secret sharing. However, almost all of them detect the presence of an eavesdropper by statistical analysis of the outcomes of multiple rounds, or simply require a secure channel of communication. We mathematically analyse the correctness and security properties of a quantum-search-based secret-sharing framework proposed by Hsu (2003), and attacked by Hao et al. (2010), which was put forward as an alternative that works over public channels and does not require multiple rounds. We show how to improve the original protocol to be more resistant to eavesdropping and other attacks; however, we also prove that complete security against an eavesdropper is not possible in this framework. Our tight characterization will be helpful for the construction of more quantum secret sharing schemes based on the same framework.
Last updated:  2025-05-31
Quantum-resistant secret handshakes with dynamic joining, leaving, and banishment: GCD revisited
Olivier Blazy, Philippe Gaborit, Philippe Krejci, and Cristina Onete
Secret handshakes, introduced by Balfanz et al. [3], allow users associated with various groups to determine if they share a common affiliation. These protocols ensure crucial properties such as fairness (all participants learn the result simultaneously), affiliation privacy (failed handshakes reveal no affiliation information), and result-hiding (even participants within a shared group cannot infer outcomes of unrelated handshakes). Over time, various secret-handshake schemes have been proposed, with a notable advancement being the modular framework by Tsudik and Xu. Their approach integrates three key components: group signature schemes, centralized secure channels for each group, and decentralized group key-agreement protocols. Building upon this modularity, we propose significant updates. By addressing hidden complexities and revising the security model, we enhance both the efficiency and the privacy guarantees of the protocol. Specifically, we achieve the novel property of Self distinction—the ability to distinguish between two users in a session without revealing their identities—by replacing the group signature primitive with a new construct, the List MAC. This primitive is inherently untraceable, necessitating adjustments to the original syntax to support stronger privacy guarantees. Consequently, we introduce the Traitor Catching paradigm, where the transcript of a handshake reveals only the identity of a traitor, preserving the anonymity of all other participants. To showcase the flexibility and robustness of our updated framework, we present two post-quantum instantiations (a hash-based one and another based on lattices). Our approach not only corrects prior limitations but also establishes a new benchmark for privacy and security in secret handshakes.
Last updated:  2025-05-31
Scalable Multiparty Computation from Non-linear Secret Sharing
Sanjam Garg, Abhishek Jain, Pratyay Mukherjee, and Mingyuan Wang
A long line of work has investigated the design of scalable secure multiparty computation (MPC) protocols with computational and communication complexity independent of the number of parties (beyond any dependence on the circuit size). We present the first unconditionally-secure MPC protocols for arithmetic circuits over {\em large fields} with total computation $\mathcal{O}(|C|\log|F|)$, where $|C|$ and $|F|$ denote the circuit and field size, respectively. Prior work either achieved similar complexity only in {\em communication}, required highly structured circuits, or relied on expensive circuit transformations. To obtain our results, we depart from the prior approach of share packing in linear secret-sharing schemes; instead, we use an ``unpacking'' approach via {\em non-linear} secret sharing.
Last updated:  2025-05-31
TEAKEX: TESLA-Authenticated Group Key Exchange
Qinyi Li, Lise Millerjord, and Colin Boyd
We present a highly efficient authenticated group key exchange protocol, TEAKEX, using only symmetric key primitives. Our protocol provides proven strong security, including forward secrecy, post-compromise security, and post-quantum security. For online applications we claim that TEAKEX is much simpler and more efficient than currently available alternatives. As part of our construction we also give a new abstract security definition for delayed authentication and describe its instantiation with the TESLA protocol.
Last updated:  2025-05-30
Glitch-Stopping Circuits: Hardware Secure Masking without Registers
Zhenda Zhang, Svetla Nikova, and Ventzislav Nikov
Masking is one of the most popular countermeasures to protect implementations against power and electromagnetic side-channel attacks, because it offers provable security. Masking was shown secure against $d$-threshold-probing adversaries by Ishai et al. at CRYPTO'03, but this adversarial model does not consider any physical hardware defaults, and such masking schemes were shown to be still vulnerable when implemented as hardware circuits. To address these limitations, glitch-extended probing adversaries and, correspondingly, glitch-immune masking schemes have been introduced. This paper introduces glitch-stopping circuits which, when instantiated with registers, coincide with circuits protected via glitch-immune masking. We then show that one can instantiate glitch-stopping circuits without registers by using clocked logic gates or latches. This is illustrated for both ASIC and FPGA, offering a promising alternative to conventional register-based masked implementations. Compared to the traditional register-based approach, these register-free solutions can reduce the latency to a single cycle and achieve a lower area cost. We prove and experimentally confirm that the proposed solution is as secure as the register-based one. In summary, this paper proposes a novel method to address the latency of register-based hardware masking without jeopardising its security. This method not only reduces the latency down to one clock cycle, but also improves the area costs of the implementations.
Last updated:  2025-05-30
Rate-1 Non-Interactive Arguments for Batch-NP and Applications
Lalita Devadas, Rishab Goyal, Yael Kalai, and Vinod Vaikuntanathan
We present a rate-$1$ construction of a publicly verifiable non-interactive argument system for batch-$\mathsf{NP}$ (also called a BARG), under the LWE assumption. Namely, a proof corresponding to a batch of $k$ NP statements each with an $m$-bit witness, has size $m + \mathsf{poly}(\lambda,\log k)$. In contrast, prior work either relied on non-standard knowledge assumptions, or produced proofs of size $m \cdot \mathsf{poly}(\lambda,\log k)$ (Choudhuri, Jain, and Jin, STOC 2021, following Kalai, Paneth, and Yang 2019). We show how to use our rate-$1$ BARG scheme to obtain the following results, all under the LWE assumption in the standard model: - A multi-hop BARG scheme for $\mathsf{NP}$. - A multi-hop aggregate signature scheme. - An incrementally verifiable computation (IVC) scheme for arbitrary $T$-time deterministic computations with proof size $\mathsf{poly}(\lambda,\log T)$. Prior to this work, multi-hop BARGs were only known under non-standard knowledge assumptions or in the random oracle model; aggregate signatures were only known under indistinguishability obfuscation (and RSA) or in the random oracle model; IVC schemes with proofs of size $\mathsf{poly}(\lambda,T^{\epsilon})$ were known under a bilinear map assumption, and with proofs of size $\mathsf{poly}(\lambda,\log T)$ were only known under non-standard knowledge assumptions or in the random oracle model.
Last updated:  2025-05-30
On Factoring and Power Divisor Problems via Rank-3 Lattices and the Second Vector
Yiming Gao, Yansong Feng, Honggang Hu, and Yanbin Pan
We propose a deterministic algorithm based on Coppersmith's method that employs a rank-3 lattice to address factoring-related problems. An interesting aspect of our approach is that we utilize the second vector in the LLL-reduced basis to avoid trivial collisions in the Baby-step Giant-step method, rather than the shortest vector as is commonly used in the literature. Our results are as follows: - Compared to the result by Harvey and Hittmeir (Math. Comp. 91 (2022), 1367–1379), who achieved a complexity of $O\left( \frac{N^{1/5} \log^{16/5} N}{(\log \log N)^{3/5}} \right)$ for factoring a semiprime $N = pq$, we demonstrate that in the balanced $p$ and $q$ case, the complexity can be improved to $O\left( \frac{N^{1/5} \log^{13/5} N}{(\log\log N)^{3/5}} \right).$ - For factoring sums and differences of powers, i.e., numbers of the form $N = a^n \pm b^n$, we improve Hittmeir's result (Math. Comp. 86 (2017), 2947–2954) from $O(N^{1/4} \log^{3/2} N)$ to $O\left( {N^{1/5} \log^{13/5} N} \right).$ - For the problem of finding $r$-power divisors, i.e., finding all integers $p$ such that $p^r \mid N$, Harvey and Hittmeir (Proceedings of ANTS XV, Res. Number Theory 8 (2022), no.4, Paper No. 94) recently directly applied Coppersmith's method and achieved a complexity of $O\left(\frac{N^{1/4r} \log^{10+\epsilon} N}{r^3}\right)$. By using a faster LLL-type algorithm and sieving on small primes, we improve their result to $O\left(\frac{N^{1/4r} \log^{7+3\epsilon} N}{(\log\log N-\log 4r)r^{2+\epsilon}}\right)$. The worst-case running time of their algorithm occurs when $N = p^r q$ with $q = \Theta(N^{1/2})$. By focusing on this case and employing our rank-3 lattice approach, we achieve a complexity of $O\left(\sqrt{r} N^{1/4r} \log^{5/2} N \right).$ In conclusion, we offer a new perspective on these problems, which we hope will provide further insights.
Last updated:  2025-05-30
Rate-1 Statistical Non-Interactive Zero-Knowledge
Pedro Branco, Nico Döttling, and Akshayaram Srinivasan
We give the first construction of a rate-1 statistical non-interactive zero-knowledge argument of knowledge. For the $\mathsf{circuitSAT}$ language, our construction achieves a proof length of $|w| + |w|^\epsilon \cdot \mathsf{poly}(\lambda)$, where $w$ denotes the witness, $\lambda$ is the security parameter, $\epsilon$ is a constant less than 1, and $\mathsf{poly}(\cdot)$ is a fixed polynomial that is independent of the instance or the witness size. The soundness of our construction follows from the sub-exponential hardness of either the LWE assumption, or the $O(1)$-$\mathsf{LIN}$ assumption on prime-order groups with efficiently computable bilinear maps, or the DDH assumption. Previously, Gentry et al. (Journal of Cryptology, 2015) achieved NIZKs with statistical soundness and computational zero-knowledge with the aforementioned proof length by relying on the (circular-secure) Learning with Errors assumption.
Last updated:  2025-05-30
Detecting Rogue Decryption in (Threshold) Encryption via Self-Incriminating Proofs
James Hsin-yu Chiang, Bernardo David, Tore Kasper Frederiksen, Arup Mondal, and Esra Yeniaras
Keeping decrypting parties accountable in public key encryption is notoriously hard since the secret key owner can decrypt any arbitrary ciphertext. Threshold encryption aims to solve this issue by distributing the power to decrypt among a set of parties, who must interact via a decryption protocol. However, such parties can employ cryptographic tools such as Multiparty Computation (MPC) to decrypt arbitrary ciphertexts without being detected. We introduce the notion of (threshold) encryption with Self-Incriminating Proofs, where parties must produce a self-incriminating proof of decryption when decrypting every ciphertext. In the standard public key encryption case, the adversary could destroy these proofs, so we strengthen our notion to guarantee that the proofs are published when decryption succeeds. This creates a decryption audit trail, which is useful in scenarios where decryption power is held by a single trusted party (e.g., a Trusted Execution Environment) who must be kept accountable. In the threshold case, we ensure that at least one of the parties who execute the decryption protocol will learn a self-incriminating proof, even if they employ advanced tools such as MPC. The fact that a party learns the proof and may leak it at any moment functions as a deterrent for parties who do not wish to be identified as malicious decryptors (e.g., a commercial operator of a service based on threshold encryption). We investigate the (im)possibility and applications of our notions while providing matching constructions under appropriate assumptions. In the threshold case, we build on recent results on Individual Cryptography (CRYPTO 2023).
Last updated:  2025-05-30
Low-Latency Dynamically Available Total Order Broadcast
Sravya Yandamuri, Nibesh Shrestha, Luca Zanolini, and Kartik Nayak
This work addresses the problem of Byzantine Fault-Tolerant (BFT) Total-Order Broadcast (TOB) in a dynamically available setting, where parties can transition between online and offline states without knowing the number of active parties. Existing dynamically available protocols rely on a synchronous network assumption, which means their latency remains tied to the pessimistic network delay $\Delta$, even when the actual network delay is $\delta \ll \Delta$. This raises the question of whether a dynamically available BFT TOB protocol can maintain safety and liveness under synchrony while committing blocks at a rate closer to the actual network delay. We answer this question affirmatively by designing the first dynamically available BFT TOB protocol that commits blocks at a rate of $O(\Delta_{ideal})$, where $\Delta_{ideal} < 2\delta$.
Last updated:  2025-05-30
Constructing Committing and Leakage-Resilient Authenticated Encryption
Patrick Struck and Maximiliane Weishäupl
The main goal of this work is to construct authenticated encryption (AE) that is both committing and leakage-resilient. As a first approach, we consider generic composition, a well-known method for constructing AE schemes. While the leakage resilience of generic composition schemes has already been analyzed by Barwell et al. (Asiacrypt'17), this is not the case for committing security. We fill this gap by providing a separate analysis of the generic composition paradigms with respect to committing security, giving both positive and negative results: By means of a concrete attack, we show that Encrypt-then-MAC is not committing. Furthermore, we prove that Encrypt-and-MAC is committing, given that the underlying schemes satisfy security notions we introduce for this purpose. We later prove these new notions achievable by providing schemes that satisfy them. MAC-then-Encrypt turns out to be more difficult due to the fact that the tag is not output alongside the ciphertext as it is for the other two composition methods. Nevertheless, we give a detailed heuristic analysis of MAC-then-Encrypt with respect to committing security, leaving a definitive result as an open task for future work. Our results, in combination with the fact that only Encrypt-then-MAC yields leakage-resilient AE schemes, show that one cannot obtain AE schemes that are both committing and leakage-resilient via generic composition. As a second approach to constructing committing and leakage-resilient AE, we develop a generic transformation that turns an arbitrary AE scheme into one that fulfills both properties, though only a slightly weakened form of leakage resilience. The transformation relies on a keyed function that is binding (i.e., it is hard to find key-input pairs that result in the same output), leakage-resilient pseudorandom, and unpredictable.
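For readers who want the three composition paradigms side by side, here is a compact sketch instantiated with toy primitives (a SHA-256 keystream and HMAC), purely for illustration; these are not the paper's schemes and the toy cipher is not itself secure. Note how in MAC-then-Encrypt the tag ends up folded into the ciphertext, which is exactly the complication the abstract mentions.

```python
# The three generic-composition paradigms with toy primitives; illustration
# only, not the paper's schemes, and the toy stream cipher is not secure.
import hashlib, hmac

def keystream_xor(k, n, m):
    out, ctr = b"", 0
    while len(out) < len(m):
        out += hashlib.sha256(k + n + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return bytes(a ^ b for a, b in zip(m, out))

def mac(k, m):
    return hmac.new(k, m, hashlib.sha256).digest()

def encrypt_then_mac(ke, km, n, m):
    c = keystream_xor(ke, n, m)
    return c, mac(km, n + c)                 # tag computed over the ciphertext

def encrypt_and_mac(ke, km, n, m):
    return keystream_xor(ke, n, m), mac(km, n + m)   # tag over the message

def mac_then_encrypt(ke, km, n, m):
    t = mac(km, n + m)
    return keystream_xor(ke, n, m + t)       # tag hidden inside the ciphertext

c, t = encrypt_then_mac(b"Ke", b"Km", b"nonce", b"attack at dawn")
```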
Last updated:  2025-05-30
Fuzzy Extractors are Practical: Cryptographic Strength Key Derivation from the Iris
Sohaib Ahmad, Sixia Chen, Luke Demarest, Benjamin Fuller, Caleb Manicke, Alexander Russell, and Amey Shukla
Despite decades of effort, a chasm has existed between the theory and practice of device-level biometric authentication. Deployed authentication algorithms rely on data that overtly leaks private information about the biometric; thus systems rely on externalized security measures such as trusted execution environments. The authentication algorithms have no cryptographic guarantees. We close this chasm. We introduce a key derivation system with 105 bits of entropy and a 91% true accept rate (TAR) for the iris. Our system advances 1) the feature extraction from the iris and 2) the fuzzy extractor used to derive keys. The fuzzy extractor builds on sample-then-lock (Canetti et al., Journal of Cryptology 2021). We 1) introduce a new method of sampling that achieves a better TAR versus entropy tradeoff when features have different quality, 2) correct their security proof, showing that the minimum min-entropy over subsets is the relevant security measure, and 3) tighten their concrete analysis, nearly doubling security under reasonable assumptions. Our final feature extractor incorporates ideas from the new sampling method to produce features optimized for the sample-then-lock construction. The only statistical assumption needed to show security of our system is a necessary one: the accuracy of min-entropy estimation. Our quantitative level of security is well above prior work. Simhadri et al. (ISC, 2019) report $32$ bits on the iris, but a bug in their analysis (which we fix) reduces their strength. Zhang et al.'s (ePrint 2021/1559) system achieves $45$ bits on the face, but assumes independence between biometrics and the error-correcting code used, an assumption that cannot be easily verified. Other prior work assumes that the bits of a biometric are i.i.d., an assumption that is demonstrably false, or that all correlation between features is pairwise (Hine et al., TIFS 2023). Irises used to evaluate TAR and security are class-disjoint from those used for training and collecting statistics (the open dataset regime).
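A toy rendering of the sample-then-lock idea (Canetti et al.) that the abstract builds on: one secret is locked under many random subsets of biometric bits, and a later reading unlocks any locker whose subset it matches exactly. The hash-based "digital lockers" and all parameters here are our simplifications for illustration, not the paper's construction.

```python
# Toy sample-then-lock: lock one secret under many random subsets of the
# biometric's bits; a later reading opens any locker it matches exactly.
import hashlib, secrets

TAG = b"\x00" * 8                    # all-zero tag marks a correct opening

def lock(bits, secret, nonce):
    pad = hashlib.sha256(nonce + bits).digest()
    return bytes(a ^ b for a, b in zip(secret + TAG, pad))   # secret <= 24 bytes

def gen(w, secret, n_lockers=200, k=8):
    lockers = []
    for _ in range(n_lockers):
        idx = [secrets.randbelow(len(w)) for _ in range(k)]  # sampled positions
        nonce = secrets.token_bytes(16)
        lockers.append((idx, nonce, lock(bytes(w[i] for i in idx), secret, nonce)))
    return lockers

def rep(w2, lockers):
    for idx, nonce, ct in lockers:
        pad = hashlib.sha256(nonce + bytes(w2[i] for i in idx)).digest()
        pt = bytes(a ^ b for a, b in zip(ct, pad))
        if pt.endswith(TAG):         # negligible false-positive probability
            return pt[:-len(TAG)]
    return None

w = secrets.token_bytes(64)          # stand-in for extracted iris features
sk = secrets.token_bytes(24)
assert rep(w, gen(w, sk)) == sk      # a matching reading recovers the key
```

A noisy reading succeeds whenever it agrees with the enrolled features on all positions of at least one locker's subset, which is what drives the TAR-versus-entropy tradeoff the abstract optimizes.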
Last updated:  2025-05-30
Cool + Cruel = Dual
Alexandr Karenin, Elena Kirshanova, Julian Nowakowski, Eamonn W. Postlethwaite, and Fernando Virdia
Recently, [Wenger et al.~IEEE S\&P 2025] claimed that the `Cool and Cruel' (C+C) approach to solving LWE with sparse secrets [Nolte et al.~AFRICACRYPT 2024] outperforms other approaches, in particular the well-established primal attack. In this work we show: i.~that C+C is an instantiation of a known dual attack [Albrecht, EUROCRYPT 2017]; ii.~experimental evidence that the primal attack can outperform C+C in regimes similar to those studied by Wenger et al.; and iii.~both theoretical justification and experimental evidence that C+C is a consequence of a basis profile called the Z-shape. To prove i.~we introduce a framework for dimension reduction in bounded distance decoding problems that may be of independent interest. For ii.~we provide an open source implementation of the primal attack that is properly parametrised for short, sparse ternary secret LWE and guesses portions of the secret, along with an error analysis for a rounded variant of LWE that proves useful for practical cryptanalysis. Given iii.~we falsify a claim of Nolte et al.
Last updated:  2025-05-30
A Plausible Attack on the Adaptive Security of Threshold Schnorr Signatures
Elizabeth Crites and Alistair Stewart
The standard notion of security for threshold signature schemes is static security, where the set of corrupt parties is assumed to be fixed before protocol execution. In this model, the adversary may corrupt up to $t-1$ out of a threshold of $t$ parties. A stronger notion of security for threshold signatures considers an adaptive adversary, who may corrupt parties dynamically based on its view of the protocol execution, learning the corrupted parties’ secret keys as well as their states. Adaptive security of threshold signatures has become an active area of research recently due to ongoing standardization efforts. Of particular interest is full adaptive security, the analogue of static security, where the adversary may adaptively corrupt a full $t-1$ parties. We present a plausible attack on the full adaptive security of threshold Schnorr signature schemes with public key shares of the form $pk_i = g^{sk_i}$, where all secret keys $sk_i$ lie on a polynomial. We show that a wide range of threshold Schnorr signature schemes, including all variants of FROST, Sparkle, and Lindell’22, cannot be proven fully adaptively secure without modifications or without assuming the hardness of a search problem that we define in this work. We then prove a generalization that extends to below $t-1$ adaptive corruptions.
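For reference, the standard Shamir-in-the-exponent structure behind key shares of the form $pk_i = g^{sk_i}$ (textbook background, not the attack itself): with a degree-$(t-1)$ polynomial $f$ over $\mathbb{Z}_q$,

```latex
\[
  sk_i = f(i), \qquad pk_i = g^{f(i)}, \qquad
  pk = g^{f(0)} = \prod_{i \in S} pk_i^{\lambda_{S,i}},
  \qquad \lambda_{S,i} = \prod_{\substack{j \in S \\ j \neq i}} \frac{j}{j-i},
\]
% for any set S of t distinct share indices; the Lagrange coefficients
% lambda_{S,i} are computed modulo the group order q.
```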
Last updated:  2025-05-30
Post-Quantum Multi-Message Public Key Encryption from Extended Reproducible PKE
Hongxiao Wang, Ron Steinfeld, Markku-Juhani O. Saarinen, Muhammed F. Esgin, and Siu-Ming Yiu
A multi-message multi-recipient Public Key Encryption (mmPKE) scheme enables batch encryption of multiple messages for multiple independent recipients in one go, significantly reducing costs, particularly bandwidth, compared to the trivial solution of encrypting each message individually. This capability is especially critical in the post-quantum setting, where ciphertexts are typically significantly larger than the corresponding plaintexts. In this work, we first observe that the generic construction of mmPKE from reproducible PKE proposed by Bellare et al. (PKC ’03) does not apply in the lattice-based setting, because existing lattice-based PKE schemes do not fit the notion of reproducible PKE. To this end, we extend their construction by proposing a new variant of PKE, named extended reproducible PKE (XR-PKE), which enables the reproduction of ciphertexts via additional hints. However, standard lattice-based PKE schemes, such as Kyber (EuroS&P '18), do not readily satisfy the XR-PKE requirements. To construct XR-PKE from lattices, we introduce a novel technique for precisely estimating the impact of such hints on ciphertext security, while also establishing suitable parameters. This enables us to instantiate the first CPA-secure mmPKE and multi-message multi-recipient Key Encapsulation Mechanism (mmKEM) from the standard Module Learning with Errors (MLWE) lattice assumption, named mmCipher-PKE and mmCipher-KEM, respectively. We then extend our work to the identity-based setting and construct the first mmIBE and mmIB-KEM schemes. As a bonus contribution, we explore generic constructions of adaptively secure mmPKE, achieving security against adaptive corruptions and chosen-ciphertext attacks. We also provide an efficient implementation and a thorough evaluation of the practical performance of our mmCipher. Our results show that mmCipher provides significant bandwidth and computational savings in practice, compared to the state of the art. For example, for 1024 recipients, our mmCipher-KEM achieves a 23-45 times reduction in bandwidth overhead, reaching within 4-9% of the plaintext length (near-optimal bandwidth), while also offering a 3-5 times reduction in computational cost.
Last updated:  2025-05-30
Insecurity of One Ring Signature Scheme with Batch Verification for Applications in VANETs
Zhengjun Cao and Lihua Liu
We show that the Negi-Kumar certificateless ring signature scheme [Wirel. Pers. Commun. 134(4): 1987-2011 (2024)] is insecure against forgery attacks. The signer's public key $PK_j$ and secret key $PSK_j$ are simply invoked to compute the hash value $H_{2_j}=h_5(m_j\|PSK_j\|PK_j\|t_j)$, which cannot be recomputed by the verifier to check the dependency between the keys. The explicit dependency between the public key and the secret key is not properly tied to an intractable problem, such as Elliptic Curve Discrete Logarithm (ECDL), Computational Diffie-Hellman (CDH), or Decisional Diffie-Hellman (DDH). As a result, an adversary can find an efficient signing algorithm functionally equivalent to the valid signing algorithm. The findings in this note could be helpful for newcomers who are unfamiliar with the design techniques for certificateless ring signatures.
Last updated:  2025-05-30
On the UC-(In)Security of PAKE Protocols Without the Random Oracle Model
Naman Kumar and Jiayu Xu
A Password-Authenticated Key Exchange (PAKE) protocol allows two parties to jointly establish a cryptographic key, where the only information shared in advance is a low-entropy password. The first efficient PAKE protocol whose security does not rely on the random oracle model is the one by Katz, Ostrovsky and Yung (KOY, EUROCRYPT 2001). Unfortunately, the KOY protocol has only been proven secure in the game-based setting, and it is unclear whether KOY is secure in the stronger Universal Composability (UC) framework, which is the current security standard for PAKE. In this work, we present a thorough study of the UC-security of KOY. Our contributions are two-fold: 1. We formally prove that the KOY protocol is not UC-secure; 2. We then show that the UC-security of KOY holds in the Algebraic Group Model, under the Decisional Square Diffie-Hellman (DSDH) assumption. Overall, we characterize the exact conditions under which KOY is UC-secure. Interestingly, the DSDH assumption is stronger than the DDH assumption, under which KOY can be proven game-based secure; this reveals some subtle gaps between the two PAKE security notions that have never been studied.
Last updated:  2025-05-30
Kerblam — Anonymous Messaging System Protecting Both Senders and Recipients
Yanxue Jia, Debajyoti Das, Wenhao Zhang, and Aniket Kate
While popular messaging apps already offer end-to-end confidentiality, end-to-end metadata privacy is still far from being practical. Although several metadata-hiding systems have been developed, and some like Tor have been popular, the proposed solutions are lacking in one or more aspects: the Tor network is prone to easy low-resource attacks, and most others focus solely on anonymity for senders or for receivers, but not both. Some recent solutions do consider end-to-end anonymity; however, they put significant restrictions on how users use the system. In particular, receivers must stay online or trust online servers that receive messages on their behalf. This work presents Kerblam, a scalable end-to-end anonymous messaging system that overcomes the aforementioned issues and restrictions. It stems from a key observation that combining the recently emerged oblivious message retrieval (OMR) primitive with oblivious shuffling can offer the desired end-to-end anonymity without severely restricting the number of messages a sender may send or a receiver may receive. We build our solution using two non-colluding servers, the recent OMR protocol HomeRun, and a compatible oblivious shuffle protocol. We then extend our solution to allow larger messages by employing a novel two-server distributed oblivious RAM technique, called $\mathsf{ORAM}^{-}$. Our performance analysis demonstrates that the performance improvement brought by $\mathsf{ORAM}^{-}$ grows with the number and size of messages. Specifically, for $2^{20}$ messages of size 1KB, our scheme needs only $5.577$ s to transmit a message.
Last updated:  2025-05-29
Distance-Aware OT with Application to Fuzzy PSI
Lucas Piske, Jaspal Singh, Ni Trieu, Vladimir Kolesnikov, and Vassilis Zikas
A two-party fuzzy private set intersection (PSI) protocol between Alice and Bob, with input sets $A$ and $B$, allows Alice to learn nothing more than the points of Bob that are ``$\delta$-close'' to its points under some distance metric $\texttt{dist}$. More formally, Alice learns only the set $\{ b\ |~\texttt{dist}{(a,b)} \leq \delta , a \in A, b\in B\}$ for a predefined threshold $\delta$ and distance metric $\texttt{dist}$, while Bob learns nothing about Alice's set. Fuzzy PSI is a valuable privacy tool in scenarios where private set intersection needs to be computed over imprecise or measurement-based data, such as GPS coordinates or healthcare data. Previous approaches to fuzzy PSI rely on asymmetric cryptographic primitives, generic two-party computation (2PC) techniques like garbled circuits, or function secret sharing methods, all of which are computationally intensive and lead to poor concrete efficiency. This work introduces a new modular framework for fuzzy PSI, primarily built on efficient symmetric-key primitives. Our framework reduces the design of efficient fuzzy PSI to a novel variant of oblivious transfer (OT), which we term distance-aware random OT (da-ROT). This variant enables the sender to obtain two random strings $(r_0, r_1)$, while the receiver obtains one of these values $r_b$, depending on whether the receiver's input keyword $a$ and the sender's input keyword $b$ are close in some metric space, i.e., $\texttt{dist}{(a,b)} \leq \delta$. The da-ROT can be viewed as a natural extension of traditional OT, where the condition (choice bit) is known to the receiver. We propose efficient constructions for da-ROT based on standard OT techniques tailored for small domains, supporting distance metrics such as the Chebyshev norm, the Euclidean norm, and the Manhattan norm. By integrating these da-ROT constructions, our fuzzy PSI framework achieves up to a $14\times$ reduction in communication cost and up to a $54\times$ reduction in computation cost compared to previous state-of-the-art protocols, across input set sizes ranging from $2^8$ to $2^{16}$. Additionally, we extend our framework to compute fuzzy PSI cardinality and fuzzy join from traditional PSI-related functionalities. All proposed protocols are secure in the semi-honest model.
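For orientation, the ideal functionality is simple to state in the clear. The sketch below (ours, with toy inputs) computes exactly the set Alice is allowed to learn, under the three metrics the paper's da-ROT constructions support:

```python
# Ideal fuzzy-PSI functionality computed in the clear (no privacy): Alice
# learns { b in B : dist(a, b) <= delta for some a in A } and nothing else.
def chebyshev(a, b):  return max(abs(x - y) for x, y in zip(a, b))
def manhattan(a, b):  return sum(abs(x - y) for x, y in zip(a, b))
def euclidean_sq(a, b):  # squared norm; compare against delta**2
    return sum((x - y) ** 2 for x, y in zip(a, b))

def fuzzy_psi(A, B, delta, dist=chebyshev):
    return {b for b in B if any(dist(a, b) <= delta for a in A)}

A = {(3, 4), (10, 10)}                 # Alice's points (toy data)
B = {(4, 5), (50, 50), (11, 9)}        # Bob's points
print(fuzzy_psi(A, B, delta=1))        # {(4, 5), (11, 9)}
print(fuzzy_psi(A, B, delta=2, dist=manhattan))
```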
Last updated:  2025-05-29
Quantum-Safe Public Key Blinding from MPC-in-the-Head Signature Schemes
Sathvika Balumuri, Edward Eaton, and Philippe Lamontagne
Key blinding produces pseudonymous digital identities by rerandomizing the public keys of a digital signature scheme. It is used in anonymous networks to provide the seemingly contradictory goals of anonymity and authentication. Current key blinding schemes are based on the discrete log assumption. Eaton, Stebila and Stracovsky (LATINCRYPT 2021) proposed the first key blinding schemes from lattice assumptions. However, their large public keys and lack of QROM security mean they are not ready to replace existing solutions. We present a new way to build key blinding schemes from any MPC-in-the-Head signature scheme. These schemes rely on well-studied symmetric cryptographic primitives and admit short public keys. We present a general framework for constructing key blinding schemes and for proving their security in the quantum random oracle model (QROM). We instantiate our framework with the recent AES-based Helium signature scheme (Kales and Zaverucha, 2022). Blinding Helium only adds a minor overhead to the signature and verification time. Both Helium and the aforementioned lattice-based key blinding schemes were only proven secure in the ROM. This makes our results the first QROM proof of Helium and the first fully quantum-safe public key blinding scheme.
Last updated:  2025-05-29
General Functional Bootstrapping using CKKS
Andreea Alexandru, Andrey Kim, and Yuriy Polyakov
The Ducas-Micciancio (DM) and Chillotti-Gama-Georgieva-Izabachène (CGGI) cryptosystems provide a general privacy-preserving computation capability. These fully homomorphic encryption (FHE) cryptosystems can evaluate an arbitrary function expressed as a general look-up table (LUT) via the method of functional bootstrapping. The main limitation of DM/CGGI functional bootstrapping is its efficiency, because this procedure has to bootstrap every encrypted number separately. A different bootstrapping approach, based on the Cheon-Kim-Kim-Song (CKKS) FHE scheme, can achieve a much smaller amortized time due to its ability to bootstrap many thousands of numbers at once. However, CKKS does not currently provide a functional bootstrapping capability that can evaluate a general LUT. An open research question is whether such a capability can be efficiently constructed. We give a positive answer to this question by proposing and implementing a general functional bootstrapping method based on CKKS-style bootstrapping. We devise a theoretical toolkit for evaluating an arbitrary function using the theory of trigonometric Hermite interpolations, which provides control over noise reduction during functional bootstrapping. Our experimental results for 8-bit LUT evaluation show that the proposed method achieves an amortized time of 0.72 milliseconds, which is three orders of magnitude faster than the DM/CGGI approach and $6.8\times$ faster than a (more restrictive) amortized functional bootstrapping method based on the Brakerski/Fan-Vercauteren (BFV) FHE scheme.
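As a plain-math aside (ours; it ignores encryption, packing, and the Hermite/noise-control machinery), the sketch below shows why trigonometric function classes are expressive enough for general LUTs: any 8-bit table is interpolated exactly by a trigonometric polynomial whose coefficients come from one FFT.

```python
# Plain-math toy (no encryption): fit a trigonometric polynomial exactly
# through an arbitrary 8-bit LUT via the FFT. Hermite interpolation, which
# additionally matches derivatives for noise control, is the paper's
# refinement and is not shown here.
import numpy as np

lut = np.random.randint(0, 256, size=256).astype(float)   # arbitrary LUT
coeffs = np.fft.fft(lut) / 256                            # interpolation coefficients

def trig_eval(x):
    # evaluate sum_k c_k * exp(2*pi*i*k*x/256) at integer x
    k = np.arange(256)
    return (coeffs * np.exp(2j * np.pi * k * x / 256)).sum().real

assert all(abs(trig_eval(x) - lut[x]) < 1e-6 for x in range(256))
```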
Last updated:  2025-05-29
NIZK Amplification via Leakage-Resilient Secure Computation
Benny Applebaum and Eliran Kachlon
Suppose that we are given a weak \emph{Non-Interactive Zero-Knowledge} (NIZK) proof system for NP with non-negligible soundness and zero-knowledge errors, denoted by $\alpha$ and $\beta$, respectively. Is it possible to reduce these errors to a negligible level? This problem, known as NIZK amplification, was introduced by Goyal, Jain, and Sahai (Crypto'19) and was further studied by Bitansky and Geier (Crypto'24). The latter work provides amplification theorems for proofs and arguments, assuming the existence of one-way functions and public-key encryption, respectively. Unfortunately, their results only apply when the security level, $1 - (\alpha + \beta)$, is a constant bounded away from zero. Amplifying NIZK with an inverse polynomial security level remains an open problem and was stated as the main open question in both works. In this work, we resolve the NIZK amplification problem and show how to amplify any non-trivial NIZK proof system that has a noticeable, inverse-polynomial level of security. As in previous works, we amplify proofs and arguments assuming the existence of one-way functions and public-key encryption, respectively. Furthermore, assuming the existence of collision-resistant hash functions, we preserve, for the first time, properties such as statistical zero-knowledge and proof succinctness. Our main technical contribution is a new \emph{leakage-resilient secure multiparty} protocol that computes any public-output functionality with information-theoretic security against an adversary that corrupts an arbitrary subset of parties and obtains bounded leakage from each honest party. Our protocol operates in the pairwise correlated randomness model. Previous works relied on stronger setup assumptions in the form of $n$-wise correlations and either supported a smaller corruption threshold or suffered from an exponential dependency on the number of parties. To transform our protocol into a NIZK amplifier, we introduce a new intermediate notion of \emph{leakage-resilient NP secret sharing}, that may be of independent interest.
Last updated:  2025-05-29
Quantum Rewinding for IOP-Based Succinct Arguments
Alessandro Chiesa, Marcel Dall'Agnol, Zijing Di, Ziyi Guan, and Nicholas Spooner
We analyze the post-quantum security of succinct interactive arguments constructed from interactive oracle proofs (IOPs) and vector commitment schemes. Specifically, we prove that an interactive variant of the *BCS transformation* is secure in the standard model against quantum adversaries when the vector commitment scheme is collapse binding. Prior work established the post-quantum security of Kilian's succinct interactive argument, a special case of the BCS transformation for one-message IOPs (i.e., PCPs). That analysis is inherently limited to one message because the reduction, like all prior quantum rewinding reductions, aims to extract classical information (a PCP string) from the quantum argument adversary. Our reduction overcomes this limitation by instead extracting a *quantum algorithm* that implements an IOP adversary; representing such an adversary classically may in general require exponential complexity. Along the way we define *collapse position binding*, which we propose as the ``correct'' definition of collapse binding for vector commitment schemes, eliminating shortcomings of prior definitions. As an application of our results, we obtain post-quantum secure succinct arguments, in the standard model (no oracles), with the *best asymptotic complexity known*.
Last updated:  2025-05-29
Fair Signature Exchange
Hossein Hafezi, Aditi Partap, Sourav Das, and Joseph Bonneau
We introduce the concept of Fair Signature Exchange (FSE). FSE enables a client to obtain signatures on multiple messages in a fair manner: the client receives all signatures if and only if the signer receives an agreed-upon payment. We formalize security definitions for FSE and present a practical construction based on the Schnorr signature scheme, avoiding computationally expensive cryptographic primitives such as SNARKs. Our scheme imposes minimal overhead on the Schnorr signer and verifier, leaving the signature verification process unchanged from standard Schnorr signatures. Fairness is enforced using a blockchain as a trusted third party, while exchanging only a constant amount of information on-chain regardless of the number of signatures exchanged. We demonstrate how to construct a batch adaptor signature scheme using FSE, and our FSE construction based on Schnorr results in an efficient implementation of a batch Schnorr adaptor signature scheme for the discrete logarithm problem. We implemented our scheme to show that it has negligible overhead compared to standard Schnorr signatures. For instance, exchanging $2^{10}$ signatures on the Vesta curve takes approximately $80$ms for the signer and $300$ms for the verifier, with almost zero overhead for the signer and $2\times$ overhead for the verifier compared to the original Schnorr protocol.
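For reference, the base scheme that the construction leaves untouched is textbook Schnorr; the sketch below (ours, over a toy Schnorr group rather than the Vesta curve benchmarked in the paper) spells out the signing and verification equations.

```python
# Minimal textbook Schnorr over a toy Schnorr group (parameters illustrative
# only). FSE leaves this verification equation completely unchanged, which
# is the point of the construction.
import hashlib, secrets

p, q, g = 2039, 1019, 4          # toy safe-prime group: g has order q mod p

def H(*ints) -> int:
    data = b"|".join(str(i).encode() for i in ints)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1
    return x, pow(g, x, p)       # (secret key x, public key X = g^x)

def sign(x, m):
    k = secrets.randbelow(q - 1) + 1
    R = pow(g, k, p)             # commitment R = g^k
    return R, (k + H(R, m) * x) % q

def verify(X, m, sig):
    R, s = sig                   # check g^s == R * X^e with e = H(R, m)
    return pow(g, s, p) == R * pow(X, H(R, m), p) % p

x, X = keygen()
assert verify(X, 42, sign(x, 42))
```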
Last updated:  2025-05-29
A Fast Multiplication Algorithm and RLWE-PLWE Equivalence for the Maximal Real Subfield of the $2^r p^s$-th Cyclotomic Field
Wilmar Bolaños, Antti Haavikko, and Rodrigo M. Sánchez-Ledesma
This paper proves the RLWE-PLWE equivalence for the maximal real subfields of the cyclotomic fields with conductor $n = 2^r p^s$, where $p$ is an odd prime, and $r \geq 0$ and $s \geq 1$ are integers. In particular, we show that the canonical embedding as a linear transform has a condition number bounded above by a polynomial in $n$. In addition, we describe a fast multiplication algorithm in the ring of integers of these real subfields. The multiplication algorithm uses the fast Discrete Cosine Transform (DCT) and has computational complexity $\mathcal{O}(n \log n)$. This work extends the results of Ahola et al., where the same claims are proved for a single prime $p = 3$.
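The generic pattern behind such quasi-linear multiplication algorithms can be sketched with the FFT on the cyclic ring $\mathbb{Z}[x]/(x^n-1)$ (our illustration; the paper's algorithm instead plays the same game with the DCT, matched to the arithmetic of the maximal real subfield).

```python
# Generic O(n log n) ring multiplication via a fast transform, illustrated
# on Z[x]/(x^n - 1) with the FFT; the paper's DCT-based algorithm follows
# the same transform / pointwise-multiply / inverse-transform pattern.
import numpy as np

def cyclic_mul(a, b):                         # multiply in Z[x]/(x^n - 1)
    fa, fb = np.fft.fft(a), np.fft.fft(b)     # forward transforms: O(n log n)
    return np.rint(np.fft.ifft(fa * fb).real).astype(int)  # pointwise + inverse

a = np.array([1, 2, 0, 3])
b = np.array([4, 0, 1, 0])
print(cyclic_mul(a, b))                       # cyclic convolution: [4 11 1 14]
```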
Last updated:  2025-05-29
Fully-Homomorphic Encryption from Lattice Isomorphism
Pedro Branco, Giulio Malavolta, and Zayd Maradni
The lattice isomorphism problem (LIP) asks, given two lattices $\Lambda_0$ and $\Lambda_1$, to decide whether there exists an orthogonal linear map from $\Lambda_0$ to $\Lambda_1$. In this work, we show that the hardness of (a circular variant of) LIP implies the existence of a fully-homomorphic encryption scheme for all classical and quantum circuits. Prior to our work, LIP was only known to imply the existence of basic cryptographic primitives, such as public-key encryption or digital signatures.
Last updated:  2025-05-29
Powerformer: Efficient and High-Accuracy Privacy-Preserving Language Model with Homomorphic Encryption
Dongjin Park, Eunsang Lee, and Joon-Woo Lee
We propose Powerformer, an efficient homomorphic encryption (HE)-based privacy-preserving language model (PPLM) designed to reduce computation overhead while maintaining model performance. Powerformer incorporates three key techniques to optimize encrypted computations: 1. A novel distillation technique that replaces softmax and layer normalization (LN) with computationally efficient power and linear functions, ensuring no performance degradation while enabling seamless encrypted computation. 2. A pseudo-sign composite approximation method that accurately approximates GELU and tanh functions with minimal computational overhead. 3. A homomorphic matrix multiplication algorithm specifically optimized for Transformer models, enhancing efficiency in encrypted environments. By integrating these techniques, Powerformer, instantiated on the BERT-base model, achieves a 45% reduction in computation time compared to the state-of-the-art HE-based PPLM, without any loss in accuracy.
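A toy sketch of the first technique's idea (ours; the exponent and clipping are illustrative assumptions, not the paper's distilled functions): softmax's exponential is costly under HE, whereas a power function uses only multiplications and additions.

```python
# Toy sketch of the softmax-to-power substitution. The exponent p = 3 and
# the clipping below are illustrative assumptions; the paper distills its
# own power/linear replacements into the model during training.
import numpy as np

def softmax(scores):
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def power_norm(scores, p=3):
    s = np.maximum(scores, 0.0) ** p          # HE-friendly: mult/add only
    return s / (s.sum(axis=-1, keepdims=True) + 1e-9)

scores = np.random.randn(4, 4)                # one toy attention score block
print(np.round(softmax(scores), 3))
print(np.round(power_norm(scores), 3))        # same shape, cheap surrogate
```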
Last updated:  2025-05-29
Reality Check on Side-Channels: Lessons learnt from breaking AES on an ARM Cortex A processor
Harishma Boyapally, Dirmanto Jap, Qianmei Wu, Fan Zhang, and Shivam Bhasin
Side-channel analysis (SCA) has posed a significant threat to systems for nearly three decades. Numerous practical demonstrations have targeted everyday devices, such as smartcards, cryptocurrency wallets, and smartphones. However, much of the research in the public domain has focused on low-end microcontrollers, limiting our understanding of the challenges involved in attacking more complex systems. In this work, we conduct a reality check on SCA by targeting a high-performance ARM Cortex-A72 out-of-order processor, commonly found in smartphones. We evaluate the practical effort required for key recovery attacks, considering various threat models, from basic to advanced. Our results show that while basic approaches fail, advanced approaches like deep learning-based SCA can successfully recover the secret key. This multi-tier evaluation approach is crucial for comprehensive risk assessment and informed decision-making regarding mitigation strategies, balancing security, performance, and area constraints.
Last updated:  2025-05-29
MOAI: Module-Optimizing Architecture for Non-Interactive Secure Transformer Inference
Linru Zhang, Xiangning Wang, Jun Jie Sim, Zhicong Huang, Jiahao Zhong, Huaxiong Wang, Pu Duan, and Kwok Yan Lam
The advent of Large Language Models (LLMs) has brought about a new wave of productivity, revolutionizing business operations while keeping costs relatively low. The human-like interface of LLMs enables them to be easily integrated with business functions, thereby freeing up precious human resources for more complex, valuable tasks. However, due to the intensive computation and memory requirements of LLM inference, it is preferable and cheaper to deploy LLMs with Cloud Service Providers (CSPs) that offer high-performance computation resources and low-latency networking. Nevertheless, privacy concerns have been raised about the possibility of data leakage to the CSP. In this work, we seek to address such privacy concerns through the use of Fully Homomorphic Encryption (FHE). FHE enables the CSP to work on data in its encrypted form, thus ensuring that the data stays private and secure. We propose an implementation of LLM inference with FHE. While a series of prior works has demonstrated that it is possible to execute LLM inference in a private manner, it remains a challenge to design a solution that is practical. Our contributions are as follows: We provide the first end-to-end open-source implementation of non-interactive transformer inference with FHE. We report an amortized time of 9.6 minutes for one input with 128 tokens when evaluating the BERT model on CPU. Our packing methods for encrypted matrices remove the need to repack ciphertexts between encrypted matrix multiplication and activation layers. Additionally, we introduce interleaved batching to eliminate the internal rotations during ciphertext matrix multiplications. Our approach also avoids HE rotations in evaluations of the softmax and layerNorm, leading to speedups of 4.22× and 122× over existing works, respectively. Our implementation supports arbitrary token lengths, in contrast with existing solutions that require a full token embedding. Our implementation can be found on GitHub.
Last updated:  2025-05-29
Tight Multi-User Security of CCM and Enhancement by Tag-Based Key Derivation Applied to GCM and CCM
Yusuke Naito, Yu Sasaki, and Takeshi Sugawara
$\textsf{GCM}$ and $\textsf{CCM}$ are block cipher (BC) based authenticated encryption modes. In multi-user (mu) security, the total number of BC invocations by all users $\sigma$ and the maximum number of BC invocations per user $\sigma_\mathsf{u}$ are crucial factors. For $\textsf{GCM}$, the tight mu-security bound has been identified as $\frac{\sigma_\mathsf{u} \sigma}{2^n} + \frac{u p + u^2}{2^k}$, where $k$ and $n$ are respectively the key and block sizes, $u$ is the number of users, and $p$ is the number of offline queries. In contrast, the $\textsf{CCM}$'s mu-security bound is still unclear. Two bounds, $\frac{u \sigma_\mathsf{u}^2}{2^n} + \frac{u p + u^2}{2^k}$ and $\frac{\sigma^2}{2^n} + \frac{u p + u \sigma}{2^k}$, have been derived by Luykx~et~al.~(Asiacrypt~2017) and Zhang~et~al.~(CCS~2024), respectively, but neither is tight and both are worse than the $\textsf{GCM}$'s bound. Moreover, methods to enhance mu security without disruptive changes in the scheme have been considered for $\textsf{GCM}$, namely nonce randomization ($\textsf{NR}$) to improve offline security and nonce-based key derivation ($\textsf{KD}$) to improve online security, but their applicability to $\textsf{CCM}$ has never been discussed. In this paper, we prove an improved mu-security bound of $\textsf{CCM}$, which is tight and matches the $\textsf{GCM}$'s bound. We then prove that $\textsf{NR}$ and $\textsf{KD}$ applied to $\textsf{CCM}$ result in the same bounds as in the case of $\textsf{GCM}$. An important takeaway is that $\textsf{CCM}$ is now proved to be as secure as $\textsf{GCM}$. Moreover, we argue that $\textsf{NR}$ and $\textsf{KD}$ can be insufficient for some applications with massive data, and propose a new enhancement method called nonce-based and tag-based key derivation ($\textsf{NTKD}$) that is applied to $\textsf{GCM}$ and $\textsf{CCM}$. We prove that the resulting schemes meet such real-world needs.
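To make the shape of the tight bound concrete, the small calculator below (ours, with hypothetical workload numbers) evaluates $\frac{\sigma_\mathsf{u} \sigma}{2^n} + \frac{u p + u^2}{2^k}$ for $n = k = 128$.

```python
# Numeric illustration of the tight mu bound sigma_u*sigma/2^n + (u*p+u^2)/2^k
# for n = k = 128 (AES-128-based GCM/CCM). The workload numbers below are
# hypothetical, purely to show how the online and offline terms trade off.
from math import log2

def mu_advantage(sigma_u, sigma, u, p, n=128, k=128):
    return sigma_u * sigma / 2**n + (u * p + u**2) / 2**k

adv = mu_advantage(sigma_u=2**32, sigma=2**48, u=2**16, p=2**64)
print(f"log2(advantage) ~ {log2(adv):.1f}")   # ~ -47.0 with these numbers
```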
Last updated:  2025-05-29
Laconic Pre-Constrained Encryption
Shweta Agrawal, Simran Kumari, and Ryo Nishimaki
The recent work of Ananth et al. (ITCS 2022) initiated the study of pre-constrained encryption (PCE), which achieves meaningful security even against the system authority, without assuming any trusted setup. They provided constructions for special cases such as pre-constrained Attribute Based Encryption (PC-ABE) for point functions and pre-constrained Identity Based Encryption (PC-IBE) for general functions from the Learning with Errors (LWE) assumption. For the most general notion of PCE for circuits, they provided a construction from indistinguishability obfuscation (iO) and, moreover, proved a lower bound showing that the reliance on iO was inherent. In all their constructions, the size of the public key scales linearly with the size of the constraint input to the setup algorithm. In this work we initiate the study of laconic pre-constrained encryption, where the public key is sublinear in the size as well as the number of constraints input to the setup algorithm. We make the following contributions: 1. We construct laconic pre-constrained ABE for point functions and laconic pre-constrained IBE for general functions from LWE which achieve succinct public keys, thus improving upon the work of Ananth et al. 2. For general constraints, we sidestep the lower bound by Ananth et al. by defining a weaker static notion of pre-constrained encryption (sPCE), which nevertheless suffices for all known applications. We show that laconic sPCE is impossible to achieve in the strongest malicious model of security against authority and provide the first construction of semi-malicious laconic sPCE for general constraints from LWE in the random oracle model. 3. For general constraints, to achieve malicious security without iO, we provide constructions of non-laconic sPCE from a variety of assumptions including DDH, LWE, QR and DCR. Our LWE-based construction satisfies unconditional security against malicious authorities. 4. As an application of our sPCE, we provide the first construction of pre-constrained group signatures supporting general constraints, achieving unconditional anonymity and unlinkability against malicious authorities from the LWE assumption. The only other construction, by Bartusek et al., supports the restricted set/database membership constraint and achieves computational security from the DDH assumption. Along the way, we define and construct the notion of pre-constrained Input Obfuscation, which may be of independent interest.
Last updated:  2025-05-29
Lower Bounds on the Bottleneck Complexity of Secure Multiparty Computation
Reo Eriguchi and Keitaro Hiwatashi
Secure multiparty computation (MPC) is a cryptographic primitive which enables multiple parties to jointly compute a function without revealing any extra information on their private inputs. Bottleneck complexity is an efficiency measure that captures the load-balancing aspect of MPC protocols, defined as the maximum amount of communication required by any party. In this work, we study the problem of establishing lower bounds on the bottleneck complexity of MPC protocols. While the previously known techniques for lower bounding total communication complexity can also be applied to bottleneck complexity, they do not provide nontrivial bounds in the correlated randomness model, which is commonly assumed by existing protocols achieving low bottleneck complexity, or they are applied only to functions of limited practical interest. We propose several novel techniques for lower bounding the bottleneck complexity of MPC protocols. Our methods derive nontrivial lower bounds even in the correlated randomness model and apply to practically relevant functions including the sum function and threshold functions. Furthermore, our lower bounds demonstrate the optimality of some existing MPC protocols in terms of bottleneck complexity or the amount of correlated randomness.
Last updated:  2025-05-28
A Generic Framework for Practical Lattice-Based Non-interactive Publicly Verifiable Secret Sharing
Behzad Abdolmaleki, John Clark, Mohammad Foroutani, Shahram Khazaei, and Sajjad Nasirzadeh
Non-interactive publicly verifiable secret sharing (PVSS) schemes enable the decentralized (re-)sharing of secrets in adversarial environments, allowing anyone to verify the correctness of distributed shares. Such schemes are essential for large-scale decentralized applications, including committee-based systems that require both transparency and robustness. However, existing PVSS schemes rely on group-based cryptography, leaving them vulnerable to quantum attacks and limiting their suitability for post-quantum applications. In this work, we propose the first practical, fully lattice-based, non-interactive PVSS scheme, grounded in standard lattice assumptions for post-quantum security. At the heart of our design is a generic framework that transforms vector commitments and linear encryption schemes into efficient PVSS protocols. We enhance vector commitments by incorporating functional hiding and proof of smallness, ensuring that encrypted shares are both verifiable and privacy-preserving. Our construction introduces two tailored lattice-based encryption schemes, each supporting efficient proofs of decryption correctness. This framework provides strong verifiability guarantees while maintaining low proof sizes and computational efficiency, making it suitable for systems with large numbers of participants.
Last updated:  2025-05-28
Linear-Time Accumulation Schemes
Benedikt Bünz, Alessandro Chiesa, Giacomo Fenzi, and William Wang
Proof-carrying data (PCD) is a powerful cryptographic primitive for computational integrity in a distributed setting. State-of-the-art constructions of PCD are based on accumulation schemes (and, closely related, folding schemes). We present WARP, the first accumulation scheme with linear prover time and logarithmic verifier time. Our scheme is hash-based (secure in the random oracle model), plausibly post-quantum secure, and supports unbounded accumulation depth. We achieve our result by constructing an interactive oracle reduction of proximity that works with any linear code over a sufficiently large field. We take a novel approach by constructing a straightline extractor that relies on erasure correction, rather than error-tolerant decoding like prior extractors. Along the way, we introduce a variant of straightline round-by-round knowledge soundness that is compatible with our extraction strategy.
Last updated:  2025-05-28
Dynamic Security: A Realistic Approach to Adaptive Security With Applications to Strong FaF Security
Bar Alon and Naty Peter
Secure multiparty computation allows multiple parties to jointly compute a function while maintaining security even in the presence of malicious adversaries. There are two types of adversaries in the literature: static adversaries, which choose the parties to corrupt before the protocol begins; and adaptive adversaries, which can corrupt parties during the execution of the protocol based on the messages exchanged by the parties. While adaptive security provides a more robust security guarantee, it may be stronger than necessary in certain scenarios. Indeed, the adversary must allocate some of its resources to corrupt the parties; however, certain parties might be more susceptible to corruption, for instance, if they have not updated their operating system to the latest version. To address this, we introduce a new security notion called \emph{dynamic security}. Here, adversaries may corrupt new parties \emph{during and after} the protocol's execution, but \emph{cannot choose} targets based on the messages. A protocol is said to be $(t,h)$-dynamically secure if it is possible to simulate any adversary that can corrupt up to $t$ parties during the execution and $h$ thereafter. Dynamic security provides meaningful security for many real-world scenarios. Moreover, it circumvents known lower bounds on the communication complexity of adaptive security, allowing for more efficient protocols such as committee-based ones, which would be insecure against adaptive adversaries. We further explore dynamic security and establish the following results. 1. We show a surprising connection between dynamic security and the seemingly unrelated notion of security with friends and foes (FaF security), introduced by Alon et al. (CRYPTO 2020), which aims to protect honest parties not only from adversaries but also against other honest parties. The notion of $(t,h)$-\emph{strong FaF security} strengthens this by requiring that the joint view of any $t$ malicious parties together with any $h$ honest parties be simulatable, i.e., indistinguishable from their real-world view. We show that $(t,h)$-dynamic security and $(t,h)$-strong FaF security are equivalent. 2. We consider the feasibility of $(t,h)$-dynamic security and show that every $n$-party functionality can be computed with computational $(t,h)$-dynamic security (with guaranteed output delivery) if and only if $2t+h<n$. By our previous result, this also solves an open problem left by Alon et al. on the feasibility of strong FaF security.
Last updated:  2025-05-28
A Pure Indistinguishability Obfuscation Approach to Adaptively-Sound SNARGs for NP
Brent Waters and David J. Wu
We construct an adaptively-sound succinct non-interactive argument (SNARG) for NP in the CRS model from sub-exponentially-secure indistinguishability obfuscation ($i\mathcal{O}$) and sub-exponentially-secure one-way functions. Previously, Waters and Wu (STOC 2024), and subsequently, Waters and Zhandry (CRYPTO 2024) showed how to construct adaptively-sound SNARGs for NP by relying on sub-exponentially-secure indistinguishability obfuscation, one-way functions, and an additional algebraic assumption (i.e., discrete log, factoring, or learning with errors). In this work, we show that no additional algebraic assumption is needed and vanilla (sub-exponentially-secure) one-way functions already suffice in combination with $i\mathcal{O}$.
Last updated:  2025-05-28
Security of Linear Secret Sharing Schemes with Noisy Side-Channel Leakage
Utkarsh Gupta and Hessam Mahdavifar
Secret sharing is a foundational cryptographic primitive for sharing secret keys in distributed systems. In a classical threshold setting, it involves a dealer who has a secret, a set of $n$ users to whom shares of the secret are sent, and a threshold $t$ which is the minimum number of shares required to recover the secret. These schemes offer an all-or-nothing security approach where fewer than $t$ shares reveal no information about the secret. But these guarantees are threatened by side-channel attacks which can leak partial information from each share. Initiated by Benhamouda et al. (CRYPTO'18), the security of such schemes has been studied for precise and worst-case bounded leakage models. However, in practice, side-channel attacks are inherently noisy. In this work, we propose a noisy leakage model for secret sharing, where each share is independently leaked to the adversary, with the leakage corrupted by additive noise in the underlying field $\mathbb{F}_q$. Under this model, we study the security of linear secret sharing schemes, and show bounds on the mutual information (MI) and statistical distance (SD) security metrics. We do this by using the MacWilliams identity from the theory of error-correcting codes. For a given secret, it enables us to bound the statistical deviation of the leaked shares from uniform as $\delta^t$, where $\delta$ is the Fourier bias of the added noise. Existing analyses for the security of linear $(n,t)$-threshold schemes only bound the SD metric, and show resilience for schemes with $t \ge 0.668n$. In this work, we show that these constraints are artifacts of the bounded leakage model. In particular, we show that $(n,t)$-threshold schemes over $\mathbb{F}_q$ with $t \ge \tau (n+1)$ leak $\mathcal{O}(q^{-2t(\gamma+1-1/\tau)})$ bits about the secret, given that the bias of the added noise satisfies $\delta \le q^{-\gamma}$. To the best of our knowledge, this is the first attempt towards understanding the side-channel security of linear secret sharing schemes under the MI metric.
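The quantity driving these bounds is directly computable. The sketch below (ours, with a hypothetical discretized-Gaussian-like noise distribution) evaluates the Fourier bias $\delta = \max_{a \neq 0} |\mathbb{E}[e^{2\pi i a X/q}]|$ and the resulting $\delta^t$ decay.

```python
# Sketch: Fourier bias delta of an additive-noise distribution over F_q,
# delta = max_{a != 0} | sum_x nu(x) * exp(2*pi*i*a*x/q) |, which drives
# the delta^t decay in the paper's MacWilliams-based bound. The symmetric
# noise distribution below is a hypothetical example.
import numpy as np

q = 257
x = np.arange(q)
nu = np.exp(-0.5 * (np.minimum(x, q - x) / 8.0) ** 2)  # toy symmetric noise
nu /= nu.sum()                                         # normalize to a distribution

spectrum = np.abs(np.fft.fft(nu))    # |E[exp(2*pi*i*a*X/q)]| for each character a
delta = spectrum[1:].max()           # exclude the trivial character a = 0
t = 10
print(f"delta ~ {delta:.3e}, bound delta^t ~ {delta**t:.3e}")
```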
Last updated:  2025-05-28
On the Adaptive Security of Key-Unique Threshold Signatures
Elizabeth Crites, Chelsea Komlo, and Mary Maller
In this work, we investigate the security assumptions required to prove the adaptive security of threshold signatures. Adaptive security is a strong notion of security that allows an adversary to corrupt parties at any point during the execution of the protocol, and is of practical interest due to recent standardization efforts for threshold schemes. Towards this end, we give two different impossibility results. We begin by formalizing the notion of a key-unique threshold signature scheme, where public keys have a unique correspondence to secret keys and there is an efficient algorithm for checking that public keys are well-formed. Key-uniqueness occurs in many threshold schemes that are compatible with standard, single-party signatures used in practice, such as BLS, ECDSA, and Schnorr signatures. Our first impossibility result demonstrates that it is impossible to prove the adaptive security of any key-unique threshold signature scheme under any non-interactive computational assumption for a broad class of reductions, in the range $\lfloor t/2 \rfloor < t_c \leq t$, where $n$ is the total number of parties, $t_c$ is the number of corrupted parties, and $t+1$ is the threshold. We begin by ruling out full adaptive security (i.e., $t_c = t$ corruptions) for key-unique threshold signatures under non-interactive computational assumptions, including, but not limited to, the discrete logarithm (DL), computational Diffie-Hellman (CDH), and q-Strong Diffie-Hellman (q-SDH) assumptions. We then generalize this impossibility result to all $t_c$ such that $\lfloor t/2 \rfloor < t_c \leq t$. Our second impossibility result applies specifically to key-unique threshold Schnorr signatures, currently an active area of research. We demonstrate that, even under the interactive computational assumptions One-More Discrete Logarithm (OMDL) and Algebraic OMDL (AOMDL), it is impossible to prove adaptive security for $\lfloor t/2 \rfloor < t_c \leq t$ in the programmable ROM with rewinding. Taken together, our results underscore the difficulty of achieving adaptive security for key-unique threshold signatures. However, we believe this work can open a new line of research, by indicating assumptions and properties to aim for when constructing adaptively secure threshold schemes.
Last updated:  2025-05-28
Simple Public Key Anamorphic Encryption and Signature using Multi-Message Extensions
Shalini Banerjee, Tapas Pal, Andy Rupp, and Daniel Slamanig
Anamorphic encryption (AE) considers secure communication in the presence of a powerful surveillant (typically called a ``dictator'') who only allows certain cryptographic primitives and knows all the secret keys in a system. The basic idea is that there is a second (anamorphic) mode of encryption that allows for transmitting an anamorphic message using a double key to a receiver who can decrypt this message using the same double key. From the point of view of the dictator, the encryption keys as well as the ciphertexts in the regular and anamorphic modes are indistinguishable. The most recent works in this field consider public key anamorphic encryption (PKAE), i.e., the sender of an anamorphic message requires an encryption double key (or no key at all), and the receiver requires an associated decryption double key. Known constructions, however, either work only for schemes that are mostly of theoretical interest or come with conceptual limitations, assuming additional unnecessary properties (e.g., randomness recoverability and CCA security). In this paper, we ask whether we can design PKAE schemes without such limitations and be closer to practically used PKE schemes. In fact, such schemes are more likely to be allowed by a cognizant dictator. Moreover, we initiate the study of identity-based anamorphic encryption (IBAE), as the IBE setting seems to be a natural choice for a dictator. For both PKAE and IBAE, we show how well-known IND-CPA and IND-CCA secure primitives can be extended by an anamorphic encryption channel. In contrast to previous work, we additionally consider CCA (rather than just CPA) security notions for the anamorphic channel and also build upon CPA (rather than only CCA) secure PKE. Finally, we ask whether it is possible to port the recent concept of anamorphic signatures, which considers constructing symmetric anamorphic channels in case only signature schemes are allowed by the dictator, to the asymmetric setting, which we denote by public-key anamorphic signatures (PKAS). Moreover, we consider IND-CCA security for the anamorphic channel of our PKAS.
Last updated:  2025-05-28
Space-Lock Puzzles and Verifiable Space-Hard Functions from Root-Finding in Sparse Polynomials
Nico Döttling, Jesko Dujmovic, and Antoine Joux
Timed cryptography has initiated a paradigm shift in the design of cryptographic protocols: using timed cryptography we can realize tasks fairly, which is provably out of range of standard cryptographic concepts. To a certain degree, the success of timed cryptography is rooted in the existence of efficient protocols based on the sequential squaring assumption. In this work, we consider space analogues of timed cryptographic primitives, which we refer to as space-hard primitives. Roughly speaking, these notions require honest protocol parties to invest a certain amount of space and provide security against space-constrained adversaries. While inefficient generic constructions of timed primitives from strong assumptions such as indistinguishability obfuscation can be adapted to the space-hard setting, we currently lack concrete and versatile algebraically structured assumptions for space-hard cryptography. In this work, we initiate the study of space-hard primitives from concrete algebraic assumptions relating to the problem of root-finding of sparse polynomials. Our motivation to study this problem is a candidate construction of VDFs by Boneh et al. (CRYPTO 2018) which is based on the hardness of inverting permutation polynomials. Somewhat anticlimactically, our first contribution is a full break of this candidate. However, we then revise this hardness assumption by dropping the permutation requirement and considering arbitrary sparse polynomials of high degree. We argue that this type of assumption is much better suited for space-hardness than for timed cryptography. We then proceed to construct both space-lock puzzles and verifiable space-hard functions from this assumption.
Last updated:  2025-05-28
The Rényi Smoothing Parameter and Its Applications in Lattice-Based Cryptography
Cong Ling, Laura Luzzi, and Hao Yan
The smoothing parameter is a cornerstone concept in lattice-based cryptography. Traditionally defined using the \( L^{\infty} \) distance, this standard formulation can be overly stringent compared to the \( L^1 \) (or statistical) distance more commonly employed in cryptographic contexts. Recent work has proposed relaxed definitions based on Kullback-Leibler (KL) divergence and \( L^1 \) distance, thereby loosening the constraints required for the distance to vanish. However, the additive nature of the \( L^1 \) distance can be limiting for cryptographic applications where probability preservation is essential. In this paper, we introduce the Rényi smoothing parameter of a lattice, based on Rényi divergence, to address this limitation. The advantages of Rényi divergence in cryptographic settings are well known thanks to its multiplicative nature. The Rényi smoothing parameter provides a tunable framework that interpolates between the \( L^1 \) and \( L^{\infty} \) distances, offering enhanced flexibility. We present two complementary methods to study the averaging behavior of the Rényi flatness factor: one uses classical tools such as the Minkowski-Hlawka ensemble and Rogers' formula for computing moments of lattice functions; the other employs Construction A lattices derived from random codes. Finally, we illustrate how this new perspective yields improvements in lattice-based cryptographic constructions.
Last updated:  2025-05-28
Tighter Quantum Security for Fiat-Shamir-with-Aborts and Hash-and-Sign-with-Retry Signatures
Pouria Fallahpour, Serge Fehr, and Yu-Hsuan Huang
We revisit the quantum security (in the QROM) of digital signature schemes that follow the Fiat-Shamir-with-aborts (FSwA) or the probabilistic hash-and-sign with retry/abort (HSwA) design paradigm. Important examples of such signature schemes are Dilithium, SeaSign, Falcon+ and UOV. In particular, we are interested in the UF-CMA-to-UF-NMA reduction for such schemes. We observe that previous such reductions have a reduction loss that is larger than what one would hope for, or require a more stringent notion of zero-knowledge than one would hope for. We resolve this matter here by means of a novel UF-CMA-to-UF-NMA reduction that applies to FSwA and HSwA signature schemes simultaneously, and that offers an improved reduction loss (without making the zero-knowledge assumption more stringent).
Last updated:  2025-05-28
AsconAEAD128 Revisited in the Multi-user Setting
Bishwajit Chakraborty, Mridul Nandi, Soumit Pal, Thomas Peyrin, and Quan Quan Tan
More than half a decade after its initiation, NIST declared Ascon the winner of the LwC competition. In the first public draft of AsconAEAD128, NIST recognized that Ascon has limitations when used in multi-user applications. To mitigate this, NIST prescribed the use of a \(256\)-bit key in multi-user applications and produced an instantiation showing how to process this extra key size in the current AsconAEAD128 API. While doing so, they identified a limitation of this new scheme (which we refer to as mu-Ascon in this document): mu-Ascon is vulnerable to committing attacks and hence cannot be used in cases where committing security is required. On the other hand, the full key-binding property in Ascon, which separated it from other sponge-type constructions, has been used to show that Ascon is much stronger in the sense that it offers key recovery resistance even in the case where some intermediate state is recovered. We remark that the current mu-Ascon has the limitation that only a partial key is bound during initialization and finalization. In this work, we propose some alternative instantiations of the AsconAEAD128 API for multi-user applications. In comparison with the current mu-Ascon proposal, our first construction Ascon-256.v2 guarantees CMT-4 committing security up to 64 bits, and our second construction Ascon-256.v3 leads to both CMT-4 committing security and full 256-bit key binding. Structurally, our instantiations use only one extra permutation call to provide these extra security features compared to mu-Ascon, which has a negligible overhead in terms of performance (given the lightweight nature of the Ascon permutation).
Last updated:  2025-05-28
LP2+: a robust symmetric-key AKE protocol with perfect forward secrecy, and an advocacy for thorough security proofs
Pierre-Alain Jacqmin and Jean Liénardy
Symmetric-key authenticated key establishment (AKE) protocols are particularly well suited to resource-constrained environments such as internet of things (IoT) devices. Moreover, they often rely on better-understood assumptions than asymmetric ones. In this paper, we review the security model for symmetric-key AKE protocols. We show why several existing models allow trivial attacks while failing to protect against some non-trivial ones. We fix these issues with our new security definitions. We show that the protocols $\textrm{LP2}$ and $\textrm{LP3}$ of Boyd et al. do not satisfy the claimed security properties. We propose a new 2-message protocol based on them, called $\textrm{LP2+}$. This protocol is proved to satisfy correctness, weak synchronization robustness, entity authentication and key indistinguishability, and, as a consequence, it admits perfect forward secrecy. An instantiation of $\textrm{LP2+}$ is presented, whose security relies only on that of a pseudo-random function (PRF). Its total execution time in normal cases is dominated by only 14 evaluations of the PRF, making it a lightweight protocol that is particularly well suited for resource-constrained environments such as IoT devices. The flaws found in the security models as well as in the security arguments could have been avoided with precise and detailed proofs. We thus take this paper as an opportunity to advocate for thorough security proofs. Therefore, we have made the choice of rigor over concision.
Last updated:  2025-05-28
Refined TFHE Leveled Homomorphic Evaluation and Its Application
Ruida Wang, Jincheol Ha, Xuan Shen, Xianhui Lu, Chunling Chen, Kunpeng Wang, and Jooyoung Lee
TFHE is a fully homomorphic encryption scheme over the torus that supports fast bootstrapping. Its primary evaluation mechanism is based on gate bootstrapping and programmable bootstrapping (PBS), which computes functions while simultaneously refreshing noise. PBS-based evaluation is user-friendly and efficient for small circuits; however, the number of bootstrapping operations increases exponentially with the circuit depth. To address the challenge of efficiently evaluating large-scale circuits, Chillotti et al. introduced a leveled homomorphic evaluation (LHE) mode at Asiacrypt 2017. This mode decouples circuit evaluation from bootstrapping, resulting in a speedup of hundreds of times over PBS-based methods. However, the remaining circuit bootstrapping (CBS) becomes a performance bottleneck, even though its frequency is linear in the circuit depth. In this paper, we refine the LHE mode by mitigating the high cost of CBS. First, we patch the NTT-based CBS algorithm proposed by Wang et al. [WWL+, Eurocrypt 2024], accelerating their algorithm by up to 2.6$\times$. Then, observing the suboptimal parallelism and high complexity of modular reduction in NTT under CBS parameters, we extend WWL+ to an FFT-based algorithm by redesigning the pre-processing method and introducing a split FFT technique. This achieves the fastest CBS implementation with the smallest key size, outperforming the open-source WWL+ implementation by up to 12.1$\times$ (resp. 5.12$\times$ compared to our patched algorithm), and surpassing TFHEpp [MBM+, USENIX 2021] by 3.42$\times$ with a key size reduction of 33.2$\times$. Furthermore, we propose an improved integer-input LHE mode by extending our CBS algorithm to support higher precision and combining it with additional optimizations such as multi-bit extraction. Compared to the previous integer-input LHE mode proposed by Bergerat et al. [BBB+, JoC 2023], our approach is up to 10.7$\times$ faster with a key size reduction of up to 4.4$\times$. To demonstrate the practicality of our improved LHE mode, we apply it to AES transciphering and general homomorphic look-up table (LUT) evaluation. For AES evaluation, our method is 4.8$\times$ faster and reduces the key size by 31.3$\times$ compared to the state-of-the-art method, Thunderbird [WLW+, TCHES 2024]. For LUT evaluation, we compare our results with the recent work of Trama et al. [TCBS, ePrint 2024/1201], which constructs a general 8-bit processor over TFHE. Our method not only achieves faster 8-to-8 LUT evaluation but also improves the efficiency of most heavy 8-bit bivariate instructions by up to 21$\times$ and the 16-bit sigmoid function by more than 26$\times$.
Last updated:  2025-05-28
Simulatability SOA Does Not Imply Indistinguishability SOA in the CCA Setting
Hans Heum
Contrary to expectation, we show that simulation-based selective-opening security (SSO) does not imply indistinguishability-based selective-opening security (ISO) in the CCA setting, making them incomparable in the presence of either encryption randomness leakage (sender opening) or secret key leakage (receiver opening). This contrasts with the CPA case, where SSO-CPA is known to be strictly stronger than ISO-CPA in the presence of sender and/or receiver opening. Our separation result holds relative to all message distributions with sufficiently high min-entropy. On the other hand, restricting to message distributions with low enough min-entropy gives rise to an implication. Our separation result does not rely on the presence of selective openings. At a glance, this may seem to contradict known equivalence results between indistinguishability, semantic security, and selective-opening security under trivial openings. We reconcile the apparent contradiction by showing that the selective-opening CCA landscape splits into a ``high-entropy'' and a ``low-entropy'' world which must be considered separately.
Last updated:  2025-05-28
Algebraic Cryptanalysis of AO Primitives Based on Polynomial Decomposition: Applications to Rain and Full AIM-I/III/V
Hong-Sen Yang, Qun-Xiong Zheng, and Jing Yang
The LowMC-based post-quantum signature scheme Picnic was selected as a third-round candidate for NIST PQC, attracting wide attention to the design of efficient and secure post-quantum signature schemes using Symmetric Techniques for Advanced Protocols (STAP). Symmetric primitives designed for advanced protocols such as secure multi-party computation (MPC), fully homomorphic encryption (FHE), and zero-knowledge (ZK) proof systems, with the goal of reducing the number of multiplication operations, are referred to as arithmetic-oriented (AO) primitives. These cryptographic primitives are typically constructed over large finite fields, which makes classical statistical analysis methods like differential and linear cryptanalysis inefficient. Due to their inherent algebraic properties, the mainstream security evaluation approaches are based on algebraic attacks. In this paper, we analyze the security of the MPC-friendly primitives \textsc{Rain} (CCS 2022) and AIM (CCS 2023) used in the post-quantum signature schemes Rainier and AIMer. Existing algebraic attacks on \textsc{Rain} and AIM were conducted over $\mathbb{F}_2$. We propose a novel algebraic attack over $\mathbb{F}_{2^n}$ that uses polynomial decomposition to reduce the degrees of equations. By further combining it with the guess-and-determine technique, meet-in-the-middle modeling, and resultants, we are able to attack \textsc{Rain} and the full AIM. Specifically, we successfully attacked 2-round \textsc{Rain} with $2^{73.7}/2^{107.0}/2^{138.9}$ primitive calls and 3-round \textsc{Rain} with $2^{160.6}/2^{236.0}/2^{311.1}$ primitive calls, for the $128/192/256$-bit keys. For the full AIM, we successfully attacked it with $2^{114.0}/2^{163.2}/2^{228.3}$ primitive calls for the $128/192/256$-bit keys. The attack complexities mainly lie in solving univariate polynomial equations and computing resultants, and hence the complexity evaluations are accurate.
Last updated:  2025-05-28
Super-Quadratic Quantum Speed-Ups and Guessing Many Likely Keys
Timo Glaser, Alexander May, and Julian Nowakowski
We study the fundamental problem of guessing cryptographic keys, drawn from some non-uniform probability distribution $\mathcal D$, as e.g. in LPN, LWE or for passwords. The optimal classical algorithm enumerates keys in decreasing order of likelihood. The optimal quantum algorithm, due to Montanaro (2011), is a sophisticated Grover search. We give the first tight analysis for Montanaro's algorithm, showing that its runtime is $2^{\operatorname{H}_{2/3}(\mathcal D)/2}$, where $\operatorname{H}_{\alpha}(\cdot)$ denotes Rényi entropy with parameter $\alpha$. Interestingly, this is a direct consequence of an information theoretic result called Arikan's Inequality (1996) -- which has so far been missed in the cryptographic community -- that tightly bounds the runtime of classical key guessing by $2^{\operatorname{H}_{1/2}(\mathcal D)}$. Since $\operatorname{H}_{2/3}(\mathcal D) < \operatorname{H}_{1/2}(\mathcal D)$ for every non-uniform distribution $\mathcal D$, we thus obtain a \emph{super-quadratic} quantum speed-up $s>2$ over classical key guessing. To give some numerical examples, for the binomial distribution used in Kyber, and for a typical password distribution, we obtain quantum speed-up $s>2.04$. For the $n$-fold Bernoulli distribution with parameter $p=0.1$ as in LPN, we obtain $s > 2.27$. For small error LPN with $p=\Theta(n^{-1/2})$ as in Alekhnovich encryption, we even achieve \emph{unbounded} quantum speedup $s = \Omega(n^{1/12})$. As another main result, we provide the first thorough analysis of guessing in a multi-key setting. Specifically, we consider the task of attacking many keys sampled independently from some distribution $\mathcal D$, and aim to guess a fraction of them. For product distributions $\mathcal D = \chi^n$, we show that any constant fraction of keys can be guessed within $2^{\operatorname{H}(\mathcal D)}$ classically and $2 ^{\operatorname{H}(\mathcal D)/2}$ quantumly per key, where $\operatorname{H}(\chi)$ denotes Shannon entropy. In contrast, Arikan's Inequality implies that guessing a single key costs $2^{\operatorname{H}_{1/2}(\mathcal D)}$ classically and $2^{\operatorname{H}_{2/3}(\mathcal D)/2}$ quantumly. Since $\operatorname{H}(\mathcal D) < \operatorname{H}_{2/3}(\mathcal D) < \operatorname{H}_{1/2}(\mathcal D)$, this shows that in a multi-key setting the guessing cost per key is substantially smaller than in a single-key setting, both classically and quantumly.
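The single-key exponents are easy to reproduce. The sketch below (ours) computes $\operatorname{H}_{1/2}$ and $\operatorname{H}_{2/3}$ for the centered binomial distribution with $\eta = 2$ (a Kyber parameter choice, assumed here); since both entropies scale linearly for product distributions, the per-sample ratio already gives the speed-up $s = 2\operatorname{H}_{1/2}/\operatorname{H}_{2/3} \approx 2.04$ quoted above.

```python
# Classical guessing costs 2^{H_{1/2}(D)}, Montanaro's quantum search
# 2^{H_{2/3}(D)/2}, so the speed-up exponent is s = 2*H_{1/2}/H_{2/3}.
# Centered binomial with eta = 2: Pr[X = k - eta] = C(2*eta, k) / 4^eta.
from math import comb, log2

def renyi(p, alpha):
    # Renyi entropy H_alpha(p) = log2(sum_i p_i^alpha) / (1 - alpha)
    return log2(sum(pi ** alpha for pi in p)) / (1 - alpha)

eta = 2
p = [comb(2 * eta, k) / 4 ** eta for k in range(2 * eta + 1)]

h_half, h_23 = renyi(p, 1 / 2), renyi(p, 2 / 3)
print(f"H_1/2 = {h_half:.4f}, H_2/3 = {h_23:.4f}, "
      f"s = {2 * h_half / h_23:.4f}")          # s ~ 2.044 > 2.04
```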
Last updated:  2025-05-28
Separations between simulation-based and simulation-free formulations of security for public key encryption
Yodai Watanabe
Simulation-based formulations of security enable us to naturally capture our intuition for security. However, since the simulation-based formulation is rather complicated, it is convenient to consider alternative simulation-free formulations which are easier to manipulate but can be employed to give the same security as the simulation-based one. So far, the indistinguishability-based and comparison-based formulations have been introduced as such alternatives. Regarding security for public key encryption, while these three formulations are shown to be equivalent in most settings, some relations among them remain open for non-malleability under the valid ciphertext condition, in which an adversary fails if it outputs an invalid ciphertext. This work aims to help assess the appropriateness of the formulations of security by clarifying the above open relations among the formulations of non-malleable encryption.
Last updated:  2025-05-28
Formal Security and Functional Verification of Cryptographic Protocol Implementations in Rust
Karthikeyan Bhargavan, Lasse Letager Hansen, Franziskus Kiefer, Jonas Schneider-Bensch, and Bas Spitters
We present an effective methodology for the formal verification of practical cryptographic protocol implementations written in Rust. Within a single proof framework, we show how to develop machine-checked proofs of diverse properties like runtime safety, parsing correctness, and cryptographic protocol security. All analysis tasks are driven by the software developer, who writes annotations in the Rust source code and chooses a backend prover for each task, ranging from a generic proof assistant like F$\star$ to dedicated crypto-oriented provers like ProVerif and SSProve. Our main contribution is a demonstration of this methodology on Bert13, a portable, post-quantum implementation of TLS 1.3 written in Rust and verified both for security and functional correctness. To our knowledge, this is the first security verification result for a protocol implementation written in Rust, and the first verified post-quantum TLS 1.3 library.
Last updated:  2025-05-28
Collision Attacks on Reduced RIPEMD-128
Zhengrong Lu, Hongbo Yu, Xiaoen Lin, and Sitong Yuan
RIPEMD-128 is an ISO/IEC standard hash function based on a double-branch Merkle-Damgård structure. Its compression function includes two branches with distinct Boolean functions and message expansion permutations. To perform a collision attack, differential characteristics must be constructed simultaneously for both branches under the same message word difference, and the message modification order must align with conditions in both branches. These factors make collision attacks on (reduced) RIPEMD-128 highly challenging. In 2014, an attack on 40 steps of RIPEMD-128 was achieved by Wang with no state differences in round 3. In this work, we analyze message permutation properties and propose two new structures for creating message differences. These structures enable high-probability local collisions in both branches of round 3, extending the attack to more steps. Notably, the second structure can eliminate all state differences in round 3, allowing the attack to cover more than three whole rounds. To ensure practical attacks, we limit the number of conditions based on our message modification strategy and use multi-step message modification techniques to control more conditions. As a result, we successfully generate colliding message pairs for 46-step and 54-step reduced RIPEMD-128, with time complexities of approximately $2^{42}$ and $2^{54}$, respectively.
Last updated:  2025-05-28
Multi-Party Distributed Point Functions with Polylogarithmic Key Size from Invariants of Matrices
Toomas Krips and Pille Pullonen-Raudvere
Distributed point functions (DPFs), introduced in 2014, are a widely used primitive in secure computation for a wide variety of applications. However, until now, constructions of DPFs with polylogarithmic-size keys have been known only for the two-party setting. We propose a scheme for a polylogarithmic-size DPF for an arbitrary number of parties. We use a technique where a secret-shared vector, mapped to collinear vectors by public matrices, serves as an invariant for off-path leaves. Using a technique by Shamir, we show that when we work over $\mathbb{Z}_{pq}$, these vectors are hard to compute if factoring is hard. We also show that our scheme is a secure DPF, provided that two new assumptions hold, one related to the Generic Group Model and the other to MinRank. The output of our scheme lives in the exponent of some group where Diffie-Hellman-type problems are hard. Although this limits the usability of our scheme, we believe it is the first distributed point function for more than two parties with a key size polylogarithmic in the size of the domain that does not use fully homomorphic encryption.
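To pin down the functionality (not the construction), the naive baseline below (ours) additively shares the full truth table of the point function $f_{\alpha,\beta}$ among the parties; its keys are linear in the domain size, which is precisely what the polylogarithmic construction avoids.

```python
# Naive multi-party DPF baseline for f_{alpha,beta}(x) = beta if x == alpha
# else 0: additive shares of the full truth table. Key size is linear in the
# domain size N; the paper's contribution is polylogarithmic keys instead.
import secrets

MOD = 2**61 - 1

def gen(alpha, beta, N, parties=3):
    # first (parties - 1) keys are uniformly random tables
    keys = [[secrets.randbelow(MOD) for _ in range(N)]
            for _ in range(parties - 1)]
    # last key fixes every column so the shares sum to f(x)
    last = [(-sum(col) + (beta if x == alpha else 0)) % MOD
            for x, col in enumerate(zip(*keys))]
    return keys + [last]

def eval_share(key, x):
    return key[x]

N, alpha, beta = 16, 5, 7
keys = gen(alpha, beta, N)
for x in range(N):
    total = sum(eval_share(k, x) for k in keys) % MOD
    assert total == (beta if x == alpha else 0)
```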
Last updated:  2025-05-28
A Novel Leakage Model in OpenSSL’s Miller-Rabin Primality Test
Xiaolin Duan, Fan Huang, Yaqi Wang, and Honggang Hu
At Crypto 2009, Heninger and Shacham presented a branch-and-prune algorithm for reconstructing RSA private keys given a random fraction of their private components. This method is widely adopted in side-channel attacks, and its complexity is closely related to the specific leakage pattern encountered. In this work, we identified a novel leakage model in the Miller-Rabin primality test implemented in OpenSSL. Under certain side-channel attacks against fixed-window modular exponentiation (e.g., recovering the least significant $b$ bits from each window), the proposed model enables staggered recovery of bits in $p$ and $q$, reducing uncertainty in key reconstruction. In particular, this model includes previously undocumented scenarios where full key recovery is achievable without branching. To understand how the proposed leakage model could contribute to attacks on modular exponentiation, we investigated the global and local behavior of key reconstruction. Our evaluation demonstrates that the proposed scenarios enable more efficient key reconstruction and retain this advantage when additional erasure bits are introduced. Moreover, in specific cases, successful reconstruction remains achievable within practical time even if fewer than 50% of the bits are obtained. Finally, we conducted a series of experiments to confirm the practicality of our assumption, successfully recovering the lower 4 bits from each 6-bit window.
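As background on how such partial knowledge drives reconstruction, the following toy Python sketch (ours, not the paper's code) implements the basic Heninger-Shacham branch-and-prune loop: candidate prefixes of $p$ and $q$ are grown from the least significant bit upwards, branching on unknown bits and pruning with the congruence $pq \equiv N \pmod{2^{i+1}}$. Leaked bits, supplied here as dictionaries from bit position to value, remove branching exactly as in the staggered-recovery scenarios the paper studies.

    def reconstruct(N, bitlen, known_p, known_q):
        # known_p / known_q: bit index -> leaked bit value
        candidates = [(1, 1)]  # p and q are odd, so bit 0 is 1
        for i in range(1, bitlen):
            step = []
            for p, q in candidates:
                for bp in ([known_p[i]] if i in known_p else (0, 1)):
                    for bq in ([known_q[i]] if i in known_q else (0, 1)):
                        pp, qq = p | (bp << i), q | (bq << i)
                        # prune: the low i+1 bits of p*q must match those of N
                        if (pp * qq - N) % (1 << (i + 1)) == 0:
                            step.append((pp, qq))
            candidates = step
        return [(p, q) for p, q in candidates if p * q == N]

    print(reconstruct(143, 4, {1: 1}, {}))  # [(11, 13)]: one leaked bit of p suffices here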
Last updated:  2025-05-28
Unbiasable Verifiable Random Functions from Generic Assumptions
Nicholas Brandt
We present conceptually simple and practically competitive constructions of verifiable random functions (VRF) that fulfill strong notions of unbiasability recently introduced by Giunta and Stewart. VRFs with such strong properties were previously only known in the random oracle model or from the decisional Diffie–Hellman assumption with preprocessing. In contrast, our constructions are based on generic assumptions and are thus the first to be plausibly post-quantum secure in the standard model (without any setup). Moreover, our transformation preserves useful properties of the underlying VRF such as aggregatability, (a form of) key-homomorphism, small entropy loss, and computability in \(\mathsf{NC}^1\); and it even yields a symmetric unbiasable VRF whose pseudorandomness holds even when the input and the key are swapped. To underscore the importance of provable unbiasability in the standard model, we showcase a potential security weakness in the folklore VUF-then-Hash construction. Lastly, we discuss and remedy several issues regarding the definition of unbiasability, and outline a path towards a lattice-based instantiation of VRFs.
Last updated:  2025-05-28
Incompressible Encryption with Everlasting Security
Eylon Yogev and Shany Ben-David
Recently, the concept of incompressible encryption has emerged as a powerful enhancement to key-leakage resilience. In an incompressible encryption scheme, an adversary who intercepts ciphertexts is forced to dedicate a significant amount of memory to store them in full if they wish to extract any information about the plaintext later when the secret key becomes available. Given two messages, the security game involves two adversaries: the first adversary receives an encryption of one of the messages and produces a compressed state. Then, the second adversary, given both the secret key and the compressed state, attempts to determine which message was encrypted. Several positive results exist in incompressible cryptography. On the one hand, there are constructions based on minimal assumptions but with a poor rate (i.e., rate tends to 0). On the other hand, there are rate-1 constructions that achieve optimal efficiency but rely on strong cryptographic assumptions, such as obfuscation. A stronger security notion, known as everlasting security, has been proposed for incompressible encryption. In this formulation, the second adversary, who receives the compressed state and the secret key, is allowed to be computationally unbounded. While this notion is conceptually appealing, no constructions of everlasting incompressible encryption are currently known, regardless of the underlying assumption or even in idealized models. In this work, we give the first construction of everlasting incompressible encryption. In fact, we show that everlasting incompressible encryption is inherent in any sufficiently secure public-key encryption scheme. Specifically, we prove that any public-key encryption scheme with subexponential security (when instantiated with an appropriate security parameter) already satisfies the definition of everlasting incompressible encryption with subexponential security. Furthermore, our scheme achieves rate-1, improving upon existing results even for the weaker notion of standard incompressible encryption.
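Written out schematically (our paraphrase of the game described above, not the paper's notation): sample $b \leftarrow \{0,1\}$ and run $\mathsf{st} \leftarrow \mathcal{A}_1(\mathsf{pk}, \mathsf{Enc}(\mathsf{pk}, m_b))$ subject to a space bound $|\mathsf{st}| \leq S$; then run $b' \leftarrow \mathcal{A}_2(\mathsf{sk}, \mathsf{st})$. Security requires $\Pr[b' = b] \approx 1/2$, and everlasting security additionally allows $\mathcal{A}_2$ to be computationally unbounded.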
Last updated:  2025-05-28
OptAttest: Verifying Multi-List Multi-Hop History via a Hybrid Zero-Knowledge Architecture
Joshua G. Stern
To prevent privacy-preserving digital assets from becoming instruments of despotism via unitary-executivist compliance regimes, we propose OptAttest, a hybrid zero-knowledge architecture. This system empowers users to optionally generate verifiable attestation history for the current (Hop 0) and immediately preceding (Hop 1) transactions involving their private commitments. For crucial 0-hop multi-list attestations, users employ Zero-Knowledge Proofs (ZKPs) of claims from selected Verifiable Credentials (VCs). Users achieve per-transaction efficiency with diverse VC types by pre-computing and caching proofs of their VC validity. This approach avoids mandated adherence to singular, fallible external standards. Opted-in lightweight updates create cryptographic accumulator summaries, verified by network infrastructure (e.g., Layer 2 scaling solutions using Zero-Knowledge Virtual Machines), and are paired with user-managed Intermediate Attestation Data Packets (IADPs) containing detailed evidence. For comprehensive verification, users can then generate full recursive proofs from these IADPs for their attestation-enabled funds, leveraging native zkVM recursion. The protocol facilitates optional attestation generation, not enforcement, allowing downstream policy application. Aiming to cultivate a permissionless ethos, we propose a user-centric balance between privacy and verifiable accountability, distinct from models compelling broader data access. Folding schemes are noted as potential future enhancements for recursive proof efficiency.
Last updated:  2025-05-28
Advancements in Distributed RSA Key Generation: Enhanced Biprimality Tests
ChihYun Chuang, IHung Hsu, and TingFang Lee
This work re-evaluates the soundness guarantees of the Boneh-Franklin biprimality test ($2001$) for Blum integers. Under the condition $\gcd(pq, p + q - 1) = 1$, we show that the test accepts a non-RSA modulus with probability at most $1/4$. This is a refinement of the previously established $1/2$ bound and holds for all cases except the specific instance where $p=q=3$. We further demonstrate that this $1/4$ bound is tight, thereby halving the number of test iterations required to achieve equivalent soundness. This directly reduces computational and communication overhead in distributed RSA generation protocols. Additionally, we propose a generalized biprimality test based on the Lucas sequence. In the worst case, the acceptance probability of the proposed test is at most $1/4 + 1.25/(p_{\min} -3)$, where $p_{\min}$ is the smallest prime factor of $N$. To validate our approach, we implemented a variant of the Miller-Rabin test, the Boneh-Franklin test, and our proposed test, performing pairwise comparisons of their effectiveness. Both theoretical analysis and simulations indicate that the proposed test is generally more efficient than the Boneh-Franklin test in detecting cases where $N$ is not an RSA modulus. Furthermore, this test is applicable to generating RSA moduli for arbitrary odd primes. A distributed RSA modulus verification protocol that incorporates our test is also introduced. The protocol is secure against semi-honest adversaries for general odd primes. For Blum integers, it also offers security against malicious adversaries. We analyze its efficiency and compatibility with existing distributed RSA protocols, including those of Boneh-Franklin and Burkhardt et al. (CCS 2023). Our protocol offers competitive performance while enhancing soundness and generality in cryptographic applications.
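For orientation, one round of the Boneh-Franklin test can be sketched in Python as below. This is a toy, centralized illustration with $p$ and $q$ in the clear, whereas the distributed protocol computes the exponentiation from shares of $p + q$. Under the refined bound above, $t$ accepted rounds bound the error by $(1/4)^t$ rather than $(1/2)^t$, halving the iterations needed for a target soundness level.

    import random

    def jacobi(a, n):  # Jacobi symbol (a/n), n odd and positive
        a %= n; t = 1
        while a:
            while a % 2 == 0:
                a //= 2
                if n % 8 in (3, 5):
                    t = -t
            a, n = n, a
            if a % 4 == 3 and n % 4 == 3:
                t = -t
            a %= n
        return t if n == 1 else 0

    def bf_round(N, p, q):  # Blum integers: p = q = 3 (mod 4)
        g = random.randrange(2, N)
        while jacobi(g, N) != 1:
            g = random.randrange(2, N)
        e = (N - p - q + 1) // 4  # equals (p-1)(q-1)/4 when N = p*q
        return pow(g, e, N) in (1, N - 1)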
Last updated:  2025-05-28
On Proving Equivalence Class Signatures Secure from Non-interactive Assumptions
Balthazar Bauer, Georg Fuchsbauer, and Fabian Regen
Equivalence class signatures (EQS), introduced by Hanser and Slamanig (AC’14, J.Crypto’19), sign vectors of elements from a bilinear group. Their main feature is “adaptivity”: given a signature on a vector, anyone can transform it to a (uniformly random) signature on any multiple of the vector. A signature thus authenticates equivalence classes and unforgeability is defined accordingly. EQS have been used to improve the efficiency of many cryptographic applications, notably (delegatable) anonymous credentials, (round-optimal) blind signatures, group signatures and anonymous tokens. EQS security implies strong anonymity (or blindness) guarantees for these schemes which hold against malicious signers without trust assumptions. Unforgeability of the original EQS construction is proven directly in the generic group model. While there are constructions from standard assumptions, these either achieve prohibitively weak security notions (PKC’18) or they require a common reference string (AC’19, PKC’22), which reintroduces trust assumptions avoided by EQS. In this work we ask whether EQS schemes that satisfy the original security model can be proved secure under standard (or even non-interactive) assumptions with standard techniques. Our answer is negative: assuming a reduction that, after running an adversary breaking unforgeability once, breaks a non-interactive computational assumption, we construct efficient meta-reductions that either break the assumption or break class-hiding, another security requirement for EQS.
Last updated:  2025-05-28
Exploring General Cyclotomic Rings in Torus-Based Fully Homomorphic Encryption: Part I - Prime Power Instances
Philippe Chartier, Michel Koskas, and Mohammed Lemou
In this article, we delve into the domain of fully homomorphic encryption over the torus, focusing on the algebraic techniques required for managing polynomials within cyclotomic rings defined by prime power indices. Our study encompasses essential operations, such as modulo reduction, efficient homomorphic evaluation of trace operators, blind extraction, and the blind rotation pivotal to the bootstrapping process, all redefined within this mathematical context. Through the extensive application of duality theory and trace operators in general cyclotomic rings or fields, we systematize and enhance these operations, introducing a simplified formulation of bootstrapping alongside an effective packing strategy. This investigation serves as an initial step toward addressing the broader case of composite cyclotomic indices, which we expect will open up new avenues for cryptographic applications and functionalities.
Last updated:  2025-05-28
Generalized BGV, BFV, and CKKS for Homomorphic Encryption over Matrix Rings
Bence Mali
Some of the most valuable applications of homomorphic encryption, such as encrypted machine learning inference, require efficient large-scale plaintext-ciphertext and ciphertext-ciphertext matrix multiplications. Current state-of-the-art techniques for matrix multiplications all build on the ability to pack many plaintext values into a single ciphertext and compute on them in a Single Instruction, Multiple Data (SIMD) manner. However, to fit the operation of matrix multiplication into this computational model, a large number of additional costly operations need to be performed, such as the rotation of elements between the plaintext slots. In this work, we propose an orthogonal approach to performing encrypted matrix operations with BGV-like encryption schemes, where the plaintext and ciphertext spaces are generalized to a matrix ring of arbitrary dimension. To deal with the inherent problem of noncommutativity in the case of matrix rings, we present a new superoperator technique to better represent linear and quadratic expressions in the secret key, which allows for the relinearization of ciphertexts after multiplication. The security of the modified encryption schemes is based on Module-LWE with module rank equal to the dimension of the matrices. With this construction, we demonstrate that Ring-LWE, Module-LWE, and LWE are potentially equally efficient for homomorphic encryption, both in terms of useful information density and noise growth, differing only in the size of the matrices.
Last updated:  2025-05-28
Delegated PSI from Homomorphic Encryptions
Sicheng Wei and Jingwei Hu
This paper presents an efficient protocol for private set intersection in a setting with multiple set owners and a semi-honest cloud server. The core idea is to reduce the intersection computation to secure operations over Bloom filters, enabling both scalability and efficiency. By leveraging this transformation, our protocols achieve strong privacy guarantees while minimizing computation and communication overhead.
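To make the underlying reduction concrete, here is a toy, non-private Python sketch (ours; all names and parameters are illustrative): every element of the intersection still tests positive against the bitwise AND of the two Bloom filters, so probing the AND with one's own elements recovers the intersection up to false positives. The protocol itself performs such filter operations under encryption at the cloud server.

    import hashlib

    M, K = 256, 3  # filter size in bits, number of hash functions

    def positions(item):
        return [int.from_bytes(hashlib.sha256(f"{j}|{item}".encode()).digest(), "big") % M
                for j in range(K)]

    def bloom(items):
        bits = [0] * M
        for it in items:
            for pos in positions(it):
                bits[pos] = 1
        return bits

    def maybe_contains(bits, item):
        return all(bits[pos] for pos in positions(item))

    A, B = {"alice", "bob", "carol"}, {"bob", "dave"}
    joint = [x & y for x, y in zip(bloom(A), bloom(B))]
    print(sorted(x for x in A if maybe_contains(joint, x)))  # ['bob'], barring false positives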
Last updated:  2025-05-28
DFS: Delegation-friendly zkSNARK and Private Delegation of Provers
Yuncong Hu, Pratyush Mishra, Xiao Wang, Jie Xie, Kang Yang, Yu Yu, and Yuwen Zhang
Zero-Knowledge Succinct Non-interactive Arguments of Knowledge (zkSNARKs) lead to proofs that can be succinctly verified but require huge computational resources to generate. Prior systems outsource proof generation either through public delegation, which reveals the witness to the third party, or, more preferably, private delegation that keeps the witness hidden using multiparty computation (MPC). However, current private delegation schemes struggle with scalability and efficiency due to MPC inefficiencies, poor resource utilization, and suboptimal design of zkSNARK protocols. In this paper, we introduce DFS, a new zkSNARK that is delegation-friendly for both public and private scenarios. Prior work focused on optimizing the MPC protocols for existing zkSNARKs, while DFS uses co-design between MPC and zkSNARK so that the protocol is efficient for both distributed computing and MPC. In particular, DFS achieves linear prover time and logarithmic verification cost in the non-delegated setting. For private delegation, DFS introduces a scheme with zero communication overhead in MPC and achieves malicious security for free, which results in logarithmic overall communication, whereas prior work required linear communication. Our evaluation shows that DFS is as efficient as state-of-the-art zkSNARKs in public delegation; when used for private delegation, it scales better than previous work. In particular, for $2^{24}$ constraints, the total communication of DFS is less than $500$KB, while prior work incurs $300$GB, which is linear in the circuit size. Additionally, we identify and address a security flaw in prior work, EOS (USENIX'23).
Last updated:  2025-05-28
LatticeFold: A Lattice-based Folding Scheme and its Applications to Succinct Proof Systems
Dan Boneh and Binyi Chen
Folding is a recent technique for building efficient recursive SNARKs. Several elegant folding protocols have been proposed, such as Nova, Supernova, Hypernova, Protostar, and others. However, all of them rely on an additively homomorphic commitment scheme based on discrete log, and are therefore not post-quantum secure and require a large (256-bit) field. In this work we present LatticeFold, the first lattice-based folding protocol based on the Module SIS problem. This folding protocol naturally leads to an efficient recursive lattice-based SNARK and an efficient PCD scheme. LatticeFold supports folding low-degree relations, such as R1CS, as well as high-degree relations, such as CCS. The key challenge is to construct a secure folding protocol that works with the Ajtai commitment scheme. The difficulty is ensuring that extracted witnesses are low norm through many rounds of folding. We present a novel technique using the sumcheck protocol to ensure that extracted witnesses are always low norm no matter how many rounds of folding are used. Since LatticeFold can operate over a small (64-bit) field, our evaluation of the final proof system suggests that it is as performant as Hypernova, while providing plausible post-quantum security. Moreover, LatticeFold operates over the same module structure used by fully homomorphic encryption (FHE) and lattice signature schemes, and can therefore benefit from software optimizations and custom hardware designed to accelerate these lattice schemes.
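The core tension can be seen in a toy Python sketch of the Ajtai commitment (ours, with insecurely small parameters): $\mathrm{com}(w) = Aw \bmod q$ is linearly homomorphic, which is exactly what folding exploits, but it is only binding for low-norm $w$, so a fold $w_1 + r \cdot w_2$ must keep the accumulated witness norm small; the paper enforces this via the sumcheck protocol.

    import random

    q, n, m = 97, 4, 12  # toy parameters, far too small for security
    A = [[random.randrange(q) for _ in range(m)] for _ in range(n)]

    def commit(w):  # com(w) = A*w mod q
        return tuple(sum(A[i][j] * w[j] for j in range(m)) % q for i in range(n))

    w1 = [random.randrange(-1, 2) for _ in range(m)]  # low-norm witnesses
    w2 = [random.randrange(-1, 2) for _ in range(m)]
    r = 3  # folding challenge
    folded = [a + r * b for a, b in zip(w1, w2)]  # norm grows with each fold
    assert commit(folded) == tuple((a + r * b) % q
                                   for a, b in zip(commit(w1), commit(w2)))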
Last updated:  2025-05-27
Sabot: Efficient and Strongly Anonymous Bootstrapping of Communication Channels
Christoph Coijanovic, Laura Hetz, Kenneth G. Paterson, and Thorsten Strufe
Anonymous communication is vital for enabling individuals to participate in social discourse without fear of marginalization or persecution. An important but often overlooked part of anonymous communication is the bootstrapping of new communication channels, generally assumed to occur out-of-band. However, if the bootstrapping discloses metadata, communication partners are revealed even if the channel itself is fully anonymized. We propose Sabot, the first anonymous bootstrapping protocol that achieves both strong cryptographic privacy guarantees and bandwidth-efficient communication. In Sabot, clients cooperatively generate a private relationship matrix, which encodes who wants to contact whom. Clients communicate with k ≥ 2 servers to obtain “their” part of the matrix and augment the received information using Private Information Retrieval (PIR) to learn about their prospective communication partners. Compared to previous solutions, Sabot achieves stronger privacy guarantees and reduces the bandwidth overhead by an order of magnitude.
Last updated:  2025-05-27
How to Verify that a Small Device is Quantum, Unconditionally
Giulio Malavolta and Tamer Mour
A proof of quantumness (PoQ) allows a classical verifier to efficiently test if a quantum machine is performing a computation that is infeasible for any classical machine. In this work, we propose a new approach for constructing PoQ protocols where soundness holds unconditionally assuming a bound on the memory of the prover, but otherwise no restrictions on its runtime. In this model, we propose two protocols: 1. A simple protocol with a quadratic gap between the memory required by the honest parties and the memory bound of the adversary. The soundness of this protocol relies on Raz's (classical) memory lower bound for matrix inversion (Raz, FOCS 2016). 2. A protocol that achieves an exponential gap, building on techniques from the literature on the bounded storage model (Dodis et al., Eurocrypt 2023). Both protocols are also efficiently verifiable. Despite having worse asymptotics, our first protocol is conceptually simple and relies only on arithmetic modulo 2, which can be implemented with one-qubit Hadamard and CNOT gates, plus a single one-qubit non-Clifford gate.
Last updated:  2025-05-27
Decentralized Data Archival: New Definitions and Constructions
Elaine Shi, Rose Silver, and Changrui Mu
We initiate the study of a new abstraction called incremental decentralized data archival (${\sf iDDA}$). Specifically, imagine that there is an ever-growing, massive database such as a blockchain, a comprehensive human knowledge base like Wikipedia, or the Internet archive. We want to build a decentralized archival of such datasets to ensure long-term robustness and sustainability. We identify several important properties that an ${\sf iDDA}$ scheme should satisfy. First, to promote heterogeneity and decentralization, we want to encourage even weak nodes with limited space (e.g., users' home computers) to contribute. The minimum space requirement to contribute should be approximately independent of the data size. Second, if a collection of nodes together receive rewards commensurate with contributing a total of $m$ blocks of space, then we want the following reassurances: 1) if $m$ is at least the database size, we should be able to reconstruct the entire dataset; and 2) these nodes should actually be committing roughly $m$ space in aggregate --- even when $m$ is much larger than the data size, the nodes should be storing redundant copies of the database rather than storing just one copy while impersonating arbitrarily many pseudonyms to collect unbounded rewards. We propose new definitions that mathematically formalize the aforementioned requirements of an ${\sf iDDA}$ scheme. We also devise an efficient construction in the random oracle model which satisfies the desired security requirements. Our scheme incurs only $\widetilde{O}(1)$ audit cost, as well as $\widetilde{O}(1)$ update cost for both the publisher and each node, where $\widetilde{O}(\cdot)$ hides polylogarithmic factors. Further, the minimum space provisioning required to contribute is as small as polylogarithmic. Our construction exposes several interesting technical challenges. Specifically, we show that a straightforward application of the standard hierarchical data structure fails, since both our security definition and the underlying cryptographic primitives we employ lack the desired compositional guarantees. We devise novel techniques to overcome these compositional issues, resulting in a construction with provable security while still retaining efficiency. Finally, our new definitions also make a conceptual contribution, and lay the theoretical groundwork for the study of ${\sf iDDA}$. We raise several interesting open problems along this direction.
Last updated:  2025-05-27
Low-Latency Bootstrapping for CKKS using Roots of Unity
Jean-Sébastien Coron and Robin Köstler
We introduce Sparse Roots of Unity (SPRU) bootstrapping, a new bootstrapping algorithm for the CKKS homomorphic encryption scheme for approximate arithmetic. The original CKKS bootstrapping method relies on homomorphically evaluating a polynomial that approximates modular reduction modulo q. In contrast, SPRU bootstrapping directly embeds the additive group modulo q into the complex roots of unity, which can be evaluated natively in the CKKS scheme. This approach significantly reduces the multiplicative depth required for bootstrapping, enabling the use of a smaller ring dimension and improving efficiency. In practice, using the OpenFHE C++ library, SPRU bootstrapping achieves up to a 5× reduction in latency when applied to ciphertexts with a small number of slots.
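The embedding at the heart of this approach can be checked numerically in a few lines of Python (a toy illustration, not library code): mapping $m \mapsto e^{2\pi i m/q}$ turns addition modulo $q$ into multiplication of complex roots of unity, so the modular reduction that is otherwise approximated by a polynomial comes for free under the embedding.

    import cmath

    q = 17
    emb = lambda m: cmath.exp(2j * cmath.pi * m / q)

    a, b = 13, 9
    assert abs(emb((a + b) % q) - emb(a) * emb(b)) < 1e-9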
Last updated:  2025-05-27
Improved Alternating-Moduli PRFs and Post-Quantum Signatures
Navid Alamati, Guru-Vamsi Policharla, Srinivasan Raghuraman, and Peter Rindal
We revisit the alternating-moduli paradigm for constructing symmetric-key primitives with a focus on constructing efficient protocols to evaluate them using secure multi-party computation (MPC). The alternating-moduli paradigm of Boneh, Ishai, Passelègue, Sahai, and Wu (TCC 2018) enables the construction of various symmetric-key primitives with the common characteristic that the inputs are multiplied by two linear maps over different moduli. The first contribution focuses on efficient two-party evaluation of alternating-moduli pseudorandom functions (PRFs), effectively building an oblivious PRF. We present a generalized alternating-moduli PRF construction along with methods to lower the communication and computation. We then provide several variants of our protocols with different computation and communication tradeoffs for evaluating the PRF. Most of our protocols are in the hybrid model while one is based on specialized garbling. Our most efficient protocol is effectively about $3\times$ faster and requires $1.3\times$ less communication. Our next contribution is the efficient evaluation of the one-way function (OWF) $f(x)=\mathbf{B}\cdot_3 (\mathbf{A} \cdot_2 x)$ proposed by Dinur, Goldfeder, Halevi, Ishai, Kelkar, Sharma, and Zaverucha (CRYPTO 21) where $\mathbf{A} \in \mathbb{F}^{m\times n}_2, \mathbf{B}\in\mathbb{F}^{t\times m}_3$, and $\cdot_p$ is multiplication mod $p$. This surprisingly simple OWF can be evaluated within MPC by secret sharing $[\![x]\!]$ over $\mathbb{F}_2$, locally computing $[\![v]\!]=\mathbf{A}\cdot_2 [\![x]\!]$, performing a modulus switching protocol to $\mathbb{F}_3$ shares, followed by locally computing the output shares $[\![y]\!]=\mathbf{B}\cdot_3 [\![v]\!]$. We design a bespoke MPC-in-the-Head (MPCitH) signature scheme that evaluates the aforementioned OWF, achieving state-of-the-art performance. The resulting signature has a size ranging from $4.0$ to $5.5$ KB, achieving between $2\text{-}3\times$ reduction compared to the prior work. To the best of our knowledge, this is only $\approx 5\%$ larger than the smallest signature based on symmetric-key primitives, including the latest NIST post-quantum cryptography competition submissions. We also show that our core techniques can be extended to build very small post-quantum ring signatures for rings of small to medium size, which are competitive with state-of-the-art lattice-based schemes. Our techniques are in fact more generally applicable to set membership in MPCitH.
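The OWF itself is simple enough to state directly; the toy Python sketch below (ours, with tiny illustrative dimensions) mirrors the evaluation order described above: compute $\mathbf{A} \cdot_2 x$ over $\mathbb{F}_2$, then (after modulus switching in the actual MPC protocol) multiply by $\mathbf{B}$ over $\mathbb{F}_3$.

    import random

    n, m, t = 16, 20, 10  # toy dimensions; real parameters are much larger
    A = [[random.randrange(2) for _ in range(n)] for _ in range(m)]  # over F_2
    B = [[random.randrange(3) for _ in range(m)] for _ in range(t)]  # over F_3

    def f(x):  # f(x) = B ._3 (A ._2 x)
        v = [sum(A[i][j] & x[j] for j in range(n)) % 2 for i in range(m)]
        return [sum(B[k][i] * v[i] for i in range(m)) % 3 for k in range(t)]

    print(f([random.randrange(2) for _ in range(n)]))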
Last updated:  2025-05-27
Learning with Alternating Moduli, Arora-Ge over Composite Moduli, and Weak PRFs
Yilei Chen, Liheng Ji, and Wenjie Li
In TCC 2018, Boneh, Ishai, Passelègue, Sahai, and Wu propose candidate weak and strong PRFs obtained by alternately evaluating linear functions over coprime moduli. Such PRFs can be evaluated by low-depth circuits and are MPC-friendly. However, they have not been able to base the security of their PRFs on well-formed assumptions other than assuming that the PRF constructions themselves are secure. In this paper, we formalize a new assumption called Learning with Alternating Moduli (LAM). We show that over certain large moduli, the LAM assumption is as hard as the Learning with Errors (LWE) assumption. For LAM over constant moduli, we do not know how to base its hardness on the LWE assumption. Instead, we provide (i) polynomial-time attacks on LAM with constant prime-power moduli and certain constant non-prime-power moduli, and (ii) evidence of the sub-exponential hardness of LAM with other moduli by analyzing the effect of typical attacks. More specifically, we put forward two new attacks. The first attack is a recursive algorithm that solves LWE with certain constant composite moduli and error distributions. The algorithm extends the Arora-Ge algorithm for LWE from prime moduli to composite moduli, and it also solves LAM for certain parameters. The second attack is a polynomial-time attack that rules out the existence of weak PRFs in $\mathsf{NC}^0[p]$ for any prime $p$. Based on our studies, we propose candidate weak PRFs in $\mathsf{NC}^0[p_1,p_2]$ for some distinct primes $p_1,p_2$ based on LAM over constant moduli, or the Learning with Rounding (LWR) assumption over constant moduli. Compared to the weak PRF candidates by Boneh et al., our weak PRF candidates live in the same complexity class while having the advantage of being based on well-formed assumptions.
Last updated:  2025-05-27
Registered Functional Encryption for Pseudorandom Functionalities from Lattices: Registered ABE for Unbounded Depth Circuits and Turing Machines, and More
Tapas Pal, Robert Schädlich, and Erkan Tairi
Registered functional encryption (RFE) is a generalization of public-key encryption that enables computation on encrypted data (like classical FE), but without needing a central trusted authority. Concretely, the users choose their own public keys and register their keys together with a function with an (untrusted) key curator. The key curator aggregates all of the individual public keys into a short master public key, which serves as the public key of the FE scheme. Currently, we only know RFE constructions for restricted functionalities using standard assumptions, or for all circuits using powerful tools such as indistinguishability obfuscation, and only in the non-uniform model. In this work, we make progress on this front by providing the first lattice-based constructions of RFE for pseudorandom functionalities, where the model of computation is either non-uniform (unbounded depth circuits) or uniform (Turing machines). Intuitively, we call a functionality pseudorandom if the output of the circuit is indistinguishable from uniform for every input seen by the adversary. Security relies on LWE and a recently introduced primitive called pseudorandom FE (prFE), which currently can be instantiated from evasive LWE. We illustrate the versatility of these new functionalities for RFE by leveraging them to achieve key-policy and ciphertext-policy registered attribute-based encryption and registered predicate encryption schemes (KP-RABE, CP-RABE and RPE) for both unbounded depth circuits and Turing machines. Existing RABE constructions support only bounded depth circuits, and prior to our work there neither existed RABE for uniform models of computation nor RPE. As an appealing feature, all our constructions enjoy asymptotic optimality in the sense that their parameters depend neither on the length of public attributes nor the size of policies. Along the way, we can also improve on the state-of-the-art for classical attribute-based encryption (ABE) and predicate encryption (PE). Specifically, we obtain new constructions for KP-ABE, CP-ABE and PE for Turing machines with optimal asymptotic parameters. For KP-ABE, this is an improvement in terms of efficiency, whereas for CP-ABE and PE we are not aware of any prior purely lattice-based construction supporting Turing machines.
Last updated:  2025-05-27
Improved Lattice Blind Signatures from Recycled Entropy
Corentin Jeudy and Olivier Sanders
Blind signatures represent a class of cryptographic primitives enabling privacy-preserving authentication with several applications such as e-cash or e-voting. It is still a very active area of research, in particular in the post-quantum setting where the history of blind signatures has been hectic. Although it started to shift very recently with the introduction of a few lattice-based constructions, all of the latter give up an important characteristic of blind signatures (size, efficiency, or security under well-known assumptions) to achieve the others. In this paper, we propose another design which revisits the link between the two main procedures of blind signatures, namely issuance and showing, demonstrating that we can significantly alleviate the second one by adapting the former. Concretely, we show that we can harmlessly inject excess randomness in the issuance phase, and then recycle the entropy surplus during showing to decrease the complexity of the zero-knowledge proof which constitutes the main component of the signature. This leads to a blind signature scheme with small sizes, low complexity, and that still relies on well-known lattice assumptions.
Last updated:  2025-05-27
Plonkify: R1CS-to-Plonk transpiler
Pengfei Zhu
Rank-1 Constraint Systems (R1CS) and Plonk constraint systems are two commonly used circuit formats for zero-knowledge succinct non-interactive arguments of knowledge (zkSNARKs). We present Plonkify, a tool that converts a circuit in an R1CS arithmetization to Plonk, with support for both vanilla gates and custom gates. Our tool is able to convert an R1CS circuit (compiled from the Circom circuit description language) with 250,938 constraints to a vanilla Plonk circuit with 589,829 constraints, or a jellyfish turbo Plonk circuit with 370,086 constraints. This represents a $3.8\times$ and $2.0\times$ reduction in the number of constraints over the respective naïve conversions. Further, we make several optimizations to the Circom compiler in order to minimize the number of non-zero elements in the generated R1CS circuits, and to facilitate conversion to Plonk. When recompiled with our optimized version of Circom, the aforementioned circuit sees a 49% reduction in the number of non-zero elements, with only a 0.4% increase in the number of constraints. The same circuit can now be represented in just 422,610 vanilla Plonk constraints, or 312,163 high-degree ones.
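For intuition about the gap being bridged (standard definitions, not Plonkify internals): an R1CS constraint over a witness vector $z$ has the form $\langle a, z \rangle \cdot \langle b, z \rangle = \langle c, z \rangle$ for sparse vectors $a, b, c$, while a vanilla Plonk gate constrains just three wires $x_a, x_b, x_c$ via $q_L x_a + q_R x_b + q_O x_c + q_M x_a x_b + q_C = 0$ for fixed selector constants. An R1CS row whose linear combinations touch many variables must therefore be decomposed into several Plonk gates, which is exactly the blow-up a transpiler tries to minimize.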
Last updated:  2025-05-27
Multiparty Homomorphic Secret Sharing and More from LPN and MQ
Geoffroy Couteau, Naman Kumar, and Xiaxi Ye
We give the first constructions of multiparty pseudorandom correlation generators, distributed point functions, and (negligible-error) homomorphic secret sharing for constant-degree polynomials for any number of parties without using LWE or iO. Our constructions are proven secure under the combination of LPN with dimension $n$, $2n$ samples, and noise rate $n^{\varepsilon-1}$ for a small constant $\varepsilon$, and MQ with $n$ variables and $n^{1+\delta}$ equations. As applications of our results, we obtain from the same assumptions secure multiparty computation protocols with sublinear communication and silent preprocessing, as well as private information retrieval for $M$ servers and size-$\lambda^d$ databases with optimal download rate and client-to-server communication $M^d\cdot \lambda^3$.
Last updated:  2025-05-27
Multiparty FHE Redefined: A Framework for Unlimited Participants
Robin Jadoul, Barry van Leeuwen, and Oliver Zajonc
Multiparty fully homomorphic encryption (MPFHE) is a generalization of (multi-key) fully homomorphic encryption ((MK)FHE) that lives on the cusp between multiparty computation (MPC) and FHE, enabling a computation over encrypted data using multiple keys. However, contrary to MKFHE it seeks to reduce the noise inflation based on the number of parties by allowing the parties to first compute shared data in MPC before executing the computation in FHE. Generally, MPFHE protocols have required ad-hoc constructions and adaptations to already existing protocols. In this work we present a new framework that standardizes the approach of MPFHE to allow the use of a broad spectrum of MPC and FHE protocols, while eliminating the noise inflation based on the number of participating parties. This presents the first ever multiparty FHE protocol which allows an arbitrary number of participants. We then show a case study of this using the FINAL scheme and show that we reduce the required key material by 40-99.9% compared to FINALLY, the MKFHE variant of FINAL; by 8-71% compared to the AKÖ scheme; and by 65-70% compared to the Park-Rovira scheme. Moreover, we reduce the bootstrapping time for the AKÖ, Park-Rovira, and KMS schemes by 75-99.7%.
Last updated:  2025-05-27
Diving Deep Into UC: Uncovering and Resolving Issues in Universal Composability
Céline Chevalier and Éric Sageloli
Introduced by Canetti in 2001, Universal Composability (UC) is a widely adopted security model that enables the specification and proof of security for a broad range of protocols, offering strong security guarantees. At its core lies the universal composition theorem (UC theorem), which ensures that protocols proven secure within the framework remain secure even when deployed in real-world environments with multiple instances of them. In this work, we present two key contributions. First, we identify several problems with the UC framework, in particular the UC Theorem. They include counterexamples, limitations that make it unusable for important classes of protocols, and weaknesses in its proof. These problems reveal flaws in nearly all the fundamental concepts of UC. Secondly, we update the main concepts of UC to address these problems. Although these revisions are nontrivial, our updated definitions are intended to stay as closely aligned with the original model as possible, while providing greater simplicity overall. To ensure the validity of these updates, we present a proof of the updated UC theorem, which is more detailed and modular than the original.
Last updated:  2025-05-27
Higher-Order Deterministic Masking with Application to Ascon
Vahid Jahandideh, Bart Mennink, and Lejla Batina
Side-channel attacks (SCAs) pose a significant threat to the implementations of lightweight ciphers, particularly in resource-constrained environments where masking—the primary countermeasure—is constrained by tight resource limitations. This makes it crucial to reduce the resource and randomness requirements of masking schemes. In this work, we investigate an approach to minimize the randomness complexity of masking algorithms. Specifically, we explore the theoretical foundations of deterministic higher-order masking, which relies solely on offline randomness present in the initial input shares and eliminates the need for online (fresh) randomness during internal computations. We demonstrate the feasibility of deterministic masking for ciphers such as Ascon, showing that their diffusion layer can act as a refresh subcircuit. This ensures that, up to a threshold number, probes placed in different rounds remain independent. Based on this observation, we propose composition theorems for deterministic masking schemes. On the practical side, we extend the proof of first- and second-order probing security for Ascon’s protected permutation from a single round to an arbitrary number of rounds.
Last updated:  2025-05-27
A Decomposition Approach for Evaluating Security of Masking
Vahid Jahandideh, Bart Mennink, and Lejla Batina
Masking is a common countermeasure against side-channel attacks that encodes secrets into multiple shares, each of which may be subject to leakage. A key question is under what leakage conditions, and to what extent, does increasing the number of shares actually improve the security of these secrets. Although this question has been studied extensively in low-SNR regimes, scenarios where the adversary obtains substantial information—such as on low-noise processors or through static power analysis—have remained underexplored. In this paper, we address this gap by deriving necessary and sufficient noise requirements for masking security in both standalone encodings and linear gadgets. We introduce a decomposition technique that reduces the relationship between an extended-field variable and its leakage into subproblems involving linear combinations of the variable’s bits. By working within binary subfields, we derive optimal bounds and then lift these results back to the extended field. Beyond binary fields, we also present a broader framework for analyzing masking security in other structures, including prime fields. As an application, we prove a conjecture by Dziembowski et al. (TCC 2016), which states that for an additive group \(\mathbb{G}\) with its largest subgroup \(\mathbb{H}\), a \(\delta\)-noisy leakage satisfying \(\delta < 1 - \tfrac{|\mathbb{H}|}{|\mathbb{G}|}\) ensures that masking enhances the security of the secret.
Last updated:  2025-05-27
IP Masking with Generic Security Guarantees under Minimum Assumptions, and Applications
Sebastian Faust, Loïc Masure, Elena Micheli, Hai Hoang Nguyen, Maximilian Orlt, and François-Xavier Standaert
Leakage-resilient secret sharing is a fundamental building block for securing implementations against side-channel attacks. In general, such schemes correspond to a tradeoff between the complexity of the resulting masked implementations, their security guarantees and the physical assumptions they require to be effective. In this work, we revisit the Inner-Product (IP) framework, where a secret \(y\) is encoded by two vectors \((\vec{\omega},\vec{y})\), such that their inner product is equal to \(y\). So far, the state of the art is split in two. On the one hand, the most efficient IP masking schemes (in which \(\vec{\omega}\) is public but random) are provably secure with the same security notions (i.e., in the abstract probing model) as Boolean masking, yet at the cost of a slightly more expensive implementation. Hence, their theoretical interest and practical relevance remain unclear. On the other hand, the most secure IP masking schemes (in which \(\vec{\omega}\) is secret) lead to expensive implementations. We improve this state of the art by investigating the leakage resilience of IP masking with public \(\vec{\omega}\) coefficients in the bounded leakage model, which captures well implementation contexts where the physical noise is negligible. Furthermore, we do that without assuming independent leakage from the shares, which may be challenging to enforce in practice. In this model, we show that if \(m\) bits are leaked from the \(d\) shares \(\vec{y}\) of the encoding over an \(n\)-bit field, then, with probability at least \(1 - 2^{-\lambda}\) over the choice of \(\vec{\omega}\), the scheme is \(\mathcal{O}(\sqrt{2^{-(d-1)\cdot n + m + 2\lambda}})\)-leakage resilient. We additionally show that in large Mersenne-prime fields, a wise choice of the public coefficients \(\vec{\omega}\) can yield leakage resilience up to \(\mathcal{O}(n \cdot 2^{-d \cdot n + n + d})\), in the case where one physical bit from each share is revealed to the adversary. The exponential rate of the leakage resilience we put forward significantly improves upon previous bounds in additive masking, where the past literature exhibited a constant exponential rate only. We additionally discuss the applications of our results, and the new research challenges they raise.
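A minimal Python sketch of the encoding itself (ours, purely illustrative; the paper's contribution is the leakage analysis, not this procedure): fix public coefficients \(\vec{\omega}\), sample \(d-1\) shares uniformly, and solve for the last share so that \(\langle \vec{\omega}, \vec{y} \rangle = y\).

    import random

    p, d = 2**13 - 1, 4  # a (tiny) Mersenne-prime field and d shares
    y = 1234             # the secret

    omega = [random.randrange(1, p) for _ in range(d)]  # public, nonzero
    shares = [random.randrange(p) for _ in range(d - 1)]
    partial = sum(w * s for w, s in zip(omega, shares)) % p
    shares.append((y - partial) * pow(omega[-1], -1, p) % p)

    assert sum(w * s for w, s in zip(omega, shares)) % p == y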
Last updated:  2025-05-27
Enabling Puncturable Encrypted Search over Lattice for Privacy-Preserving in Mobile Cloud
Yibo Cao, Shiyuan Xu, Gang Xu, Xiu-Bo Chen, Zongpeng Li, Jiawen Kang, and Dusit Niyato
Searchable encryption (SE) has been widely studied for mobile cloud computing, allowing search over encrypted data. However, existing SE schemes cannot support fine-grained searchability revocation. Puncturable encryption (PE) can revoke the decryption ability for a specific message, which can potentially alleviate this issue. Moreover, the threat of quantum computing remains an important concern, leading to privacy leakage in the mobile cloud. Consequently, a post-quantum puncturable encrypted search scheme has remained out of reach. In this paper, we propose PunSearch, the first puncturable encrypted search scheme over lattice for data privacy-preserving in mobile cloud. PunSearch provides fine-grained searchability revocation while enjoying quantum safety. Different from existing PE schemes, we construct a novel trapdoor generation mechanism through evaluation algorithms and a pre-image sampling technique. We then design a search permission verification method to revoke the searchability for specific keywords. Furthermore, we formulate a new IND-Pun-CKA model and utilize it to analyze the security of PunSearch. Comprehensive performance evaluation indicates that the computational overheads of the Encrypt, Trapdoor, Search, and Puncture algorithms in PunSearch are just 0.064, 0.005, 0.050, and 0.311 times those of prior art, respectively, in the best cases. These results demonstrate that PunSearch is effective and secure in mobile cloud computing.
Last updated:  2025-05-27
On the Fiat–Shamir Security of Succinct Arguments from Functional Commitments
Alessandro Chiesa, Ziyi Guan, Christian Knabenhans, and Zihan Yu
We study the security of a popular paradigm for constructing SNARGs, closing a key security gap left open by prior work. The paradigm consists of two steps: first, construct a public-coin succinct interactive argument by combining a functional interactive oracle proof (FIOP) and a functional commitment scheme (FC scheme); second, apply the Fiat–Shamir transformation in the random oracle model. Prior work did not consider this generalized setting nor prove the security of this second step (even in special cases). We prove that the succinct argument obtained in the first step satisfies state-restoration security, thereby ensuring that the second step does in fact yield a succinct non-interactive argument. This is provided the FIOP satisfies state-restoration security and the FC scheme satisfies a natural state-restoration variant of function binding (a generalization of position binding for vector commitment schemes). Moreover, we prove that notable FC schemes satisfy state-restoration function binding, allowing us to establish, via our main result, the security of several SNARGs of interest (in the random oracle model). This includes a security proof of Plonk, in the ROM, based on ARSDH (a falsifiable assumption).
Last updated:  2025-05-27
Improved Quantum Linear Attacks and Application to CAST
Kaveh Bashiri, Xavier Bonnetain, Akinori Hosoyamada, Nathalie Lang, and André Schrottenloher
This paper studies quantum linear key-recovery attacks on block ciphers. The first such attacks were last-rounds attacks proposed by Kaplan et al. (ToSC 2016), which combine a linear distinguisher with a guess of a partial key. However, the most efficient classical attacks use the framework proposed by Collard et al. (ICISC 2007), which computes experimental correlations using the Fast Walsh-Hadamard Transform. Recently, Schrottenloher (CRYPTO 2023) proposed a quantum version of this technique, in which one uses the available data to create a quantum \emph{correlation state}, which is a superposition of subkey candidates where the amplitudes are the corresponding correlations. A limitation is that the good subkey is not marked in this state, and cannot be found easily. In this paper, we combine the correlation state with another distinguisher. From here, we can use Amplitude Amplification to recover the right key. We apply this idea to Feistel ciphers and exemplify different attack strategies on LOKI91 before applying our idea on the CAST-128 and CAST-256 ciphers. We demonstrate the approach with two kinds of distinguishers, quantum distinguishers based on Simon's algorithm and linear distinguishers. The resulting attacks outperform the previous Grover-meet-Simon attacks.
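Schematically, the correlation state referenced above has the form $\lvert \psi \rangle \propto \sum_{k} \widehat{\mathrm{cor}}(k)\, \lvert k \rangle$, where $k$ ranges over subkey candidates and $\widehat{\mathrm{cor}}(k)$ is the experimental correlation computed from the available data; since no amplitude is flagged as "correct", the paper's idea is to compose this state with a second distinguisher and apply Amplitude Amplification to boost the right key.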
Last updated:  2025-05-27
Accountable Light Client Systems for Proof-of-Stake Blockchains
Oana Ciobotaru, Fatemeh Shirazi, Alistair Stewart, and Sergey Vasilyev
A major challenge for blockchain interoperability is having an on-chain light client protocol that is both efficient and secure. We present a protocol that provides short proofs about the state of a decentralised consensus protocol while being able to detect misbehaving parties. To do this naively, a verifier would need to maintain an updated list of all participants' public keys, which makes the corresponding proofs long. In general, existing solutions either lack accountability or are not efficient. We define and design a committee key scheme with short proofs that do not include any of the individual participants' public keys in the clear. Our committee key scheme, in turn, uses a custom designed SNARK which has a fast prover time. Moreover, using our committee key scheme, we define and design an accountable light client system as the main cryptographic core for building bridges between proof of stake blockchains. Finally, we implement a prototype of our custom SNARK for which we provide benchmarks.