All papers (24425 results)

Last updated:  2025-06-06
Private Signaling Secure Against Actively Corrupted Servers
Haotian Chu, Xiao Wang, and Yanxue Jia
Private signaling allows servers to identify a recipient's messages on a public bulletin board without knowing the recipient's metadata. It is a central tool for systems like privacy-preserving blockchains and anonymous messaging. However, with the exception of TEE-based designs, current constructions all assume that the servers are only passively corrupted, which significantly limits their practical relevance. In this work, we present a TEE-free simulation-secure private signaling protocol assuming two non-colluding servers, either of which can be actively corrupted. Crucially, we convert signal retrieval into a problem similar to private set intersection and use custom-built zero-knowledge proofs to ensure consistency with the public bulletin board. As a result, our protocol achieves lower server-to-server communication overhead and a much smaller digest compared to the state-of-the-art semi-honest protocol. For example, for a board size of $2^{19}$ messages, the resulting digest size is only 33.57KB. Our protocol is also computationally efficient: retrieving private signals only takes about 2 minutes, using 16 threads over a LAN.
Last updated:  2025-06-06
Single-server Stateful PIR with Verifiability and Balanced Efficiency
Pranav Shriram Arunachalaramanan and Ling Ren
Recent stateful private information retrieval (PIR) schemes have significantly improved amortized computation and amortized communication while aiming to keep client storage minimal. However, all the schemes in the literature still suffer from a poor tradeoff between client storage and computation. We present BALANCED-PIR, a stateful PIR scheme that effectively balances computation and client storage. For a database of a million entries, each of 8 bytes, our scheme requires 0.2 MB of client storage, 0.2 ms of amortized computation, and 11.14 KB of amortized communication. Compared with the state-of-the-art scheme using a similar storage setting, our scheme is almost 9x better in amortized computation and 40x better in offline computation. Verifiable private information retrieval has been gaining more attention recently. However, all existing schemes require linear amortized computation and huge client storage. We present Verifiable BALANCED-PIR, a verifiable stateful PIR scheme with sublinear amortized computation and small client storage. In fact, our Verifiable BALANCED-PIR adds modest computation, communication, and storage costs on top of BALANCED-PIR. Compared with the state-of-the-art verifiable scheme, the client storage of our scheme is 100x smaller, the amortized computation is 15x less, and the amortized communication is 2.5x better.
Last updated:  2025-06-05
Rewardable Naysayer Proofs
Gennaro Avitabile, Luisa Siniscalchi, and Ivan Visconti
Combining verifiable computation with optimistic approaches is a promising direction to scale blockchain applications. The basic idea consists of saving computations by avoiding the verification of proofs unless there are complaints. A key tool for designing systems in this direction was recently proposed by Seres, Glaeser and Bonneau [FC'24], who formalized the concept of a naysayer proof: an efficiently verifiable proof disproving a more expensive-to-verify original proof. In this work, we discuss the need to reward naysayer provers, the risks deriving from front-running attacks, and the failures of generic approaches trying to defeat them. Next, we introduce the concept of verifiable delayed naysayer proofs and show a construction leveraging proofs of sequential work, without relying on any additional infrastructure.
Last updated:  2025-06-05
Breaking the 1/λ-Rate Barrier for Arithmetic Garbling
Geoffroy Couteau, Carmit Hazay, Aditya Hegde, and Naman Kumar
Garbled circuits, introduced in the seminal work of Yao (FOCS, 1986), have received considerable attention in the Boolean setting due to their efficiency and application to round-efficient secure computation. In contrast, arithmetic garbling schemes have received much less scrutiny. The main efficiency measure of garbling schemes is their rate, defined as the bit size of each gate's output divided by the size of the (amortized) garbled gate. Despite recent progress, state-of-the-art garbling schemes for arithmetic circuits suffer from important limitations: all existing schemes are either restricted to $B$-bounded integer arithmetic circuits (a computational model where the arithmetic is performed over $\mathbb{Z}$ and correctness is only guaranteed if no intermediate computation exceeds the bound $B$) and achieve constant rate only for very large bounds $B = 2^{\Omega(\lambda^3)}$, or have a rate at most $O(1/\lambda)$ otherwise, where $\lambda$ denotes a security parameter. In this work, we improve this state of affairs in both settings. - As our main contribution, we introduce the first arithmetic garbling scheme over modular rings $\mathbb{Z}_B$ with rate $O(\log\lambda/\lambda)$, breaking for the first time the $1/\lambda$-rate barrier for modular arithmetic garbling. Our construction relies on the power-DDH assumption. - As a secondary contribution, we introduce a new arithmetic garbling scheme for $B$-bounded integer arithmetic that achieves a constant rate for bounds $B$ as low as $2^{O(\lambda)}$. Our construction relies on a new non-standard KDM-security assumption on Paillier encryption with small exponents.
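To make the rates concrete (an illustrative calculation, not taken from the paper): since the rate is the gate output size divided by the garbled gate size, a gate with an $\ell$-bit output costs about $\ell/\text{rate}$ bits to garble. At $\lambda = 128$, rate $O(1/\lambda)$ gives on the order of $128\,\ell$ bits per gate, rate $O(\log\lambda/\lambda)$ roughly $(\lambda/\log\lambda)\,\ell \approx 18\,\ell$ bits, and a constant-rate scheme $O(\ell)$ bits.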
Last updated:  2025-06-05
How to Trace Viral Content in End-to-End Encrypted Messaging
Pedro Branco, Matthew Green, Aditya Hegde, Abhishek Jain, and Gabriel Kaptchuk
We study the problem of combating *viral* misinformation campaigns in end-to-end encrypted (E2EE) messaging systems such as WhatsApp. We propose a new notion of Hop Tracking Signatures (HTS) that allows for tracing originators of messages that have been propagated on long forwarding paths (i.e., gone viral), while preserving anonymity of everyone else. We define security for HTS against malicious servers. We present both negative and positive results for HTS: on the one hand, we show that HTS does not admit succinct constructions if tracing and anonymity thresholds differ by exactly one "hop". On the other hand, by allowing for a larger gap between tracing and anonymity thresholds, we can build succinct HTS schemes where the signature size does not grow with the forwarding path. Our positive result relies on streaming algorithms and strong cryptographic assumptions. Prior works on tracing within E2EE messaging systems either do not achieve security against malicious servers or focus only on tracing originators of pre-defined banned content.
Last updated:  2025-06-05
Synergy: A Lightweight Block Cipher with Variable Bit Rotation Feistel Network
Anders Lindman
Synergy is a lightweight block cipher designed for resource-constrained environments such as IoT devices, embedded systems, and mobile applications. Built around a 16-round Feistel network, Synergy uses 8 independent pseudorandom number generators (PRNGs) to ensure strong diffusion and confusion through the generation of per-block unique round keys. With a 1024-bit key and a 64-bit block size, Synergy mitigates vulnerabilities to ML-based cryptanalysis by using a large key size in combination with key- and data-dependent bit rotations, which reduce statistical biases and increase unpredictability. By utilizing 32-bit arithmetic for efficient processing, Synergy achieves high throughput, low latency, and low power consumption, providing performance and security for applications where both are critical.
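As a rough illustration of the design elements named above (a hypothetical Python sketch, not the actual Synergy specification; the round function, key schedule, and constants are invented for exposition):

    MASK32 = 0xFFFFFFFF

    def rotl32(x, r):
        # Rotate a 32-bit word left by r positions.
        r &= 31
        return ((x << r) | (x >> (32 - r))) & MASK32

    def feistel_round(left, right, round_key):
        # Key- and data-dependent rotation: the amount depends on both
        # the right half and the (hypothetical) per-block round key.
        amount = (right ^ round_key) & 31
        f = (rotl32(right, amount) + round_key) & MASK32
        return right, left ^ f  # standard Feistel swap

    def encrypt_block(block64, round_keys):
        # 64-bit block, 16 rounds, 32-bit arithmetic throughout.
        left, right = block64 >> 32, block64 & MASK32
        for rk in round_keys:
            left, right = feistel_round(left, right, rk)
        return (left << 32) | right

    print(hex(encrypt_block(0x0123456789ABCDEF, list(range(1, 17)))))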
Last updated:  2025-06-05
Integral Resistance of Block Ciphers with Key Whitening by Modular Addition
Christof Beierle, Phil Hebborn, Gregor Leander, and Yevhen Perehuda
Integral attacks exploit structural weaknesses in symmetric cryptographic primitives by analyzing how subsets of inputs propagate to produce outputs with specific algebraic properties. For the case of (XOR) key-alternating block ciphers using (independent) round keys, at ASIACRYPT'21, Hebborn et al. established the first non-trivial lower bounds on the number of rounds required for ensuring integral resistance in a quite general sense. For the case of adding keys by modular addition, no security arguments are known so far. Here, we present a unified framework for analyzing the integral resistance of primitives using (word-wise) modular addition for key whitening, allowing us to not only fill the gap for security arguments, but also to overcome the heavy computational cost inherent in the case of XOR-whitening.
Last updated:  2025-06-05
XHMQV: Better Efficiency and Stronger Security for Signal’s Initial Handshake based on HMQV
Rune Fiedler, Felix Günther, Jiaxin Pan, and Runzhi Zeng
The Signal protocol is the most widely deployed end-to-end-encrypted messaging protocol. Its initial handshake protocol X3DH allows parties to asynchronously derive a shared session key without the need to be online simultaneously, while providing implicit authentication, forward secrecy, and a form of offline deniability. The X3DH protocol has been extensively studied in the cryptographic literature and is acclaimed for its strong "maximum-exposure" security guarantees, hedging not only against compromises of users' long-term and medium-term keys, but also of the ephemeral randomness used in the handshake. This maximum-exposure security is achieved by deriving keys from the concatenation of 3–4 Diffie–Hellman (DH) secrets, each combining two long-term, medium-term, or ephemeral DH shares. Remarkably, X3DH's approach of concatenating plain DH combinations is sub-optimal, both in terms of maximum-exposure security and performance. Indeed, Krawczyk's well-known HMQV protocol (Crypto '05) is a high-performance, DH-based key exchange that provides strong security against long-term and ephemeral key compromise. One might hence wonder: why not base Signal's initial handshake on HMQV? In this work, we study this question and show that a carefully adapted variant of HMQV, which we call XHMQV, indeed enables stronger security and efficiency while matching the constraints of Signal's initial handshake. Most notably, HMQV does not work as a drop-in replacement for X3DH, as the latter's asynchronicity requires the protocol to handle cases where one party runs out of ephemeral keys (pre-uploaded to the Signal server). Our XHMQV design hence augments HMQV with medium-term keys analogous to those used in X3DH. We prove that XHMQV provides security in all 3–4 compromise scenarios where X3DH does and additionally in 1–2 further scenarios, strengthening the handshake's maximum-exposure guarantees while using more efficient group operations. We further confirm that our XHMQV design achieves deniability guarantees comparable to X3DH. Our security model is the first to capture Signal's long-term key reuse between DH key exchange and signatures, which may be of independent interest.
Last updated:  2025-06-04
One-way multilinear functions of the second order with linear shifts
Stanislav Semenov
We introduce and analyze a novel class of binary operations on finite-dimensional vector spaces over a field \( K \), defined by second-order multilinear expressions with linear shifts. These operations generate polynomials whose degree increases linearly with each iterated application, while the number of distinct monomials grows combinatorially. We demonstrate that, despite the non-associative and non-commutative nature in general, these operations exhibit power associativity and internal commutativity when iterated on a single vector. This allows for well-defined exponentiation \( a^n \). Crucially, the absence of a simple closed-form expression for \( a^n \) suggests a one-way property: computing \( a^n \) from \( a \) and \( n \) is straightforward, but recovering \( n \) from \( a^n \) (the Discrete Iteration Problem) appears computationally hard. We propose a Diffie–Hellman-like key exchange protocol utilizing these properties over finite fields, defining an Algebraic Diffie–Hellman Problem (ADHP). The proposed structures are of interest for cryptographic primitives, algebraic dynamics, and computational algebra.
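Since the operations are power associative, $a^n$ is well defined and computable by square-and-multiply, which is all a Diffie–Hellman-style exchange needs: $(a^m)^n = a^{mn} = (a^n)^m$. A minimal sketch of the exchange (with a placeholder associative toy operation standing in for the paper's second-order multilinear operations):

    def power(op, a, n):
        # a^n by repeated squaring; well defined because iterated
        # application of op on a single element is power associative.
        result, base = None, a
        while n:
            if n & 1:
                result = base if result is None else op(result, base)
            base = op(base, base)
            n >>= 1
        return result

    # Toy commutative operation for demonstration only; the paper uses
    # second-order multilinear expressions with linear shifts instead.
    P = 2**61 - 1
    def op(x, y):
        return (x * y + x + y) % P

    a, m, n = 5, 123456, 654321
    alice_pub, bob_pub = power(op, a, m), power(op, a, n)
    assert power(op, bob_pub, m) == power(op, alice_pub, n)  # shared key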
Last updated:  2025-06-04
Orient Express: Using Frobenius to Express Oriented Isogenies
Wouter Castryck, Riccardo Invernizzi, Gioella Lorenzon, Jonas Meers, and Frederik Vercauteren
In this paper we study supersingular elliptic curves primitively oriented by an imaginary quadratic order, where the orientation is determined by an endomorphism that factors through the Frobenius isogeny. In this way, we partly recycle one of the main features of CSIDH, namely the fact that the Frobenius orientation can be represented for free. This leads to the most efficient family of ideal-class group actions in a range where the discriminant is significantly larger than the field characteristic $p$. Moreover, if we orient with a non-maximal order $\mathcal{O} \subset \mathbb{Q}(\sqrt{-p})$ and we assume that it is feasible to compute the ideal-class group of the maximal order, then also the ideal-class group of $\mathcal{O}$ is known and we recover the central feature of SCALLOP-like constructions. We propose two variants of our scheme. In the first one, the orientation is by a suborder of the form $\mathbb{Z}[f\sqrt{-p}]$ for some $f$ coprime to $p$, so this is similar to SCALLOP. In the second one, inspired by the work of Chenu and Smith, the orientation is by an order of the form $\mathbb{Z}[\sqrt{-dp}]$ where $d$ is square-free and not a multiple of $p$. We give practical ways of generating parameters, together with a proof-of-concept SageMath implementation of both variants, which shows the effectiveness of our construction.
Last updated:  2025-06-04
A Quasi-polynomial Time Algorithm for the Extrapolated Dihedral Coset Problem over Power-of-Two Moduli
Shi Bai, Hansraj Jangir, Elena Kirshanova, Tran Ngo, and William Youmans
The Learning With Errors (LWE) problem, introduced by Regev (STOC'05), is one of the fundamental problems in lattice-based cryptography, believed to be hard even for quantum adversaries. Regev (FOCS'02) showed that LWE reduces to the quantum Dihedral Coset Problem (DCP). Later, Brakerski, Kirshanova, Stehlé and Wen (PKC'18) showed that LWE reduces to a generalization known as the Extrapolated Dihedral Coset Problem (EDCP). We present a quasi-polynomial time quantum algorithm for the EDCP problem over power-of-two moduli using a quasi-polynomial number of samples, which also applies to the SLWE problem defined by Chen, Liu, and Zhandry (Eurocrypt'22). Our EDCP algorithm can be viewed as a provable variant of the "Simon-meets-Kuperberg" algorithm introduced by Bonnetain and Naya-Plasencia (Asiacrypt'18), adapted to the EDCP setting. We stress that our algorithm does not affect the security of LWE with standard parameters, as the reduction from standard LWE to EDCP limits the number of samples to be polynomial.
Last updated:  2025-06-04
Constrained Verifiable Random Functions Without Obfuscation and Friends
Nicholas Brandt, Miguel Cueto Noval, Christoph U. Günther, Akin Ünal, and Stella Wohnig
Constrained verifiable random functions (CVRFs) are PRFs that unify the properties of verifiable and constrained PRFs. Since they were introduced concurrently by Fuchsbauer and by Chandran, Raghuraman, and Vinayagamurthy in 2014, it has been an open problem to construct CVRFs without using heavy machinery such as multilinear maps, obfuscation or functional encryption. We solve this problem by constructing a prefix-constrained verifiable PRF that does not rely on the aforementioned assumptions. Essentially, our construction is a verifiable version of the Goldreich-Goldwasser-Micali PRF. To achieve verifiability we leverage degree-2 algebraic PRGs and bilinear groups. In short, proofs consist of intermediate values of the Goldreich-Goldwasser-Micali PRF placed in the exponents of group elements. These outputs can be verified using pairings since the underlying PRG is of degree 2. We prove the selective security of our construction under the Decisional Square Diffie-Hellman (DSDH) assumption and a new assumption, which we dub recursive Decisional Diffie-Hellman (recursive DDH). We prove the soundness of recursive DDH in the generic group model assuming the hardness of the Multivariate Quadratic (MQ) problem and a new variant thereof, which we call MQ+. Lastly, in terms of applications, we observe that our CVRF is also an exponent (C)VRF in the plain model. Exponent VRFs were recently introduced by Boneh et al. (Eurocrypt’25) with various applications to threshold cryptography in mind. In addition to that, we give further applications for prefix-CVRFs in the blockchain setting, namely, stake-pooling and compressible randomness beacons.
Last updated:  2025-06-04
When Threshold Meets Anamorphic Signatures: What is Possible and What is Not!
Hien Chu, Khue Do, Lucjan Hanzlik, and Sri AravindaKrishnan Thyagarajan
Anamorphic signatures allow covert communication through signatures in environments where encryption is restricted. They enable trusted recipients with a double key to extract hidden messages while the signature remains indistinguishable from a fresh and regular one. However, the traditional notion of anamorphic signatures suffers from vulnerabilities, particularly when a single recipient or sender is compromised, exposing all hidden messages and providing undeniable proof that citizens are part of the anamorphic exchange. To address these limitations, we explore a threshold-based approach to distribute trust among multiple recipients, preventing adversaries from decrypting anamorphic messages even if some recipients are compromised. Our first contribution is the formalization of the notion of \emph{threshold-recipient anamorphic signatures}, where decryption is possible only through collaboration among a subset of recipients. We then explore a \emph{stronger model} where the dictator controls the key generation process through which it learns all secret keys and how citizens store cryptographic keys. A particular example of this model in the real world is a dictator providing citizens with electronic identity documents (eIDs) and blocking all other usage of cryptography. We demonstrate that anamorphic communication is still possible even in such a scenario. Our construction is secure against post-quantum adversaries and does not rely on any computational assumptions except the random oracle model. Finally, we show an \emph{impossibility result} for encoding anamorphic messages with a threshold-sender model when using many existing threshold signature schemes and the adversary is part of the signing group. Our work outlines both the possibilities and limitations of extending anamorphic signatures with threshold cryptography, offering new insights into improving the security and privacy of individuals under authoritarian regimes.
Last updated:  2025-06-04
Designing QC-MDPC Public Key Encryption Schemes with Niederreiter's Construction and a Bit Flipping Decoder with Bounded DFR
Alessandro Annechini, Alessandro Barenghi, Gerardo Pelosi, and Simone Perriello
Post-quantum public key encryption (PKE) schemes employing quasi-cyclic (QC) sparse parity-check matrix codes are enjoying significant success, thanks to their good performance profile and reduction to believed-hard problems from coding theory. However, using QC sparse parity-check matrix codes (i.e., QC-MDPC/LDPC codes) comes with a significant challenge: determining their decoding failure rate (DFR) in closed form, as decoding failures are known to leak information on the private key. Furthermore, there is no formal proof that changing the (constant) rate of the employed codes does not change the nature of the underlying hard problem, nor is the hardness of decoding random QC codes formally related to the decoding hardness of random codes. In this work, we address and solve these challenges, providing a novel closed-form estimation of the decoding failure rate for three-iteration bit flipping decoders, and proving computational equivalences among the aforementioned problems. This allows us to design systematically a Niederreiter-style QC-MDPC PKE, enjoying the flexibility granted by freely choosing the code rate, and the significant improvements in tightness of our DFR bound. We report a $2\times$ improvement in public key and ciphertext size w.r.t. the previous best cryptosystem design with closed-form DFR bounds, LEDAcrypt-KEM. Furthermore, we show that our PKE parameters yield 30% smaller public keys and $2.6\times$ smaller ciphertexts w.r.t. HQC, the code-based key encapsulation mechanism recently selected by the US NIST for standardization.
Last updated:  2025-06-04
Crowhammer: Full Key Recovery Attack on Falcon with a Single Rowhammer Bit Flip
Calvin Abou Haidar, Quentin Payet, and Mehdi Tibouchi
The Rowhammer attack is a fault-injection technique leveraging the density of RAM modules to trigger persistent hardware bit flips that can be used for probing or modifying protected data. In this paper, we show that Falcon, the hash-and-sign signature scheme over NTRU lattices selected by NIST for standardization, is vulnerable to an attack using Rowhammer. Falcon's Gaussian sampler is the core component of its security, as it provably decorrelates the short basis used for signing from the generated signatures. Other schemes lacking this guarantee (such as NTRUSign, GGH, or more recently Peregrine) were proven insecure. However, performing efficient and secure lattice Gaussian sampling has proved to be a difficult task, fraught with numerous potential vulnerabilities to be exploited. To avoid timing attacks, a common technique is to use distribution tables that are traversed to output a sample. The official Falcon implementation uses this technique, employing a hardcoded reverse cumulative distribution table (RCDT). Using Rowhammer, we target Falcon's RCDT to trigger a very small number of targeted bit flips, and prove that the resulting distribution is sufficiently skewed to perform a key recovery attack. Namely, we show that a single targeted bit flip suffices to fully recover the signing key, given a few hundred million signatures, with more bit flips enabling key recovery with fewer signatures. Interestingly, the Nguyen–Regev parallelepiped-learning attack that broke NTRUSign, GGH and Peregrine does not readily adapt to this setting unless the number of bit flips is very large. However, we show that combining it with principal component analysis (PCA) yields a practical attack. This vulnerability can also be triggered by other types of persistent fault attacks on memory, such as optical faults. We suggest cheap countermeasures that largely mitigate it, including rejecting signatures that are unusually short.
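To see why the RCDT is such an attractive fault target, consider this simplified sketch of table-based sampling (illustrative values only; Falcon's actual table has 72-bit precision):

    import secrets

    # Hypothetical decreasing table: RCDT[i]/2^24 approximates Pr[z > i].
    RCDT = [8000000, 4000000, 1500000, 400000, 80000, 10000, 1000, 50, 1]

    def sample_half_gaussian():
        u = secrets.randbelow(2**24)
        z = 0
        for threshold in RCDT:
            if u < threshold:
                z += 1
        return z

    # A single persistent bit flip in the hardcoded table, as Rowhammer
    # induces, silently skews every sample drawn afterwards:
    RCDT[2] ^= 1 << 20   # 1500000 -> 451424, changing Pr[z > 2]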
Last updated:  2025-06-04
Rubato: Provably Post-Quantum Secure and Batched Asynchronous Randomness Beacon
Linghe Yang, Jian Liu, Jingyi Cui, Guangquan Xu, Yude Bai, and Wei Wang
Distributed Randomness Beacons (DRBs) provide secure, unbiased random numbers for decentralized systems. However, existing protocols face critical limitations. Most rely on cryptographic assumptions which are vulnerable to quantum attacks, risking long-term security in asynchronous networks where unbounded delays may allow attackers time to exploit these weaknesses. Many achieve low beacon generation rates, often below 100 beacons per minute in moderate-scale networks (e.g., Spurt IEEE S&P’22), hindering their use in applications requiring high-throughput randomness. Additionally, traditional Verifiable Secret Sharing (VSS)-based DRBs, using a share-consensus-reconstruct paradigm, are unsuitable for asynchronous networks due to circular dependencies between beacon generation and consensus. Given these limitations, we propose Rubato, the first provably post-quantum secure DRB for asynchronous environments, incorporating a lattice-based batched Asynchronous Verifiable Secret Sharing scheme (bAVSS-PQ). Rubato supports batching of $\mathcal{O}(\lambda^2)$ secrets with communication complexity $\mathcal{O}(\lambda n^3 \log n)$ and tolerates Byzantine faults in up to one-third of the nodes. Integrated with DAG-based consensus protocols like Bullshark or Tusk, its epoch-staggered architecture resolves circular dependencies, enabling efficient and secure randomness generation. Evaluations across 10 to 50 nodes show Rubato generates 5200 to 350 beacons per minute with per-beacon latencies of 11.60 to 96.37 milliseconds, achieving a consensus throughput of 186,088 transactions per second with a latency of 16.78 seconds at 30 nodes. Rubato offers robust post-quantum security and high performance for small-to-medium-scale decentralized systems.
Last updated:  2025-06-03
Weave: Efficient and Expressive Oblivious Analytics at Scale
Mahdi Soleimani, Grace Jia, and Anurag Khandelwal
Many distributed analytics applications that are offloaded to the cloud operate on sensitive data. Even when the computations for such analytics workloads are confined to trusted hardware enclaves and all stored data and network communications are encrypted, several studies have shown that they are still vulnerable to access pattern attacks. Prior efforts towards preventing access pattern leakage often incur network and compute overheads that are logarithmic in dataset size, while also limiting the functionality of supported analytics jobs. We present Weave, an efficient, expressive, and secure analytics platform that scales to large datasets. Weave employs a combination of noise injection and hardware memory isolation via enclave page caches to reduce the network and compute overheads for oblivious analytics to a constant factor. Weave also employs several optimizations and extensions that exploit dataset and workload-specific properties to ensure performance at scale without compromising on functionality. Our evaluations show that Weave reduces the end-to-end execution time for a wide range of analytics jobs on large real-world datasets by $4$--$10\times$ compared to prior state-of-the-art while providing strong obliviousness guarantees.
Last updated:  2025-06-03
Unbounded Distributed Broadcast Encryption and Registered ABE from Succinct LWE
Hoeteck Wee and David J. Wu
We construct distributed broadcast encryption and registered attribute-based encryption (ABE) that support an arbitrary polynomial number of users from the succinct LWE assumption. Specifically, if we take $\lambda$ to be the security parameter and $N$ to be the number of users, we obtain the following: * We obtain a distributed broadcast encryption scheme where the size of the public parameters, user public/secret keys, and ciphertexts are optimal (i.e., have size $\mathsf{poly}(\lambda, \log N)$). Security relies on the $\mathsf{poly}(\lambda, \log N)$-succinct LWE assumption. Previously, this was only known from indistinguishability obfuscation or witness encryption. All constructions that did not rely on these general tools could only support an a priori bounded number of users. * We obtain a key-policy registered ABE scheme that supports arbitrary bounded-depth Boolean circuit policies from the $\mathsf{poly}(\lambda, d, \log N)$-succinct LWE assumption in the random oracle model, where $d$ is the depth of the circuit computing the policy. The public parameters, user public/secret keys, and ciphertexts have size $\mathsf{poly}(\lambda, d, \log N)$, which are optimal up to the $\mathsf{poly}(d)$ factor. This is the first registered ABE scheme with nearly-optimal parameters. All previous schemes (including constructions based on indistinguishability obfuscation, witness encryption, or evasive LWE) either have ciphertexts that scale with the policy size and attribute length, or can only support a bounded number of users (with long public parameters and public keys that scale with the number of users).
Last updated:  2025-06-03
Security of Operations on Random Numbers: A Review
Tejas Sharma and Ashish Kundu
Random numbers are often used in cryptographic algorithms, protocols, and in several security and non-security applications. Such usages often apply arithmetic and Boolean operations on pseudorandom numbers, such as addition, XOR, NOT, bit shifts, and other operations, in order to achieve the desired amount of entropy and desired level of security. In this paper, we have reviewed, studied, and analyzed the security properties of these operations on random numbers: do arithmetic, Boolean, and other related operations on cryptographically secure pseudorandom numbers yield cryptographically secure pseudorandom numbers, and do they preserve entropy?
Last updated:  2025-06-03
Committed Vector Oblivious Linear Evaluation and Its Applications
Yunqing Sun, Hanlin Liu, Kang Yang, Yu Yu, Xiao Wang, and Chenkai Weng
We introduce the notion of committed vector oblivious linear evaluation (C-VOLE), which allows a party holding a pre-committed vector to generate VOLE correlations with multiple parties on the committed value. It is a unifying tool useful in zero-knowledge proofs (ZKPs) of committed values, actively secure multi-party computation, private set intersection (PSI), etc. To achieve the best efficiency, we design a tailored commitment scheme and matching C-VOLE protocols, both based on the learning parity with noise assumption. In particular, exploiting the structure of the carefully designed LPN-based commitment minimizes the cost of ensuring consistency between the committed vector and the VOLE correlation. As a result, we achieve a 28$\times$ improvement over the protocol proposed in prior work (USENIX 2021) that uses ZKP to prove the correct opening of the commitment. We also apply C-VOLE to design a PSI protocol that allows one server to run PSI repeatedly with multiple clients while ensuring that the same set is used across all executions. Compared with the state-of-the-art PSI (CCS 2024) with similar security requirements, our protocol reduces the communication overhead by a factor of 35$\times$.
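For context, recall the VOLE correlation in one common formulation (standard background, not specific to this paper): over a field $\mathbb{F}$, one party ends up with a global scalar $\Delta \in \mathbb{F}$ and a vector $\mathbf{v}$, the other with vectors $\mathbf{u}$ and $\mathbf{w}$ such that $\mathbf{w} = \Delta \cdot \mathbf{u} + \mathbf{v}$ componentwise. C-VOLE additionally guarantees that the same pre-committed $\mathbf{u}$ underlies every execution, which is exactly the consistency property the repeated-PSI application needs.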
Last updated:  2025-06-03
A Critique on Average-Case Noise Analysis in RLWE-Based Homomorphic Encryption
Mingyu Gao and Hongren Zheng
Homomorphic encryption schemes based on the Ring-Learning-with-Errors problem require accurate ciphertext noise analysis to ensure correctness and security. However, ring multiplications during homomorphic computations make the noise in the result ciphertexts difficult to characterize. Existing average-case noise analyses derive a bound on the noise by either assuming it follows a Gaussian distribution, or giving empirical formulae, with strong independence assumption and the Central Limit Theorem extensively applied. In this work, we question the validity of these methods, by showing that the noise exhibits a heavy-tailed distribution via exact calculation of its variance and kurtosis, for both independent and dependent noises. The heavy-tailedness suggests the failing probability of bounds derived from these methods may not be negligible, and we experimentally demonstrate several cases where the noise growth is underestimated.
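A quick numeric illustration of the heavy-tail phenomenon (a toy demonstration, not the paper's exact calculation): the elementary term in a ring multiplication of noises is a product of two roughly Gaussian coefficients, and such a product already has excess kurtosis 6, far from the 0 of a Gaussian:

    import numpy as np

    rng = np.random.default_rng(0)
    a = rng.normal(size=10**6)
    b = rng.normal(size=10**6)
    prod = a * b                       # products of independent Gaussians
    z = (prod - prod.mean()) / prod.std()
    print((z**4).mean() - 3)           # ~6; a Gaussian would give ~0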
Last updated:  2025-06-03
Continuous Group-Key Agreement: Concurrent Updates without Pruning
Benedikt Auerbach, Miguel Cueto Noval, Boran Erol, and Krzysztof Pietrzak
Continuous Group Key Agreement (CGKA) is the primitive underlying secure group messaging. It allows a large group of $N$ users to maintain a shared secret key that is frequently rotated by the group members in order to achieve forward secrecy and post-compromise security. The group messaging scheme Messaging Layer Security (MLS) standardized by the IETF makes use of a CGKA called TreeKEM, which arranges the $N$ group members in a binary tree. Here, each node is associated with a public key, each user is assigned one of the leaves, and a user knows the corresponding secret keys on the path from their leaf to the root. To update the key material known to them, a user only needs to replace keys at $\log(N)$ nodes, which requires them to create and upload $\log(N)$ ciphertexts. Such updates must be processed sequentially by all users, which is impractical for large groups. To allow for concurrent updates, TreeKEM uses the ``propose and commit'' paradigm, where multiple users can concurrently propose to update (by just sampling a fresh leaf key), and a single user can then commit to all proposals at once. Unfortunately, this process destroys the binary tree structure, as the tree gets pruned and some nodes must be ``blanked'' at the cost of increasing the in-degree of others, which makes the commit operation, as well as future commits, more costly. In the worst case, the update cost (in terms of uploaded ciphertexts) per user can grow from $\log(N)$ to $\Omega(N)$. In this work we provide two main contributions. First, we show that MLS' communication complexity is bad not only in the worst case but also if the proposers and committers are chosen at random: even if there is just one update proposal for every commit, the expected cost is already over $\sqrt{N}$, and it approaches $N$ as this ratio shifts towards more proposals. Our second contribution is a new variant of propose and commit for TreeKEM which, for moderate amounts of update proposals per commit, provably achieves an update cost of $\Theta(\log(N))$ assuming the proposers and committers are chosen at random.
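To put the stated bounds side by side (simple arithmetic on the numbers above): for a group of $N = 2^{20}$ members, an intact binary tree costs $\log(N) = 20$ ciphertexts per update, the expected cost under randomly chosen proposers and committers already exceeds $\sqrt{N} = 1024$, and the worst case degrades to $\Omega(N)$, i.e., on the order of a million ciphertexts.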
Last updated:  2025-06-03
JANUS: Enhancing Asynchronous Common Subset with Trusted Hardware
Liangrong Zhao, Hans Schmiedel, Qin Wang, and Jiangshan Yu
Asynchronous common subset (ACS) has been extensively studied since the asynchronous Byzantine fault tolerance (BFT) framework was introduced by Ben-Or, Kelmer, and Rabin (BKR). This line of work (e.g., HoneyBadgerBFT, BEAT, EPIC) uses parallel reliable broadcast (RBC) and asynchronous binary agreement (ABA) instances to reach agreement on a subset of proposed transactions. In this paper, we further advance the BKR paradigm by presenting Janus, the first hybrid ACS protocol leveraging trusted hardware components. Janus is the first ACS protocol that tolerates a minority of Byzantine processes and has $O(n^2)$ message complexity. Supported by trusted hardware components, we introduce a provable broadcast primitive to replace RBC, and develop a resilient binary agreement protocol. Messages for concurrent instances of agreement are aggregated into vectors. Our experimental results demonstrate significant performance improvements over predominant ACS constructions, with a 92%+ increase compared to HoneyBadgerBFT and a 47%+ increase compared to BEAT. Additionally, we provide a comparison with open-source hybrid BFT protocols that operate under a partially synchronous network, highlighting the performance enhancement compared to previous hybrid protocols that also tolerate a Byzantine minority (e.g., MinBFT and Damysus, by 49%+).
Last updated:  2025-06-03
Trusted Hardware-Assisted Leaderless Byzantine Fault Tolerance Consensus
Liangrong Zhao, Jérémie Decouchant, Joseph K. Liu, Qinghua Lu, and Jiangshan Yu
Byzantine Fault Tolerance (BFT) consensus protocols with trusted hardware assistance have been extensively explored for their improved resilience to tolerate more faulty processes. Nonetheless, the potential of trusted hardware has been scarcely investigated in leaderless BFT protocols. RedBelly is considered the first blockchain network whose consensus is based on a truly leaderless BFT algorithm. This paper proposes a trusted hardware-assisted leaderless BFT consensus protocol by offering a hybrid solution for the set BFT problem defined in the RedBelly blockchain. Drawing on previous studies, we present two crucial trusted services: the counter and the collector. Based on these two services, we introduce two primitives to formulate our leaderless BFT protocol: a hybrid verified broadcast (VRB) protocol and a hybrid binary agreement. The hybrid VRB protocol enhances the hybrid reliable broadcast protocol by integrating a verification function. This addition ensures that a broadcast message is verified not only for authentication but also for the correctness of its content. Our hybrid BFT consensus is integrated with these broadcast protocols to deliver binary decisions on all proposals. We prove the correctness of the proposed hybrid protocol and demonstrate its enhanced performance in comparison to the prior trusted BFT protocol.
Last updated:  2025-06-03
Constant-Round Asynchronous MPC with Optimal Resilience and Linear Communication
Junru Li and Yifan Song
In this work, we consider secure multiparty computation (MPC) in the asynchronous network setting. MPC allows $n$ parties to compute a public function on their private inputs against an adversary corrupting at most $t$ of them. We consider both communication complexity and round complexity of asynchronous MPC (AMPC) with the optimal resilience $n=3t+1$. Without fully homomorphic encryptions, the best-known result in this setting is achieved by Coretti, Garay, Hirt, and Zikas (ASIACRYPT 2016), which requires $O(|C|n^3\kappa)$ bits of communication assuming one-way functions, where $\kappa$ is the security parameter. On the other hand, the best-known non-constant-round AMPC by Goyal, Liu, and Song (CRYPTO 2024) can achieve $O(|C|n)$ communication even in the information-theoretic setting. In this work, we give the first construction of a constant-round AMPC with $O(|C|n\kappa)$ bits of communication that achieves malicious security with abort assuming random oracles. We provide new techniques for adapting the MPC-in-the-head framework in the asynchronous network to compute a constant-size garbled circuit.
Last updated:  2025-06-03
Quasidifferential Saves Infeasible Differential: Improved Weak-Key Key-Recovery Attacks on Round-Reduced GIFT
Chengcheng Chang, Meiqin Wang, Wei Wang, and Kai Hu
GIFT, including GIFT-64 and GIFT-128, is a family of lightweight block ciphers with outstanding implementation performance and high security, and is a popular underlying primitive chosen by many AEADs such as SUNDAE. Currently, differential cryptanalysis is the best key-recovery attack on both ciphers, but the attacks have been stuck at 21 and 27 rounds for GIFT-64 and GIFT-128, respectively. Recently, Beyne and Rijmen proposed the quasidifferential transition matrix for differential cryptanalysis at CRYPTO 2022 and showed that the fixed-key probability of a differential (characteristic) can be expressed as the sum of correlations of all quasidifferential trails corresponding to this differential (characteristic). As pointed out by Beyne and Rijmen in their paper, the quasidifferential methodology is useful in identifying weak-key differential attacks. In this paper, we apply Beyne and Rijmen's method to GIFT. Some differential characteristics with small (average) probabilities can have much larger probabilities when weak-key conditions hold. Improved weak-key differential attacks on GIFT-64 and GIFT-128 are thus obtained. For GIFT-64, the probability of a 13-round differential is improved from $2^{-62.06}$ to $2^{-57.82}$ with 4 bits of weak-key conditions; an improved differential key-recovery attack on 21-round GIFT-64 is then obtained with $2^{117.42}/2^{64}$ time/data complexities. The probability of a 13-round multiple differential (containing 33 characteristics) is improved from $2^{-58.96}$ to $2^{-55.67}$ with 4 bits of weak-key conditions, yielding an improved multiple differential key-recovery attack on 21-round GIFT-64 with $2^{123.27}/2^{64}$ time/data complexities. For GIFT-128, the probability of a 20-round differential is improved from $2^{-121.83}$ to $2^{-114.77}$ with 6 bits of weak-key conditions; the probability of a 21-round multiple differential (containing 2 differentials) is improved from $2^{-128.38}$ to $2^{-122.77}$ with 4 bits of weak-key conditions. Improved (multiple) differential weak-key key-recovery attacks are obtained for 27 and 28 rounds of GIFT-128 with $2^{115.77}$/$2^{115.77}$ and $2^{123.77}$/$2^{123.77}$ time/data complexities, respectively. As far as we know, this is the first time that a (weak-key) key-recovery attack reaches 28 rounds of GIFT-128. Additionally, as a result of independent interest, we perform the first differential attack on SUNDAE. The differential used in this attack is checked with quasidifferential trails, so its probability is reliable. Our attack is nonce-respecting and has significantly better complexities than the previously best attack.
Last updated:  2025-06-03
Everlasting Anonymous Rate-Limited Tokens
Rutchathon Chairattana-Apirom, Nico Döttling, Anna Lysyanskaya, and Stefano Tessaro
Anonymous rate-limited tokens are a special type of credential that can be used to improve the efficiency of privacy-preserving authentication systems like Privacy Pass. In such a scheme, a user obtains a "token dispenser" by interacting with an issuer, and the dispenser allows the user to create up to a pre-determined number $k$ of unlinkable and publicly verifiable tokens. Unlinkable means that one should not be able to tell that two tokens originate from the same dispenser, but also that they cannot be linked to the interaction that generated the dispenser. Furthermore, we can limit the rate at which these tokens are created by linking each token to a context (e.g., the service we are authenticating to), and imposing a limit $N \leq k$ such that seeing more than $N$ tokens for the same context will reveal the identity of the user. Constructions of such tokens were first given by Camenisch, Hohenberger and Lysyanskaya (EUROCRYPT '05) and Camenisch, Hohenberger, Kohlweiss, Lysyanskaya, and Meyerovich (CCS '06). In this work, we present the first construction of \emph{everlasting} anonymous rate-limited tokens, for which unlinkability holds against computationally unbounded adversaries, whereas other security properties (e.g., unforgeability) remain computational. Our construction relies on pairings. While several parameters in our construction unavoidably grow with $k$, the key challenge we resolve is ensuring that the complexity of dispensing a token is independent of the parameter $k$. We are motivated here by the goal of providing solutions that are robust to potential future quantum attacks against the anonymity of previously stored tokens. A construction based on post-quantum secure assumptions (e.g., based on lattices) would be rather inefficient---instead, we take a pragmatic approach dispensing with post-quantum security for properties not related to privacy.
Last updated:  2025-06-03
Improved Key Recovery Attacks of Ascon
Shuo Peng, Kai Hu, Jiahui He, and Meiqin Wang
Ascon, a family of algorithms that support hashing and Authenticated Encryption with Associated Data (AEAD), is the final winner of the NIST Lightweight Cryptography Project. As a research hotspot, Ascon has received substantial third-party security evaluation. Among all the results on Ascon-128 (the primary AEAD recommendation), key recovery can only be achieved by reducing the initialization phase to 7 rounds or fewer, regardless of whether the attack violates the security claims made by the designers (i.e., misuses the nonce or exceeds the data limit of $2^{64}$). In this paper, we improve the key recovery attacks on Ascon-128 using the cube attack method, in both the misuse-free and the nonce-misuse settings. First, we present a faster method to recover the superpolies for a 64-dimensional cube in the output bits of the 7-round initialization, enabling us to recover the secret key with a time complexity of $2^{95.96}$ and a data complexity of $2^{64}$. Our 7-round key recovery attack, which works over the full key space, greatly improves the time complexity, making it the best result to date. Additionally, we utilize several techniques to extend state recovery to key recovery, answering the open problem of transitioning from full state recovery in the encryption phase to key recovery for Ascon-128 (ToSC 2022, Issue 4). By combining encryption-phase state recovery with initialization-phase key recovery, we can achieve 8-round and 9-round initialization-phase key recovery in the nonce-misuse scenario, with time complexities of $2^{101}$ and $2^{123.92}$, respectively. This represents an improvement of two rounds over previous results in the misuse setting. Our first key recovery attack is also applicable to Ascon-128a, achieving the same result. In cases where the full state prior to the encryption phase can be recovered in other Ascon AEAD modes, our second key recovery attack will also be useful. It is worth noting that this work does not threaten the security of the full 12-round Ascon, but we expect that our results provide new insights into the security of Ascon.
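For background, the standard cube-attack identity that such attacks build on (textbook material, not specific to this paper): writing an output bit as a polynomial $f(x, k)$ in public variables $x$ and key variables $k$, and summing over a cube $C_I$ that ranges over all assignments of a chosen set $I$ of public variables, one has $\bigoplus_{x \in C_I} f(x, k) = p_I(k)$, the superpoly of $I$. For the 64-dimensional cubes used here, each such sum costs $2^{64}$ queries, and recovering enough simple superpolies $p_I$ yields information about the key.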
Last updated:  2025-06-02
Group Key Progression: Strong Security for Shared Persistent Data
Matilda Backendal, David Balbás, and Miro Haller
End-to-end encryption allows data to be outsourced and stored on an untrusted server, such as in the cloud, without compromising data privacy. In the setting when this data is shared between a group of users, members also all share access to the same static key material used for data encryption. When the group membership changes, access control is only enforced by the server: security breaches or compelled disclosure would allow even a removed member to decrypt the current shared data. We propose to move away from static keys and instead use a group key progression (GKP) scheme, a novel primitive that enables a dynamic group of users to agree on a persistent sequence of keys while keeping a compact local state. GKP ensures that group members can only derive keys within a certain interval of the sequence, a notion that we call interval access control (IAC), and also provides post-compromise security. Our GKP construction, called Grappa, combines continuous group key agreement (CGKA, by Alwen et al., 2020) with a new abstraction called interval scheme. The latter is a symmetric-key primitive that can derive a sequence of keys from a compact state while preserving IAC. We explore different interval scheme constructions and simulate their storage and communication costs when used in group settings. The most efficient of them is a generalization of dual key regression (Shafagh et al., 2020), which we formalize and prove secure. Overall, our protocols offer a practical and robust solution to protect persistent data shared by a group.
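A minimal sketch of dual key regression, the interval-scheme instantiation mentioned above (simplified for illustration; the seeds and key derivation here are hypothetical): two hash chains run in opposite directions, and key $i$ combines both, so handing out the pair $(\mathrm{fwd}_a, \mathrm{bwd}_b)$ grants exactly the keys with $a \le i \le b$.

    import hashlib

    def H(x: bytes) -> bytes:
        return hashlib.sha256(x).digest()

    def chain(seed: bytes, steps: int) -> bytes:
        for _ in range(steps):
            seed = H(seed)
        return seed

    n = 100                              # length of the key sequence
    fwd_seed, bwd_seed = b"fwd", b"bwd"  # hypothetical owner secrets

    def key(i: int) -> bytes:
        fwd_i = chain(fwd_seed, i)       # computable from fwd_a, a <= i
        bwd_i = chain(bwd_seed, n - i)   # computable from bwd_b, b >= i
        return H(fwd_i + bwd_i)

    # Granting the interval [a, b] means handing out
    # chain(fwd_seed, a) and chain(bwd_seed, n - b).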
Last updated:  2025-06-02
Parallel Repetition for Post-Quantum Arguments
Andrew Huang and Yael Tauman Kalai
In this work, we show that parallel repetition of public-coin interactive arguments reduces the soundness error at an exponential rate even in the post-quantum setting. Moreover, we generalize this result to hold for threshold verifiers, where the parallel-repeated verifier accepts if and only if at least $t$ of the executions are accepted (for some threshold $t$). Prior to this work, these results were known only when the cheating prover was assumed to be classical. We also prove a similar result for three-message private-coin arguments. Previously, Bostanci, Qian, Spooner, and Yuen (STOC 2024) proved such a parallel repetition result in the more general setting of quantum protocols, where the verifier and communication may be quantum. We consider only protocols where the verifier is classical, but we obtain a simplified analysis, and our results hold for the more general setting of threshold verifiers.
Last updated:  2025-06-02
Malicious Security in Collaborative zk-SNARKs: More than Meets the Eye
Sanjam Garg, Aarushi Goel, Abhishek Jain, Bhaskar Roberts, and Sruthi Sekar
Collaborative zk-SNARKs (Ozdemir and Boneh, USENIX’22) are a multiparty variant of zk-SNARKs where multiple, mutually distrustful provers, each holding a private input, jointly compute a zk-SNARK using their combined inputs. A sequence of works has proposed efficient constructions of collaborative zk-SNARKs using a common template that involves designing secure multiparty computation (MPC) protocols to emulate a zk-SNARK prover without making non-black-box use of cryptography. To achieve security against malicious adversaries, these works adopt compilers from the MPC literature that transform semi-honest MPC into malicious-secure MPC. In this work, we revisit this design template. • Pitfalls: We demonstrate two pitfalls in the template, which can lead to a loss of input privacy. We first show that it is possible to compute collaborative proofs on invalid witnesses, which in turn can leak the inputs of honest provers. Next, we show that using state-of-the-art malicious security compilers as-is for proof computation is insecure, in general. Finally, we discuss mitigation strategies. • Malicious Security Essentially for Free: As our main technical result, we show that in the honest-majority setting, one can forego malicious security checks performed by state-of-the-art malicious security compilers during collaborative proof generation of several widely used zk-SNARKs. In other words, we can avoid the overheads of malicious security compilers, enabling faster proof generation. To the best of our knowledge, this is the first example of non-trivial computations where semi-honest MPC protocols achieve malicious security. The observations underlying our positive results are general and may have applications beyond collaborative zkSNARKs.
Last updated:  2025-06-02
Secure Noise Sampling for Differentially Private Collaborative Learning
Olive Franzese, Congyu Fang, Radhika Garg, Somesh Jha, Nicolas Papernot, Xiao Wang, and Adam Dziedzic
Differentially private stochastic gradient descent (DP-SGD) trains machine learning (ML) models with formal privacy guarantees for the training set by adding random noise to gradient updates. In collaborative learning (CL), where multiple parties jointly train a model, noise addition occurs either (i) before or (ii) during secure gradient aggregation. The first option is deployed in distributed DP methods, which require greater amounts of total noise to achieve security, resulting in degraded model utility. The second approach preserves model utility but requires a secure multiparty computation (MPC) protocol. Existing methods for MPC noise generation require tens to hundreds of seconds of runtime per noise sample because of the number of parties involved. This makes them impractical for collaborative learning, which often requires thousands or more samples of noise in each training step. We present a novel protocol for MPC noise sampling tailored to the collaborative learning setting. It works by constructing an approximation of the distribution of interest which can be efficiently sampled by a series of table lookups. Our method achieves significant runtime improvements and requires much less communication compared to previous work, especially at higher numbers of parties. It is also highly flexible – while previous MPC sampling methods tend to be optimized for specific distributions, we prove that our method can generically sample noise from statistically close approximations of arbitrary discrete distributions. This makes it compatible with a wide variety of DP mechanisms. Our experiments demonstrate the efficiency and utility of our method applied to a discrete Gaussian mechanism for differentially private collaborative learning. For 16 parties, we achieve a runtime of 0.06 seconds and 11.59 MB total communication per sample, a 230× runtime improvement and 3× less communication compared to the prior state-of-the-art for sampling from discrete Gaussian distribution in MPC.
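The table-lookup idea can be sketched in the clear as follows (illustrative parameters, not the paper's protocol; the point is that sampling reduces to comparisons against a precomputed cumulative table, which are cheap inside MPC):

    import math, secrets

    def discrete_gaussian_table(sigma, tail, precision_bits):
        # Cumulative table approximating a discrete Gaussian on [-tail, tail].
        probs = {z: math.exp(-z * z / (2 * sigma * sigma))
                 for z in range(-tail, tail + 1)}
        norm = sum(probs.values())
        table, acc = [], 0
        for z in sorted(probs):
            acc += round(probs[z] / norm * 2**precision_bits)
            table.append((acc, z))
        return table

    def sample(table, precision_bits):
        u = secrets.randbelow(2**precision_bits)   # uniform randomness
        for threshold, z in table:
            if u < threshold:
                return z
        return table[-1][1]  # rounding slack: fall back to the last entry

    table = discrete_gaussian_table(sigma=4.0, tail=24, precision_bits=32)
    print(sample(table, 32))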
Last updated:  2025-06-02
Towards Trustless Provenance: A Privacy-Preserving Framework for On-chain Media Verification
Piotr Mikołajczyk, Parisa Hassanizadeh, and Shahriar Ebrahimi
As generative models continue to evolve, verifying the authenticity, provenance, and integrity of digital media has become increasingly critical—particularly for domains like journalism, digital art, and scientific documentation. In this work, we present a decentralized verifiable media ecosystem for managing, verifying, and transacting authentic digital media using zero-knowledge proofs (ZKPs). Building on VIMz (Dziembowski et al., PETS'25), we extend the framework in three key directions. First, we generalize the model to support arbitrary image regions to achieve selective transformations support such as redaction and regional blurring—features commonly required in privacy-preserving applications. Second, we introduce performance optimizations that yield up to an 18% improvement in off-chain proof generation, and enhance the framework to support cost-efficient on-chain verification. Third, we design and implement a modular smart contract architecture to support a wide range of decentralized media applications. As a flagship use case, we develop a decentralized media marketplace that enables permissionless licensing, ownership transfer, and verifiable attribution. In this setting, users can share transformed media—such as cropped, blurred, or resized previews—alongside ZKPs that prove derivation from a signed original, eliminating the need to trust the seller. Unlike prior fair exchange protocols, which rely on trusted descriptions or encrypted payload delivery, our system enables verifiable public previews and origin-bound proofs without revealing the full content. This approach unlocks new applications beyond marketplaces, including automated infringement dispute resolution and photography contests with verifiable criteria.
Last updated:  2025-06-02
Universal Channel Rebalancing: Flexible Coin Shifting in Payment Channel Networks
Stefan Dziembowski, Shahriar Ebrahimi, Omkar Gavhane, and Susil Kumar Mohanty
Payment Channel Networks (PCNs) enhance blockchain scalability by enabling off-chain transactions. However, repeated unidirectional multi-hop payments often cause channel imbalance or depletion, limiting scalability and usability. Existing rebalancing protocols, such as Horcrux [NDSS’25] and Shaduf [NDSS’22], rely on on-chain operations, which hinders efficiency and broad applicability. We propose Universal Channel Rebalancing (UCRb), a blockchain-agnostic, fully off-chain framework that ensures correct behavior among untrusted participants without on-chain interaction. UCRb incorporates the following core innovations: (1) a fair and reliable incentive-compatible mechanism that encourages voluntary user participation in off-chain channel rebalancing, (2) integration of Pedersen commitments to achieve atomic off-chain payments and rebalancing operations while ensuring balance security, and (3) zero-knowledge proofs to enable privacy-preserving channel initialization and coin shifting, ensuring that user identities and fund allocations remain hidden throughout the process. We evaluate UCRb on a real-world Lightning Network dataset and compare its performance against state-of-the-art solutions including Horcrux, Shaduf, and Revive [CCS'17]. UCRb improves the success ratio by 15% to 50% while reducing the required user deposits by 72%--92%, and it maintains an almost negligible rate of channel depletion. Additionally, the long-term performance of UCRb is roughly 1.5 times its short-term performance, suggesting that continuous operation leads to improved efficiency. We implement a prototype of the UCRb smart contracts and demonstrate its practicality through extensive evaluation. As \texttt{CoinShift} operations require no on-chain interaction, the protocol incurs minimal gas costs. For instance, opening and closing channels with 10 neighbors costs only 130K-160K gas, significantly lower than comparable solutions.
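For context, the commitment primitive at the core of the design (the standard Pedersen commitment; its exact use in UCRb is described in the paper): in a group of prime order $q$ with independent generators $g$ and $h$, a balance $m$ is committed as $C = g^m h^r$ for random $r \in \mathbb{Z}_q$. The commitment is hiding and binding, and the homomorphism $C_1 \cdot C_2 = g^{m_1+m_2} h^{r_1+r_2}$ is what allows balances to be shifted atomically off-chain without revealing them.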
Last updated:  2025-06-02
Burn Your Vote: Decentralized and Publicly Verifiable Anonymous Voting at Scale
Stefan Dziembowski, Shahriar Ebrahimi, Haniyeh Habibi, Parisa Hassanizadeh, and Pardis Toolabi
Secure and trustworthy electronic voting requires more than correctness and censorship resistance: it must also ensure voter privacy, vote confidentiality, and protection against coercion. Prior works attempt to address these challenges using heavyweight cryptographic primitives such as homomorphic encryption, time-lock puzzles, or multi-party computation. These approaches often involve complex computations, depend on trusted parties, and typically do not scale well. We propose a lightweight, fully on-chain anonymous voting protocol based on a novel application of the proof-of-burn (PoB) mechanism. Voters anonymously commit to their votes by burning tokens to pseudorandom addresses and later submit zero-knowledge proofs attesting to their valid participation. Our design achieves vote integrity, coercion resistance, and unlinkability without relying on encrypted ballots, trusted third parties, or centralized tallying. The tallying process is public and operates on plaintext votes that are authenticated yet unlinkable to voters. This enables flexible voting models—including token-weighted and quadratic voting—with minimal on-chain overhead. We formally analyze the protocol’s security guarantees and demonstrate support for a broad range of voting models. We implement the protocol as an open-source library fully compatible with the Ethereum Virtual Machine (EVM), and our experimental evaluation confirms its high scalability and improved efficiency compared to the state-of-the-art.
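The burn-then-prove flow can be sketched as follows (a hypothetical derivation for illustration; the paper's exact address derivation and ZK statement differ): a voter derives a pseudorandom, unspendable address from their vote and secret randomness, burns tokens to it, and later proves in zero knowledge that one of the recorded burns encodes their vote, without revealing which.

    import hashlib, secrets

    def burn_address(vote: int, secret: bytes) -> str:
        # Pseudorandom address with no known private key: tokens sent
        # here are destroyed, which makes the commitment binding.
        digest = hashlib.sha256(b"burn" + vote.to_bytes(4, "big") + secret)
        return "0x" + digest.hexdigest()[:40]

    secret = secrets.token_bytes(32)
    print(burn_address(vote=1, secret=secret))  # commit by burning here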
Last updated:  2025-06-02
Black-Box Crypto is Useless for Pseudorandom Codes
Sanjam Garg, Sam Gunn, and Mingyuan Wang
A pseudorandom code is a keyed error-correction scheme with the property that any polynomial number of encodings appear random to any computationally bounded adversary. We show that the pseudorandomness of any code tolerating a constant rate of random errors cannot be based on black-box reductions to almost any generic cryptographic primitive: for instance, anything that can be built from random oracles, generic multilinear groups, and virtual black-box obfuscation. Our result is optimal, as Ghentiyala and Guruswami (2024) observed that pseudorandom codes tolerating any sub-constant rate of random errors exist using a black-box reduction from one-way functions. The key technical ingredient in our proof is the hypercontractivity theorem for Boolean functions, which we use to prove our impossibility in the random oracle model. It turns out that this easily extends to an impossibility in the presence of "crypto oracles," a notion recently introduced---and shown to be capable of implementing all the primitives mentioned above---by Lin, Mook, and Wichs (EUROCRYPT 2025).
Last updated:  2025-06-06
Separating Pseudorandom Codes from Local Oracles
Nico Döttling, Anne Müller, and Mahesh Sreekumar Rajasree
Pseudorandom codes (PRCs) are error-correcting codes with the distinguishing feature that their codewords are computationally indistinguishable from random strings. Introduced by Christ and Gunn (CRYPTO 2024), PRCs have found applications in areas such as AI watermarking, where both robustness and pseudorandomness are essential. All known constructions of PRCs rely on coding-theoretic hardness assumptions. In this work, we study how inherent the use of coding-theoretic hardness is in the construction of pseudorandom codes. We show that there is no black-box construction of PRCs with binary alphabets capable of decoding from a constant fraction of Bernoulli noise from a class of oracles we call local oracles. The class of local oracles includes random oracles and trapdoor permutation oracles, and can be interpreted as a meaningful notion of oracles that are not resilient against noise. Our separation result is cast in the Impagliazzo-Rudich framework and crucially relies on the Bonami-Beckner hypercontractivity theorem on the Boolean hypercube. As a complementary result, we show that PRCs with large alphabets that can tolerate high error rates can indeed be constructed in a black-box manner from one-way functions.
Last updated:  2025-06-02
Silent Splitter: Privacy for Payment Splitting via New Protocols for Distributed Point Functions
Margaret Pierce and Saba Eskandarian
In a world where financial transactions are primarily performed or recorded online, protecting sensitive transaction details has become crucial. Roommates sharing housing costs or friends splitting travelling expenses may use applications such as Splitwise to easily track debts and minimize the number of individual repayments. However, these apps reveal potentially sensitive financial transaction activity to their operators. In this paper, we present Silent Splitter, a privacy-preserving payment splitting system which enables users to securely set up groups, perform transactions within those groups, and "settle up" without revealing group membership or any sensitive transaction details (such as the users involved or amount of money exchanged) to the system itself. Silent Splitter operates in the two server setting and uses Distributed Point Functions (DPFs) to securely record transactions. Of independent interest, we also present new protocols for proving knowledge of properties of DPFs as part of our system.
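To fix ideas, here is a naive two-server illustration of the DPF interface; real DPF keys are logarithmically sized, whereas these toy keys are as large as the table, so this sketch conveys only the secret-sharing semantics, not the compression that makes DPFs useful.

```python
# Naive "DPF": XOR shares of the point function f(i) = v if i == i_star else 0.
# Each key alone is uniformly random and reveals nothing about (i_star, v).
import secrets

def gen(n: int, i_star: int, v: int):
    k0 = [secrets.randbits(32) for _ in range(n)]
    k1 = k0.copy()
    k1[i_star] ^= v                    # shares differ only at the secret index
    return k0, k1

n, i_star, amount = 8, 3, 25           # record a debt of `amount` at position 3
k0, k1 = gen(n, i_star, amount)
recombined = [a ^ b for a, b in zip(k0, k1)]
assert recombined == [0, 0, 0, amount, 0, 0, 0, 0]
# Each server adds its share into a local table; neither learns i_star or amount.
```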
Last updated:  2025-06-02
MT-TMVP: Modular Tiled TMVP-based Polynomial Multiplication for Post-Quantum Cryptography on FPGAs
Shekoufeh Neisarian and Elif Bilge Kavun
As quantum technology advances, developing cryptographic solutions resistant to quantum attacks is crucial. Post-Quantum Cryptography (PQC) provides a practical approach, as it runs on classical computers. PQC schemes rely on hard mathematical problems, with lattice-based schemes being among the National Institute of Standards and Technology (NIST)-recognized candidates, known for their small key sizes. Hardware implementation of these schemes faces challenges due to the computational intensity of operations like polynomial multiplication, especially for resource-constrained devices. This paper proposes a novel Modular Tiled Toeplitz Matrix-Vector Polynomial Multiplication (MT-TMVP) for lattice-based PQC algorithms and presents a resource-optimized Field Programmable Gate Array (FPGA) architecture. The proposed implementation significantly reduces resource utilization and Area-Delay Product (ADP) compared to state-of-the-art polynomial multipliers. It utilizes 99.68% and 84.22% fewer Look-Up Tables (LUTs) on Artix-7 and Zynq Ultrascale+ FPGAs, respectively, and achieves 99.94% and 80.02% ADP improvements on these FPGAs compared to the best results in the literature. By leveraging Block RAM (BRAM), the proposed architecture offers robustness against timing-based Side-Channel Attacks (SCAs), and the design is modular and scalable to any polynomial degree.
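The arithmetic identity behind TMVP multipliers is compact enough to state in code: the coefficients of a polynomial product equal a Toeplitz matrix built from one operand applied to the coefficient vector of the other. The plain-Python reference below only illustrates this identity; the paper's contribution is the modular tiled FPGA architecture around it.

```python
# Polynomial multiplication a(x)*b(x) as a Toeplitz matrix-vector product.

def toeplitz_from_poly(a, n):
    # (2n-1) x n matrix T with T[k][j] = a[k-j]; constant along diagonals.
    return [[a[k - j] if 0 <= k - j < n else 0 for j in range(n)]
            for k in range(2 * n - 1)]

def tmvp(T, b):
    return [sum(row[j] * b[j] for j in range(len(b))) for row in T]

a = [1, 2, 3, 4]                  # a(x) = 1 + 2x + 3x^2 + 4x^3
b = [5, 6, 7, 8]
c = tmvp(toeplitz_from_poly(a, 4), b)

# Check against schoolbook multiplication:
ref = [0] * 7
for i, ai in enumerate(a):
    for j, bj in enumerate(b):
        ref[i + j] += ai * bj
assert c == ref
```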
Last updated:  2025-06-02
Using the Schur Product to Solve the Code Equivalence Problem
Michele Battagliola, Rocco Mora, and Paolo Santini
Given two linear codes, the Code Equivalence Problem asks to find (if it exists) an isometry between them. A special case is the Permutation Equivalence Problem (PEP), where the isometry is a permutation. The hardness of PEP crucially depends on the hull of a code, that is, the intersection between a code and its dual. For random codes, with large probability the hull is trivial, i.e., has dimension $h = 0$: this allows for efficient attacks. However, when the hull is large enough, all known attacks take exponential time and PEP is deemed hard. In this paper we study how the so-called Schur product between linear codes can be employed to solve PEP. The main idea is to transform a given PEP instance by computing powers of the given codes. We show that, while squaring a pair of equivalent codes preserves the equivalence, the new pair of codes has a trivial hull with high probability. This allows us to identify many new weak instances of PEP, namely whenever $h < \sqrt{2n}$. With some technical caveats, our solver runs in average polynomial time. As a concrete application, we consider the updatable encryption scheme proposed by Albrecht, Benčina and Lai at Eurocrypt 2025. All the recommended instances fall into the range of weak PEP instances we identify in this paper, hence are susceptible to our attack. We successfully recover the secret permutation for one of the instances claiming 128 bits of security. As a fix, instances with hull dimension $h > \sqrt{2n}$ should be employed.
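Both objects in play are easy to experiment with over $\mathbb{F}_2$. The sketch below (a toy aid, not the paper's solver) spans the Schur square from pairwise coordinate-wise products of generator rows and computes the hull dimension via the standard identity $h = k - \mathrm{rank}(GG^\top)$, assuming $G$ has full row rank.

```python
# Schur square and hull dimension of a binary linear code.

def rank_gf2(rows):
    # Rank over GF(2) via an XOR basis keyed by leading-bit position.
    basis = {}
    for r in rows:
        v = int("".join(map(str, r)), 2)
        while v:
            lead = v.bit_length() - 1
            if lead not in basis:
                basis[lead] = v
                break
            v ^= basis[lead]
    return len(basis)

def schur_square(G):
    # Generators of C*C: coordinate-wise products of all pairs of rows of G.
    n = len(G[0])
    return [[G[i][t] & G[j][t] for t in range(n)]
            for i in range(len(G)) for j in range(i, len(G))]

def hull_dim(G):
    # h = k - rank(G * G^T) over GF(2), assuming G has full row rank k.
    k, n = len(G), len(G[0])
    GGt = [[sum(G[i][t] & G[j][t] for t in range(n)) % 2
            for j in range(k)] for i in range(k)]
    return k - rank_gf2(GGt)

# Example: the [7,4] Hamming code has hull dimension 3, while its Schur
# square fills out the full space (whose hull is trivially 0).
G = [[1,0,0,0,0,1,1],
     [0,1,0,0,1,0,1],
     [0,0,1,0,1,1,0],
     [0,0,0,1,1,1,1]]
print("dim C =", rank_gf2(G), " dim C^2 =", rank_gf2(schur_square(G)),
      " hull(C) =", hull_dim(G))
```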
Last updated:  2025-06-02
Leader Election with Poly-logarithmic Communication Per Party
Amey Bhangale, Chen-Da Liu-Zhang, Julian Loss, Kartik Nayak, and Sravya Yandamuri
The leader election problem requires a set of $n$ parties, out of which up to $t$ can be Byzantine, to elect a leader uniformly at random such that no two parties disagree on the elected leader and an honest leader is elected with constant probability. The Scalable Leader Election protocol published at SODA 2006 is an important breakthrough, solving this problem efficiently for all but an $o(1)$ fraction of the parties. Its authors achieve a protocol for $t < (\frac{1}{3} - \epsilon)n$ (for $\epsilon = o(1)$) in the full-information setting such that every party only sends $\mathrm{polylog}(n)$ bits. In this paper, we revisit their work and show that there are subtleties in the protocol that are not dealt with in the analysis. In particular, two mechanisms related to ``silencing'' parties and dealing with ``bad nodes'' are at odds with each other, which is why the existing analysis is insufficient. We present these concerns in detail and subsequently present a modification to their protocol with a corresponding analysis to solve leader election with the desired metrics.
Last updated:  2025-06-02
Zero-Knowledge Polynomial Commitment in Binary Fields
Benjamin E. Diamond
In recent work, Diamond and Posen ('24) introduce a polynomial commitment scheme for large binary fields, adapting BaseFold (CRYPTO '24). In this note, we devise a zero-knowledge variant of Diamond and Posen's scheme. Our construction reprises a few ideas from Aurora (EUROCRYPT '19). We carry through those ideas in characteristic 2, and moreover show that they're compatible with BaseFold.
Last updated:  2025-06-02
How to Make Any Computational Secret Sharing Scheme Adaptively Secure
George Lu and Brent Waters
Secret sharing (SS) is a foundational cryptographic primitive with diverse applications, including secure multiparty computation and conditional disclosure of secrets. While traditional schemes have primarily emphasized information-theoretic security, recent advancements have increasingly leveraged computational assumptions to achieve more efficient constructions and support broader access policies. Despite these successes, most existing computational secret sharing (CSS) schemes are limited to a static security model, where adversaries must commit to their choice of corrupted participants at the outset. A critical challenge in CSS lies in achieving adaptive security, where adversaries can dynamically select participants to corrupt, better reflecting real-world threat models. In this paper, we present a novel transformation that converts any statically secure CSS scheme into an adaptively secure one while preserving the original access policy and computational assumptions, providing a framework for bridging the gap between static and adaptive security. Our construction introduces a multiplicative share size overhead of $O(n^2)$, where $n$ is the number of parties. Additionally, we explore trade-offs in efficiency and security, offering more efficient adaptive CSS constructions for specific, restricted policy classes. This work addresses key limitations in the current landscape of CSS and paves the way for broader adoption of adaptively secure secret sharing in cryptographic applications.
Last updated:  2025-06-01
Silentium: Implementation of a Pseudorandom Correlation Generator for Beaver Triples
Vincent Rieder
Secure Multi-Party Computation is a privacy-enhancing technology that allows several parties to securely compute on distributed private data. In the line of the well-established SPDZ protocol, by far the most expensive task is the generation of Beaver triples in the so-called offline phase. Silentium is our implementation of an actively secure offline phase in the form of a Pseudorandom Correlation Generator for Beaver triples (Bt-PCG, Boyle et al. CRYPTO 2020), which, as any PCG, is designed to have low communication. Compared to previous offline phases, their Bt-PCG reduces the communication costs by three orders of magnitude. However, until now its efficiency had only been estimated. With Silentium, we demonstrate that their Bt-PCG can achieve even better running times than state-of-the-art offline phase implementations in the MP-SPDZ library. To achieve such performance, Silentium comprises a systematic parallelization strategy and implementation-friendly decompositions of the Bt-PCG into structured modules. Looking ahead to large-scale applications on the cloud, Silentium is designed to be versatile enough to support hardware acceleration in the future.
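For readers unfamiliar with the correlation being generated, this sketch shows how a Beaver triple is consumed in a SPDZ-style online phase (two parties, additive shares over a toy prime field); the triple here comes from a trusted-dealer stand-in, whereas producing it with low communication is precisely the Bt-PCG's job.

```python
# Beaver-triple multiplication: with shares of a random (a, b, c = a*b),
# two parties multiply secret-shared x and y using only local arithmetic
# plus the public openings d = x - a and e = y - b.
import secrets

P = 2**61 - 1                                    # toy prime field

def share(v):
    s0 = secrets.randbelow(P)
    return s0, (v - s0) % P

# Offline phase (what the PCG produces silently): shares of a random triple.
a, b = secrets.randbelow(P), secrets.randbelow(P)
c = a * b % P
a0, a1 = share(a); b0, b1 = share(b); c0, c1 = share(c)

# Online phase: multiply secret-shared inputs x, y.
x, y = 12345, 67890
x0, x1 = share(x); y0, y1 = share(y)
d = (x0 - a0 + x1 - a1) % P                      # opened: d = x - a
e = (y0 - b0 + y1 - b1) % P                      # opened: e = y - b
z0 = (c0 + d * b0 + e * a0 + d * e) % P          # one party adds the d*e term
z1 = (c1 + d * b1 + e * a1) % P
assert (z0 + z1) % P == x * y % P                # xy = c + d*b + e*a + d*e
```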
Last updated:  2025-06-01
Nearly Optimal Parallel Broadcast in the Plain Public Key Model
Ran Gelles, Christoph Lenzen, Julian Loss, and Sravya Yandamuri
Parallel Byzantine broadcast (PBC), also known as Interactive Consistency, is a fundamental problem in distributed computing and cryptography which asks that all parties reliably distribute a message to all other parties. We give the first communication-efficient protocol for PBC in the model with plain public keys (i.e., no trusted dealer) which achieves security against an adaptive adversary that can corrupt up to $t<n/2$ parties. Our protocol runs in total communication complexity $O(n^2\ell\log(n)+n\kappa^2\log^4(n))$ bits to succeed with probability $1-2^{-\kappa}$, where $\ell$ is the length of a message. All prior protocols either rely on a trusted setup or require at least $O(n^3)$ communication complexity. As a stepping stone, we present a binary consensus protocol with the same resilience and success probability that sends $O(n^2\kappa\log(n)+n\kappa^2\log^3(n))$ bits. We achieve these results based on a highly efficient gossip procedure that implements echo operations at low cost, and might prove useful in deriving further efficient protocols relying on simple cryptographic tools.
Last updated:  2025-06-01
Adaptive TDFs from Injective TDFs
Xinyu Mao and Hongxu Yi
Adaptive trapdoor functions (ATDFs) and tag-based ATDFs (TB-ATDFs) are variants of trapdoor functions proposed by Kiltz, Mohassel, and O’Neill (EUROCRYPT 2010). They are both sufficient for constructing chosen-ciphertext secure public-key encryption (CCA-secure PKE), and their definitions are closely related to CCA-secure PKE. Hohenberger, Koppula, and Waters (CRYPTO 2020) showed that CCA-secure PKE can be constructed from injective TDFs; however, the relations among TDF, ATDF, and TB-ATDF remain unclear. We provide black-box constructions of ATDFs and TB-ATDFs from injective TDFs, answering the question posed by Kiltz, Mohassel, and O’Neill (EUROCRYPT 2010). Our results indicate that ATDF, TB-ATDF, and TDF are equivalent under mild restrictions.
Last updated:  2025-05-31
UPKE and UKEM Schemes from Supersingular Isogenies
Pratima Jana and Ratna Dutta
Forward-secure public key encryption (FS-PKE) is a key-evolving public-key paradigm that ensures the confidentiality of past encryptions even if the secret key is compromised at some later point in time. However, existing FS-PKE schemes are considerably complex and less efficient compared to standard public-key encryption. Updatable public-key encryption (UPKE), introduced by Jost et al. (Eurocrypt 2019), was designed to achieve forward security in secure group messaging while maintaining efficiency. However, existing UPKE constructions either lack post-quantum security or do not support an unbounded number of updates. We focus on isogeny-based cryptosystems due to their suitability for handling an unbounded number of updates in long-term secure messaging. Existing isogeny-based UPKE schemes lack strong security guarantees and formal security proofs; they do not support asynchronous key updates and require sender-receiver coordination. In this work, we present two isogeny-based UPKE schemes. The first scheme, UhPKE, extends Moriya et al.’s hash-based public key encryption scheme hPKE to support key updates, while the second scheme, USimS, is an updatable version of Fouotsa et al.’s public key encryption scheme simplified SiGamal (SimS). The scheme UhPKE relies on the commutative supersingular isogeny Diffie-Hellman (CSSIDH) assumption and achieves indistinguishability under chosen randomness and chosen plaintext attack (IND-CR-CPA). The scheme USimS derives its security from the hardness of the CSSIDH problem and the commutative supersingular isogeny knowledge of exponent (CSSIKoE) problem. It is the first isogeny-based UPKE scheme that exhibits indistinguishability under chosen randomness and chosen ciphertext attack (IND-CR-CCA). The security of UhPKE and USimS is established by proving that their underlying schemes, hPKE and SimS, are circular secure and leakage resilient (CS+LR). We emphasize that our constructions support an unlimited number of key updates while retaining the efficiency of their underlying public key encryption schemes. Besides, the proposed UPKEs enable asynchronous key updates, allowing senders to update the public key independently. Moreover, UhPKE and USimS offer improved storage, computation and communication efficiency compared to existing UPKE schemes. Furthermore, we extend and refine the security notion of the updatable key encapsulation mechanism (UKEM), introduced by Haidar et al. (Asiacrypt 2023), from a bounded to an unbounded number of updates. We present the first post-quantum secure UKEM that does not rely on zero-knowledge proofs. More precisely, we introduce two UKEM schemes which are the first of their kind in the isogeny setting. Our first scheme, UKEM1, is derived from our UhPKE and achieves IND-CR-CPA security. Our second construction, UKEM2, is based on our USimS scheme and achieves IND-CR-CCA security. We prove the security of our UKEMs in our proposed enhanced framework that supports an unbounded number of key updates. Like our UPKEs, our UKEMs not only support unlimited key updates but also enable independent encapsulation and decapsulation key updates without requiring sender-receiver synchronization.
Both UKEM1 and UKEM2 exhibit compact storage and communication costs with small ciphertexts. Their computational efficiency differs in decapsulation and key updates: UKEM2 incurs an additional discrete logarithm computation in the decapsulation phase, but offers stronger IND-CR-CCA security, in contrast to UKEM1, which is IND-CR-CPA secure.
Last updated:  2025-05-31
Adaptively Secure Three-Round Threshold Schnorr Signatures from DDH
Renas Bacho, Sourav Das, Julian Loss, and Ling Ren
Threshold signatures are one of the most important cryptographic primitives in distributed systems. Of particular interest is the threshold Schnorr signature, a pairing-free signature with efficient verification, compatible with the standardized (non-threshold) EdDSA signature. However, most threshold Schnorr signatures have only been proven secure against a static adversary, which has to declare its corruptions before the protocol execution. Many existing adaptively secure constructions require either secure erasures or non-standard assumptions, such as the algebraic group model or the hardness of the algebraic one-more discrete logarithm problem. The latest adaptively secure threshold Schnorr signature schemes under standard assumptions require five rounds of communication to create a single signature, limiting their practicality. In this work, we present Gargos, a three-round, adaptively secure threshold Schnorr signature scheme based on the hardness of the decisional Diffie-Hellman (DDH) problem in the random oracle model (ROM). Our protocol supports full corruption threshold $t < n$, where $t$ is the signing threshold and $n$ is the total number of signers. We achieve our result with an enhanced proof technique that enables us to eliminate two rounds of communication from the recent Glacius scheme (Eurocrypt 2025). We believe our techniques are of independent interest for further research on improving the round complexity of multi-party signing protocols.
Last updated:  2025-05-31
Reviving a Grover based Quantum Secret Sharing Scheme
Debajyoti Bera and Santanu Majhi
Secret-sharing schemes allow a dealer to split a secret into multiple “shares” and distribute them individually among many parties while mandating certain constraints on its reconstruction. Such protocols are usually executed over a secure communication channel, since an eavesdropper, after intercepting all the shares, is expected to be able to reconstruct the secret. Leveraging the unique properties of quantum channels, several quantum protocols have been designed for secret sharing. However, almost all of them detect the presence of an eavesdropper by statistical analysis of the outcome of multiple rounds, or simply require a secure channel of communication. We mathematically analyse the correctness and security properties of a quantum-search based secret-sharing framework proposed by Hsu (2003) (and attacked by Hao et al. (2010)), which was put forward as an alternative that works over public channels and does not require multiple rounds. We show how to improve the original protocol to be more resistant towards eavesdropping and other attacks; however, we also prove that complete security against an eavesdropper is not possible in this framework. Our tight characterization will be helpful towards the construction of more quantum secret sharing schemes based on the same framework.
Last updated:  2025-05-31
Scalable Multiparty Computation from Non-linear Secret Sharing
Sanjam Garg, Abhishek Jain, Pratyay Mukherjee, and Mingyuan Wang
A long line of work has investigated the design of scalable secure multiparty computation (MPC) protocols with computational and communication complexity independent of the number of parties (beyond any dependence on the circuit size). We present the first unconditionally-secure MPC protocols for arithmetic circuits over {\em large fields} with total computation $\mathcal{O}(|C|\log|F|)$, where $|C|$ and $|F|$ denote the circuit and field size, respectively. Prior work could either achieve similar complexity only in {\em communication}, or required highly structured circuits, or expensive circuit transformations. To obtain our results, we depart from the prior approach of share packing in linear secret-sharing schemes; instead, we use an ``unpacking'' approach via {\em non-linear} secret sharing.
Last updated:  2025-06-02
Adding Feeding Forward Back to the Sponge Construction
Chun Guo, Kai Hu, Yanhong Fan, Yong Fu, and Meiqin Wang
Avoiding feeding forward seems to be a major goal of the sponge construction. We take a step back and investigate adding feeding forward back to the sponge. The obtained sponge-with-feeding-forward construction has a number of benefits: (1) In the random permutation model, its preimage and second preimage security bounds are much better than the standard sponge with the same capacity, while collision and indifferentiability security bounds are comparable; (2) Its collision and (second) preimage security can be reduced to well-defined properties of the underlying permutation, i.e., correlation intractability w.r.t. a certain family of evasive relations. We further incorporate several somewhat new ideas to form detailed hash and XOF constructions SpongeFwd: (1) Feeding-forward is only applied to the capacity part, and the final output(s) is the capacity part (with the rate part discarded). In this way, when $c$ equals the primary hash output size $h$, a single permutation call suffices for squeezing. This also simplifies the underlying evasive relations for the reduction security proof. (2) We replace the hash IV with the first message block for the best possible efficiency. (3) We employ a parallel structure to define an XOF variant. (4) We use HAIFA-style counter inputs to achieve both length-independent second-preimage security and domain separation for the XOF. The better (second) preimage security enables constructing a 512-bit output hash function from Keccak-p[800]: with 512-bit capacity, its collision and (second) preimage security bounds are the same as the standard SHA3-512, while its hardware area is reduced by roughly 38% (according to our preliminary estimation).
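The placement of the feed-forward can be sketched schematically; the toy below assumes simplified padding, omits the HAIFA-style counters, and uses a stand-in permutation, so it is an illustration of the dataflow rather than the SpongeFwd specification.

```python
# Toy sponge with feed-forward on the capacity part only; the final output
# is taken from the capacity (the rate part is discarded when squeezing).
import hashlib

RATE, CAP = 8, 32                                 # toy sizes in bytes

def perm(state: bytes) -> bytes:
    # Stand-in permutation; the paper targets a real one such as Keccak-p[800].
    return hashlib.shake_256(state).digest(RATE + CAP)

def spongefwd_hash(msg: bytes) -> bytes:
    blocks = [msg[i:i + RATE] for i in range(0, len(msg), RATE)] or [b""]
    blocks = [blk.ljust(RATE, b"\x00") for blk in blocks]       # toy padding
    state = blocks[0] + b"\x00" * CAP      # first message block replaces the IV
    for blk in blocks[1:]:
        cap_in = state[RATE:]
        state = perm(state)
        # Feed-forward applied to the capacity part only:
        fed = bytes(x ^ y for x, y in zip(state[RATE:], cap_in))
        state = bytes(x ^ y for x, y in zip(state[:RATE], blk)) + fed
    return perm(state)[RATE:]              # squeeze: output the capacity part

print(spongefwd_hash(b"hello world").hex())
```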
Last updated:  2025-05-31
TEAKEX: TESLA-Authenticated Group Key Exchange
Qinyi Li, Lise Millerjord, and Colin Boyd
We present a highly efficient authenticated group key exchange protocol, TEAKEX, using only symmetric key primitives. Our protocol provides proven strong security, including forward secrecy, post-compromise security, and post-quantum security. For online applications we claim that TEAKEX is much simpler and more efficient than currently available alternatives. As part of our construction we also give a new abstract security definition for delayed authentication and describe its instantiation with the TESLA protocol.
Last updated:  2025-05-30
On Factoring and Power Divisor Problems via Rank-3 Lattices and the Second Vector
Yiming Gao, Yansong Feng, Honggang Hu, and Yanbin Pan
We propose a deterministic algorithm based on Coppersmith's method that employs a rank-3 lattice to address factoring-related problems. An interesting aspect of our approach is that we utilize the second vector in the LLL-reduced basis to avoid trivial collisions in the Baby-step Giant-step method, rather than the shortest vector as is commonly used in the literature. Our results are as follows:
- Compared to the result by Harvey and Hittmeir (Math. Comp. 91 (2022), 1367–1379), who achieved a complexity of $O\left( \frac{N^{1/5} \log^{16/5} N}{(\log \log N)^{3/5}} \right)$ for factoring a semiprime $N = pq$, we demonstrate that in the balanced $p$ and $q$ case, the complexity can be improved to $O\left( \frac{N^{1/5} \log^{13/5} N}{(\log\log N)^{3/5}} \right)$.
- For factoring sums and differences of powers, i.e., numbers of the form $N = a^n \pm b^n$, we improve Hittmeir's result (Math. Comp. 86 (2017), 2947–2954) from $O(N^{1/4} \log^{3/2} N)$ to $O\left( N^{1/5} \log^{13/5} N \right)$.
- For the problem of finding $r$-power divisors, i.e., finding all integers $p$ such that $p^r \mid N$, Harvey and Hittmeir (Proceedings of ANTS XV, Res. Number Theory 8 (2022), no. 4, Paper No. 94) recently directly applied Coppersmith's method and achieved a complexity of $O\left(\frac{N^{1/4r} \log^{10+\epsilon} N}{r^3}\right)$. By using a faster LLL-type algorithm and sieving on small primes, we improve their result to $O\left(\frac{N^{1/4r} \log^{7+3\epsilon} N}{(\log\log N-\log 4r)r^{2+\epsilon}}\right)$. The worst-case running time of their algorithm occurs when $N = p^r q$ with $q = \Theta(N^{1/2})$. By focusing on this case and employing our rank-3 lattice approach, we achieve a complexity of $O\left(\sqrt{r} N^{1/4r} \log^{5/2} N \right)$.
In conclusion, we offer a new perspective on these problems, which we hope will provide further insights.
Last updated:  2025-05-30
Low-Latency Dynamically Available Total Order Broadcast
Sravya Yandamuri, Nibesh Shrestha, Luca Zanolini, and Kartik Nayak
This work addresses the problem of Byzantine Fault-Tolerant (BFT) Total-Order Broadcast (TOB) in a dynamically available setting, where parties can transition between online and offline states without knowing the number of active parties. Existing dynamically available protocols rely on a synchronous network assumption, which means their latency remains tied to the pessimistic network delay $\Delta$, even when the actual network delay is $\delta \ll \Delta$. This raises the question of whether a dynamically available BFT TOB protocol can maintain safety and liveness under synchrony while committing blocks at a rate closer to the actual network delay. We answer this question affirmatively by designing the first dynamically available BFT TOB protocol that can commit blocks at a rate of $O(\Delta_{ideal})$, where $\Delta_{ideal} < 2\delta$.
Last updated:  2025-05-30
Cool + Cruel = Dual
Alexandr Karenin, Elena Kirshanova, Julian Nowakowski, Eamonn W. Postlethwaite, and Fernando Virdia
Recently [Wenger et al. IEEE S\&P 2025] claimed that the `Cool and Cruel' (C+C) approach to solving LWE with sparse secrets [Nolte et al. AFRICACRYPT 2024] outperforms other approaches, in particular the well-established primal attack. In this work we show that: (i) C+C is an instantiation of a known dual attack [Albrecht, EUROCRYPT 2017]; (ii) the primal attack can outperform C+C in regimes similar to those studied by Wenger et al., supported by experimental evidence; and (iii) C+C is a consequence of a basis profile called the Z-shape, supported by both theoretical justification and experiments. To prove (i) we introduce a framework for dimension reduction in bounded distance decoding problems that may be of independent interest. For (ii) we provide an open-source implementation of the primal attack that is properly parametrised for short, sparse ternary secret LWE and guesses portions of the secret, along with an error analysis for a rounded variant of LWE that proves useful for practical cryptanalysis. Given (iii), we falsify a claim of Nolte et al.
Last updated:  2025-05-30
A Plausible Attack on the Adaptive Security of Threshold Schnorr Signatures
Elizabeth Crites and Alistair Stewart
The standard notion of security for threshold signature schemes is static security, where the set of corrupt parties is assumed to be fixed before protocol execution. In this model, the adversary may corrupt up to $t-1$ out of a threshold of $t$ parties. A stronger notion of security for threshold signatures considers an adaptive adversary, who may corrupt parties dynamically based on its view of the protocol execution, learning the corrupted parties’ secret keys as well as their states. Adaptive security of threshold signatures has become an active area of research recently due to ongoing standardization efforts. Of particular interest is full adaptive security, the analogue of static security, where the adversary may adaptively corrupt a full $t-1$ parties. We present a plausible attack on the full adaptive security of threshold Schnorr signature schemes with public key shares of the form $pk_i = g^{sk_i}$, where all secret keys $sk_i$ lie on a polynomial. We show that a wide range of threshold Schnorr signature schemes, including all variants of FROST, Sparkle, and Lindell’22, cannot be proven fully adaptively secure without modifications or without assuming the hardness of a search problem that we define in this work. We then prove a generalization that extends to settings with fewer than $t-1$ adaptive corruptions.
Last updated:  2025-05-30
Post-Quantum Multi-Message Public Key Encryption from Extended Reproducible PKE
Hongxiao Wang, Ron Steinfeld, Markku-Juhani O. Saarinen, Muhammed F. Esgin, and Siu-Ming Yiu
A multi-message multi-recipient Public Key Encryption (mmPKE) enables batch encryption of multiple messages for multiple independent recipients in one go, significantly reducing costs, particularly bandwidth, compared to the trivial solution of encrypting each message individually. This capability is especially critical in the post-quantum setting, where ciphertext length is typically significantly larger than the corresponding plaintext. In this work, we first observe that the generic construction of mmPKE from reproducible PKE proposed by Bellare et al. (PKC ’03) does not apply in the lattice-based setting because existing lattice-based PKE schemes do not fit the notion of reproducible PKE. To this end, we first extend their construction by proposing a new variant of PKE, named extended reproducible PKE (XR-PKE), which enables the reproduction of ciphertexts via additional hints. However, standard lattice-based PKE schemes, such as Kyber (EuroS&P '18), do not readily satisfy the XR-PKE requirements. To construct XR-PKE from lattices, we introduce a novel technique for precisely estimating the impact of such hints on ciphertext security while also establishing suitable parameters. This enables us to instantiate the first CPA-secure mmPKE and Multi-Key Encapsulation Mechanism (mmKEM) from the standard Module Learning with Errors (MLWE) lattice assumption, named mmCipher-PKE and mmCipher-KEM, respectively. We then extend our work to the identity-based setting and construct the first mmIBE and mmIB-KEM schemes. As a bonus contribution, we explore generic constructions of adaptively secure mmPKE, achieving security against adaptive corruption and chosen-ciphertext attacks. We also provide an efficient implementation and thorough evaluation of the practical performance of our mmCipher. Our results show that mmCipher provides significant bandwidth and computational savings in practice, compared to the state-of-the-art. For example, for 1024 recipients, our mmCipher-KEM achieves a 23--45× reduction in bandwidth overhead, reaching within 4--9% of the plaintext length (near-optimal bandwidth), while also offering a 3--5× reduction in computational cost.
Last updated:  2025-05-30
Insecurity of One Ring Signature Scheme with Batch Verification for Applications in VANETs
Zhengjun Cao and Lihua Liu
We show that the Negi-Kumar certificateless ring signature scheme [Wirel. Pers. Commun. 134(4): 1987-2011 (2024)] is insecure against forgery attacks. The signer's public key $PK_j$ and secret key $PSK_j$ are simply invoked to compute the hash value $H_{2_j}=h_5(m_j\|PSK_j\|PK_j\|t_j)$, which the verifier cannot recompute in order to check the dependency between the two keys. The explicit dependency between the public key and secret key is not properly used to construct intractable problems, such as the Elliptic Curve Discrete Logarithm (ECDL), Computational Diffie-Hellman (CDH), and Decisional Diffie-Hellman (DDH) problems. As a result, an adversary can find an efficient signing algorithm functionally equivalent to the valid signing algorithm. The findings in this note could be helpful for newcomers who are not familiar with the design techniques for certificateless ring signatures.
Last updated:  2025-05-30
On the UC-(In)Security of PAKE Protocols Without the Random Oracle Model
Naman Kumar and Jiayu Xu
A Password-Authenticated Key Exchange (PAKE) protocol allows two parties to jointly establish a cryptographic key, where the only information shared in advance is a low-entropy password. The first efficient PAKE protocol whose security does not rely on the random oracle model is the one by Katz, Ostrovsky and Yung (KOY, EUROCRYPT 2001). Unfortunately, the KOY protocol has only been proven secure in the game-based setting, and it is unclear whether KOY is secure in the stronger Universal Composability (UC) framework, which is the current security standard for PAKE. In this work, we present a thorough study of the UC-security of KOY. Our contributions are two-fold: 1. We formally prove that the KOY protocol is not UC-secure; 2. We then show that the UC-security of KOY holds in the Algebraic Group Model, under the Decisional Square Diffie-Hellman (DSDH) assumption. Overall, we characterize the exact conditions under which KOY is UC-secure. Interestingly, the DSDH assumption is stronger than DDH under which KOY can be proven game-based secure, which reveals some subtle gaps between the two PAKE security notions that have never been studied.
Last updated:  2025-05-30
Kerblam — Anonymous Messaging System Protecting Both Senders and Recipients
Yanxue Jia, Debajyoti Das, Wenhao Zhang, and Aniket Kate
While popular messaging apps already offer end-to-end confidentiality, end-to-end metadata privacy is still far from being practical. Although several metadata-hiding systems have been developed and some, like Tor, have been popular, the proposed solutions lack in one or more aspects: the Tor network is prone to easy low-resource attacks, and most others focus solely on anonymity for senders or for receivers, but not both. Some recent solutions do consider end-to-end anonymity; however, they place significant restrictions on how users use the system. In particular, the receivers must stay online or trust online servers that receive messages on their behalf. This work presents a scalable end-to-end anonymous messaging system, Kerblam, that overcomes the mentioned issues and restrictions. It stems from a key observation that combining the recently emerged oblivious message retrieval (OMR) primitive with oblivious shuffling can offer the desired end-to-end anonymity without severely restricting the number of messages a sender may send or a receiver may receive. We build our solution using two non-colluding servers, the recent OMR protocol HomeRun, and a compatible oblivious shuffle protocol. We then extend our solution to allow larger messages by employing a novel two-server distributed oblivious RAM technique, called $\mathsf{ORAM}^{-}$. Our performance analysis demonstrates that the performance improvement brought by $\mathsf{ORAM}^{-}$ grows with the number and size of messages. Specifically, for $2^{20}$ messages of size 1KB, our scheme needs only $5.577$ s to transmit a message.
Last updated:  2025-05-29
Distance-Aware OT with Application to Fuzzy PSI
Lucas Piske, Jaspal Singh, Ni Trieu, Vladimir Kolesnikov, and Vassilis Zikas
A two-party fuzzy private set intersection (PSI) protocol between Alice and Bob with input sets $A$ and $B$ allows Alice to learn nothing more than the points of Bob that are ``$\delta$-close'' to its points in some metric space $\texttt{dist}$. More formally, Alice learns only the set $\{\, b \mid \texttt{dist}(a,b) \leq \delta,\ a \in A, b \in B \,\}$ for a predefined threshold $\delta$ and distance metric $\texttt{dist}$, while Bob learns nothing about Alice's set. Fuzzy PSI is a valuable privacy tool in scenarios where private set intersection needs to be computed over imprecise or measurement-based data, such as GPS coordinates or healthcare data. Previous approaches to fuzzy PSI rely on asymmetric cryptographic primitives, generic two-party computation (2PC) techniques like garbled circuits, or function secret sharing methods, all of which are computationally intensive and lead to poor concrete efficiency. This work introduces a new modular framework for fuzzy PSI, primarily built on efficient symmetric-key primitives. Our framework reduces the design of efficient fuzzy PSI to a novel variant of oblivious transfer (OT), which we term distance-aware random OT (da-ROT). This variant enables the sender to obtain two random strings $(r_0, r_1)$, while the receiver obtains one of these values $r_b$, depending on whether the receiver’s input keyword $a$ and the sender’s input keyword $b$ are close in some metric space, i.e., $\texttt{dist}(a,b) \leq \delta$. The da-ROT can be viewed as a natural extension of traditional OT, where the condition (choice bit) is known to the receiver. We propose efficient constructions for da-ROT based on standard OT techniques tailored for small domains, supporting distance metrics such as the Chebyshev norm, the Euclidean norm, and the Manhattan norm. By integrating these da-ROT constructions, our fuzzy PSI framework achieves up to a $14\times$ reduction in communication cost and up to a $54\times$ reduction in computation cost compared to previous state-of-the-art protocols, across input set sizes ranging from $2^8$ to $2^{16}$. Additionally, we extend our framework to compute fuzzy PSI cardinality and fuzzy join from traditional PSI-related functionalities. All proposed protocols are secure in the semi-honest model.
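The da-ROT functionality is simple to state. The sketch below gives a trusted-dealer (ideal-functionality) view for the Chebyshev norm; which of the two strings the receiver obtains in the close case is an arbitrary convention here, and the actual OT-based protocols are the paper's subject.

```python
# Ideal-functionality view of distance-aware random OT (da-ROT): the sender
# gets (r0, r1); the receiver gets one of them, selected by whether its
# keyword is delta-close to the sender's keyword under the chosen metric.
import secrets

def chebyshev(a, b):
    return max(abs(x - y) for x, y in zip(a, b))

def da_rot(a, b, delta, dist=chebyshev):
    r0, r1 = secrets.token_bytes(16), secrets.token_bytes(16)
    close = dist(a, b) <= delta
    return (r0, r1), (r1 if close else r0)

sender_out, receiver_out = da_rot(a=(10, 20), b=(12, 19), delta=3)
r0, r1 = sender_out
assert receiver_out == r1          # the points are 3-close in Chebyshev distance
# In fuzzy PSI, the "close" string can key the disclosure of the matching
# point, while a far receiver ends up with the unrelated string instead.
```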
Last updated:  2025-05-29
NIZK Amplification via Leakage-Resilient Secure Computation
Benny Applebaum and Eliran Kachlon
Suppose that we are given a weak \emph{Non-Interactive Zero-Knowledge} (NIZK) proof system for NP with non-negligible soundness and zero-knowledge errors, denoted by $\alpha$ and $\beta$, respectively. Is it possible to reduce these errors to a negligible level? This problem, known as NIZK amplification, was introduced by Goyal, Jain, and Sahai (Crypto'19) and was further studied by Bitansky and Geier (Crypto'24). The latter work provides amplification theorems for proofs and arguments, assuming the existence of one-way functions and public-key encryption, respectively. Unfortunately, their results only apply when the security level, $1 - (\alpha + \beta)$, is a constant bounded away from zero. Amplifying NIZK with an inverse polynomial security level remains an open problem and was stated as the main open question in both works. In this work, we resolve the NIZK amplification problem and show how to amplify any non-trivial NIZK proof system that has a noticeable, inverse-polynomial level of security. As in previous works, we amplify proofs and arguments assuming the existence of one-way functions and public-key encryption, respectively. Furthermore, assuming the existence of collision-resistant hash functions, we preserve, for the first time, properties such as statistical zero-knowledge and proof succinctness. Our main technical contribution is a new \emph{leakage-resilient secure multiparty} protocol that computes any public-output functionality with information-theoretic security against an adversary that corrupts an arbitrary subset of parties and obtains bounded leakage from each honest party. Our protocol operates in the pairwise correlated randomness model. Previous works relied on stronger setup assumptions in the form of $n$-wise correlations and either supported a smaller corruption threshold or suffered from an exponential dependency on the number of parties. To transform our protocol into a NIZK amplifier, we introduce a new intermediate notion of \emph{leakage-resilient NP secret sharing}, that may be of independent interest.
Last updated:  2025-05-29
A Fast Multiplication Algorithm and RLWE-PLWE Equivalence for the Maximal Real Subfield of the $2^r p^s$-th Cyclotomic Field
Wilmar Bolaños, Antti Haavikko, and Rodrigo M. Sánchez-Ledesma
This paper proves the RLWE-PLWE equivalence for the maximal real subfields of the cyclotomic fields with conductor $n = 2^r p^s$, where $p$ is an odd prime, and $r \geq 0$ and $s \geq 1$ are integers. In particular, we show that the canonical embedding as a linear transform has a condition number bounded above by a polynomial in $n$. In addition, we describe a fast multiplication algorithm in the ring of integers of these real subfields. The multiplication algorithm uses the fast Discrete Cosine Transform (DCT) and has computational complexity $\mathcal{O}(n \log n)$. This work extends the results of Ahola et al., where the same claims are proved for a single prime $p = 3$.
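The transform-multiply-invert pattern behind the $\mathcal{O}(n \log n)$ cost can be illustrated with a generic FFT-based polynomial multiplication; the paper's algorithm instead uses a fast DCT matched to the basis of the ring of integers of the real subfield, which this sketch does not reproduce.

```python
# Generic O(n log n) polynomial multiplication via FFT (numpy); exact for
# small integer inputs after rounding. Illustrates only the quasi-linear
# transform-multiply-invert pattern, not the paper's DCT-based algorithm.
import numpy as np

def polymul_fft(a, b):
    n = len(a) + len(b) - 1
    size = 1 << (n - 1).bit_length()              # next power of two
    fa = np.fft.rfft(np.array(a, dtype=float), size)
    fb = np.fft.rfft(np.array(b, dtype=float), size)
    c = np.fft.irfft(fa * fb, size)[:n]
    return np.rint(c).astype(int)

a, b = [1, 2, 3, 4], [5, 6, 7, 8]
print(polymul_fft(a, b))                          # [ 5 16 34 60 61 52 32]
```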
Last updated:  2025-05-29
Fully-Homomorphic Encryption from Lattice Isomorphism
Pedro Branco, Giulio Malavolta, and Zayd Maradni
The lattice isomorphism problem (LIP) asks, given two lattices $\Lambda_0$ and $\Lambda_1$, to decide whether there exists an orthogonal linear map from $\Lambda_0$ to $\Lambda_1$. In this work, we show that the hardness of (a circular variant of) LIP implies the existence of a fully-homomorphic encryption scheme for all classical and quantum circuits. Prior to our work, LIP was only known to imply the existence of basic cryptographic primitives, such as public-key encryption or digital signatures.
Last updated:  2025-06-02
Improved Private Simultaneous Messages Protocols for Symmetric Functions with Universal Reconstruction
Koji Nuida
Private Simultaneous Messages (PSM) is a kind of secure multiparty computation with a minimal interaction pattern and minimal security requirement. A PSM protocol is said to be with universal reconstruction for a given function family if the algorithm of the referee (the output party) is independent of the function to be computed and the referee cannot infer the function from a protocol execution. In a recent work, Eriguchi and Shinagawa (EUROCRYPT 2025) proposed a compiler to obtain a PSM protocol for symmetric functions from PSM protocols with universal reconstruction for symmetric functions with smaller domains. They also constructed the latter PSM protocols with universal reconstruction, by which the former PSM protocol achieves communication complexity better than the previously known protocols. In this paper, we construct the latter PSM protocols with universal reconstruction for symmetric functions more efficiently; the communication complexity is exponentially (in the input range) smaller than that of the protocols by Eriguchi and Shinagawa. As a consequence, we also obtain a PSM protocol (and also an ad-hoc PSM protocol and a robust PSM protocol) for symmetric functions that is more efficient than their protocol. Technically, a main ingredient of their protocols is a linear and injective encoding of histograms of the input elements, and our improvement is realized by finding a more efficient encoding of the histograms.
Last updated:  2025-05-29
MOAI: Module-Optimizing Architecture for Non-Interactive Secure Transformer Inference
Linru Zhang, Xiangning Wang, Jun Jie Sim, Zhicong Huang, Jiahao Zhong, Huaxiong Wang, Pu Duan, and Kwok Yan Lam
The advent of Large Language Models (LLMs) has brought about a new wave of productivity, revolutionizing business operations while keeping costs relatively low. The human-like interface of LLMs enables them to be easily integrated with business functions, thereby freeing up precious human resources for more complex, valuable tasks. However, due to the intensive computation and memory requirements of LLM inference, it is preferable and cheaper to deploy LLMs with Cloud Service Providers (CSPs) that offer high-performance computation resources and low-latency networking. Nevertheless, privacy concerns have been raised about the possibility of data leakage to the CSP. In this work, we seek to address such privacy concerns through the use of Fully Homomorphic Encryption (FHE). FHE enables the CSP to work on data in its encrypted form, thus ensuring that the data stay private and secure. We propose an implementation of LLM inference with FHE. While a series of prior works has demonstrated that it is possible to execute LLM inference in a private manner, it remains a challenge to design a solution that is practical. Our contributions are as follows. We provide the first end-to-end open-source implementation of non-interactive transformer inference with FHE. We report an amortized time of 9.6 minutes for one input with 128 tokens when evaluating the BERT model on CPU. Our packing methods for encrypted matrices remove the need to repack ciphertexts between encrypted matrix multiplication and activation layers. Additionally, we introduce interleaved batching to eliminate the internal rotations during ciphertext matrix multiplications. Our approach also avoids HE rotations in evaluations of the softmax and layerNorm, leading to speedups of 4.22× and 122× over existing works, respectively. Our implementation supports arbitrary token lengths, in contrast with existing solutions that require a full token embedding. Our implementation can be found on GitHub.
Last updated:  2025-05-29
Lower Bounds on the Bottleneck Complexity of Secure Multiparty Computation
Reo Eriguchi and Keitaro Hiwatashi
Secure multiparty computation (MPC) is a cryptographic primitive which enables multiple parties to jointly compute a function without revealing any extra information on their private inputs. Bottleneck complexity is an efficiency measure that captures the load-balancing aspect of MPC protocols, defined as the maximum amount of communication required by any party. In this work, we study the problem of establishing lower bounds on the bottleneck complexity of MPC protocols. While the previously known techniques for lower bounding total communication complexity can also be applied to bottleneck complexity, they do not provide nontrivial bounds in the correlated randomness model, which is commonly assumed by existing protocols achieving low bottleneck complexity, or they are applied only to functions of limited practical interest. We propose several novel techniques for lower bounding the bottleneck complexity of MPC protocols. Our methods derive nontrivial lower bounds even in the correlated randomness model and apply to practically relevant functions including the sum function and threshold functions. Furthermore, our lower bounds demonstrate the optimality of some existing MPC protocols in terms of bottleneck complexity or the amount of correlated randomness.
Last updated:  2025-06-02
List Decoding in Private Information Retrieval: Formal Definition and Efficient Constructions
Reo Eriguchi, Kaoru Kurosawa, and Koji Nuida
Multi-server Private Information Retrieval (PIR) is a cryptographic primitive that allows a client to retrieve an item of a database shared by multiple servers without revealing the index. This paper addresses the problem of error correction in multi-server PIR, enabling the client to obtain the item even when some servers provide incorrect responses. In a dishonest-majority setting where the majority of servers may introduce errors, it is known that the client can no longer uniquely determine the correct value. Previous approaches in this setting have typically settled for relaxed guarantees that the client can only reject incorrect values. However, these guarantees are substantially weak, as they only indicate the presence of errors without providing any information about the desired item. In this paper, we explore a more natural alternative called list-decodable PIR, which ensures that the client receives a small list of candidate values one of which is correct. We provide the first formal definition of list-decodable PIR and study its basic properties including a fundamental lower bound on the number of servers and the difficulty of simulation-based security definitions. We propose generic constructions of list-decodable PIR from any semi-honest PIR protocols, each offering different trade-offs. Our constructions improve upon the communication complexity of the only previously known protocol satisfying our definition. Furthermore, they achieve communication complexity comparable to that of the currently best known semi-honest PIR protocols.
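For context, the classic honest-server two-server XOR scheme below is the kind of object being strengthened: the two answers XOR to the desired bit, and the paper asks what guarantee can survive once a majority of the answers may be wrong.

```python
# Classic 2-server PIR over a bit database: each query vector alone is
# uniformly random, so neither server learns the retrieved index.
import secrets

def query(n: int, index: int):
    q0 = [secrets.randbits(1) for _ in range(n)]
    q1 = q0.copy()
    q1[index] ^= 1                       # the two queries differ only at index
    return q0, q1

def answer(db, q):                       # each server XORs its selected bits
    acc = 0
    for bit, sel in zip(db, q):
        acc ^= bit & sel
    return acc

db = [1, 0, 1, 1, 0, 0, 1, 0]
q0, q1 = query(len(db), index=3)
assert answer(db, q0) ^ answer(db, q1) == db[3]
# With dishonest answers, the XOR may be arbitrary; list decoding asks for a
# short candidate list guaranteed to contain the correct db[3].
```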
Last updated:  2025-05-28
Dynamic Security: A Realistic Approach to Adaptive Security With Applications to Strong FaF Security
Bar Alon and Naty Peter
Secure multiparty computation allows multiple parties to jointly compute a function while maintaining security even in the presence of malicious adversaries. There are two types of adversaries in the literature: static adversaries, which choose the parties to corrupt before the protocol begins; and adaptive adversaries, which can corrupt parties during the execution of the protocol based on the messages exchanged by the parties. While adaptive security provides a more robust security guarantee, it may require too much in certain scenarios. Indeed, the adversary must allocate some of its resources to corrupt the parties; however, certain parties might be more susceptible to corruption, for instance, if they have not updated their operating system to the latest version. To address this, we introduce a new security notion called \emph{dynamic security}. Here, adversaries may corrupt new parties \emph{during and after} the protocol's execution, but \emph{cannot choose} targets based on the messages. A protocol is said to be $(t,h)$-dynamically secure if it is possible to simulate any adversary that can corrupt up to $t$ parties during the execution and $h$ thereafter. Dynamic security provides meaningful security for many real-world scenarios. Moreover, it circumvents known lower bounds on the communication complexity of adaptive security, allowing for more efficient protocols such as committee-based ones, which would be insecure against adaptive adversaries. We further explore dynamic security and establish the following results. 1. We show a surprising connection between dynamic security and the seemingly unrelated notion of security with friends and foes (FaF security), introduced by Alon et al. (CRYPTO 2020), which aims to protect honest parties not only from adversaries but also against other honest parties. The notion of $(t,h)$-\emph{strong FaF security} strengthens this by requiring the simulatability of the joint view of any $t$ malicious parties alongside any $h$ honest parties to be indistinguishable from their real-world view. We show that $(t,h)$-dynamic security and $(t,h)$-strong FaF security are equivalent. 2. We consider the feasibility of $(t,h)$-dynamic security and show that every $n$-party functionality can be computed with computational $(t,h)$-dynamic security (with guaranteed output delivery) if and only if $2t+h<n$. By our previous result, this also solves an open problem left by Alon et al. on the feasibility of strong FaF security.
Last updated:  2025-05-28
Security of Linear Secret Sharing Schemes with Noisy Side-Channel Leakage
Utkarsh Gupta and Hessam Mahdavifar
Secret sharing is a foundational cryptographic primitive for sharing secret keys in distributed systems. In a classical threshold setting, it involves a dealer who has a secret, a set of $n$ users to whom shares of the secret are sent, and a threshold $t$ which is the minimum number of shares required to recover the secret. These schemes offer an all-or-nothing security approach where fewer than $t$ shares reveal no information about the secret. But these guarantees are threatened by side-channel attacks which can leak partial information from each share. Initiated by Benhamouda et al. (CRYPTO'18), the security of such schemes has been studied for precise and worst-case bounded leakage models. However, in practice, side-channel attacks are inherently noisy. In this work, we propose a noisy leakage model for secret sharing, where each share is independently leaked to an adversary, with the leakage corrupted by additive noise in the underlying field $\mathbb{F}_q$. Under this model, we study the security of linear secret sharing schemes, and show bounds on the mutual information (MI) and statistical distance (SD) security metrics. We do this by using the MacWilliams identity from the theory of error-correcting codes. For a given secret, it enables us to bound the statistical deviation of the leaked shares from uniform as $\delta^t$, where $\delta$ is the Fourier bias of the added noise. Existing analyses for the security of linear $(n,t)$-threshold schemes only bound the SD metric, and show resilience for schemes with $t \ge 0.668n$. In this work, we show that these constraints are artifacts of the bounded leakage model. In particular, we show that $(n,t)$-threshold schemes over $\mathbb{F}_q$ with $t \ge \tau (n+1)$ leak $\mathcal{O}(q^{-2t(\gamma+1-1/\tau)})$ bits about the secret, given that the bias of the added noise satisfies $\delta \le q^{-\gamma}$. To the best of our knowledge, this is the first attempt towards understanding the side-channel security of linear secret sharing schemes under the MI metric.
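The Fourier bias $\delta$ driving the $\delta^t$ bound is straightforward to compute numerically. The toy below assumes a discretized-Gaussian-like noise shape over $\mathbb{F}_{17}$ (an illustrative choice, not the paper's parameters) and takes the largest nontrivial DFT magnitude of the noise distribution.

```python
# Fourier bias of an additive-noise distribution D over F_q: the largest
# magnitude of a nontrivial character sum sum_x D(x) omega^{ax}. Smaller
# bias means the noised shares are closer to uniform.
import numpy as np

q = 17
# Toy noise: discretized-Gaussian-like weights centered at 0 mod q.
w = np.array([np.exp(-min(x, q - x) ** 2 / 8.0) for x in range(q)])
D = w / w.sum()

spectrum = np.fft.fft(D)
bias = max(abs(spectrum[a]) for a in range(1, q))
print(f"Fourier bias delta ~ {bias:.4f}; t noised shares deviate ~ delta^t")
```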
Last updated:  2025-05-28
The Rényi Smoothing Parameter and Its Applications in Lattice-Based Cryptography
Cong Ling, Laura Luzzi, and Hao Yan
The smoothing parameter is a cornerstone concept in lattice-based cryptography. Traditionally defined using the \( L^{\infty} \) distance, this standard formulation can be overly stringent compared to the \( L^1 \) (or statistical) distance more commonly employed in cryptographic contexts. Recent work has proposed relaxed definitions based on Kullback-Leibler (KL) divergence and \( L^1 \) distance, thereby loosening the constraints required for the distance to vanish. However, the additive nature of the \( L^1 \) distance can be limiting for cryptographic applications where probability preservation is essential. In this paper, we introduce the Rényi smoothing parameter of a lattice, based on Rényi divergence, to address this limitation. The advantages of Rényi divergence in cryptographic settings are well known thanks to its multiplicative nature. The Rényi smoothing parameter provides a tunable framework that interpolates between the \( L^1 \) and \( L^{\infty} \) distances, offering enhanced flexibility. We present two complementary methods to study the averaging behavior of the Rényi flatness factor: one uses classical tools such as the Minkowski-Hlawka ensemble and Rogers’ formula for computing moments of lattice functions; the other employs Construction A lattices derived from random codes. Finally, we illustrate how this new perspective yields improvements in lattice-based cryptographic constructions.
Last updated:  2025-05-28
Tighter Quantum Security for Fiat-Shamir-with-Aborts and Hash-and-Sign-with-Retry Signatures
Pouria Fallahpour, Serge Fehr, and Yu-Hsuan Huang
We revisit the quantum security (in the QROM) of digital signature schemes that follow the Fiat-Shamir-with-aborts (FSwA) or the probabilistic hash-and-sign with retry/abort (HSwA) design paradigm. Important examples of such signature schemes are Dilithium, SeaSign, Falcon+ and UOV. In particular, we are interested in the UF-CMA-to-UF-NMA reduction for such schemes. We observe that previous such reductions have a reduction loss that is larger than what one would hope for, or require a more stringent notion of zero-knowledge than one would hope for. We resolve this matter here by means of a novel UF-CMA-to-UF-NMA reduction that applies to FSwA and HSwA signature schemes simultaneously, and that offers an improved reduction loss (without making the zero-knowledge assumption more stringent).
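For orientation, the abort mechanism whose QROM analysis is tightened has the following schematic shape (toy integer ranges, not Dilithium or any concrete scheme): the signer resamples until the response lands in a range on which its distribution is independent of the secret.

```python
# Schematic Fiat-Shamir-with-aborts signing loop. With |c*s| <= B, the
# accepted response z is uniform on [B, GAMMA) regardless of the secret s,
# which is the leakage-free property rejection sampling buys.
import hashlib, secrets

B, GAMMA = 100, 100_000            # toy bounds: c*s <= B, y sampled in [0, GAMMA)

def H(w: int, msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(str(w).encode() + msg).digest(), "big") % 3

def sign(s: int, msg: bytes):
    while True:                              # "with aborts": resample on reject
        y = secrets.randbelow(GAMMA)
        w = y                                # toy commitment; real schemes use A*y
        c = H(w, msg)
        z = y + c * s
        if B <= z < GAMMA:                   # accept only responses in a safe range
            return w, c, z

print(sign(s=37, msg=b"hello"))
```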
Last updated:  2025-05-28
AsconAEAD128 Revisited in the Multi-user Setting
Bishwajit Chakraborty, Mridul Nandi, Soumit Pal, Thomas Peyrin, and Quan Quan Tan
After more than half a decade since its initiation, NIST declared Ascon the winner of the LWC competition. In the first public draft of AsconAEAD128, NIST recognized that Ascon has limitations when used in multi-user applications. To mitigate this, NIST prescribed the use of a \(256\)-bit key in multi-user applications and produced an instantiation of how to process this extra key size in the current AsconAEAD128 API. While doing so, they identified a limitation of this new scheme (which we refer to as mu-Ascon in this document): mu-Ascon is vulnerable to committing attacks and hence cannot be used in cases where committing security is required. On the other hand, the full key-binding property in Ascon, which separated it from other sponge-type constructions, has been used to show that Ascon is much stronger in the sense that it resists key recovery even in the case where some intermediate state is recovered. We remark that the current mu-Ascon has the limitation that only a partial key is bound during initialization and finalization. In this work, we propose alternative instantiations of the AsconAEAD128 API for multi-user applications. In comparison with the current mu-Ascon proposal, our first construction Ascon-256.v2 guarantees CMT-4 committing security up to 64 bits, and our second construction Ascon-256.v3 provides both CMT-4 committing security and full 256-bit key binding. Structurally, our instantiations use only one extra permutation call to provide these extra security features compared to mu-Ascon, which incurs a negligible overhead in terms of performance (given the lightweight nature of the Ascon permutation).
Last updated:  2025-05-28
LP2+: a robust symmetric-key AKE protocol with perfect forward secrecy, and an advocacy for thorough security proofs
Pierre-Alain Jacqmin and Jean Liénardy
Symmetric-key authenticated key establishment (AKE) protocols are particularly well suited to resource-constrained environments such as internet of things (IoT) devices. Moreover, they often rely on better-understood assumptions than asymmetric ones. In this paper, we review the security model for symmetric-key AKE protocols. We show why several existing models allow trivial attacks while not protecting against some non-trivial ones. We fix these issues with our new security definitions. We show that the protocols $\textrm{LP2}$ and $\textrm{LP3}$ of Boyd et al. do not satisfy the claimed security properties. We propose a new 2-message protocol based on them, called $\textrm{LP2+}$. This protocol is proved to satisfy correctness, weak synchronization robustness, entity authentication, and key indistinguishability and, as a consequence, it admits perfect forward secrecy. An instantiation of $\textrm{LP2+}$ is presented, whose security relies only on that of a pseudo-random function (PRF). Its total execution time in normal cases is dominated by only 14 evaluations of the PRF, making it a lightweight protocol that is particularly well suited for resource-constrained environments such as IoT devices. The flaws found in the security models as well as in the security arguments could have been avoided with precise and detailed proofs. We thus take this paper as an opportunity to advocate for thorough security proofs. Therefore, we have made the choice of rigor over concision.
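To make the shape of such protocols concrete, below is a minimal two-message challenge-response AKE built from a shared PRF key. This is an illustrative sketch only, not LP2+ itself: the message formats, labels, and key derivation are assumptions, and this static-key toy provides no forward secrecy (LP2+ obtains it by evolving the shared secret, which the sketch omits).

```python
# Hedged sketch of a generic two-message symmetric-key AKE: nonce exchange
# authenticated by a PRF, with the session key derived from both nonces.
import hmac, hashlib, os

def prf(key: bytes, msg: bytes) -> bytes:
    # HMAC-SHA256 standing in for the abstract PRF.
    return hmac.new(key, msg, hashlib.sha256).digest()

def initiator_msg1() -> bytes:
    return os.urandom(16)                      # fresh nonce N_A

def responder_msg2(k: bytes, n_a: bytes):
    n_b = os.urandom(16)                       # fresh nonce N_B
    tag = prf(k, b"auth" + n_a + n_b)          # authenticates both nonces
    return n_b, tag

def initiator_finish(k: bytes, n_a: bytes, n_b: bytes, tag: bytes) -> bytes:
    assert hmac.compare_digest(tag, prf(k, b"auth" + n_a + n_b))
    return prf(k, b"key" + n_a + n_b)          # derived session key

# Example run with a shared long-term key.
k = os.urandom(32)
n_a = initiator_msg1()
n_b, tag = responder_msg2(k, n_a)
sk_initiator = initiator_finish(k, n_a, n_b, tag)
sk_responder = prf(k, b"key" + n_a + n_b)
assert sk_initiator == sk_responder
```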
Last updated:  2025-05-28
Simulatability SOA Does Not Imply Indistinguishability SOA in the CCA Setting
Hans Heum
Contrary to expectation, we show that simulation-based selective-opening security (SSO) does not imply indistinguishability-based selective opening security (ISO) in the CCA setting, making them incomparable in the presence of either encryption randomness leakage (sender opening) or secret key leakage (receiver opening). This contrasts the CPA case, where SSO-CPA is known to be strictly stronger than ISO-CPA in the presence of sender and/or receiver opening. Our separation result holds relative to all message distributions with sufficiently high min-entropy. On the other hand, restricting to message distributions with low enough min-entropy gives rise to an implication. Our separation result does not rely on the presence of selective openings. At a glance, this may seem to contradict known equivalence results between indistinguishability, semantic security, and selective opening security under trivial openings. We reconcile the apparent contradiction by showing that the selective-opening CCA landscape splits into a “high-entropy” and a “low-entropy” world which must be considered separately.
Last updated:  2025-05-28
Algebraic Cryptanalysis of AO Primitives Based on Polynomial Decomposition: Applications to Rain and Full AIM-I/III/V
Hong-Sen Yang, Qun-Xiong Zheng, and Jing Yang
The LowMC-based post-quantum signature scheme Picnic was selected as a third-round candidate for NIST PQC, attracting wide attention to the design of efficient and secure post-quantum signature schemes using Symmetric Techniques for Advanced Protocols (STAP). Symmetric primitives designed for advanced protocols such as secure multi-party computation (MPC), fully homomorphic encryption (FHE), and zero-knowledge (ZK) proof systems, with the goal of reducing the number of multiplication operations, are referred to as arithmetic-oriented (AO) primitives. These cryptographic primitives are typically constructed over large finite fields, which makes classical statistical analysis methods like differential and linear cryptanalysis inefficient. Due to their inherent algebraic properties, the mainstream security evaluation approaches are based on algebraic attacks. In this paper, we analyze the security of the MPC-friendly primitives \textsc{Rain} (CCS 2022) and AIM (CCS 2023) used in the post-quantum signature schemes Rainier and AIMer. Existing algebraic attacks on \textsc{Rain} and AIM were conducted over $\mathbb{F}_2$. We propose a novel algebraic attack over $\mathbb{F}_{2^n}$ that uses polynomial decomposition to reduce the degrees of the equations. By further combining it with the guess-and-determine technique, meet-in-the-middle modeling, and resultants, we are able to attack \textsc{Rain} and the full AIM. Specifically, we successfully attacked 2-round \textsc{Rain} with $2^{73.7}/2^{107.0}/2^{138.9}$ primitive calls and 3-round \textsc{Rain} with $2^{160.6}/2^{236.0}/2^{311.1}$ primitive calls, for the $128/192/256$-bit key. For the full AIM, we successfully attacked it with $2^{114.0}/2^{163.2}/2^{228.3}$ primitive calls for the $128/192/256$-bit key. The attack complexities mainly lie in solving univariate polynomial equations and computing resultants, and hence the complexity evaluations are accurate.
Last updated:  2025-05-28
Formal Security and Functional Verification of Cryptographic Protocol Implementations in Rust
Karthikeyan Bhargavan, Lasse Letager Hansen, Franziskus Kiefer, Jonas Schneider-Bensch, and Bas Spitters
We present an effective methodology for the formal verification of practical cryptographic protocol implementations written in Rust. Within a single proof framework, we show how to develop machine-checked proofs of diverse properties like runtime safety, parsing correctness, and cryptographic protocol security. All analysis tasks are driven by the software developer, who writes annotations in the Rust source code and chooses a backend prover for each task, ranging from a generic proof assistant like F$\star$ to dedicated crypto-oriented provers like ProVerif and SSProve. Our main contribution is a demonstration of this methodology on Bert13, a portable, post-quantum implementation of TLS 1.3 written in Rust and verified both for security and functional correctness. To our knowledge, this is the first security verification result for a protocol implementation written in Rust, and the first verified post-quantum TLS 1.3 library.
Last updated:  2025-05-28
Collision Attacks on Reduced RIPEMD-128
Zhengrong Lu, Hongbo Yu, Xiaoen Lin, and Sitong Yuan
RIPEMD-128 is an ISO/IEC standard hash function based on a double-branch Merkle-Damgård structure. Its compression function includes two branches with distinct Boolean functions and message expansion permutations. To perform a collision attack, differential characteristics must be constructed simultaneously for both branches under the same message word difference, and the message modification order must align with conditions in both branches. These factors make collision attacks on (reduced) RIPEMD-128 highly challenging. In 2014, an attack on 40 steps of RIPEMD-128 was achieved by Wang with no state differences in round 3. In this work, we analyze message permutation properties and propose two new structures for creating message differences. These structures enable high-probability local collisions in both branches of round 3, extending the attack to more steps. Notably, the second structure can eliminate all state differences in round 3, allowing the attack to cover more than three whole rounds. To ensure practical attacks, we limit the number of conditions based on our message modification strategy and use multi-step message modification techniques to control more conditions. As a result, we successfully generate colliding message pairs for 46-step and 54-step reduced RIPEMD-128, with time complexities of approximately $2^{42}$ and $2^{54}$, respectively.
Last updated:  2025-05-28
Multi-Party Distributed Point Functions with Polylogarithmic Key Size from Invariants of Matrices
Toomas Krips and Pille Pullonen-Raudvere
Distributed point functions (DPFs), introduced in 2014, are a widely used primitive in secure computation for a wide variety of applications. However, until now, constructions for DPFs with polylogarithmic-size keys have been known only for the two-party setting. We propose a scheme for a polylogarithmic-size DPF for an arbitrary number of parties. We use a technique where a secret-shared vector, mapped to collinear vectors by public matrices, serves as an invariant for off-path leaves. We show, using a technique by Shamir, that when we work over $\mathbb{Z}_{pq}$, these vectors are hard to compute if factoring is hard. We also show that our scheme is a secure DPF, provided that two new assumptions hold, one of which is related to the Generic Group Model and the other to MinRank. The output of our scheme is in the exponent in some group where Diffie-Hellman-type problems are hard. Although this limits the usability of our scheme, we believe that our scheme is the first distributed point function for more than two parties with a key size that is polylogarithmic in the size of the domain and that does not use fully homomorphic encryption.
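For readers new to the primitive, the sketch below illustrates only the DPF interface, via the trivial construction that additively shares the entire truth table; its keys are linear in the domain size, which is precisely the cost a polylogarithmic-key construction like the paper's is designed to avoid. The modulus and dimensions are illustrative assumptions.

```python
# Trivial multi-party DPF: keys are additive shares of the truth table of
# the point function f(alpha) = beta, f(x) = 0 elsewhere.
import os

P = 2**61 - 1  # toy prime modulus (an assumption for illustration)

def dpf_gen(alpha: int, beta: int, domain: int, parties: int):
    table = [beta if x == alpha else 0 for x in range(domain)]
    keys = [[int.from_bytes(os.urandom(8), "big") % P for _ in range(domain)]
            for _ in range(parties - 1)]
    last = [(table[x] - sum(k[x] for k in keys)) % P for x in range(domain)]
    return keys + [last]

def dpf_eval(key, x: int) -> int:
    return key[x] % P

# Correctness: shares of f(x) sum to beta at alpha and to 0 elsewhere.
keys = dpf_gen(alpha=5, beta=42, domain=8, parties=3)
assert sum(dpf_eval(k, 5) for k in keys) % P == 42
assert sum(dpf_eval(k, 3) for k in keys) % P == 0
```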
Last updated:  2025-05-28
A Novel Leakage Model in OpenSSL’s Miller-Rabin Primality Test
Xiaolin Duan, Fan Huang, Yaqi Wang, and Honggang Hu
At Crypto 2009, Heninger and Shacham presented a branch-and-prune algorithm for reconstructing RSA private keys given a random fraction of their private components. This method is widely adopted in side-channel attacks, and its complexity is closely related to the specific leakage pattern encountered. In this work, we identify a novel leakage model in the Miller-Rabin primality test implemented in OpenSSL. Under certain side-channel attacks against fixed-window modular exponentiation (e.g., recovering the least significant $b$ bits from each window), the proposed model enables staggered recovery of bits in $p$ and $q$, reducing uncertainty in key reconstruction. In particular, this model includes previously undocumented scenarios where full key recovery is achievable without branching. To understand how the proposed leakage model could contribute to attacks on modular exponentiation, we investigated the global and local behavior of key reconstruction. Our evaluation demonstrates that the proposed scenarios enable more efficient key reconstruction and retain this advantage when additional erasure bits are introduced. Moreover, in specific cases, successful reconstruction remains achievable within practical time even if fewer than 50% of the bits are obtained. Finally, we conducted a series of experiments to confirm the practicality of our assumption, successfully recovering the lower 4 bits from each 6-bit window.
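As background, here is a hedged sketch of the Heninger-Shacham-style branch-and-prune reconstruction that such leakage models feed into: candidate low bits of $p$ and $q$ are extended bit by bit and pruned against $N \bmod 2^{i+1}$. The toy parameters and the "every other bit known" oracle are assumptions for illustration, not the paper's leakage model.

```python
# Branch-and-prune recovery of (p, q) from partial knowledge of their bits.
# known_p[i] / known_q[i] is the i-th bit of p / q, or None if unknown.

def branch_and_prune(N, known_p, known_q, nbits):
    cands = [(1, 1)]  # p and q are assumed odd, so bit 0 is 1
    for i in range(1, nbits):
        nxt = []
        for p, q in cands:
            for bp in ([known_p[i]] if known_p[i] is not None else [0, 1]):
                for bq in ([known_q[i]] if known_q[i] is not None else [0, 1]):
                    p2, q2 = p | (bp << i), q | (bq << i)
                    # Prune: low bits must satisfy p*q = N mod 2^(i+1).
                    if (p2 * q2) % (1 << (i + 1)) == N % (1 << (i + 1)):
                        nxt.append((p2, q2))
        cands = nxt
    return [(p, q) for p, q in cands if p * q == N]

# Toy example: hypothetical odd 16-bit factors, every other bit revealed.
p, q = 0xF46F, 0xC96B
N = p * q
known_p = [(p >> i) & 1 if i % 2 == 0 else None for i in range(16)]
known_q = [(q >> i) & 1 if i % 2 == 0 else None for i in range(16)]
print(branch_and_prune(N, known_p, known_q, 16))
```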
Last updated:  2025-06-02
The Large Block Cipher Family Vistrutah
Roberto Avanzi, Bishwajit Chakraborty, and Eik List
Vistrutah is a large block cipher with block sizes of 256 and 512 bits. It iterates a step function that applies two AES rounds to each 128-bit block of the state, followed by a state-wide cell permutation. Like Simpira, Haraka, Pholkos, and ASURA, Vistrutah leverages AES instructions to achieve high performance. For each component of Vistrutah, we conduct a systematic evaluation of functions that can be efficiently implemented on both Intel and Arm architectures. We therefore expect them to perform efficiently on any recent vector instruction set architecture (ISA) with AES support. Our evaluation methodology combines latency estimation on an abstracted vector ISA with security analysis. The goal is to maximize the ratio of "bits of security per unit of time", i.e., to achieve the highest security for a given performance target, or equivalently, the best performance for a given security level within this class of designs. Implementations confirm the accuracy of our latency model. Vistrutah even performs significantly better than Rijndael-256-256. We support our security claims with a comprehensive ad hoc cryptanalysis. An isomorphism between Vistrutah-512, the 512-bit wide variant, and the AES allows us to also leverage the extensive cryptanalysis of the AES and apply it to Vistrutah-512. A core design principle is the use of an inline key schedule: all round keys are computed during each encryption or decryption operation without requiring memory storage. In fact, rekeying has no associated overhead. Key schedules like the AES’s must precompute and store round keys in memory for acceptable performance. However, in 2010 Kamal and Youssef showed that this makes cold boot attacks more effective. Vistrutah’s approach minimizes leakage to at most one value during context switches. Furthermore, expensive key schedules reduce key agility, limiting the design of modes of operation. Vistrutah is particularly well-suited for birthday-bound modes of operation, including Synthetic IV modes and Accordion modes for 256-bit block ciphers. It can serve as a building block for compression functions (such as Matyas-Meyer-Oseas) in wide Merkle-Damgård hash functions. Additionally, it can implement "ZIP" wide pseudo-random functions as recently proposed by Florez-Gutierrez et al. in 2024. We include related-key security analysis for two critical reasons. First, strong related-key security demonstrates the robustness of both the key schedule and the cipher as a whole. Second, Vistrutah’s key agility enables mode designers to place values from counters (or other update functions) in the key input rather than the plaintext input. This approach simplifies achieving beyond-birthday-bound security. Finally, we present short, i.e., reduced-round, versions of Vistrutah which are analyzed taking into account the restrictions posed on attackers by specific modes of operation. In particular, we model the use of the block ciphers in Hash-Encrypt-Hash (HEH) constructions such as HCTR2 as well as in ForkCiphers. These short versions of Vistrutah can be used to accelerate modes of operation without sacrificing security.
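A structural sketch of the step function described above (two AES rounds per 128-bit slice, then a state-wide cell permutation). The round function and the permutation below are toy stand-ins chosen so the skeleton runs end to end; they are assumptions, not Vistrutah's actual specification.

```python
import os

def toy_round(block: bytes, rk: bytes) -> bytes:
    # NOT a real AES round: a byte rotation plus key XOR, used only so the
    # structural skeleton executes. On real hardware this slot would be
    # filled by one AES round via AES-NI or Arm crypto extensions.
    rotated = block[1:] + block[:1]
    return bytes(b ^ k for b, k in zip(rotated, rk))

def step(state: bytes, rk1: bytes, rk2: bytes, cell_perm) -> bytes:
    assert len(state) in (32, 64)          # 256- or 512-bit state
    # 1. Two rounds on every 128-bit slice of the state.
    slices = [state[i:i + 16] for i in range(0, len(state), 16)]
    slices = [toy_round(toy_round(s, rk1), rk2) for s in slices]
    # 2. State-wide cell permutation to mix across the slices.
    flat = b"".join(slices)
    return bytes(flat[cell_perm[i]] for i in range(len(flat)))

# Toy usage on a 512-bit state with an arbitrary bijective cell permutation.
perm = [(17 * i) % 64 for i in range(64)]
out = step(os.urandom(64), os.urandom(16), os.urandom(16), perm)
assert len(out) == 64
```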
Last updated:  2025-05-28
Incompressible Encryption with Everlasting Security
Eylon Yogev and Shany Ben-David
Recently, the concept of incompressible encryption has emerged as a powerful enhancement to key-leakage resilience. In an incompressible encryption scheme, an adversary who intercepts ciphertexts is forced to dedicate a significant amount of memory to store them in full if they wish to extract any information about the plaintext later when the secret key becomes available. Given two messages, the security game involves two adversaries: the first adversary receives an encryption of one of the messages and produces a compressed state. Then, the second adversary, given both the secret key and the compressed state, attempts to determine which message was encrypted. Several positive results exist in incompressible cryptography. On the one hand, there are constructions based on minimal assumptions but with a poor rate (i.e., rate tends to 0). On the other hand, there are rate-1 constructions that achieve optimal efficiency but rely on strong cryptographic assumptions, such as obfuscation. A stronger security notion, known as everlasting security, has been proposed for incompressible encryption. In this formulation, the second adversary, who receives the compressed state and the secret key, is allowed to be computationally unbounded. While this notion is conceptually appealing, no constructions of everlasting incompressible encryption are currently known, regardless of the underlying assumption or even in idealized models. In this work, we give the first construction of everlasting incompressible encryption. In fact, we show that everlasting incompressible encryption is inherent in any sufficiently secure public-key encryption scheme. Specifically, we prove that any public-key encryption scheme with subexponential security (when instantiated with an appropriate security parameter) already satisfies the definition of everlasting incompressible encryption with subexponential security. Furthermore, our scheme achieves rate-1, improving upon existing results even for the weaker notion of standard incompressible encryption.
Last updated:  2025-05-28
OptAttest: Verifying Multi-List Multi-Hop History via a Hybrid Zero-Knowledge Architecture
Joshua G. Stern
To prevent privacy-preserving digital assets from becoming instruments of despotism via unitary-executivist compliance regimes, we propose OptAttest, a hybrid zero-knowledge architecture. This system empowers users to optionally generate verifiable attestation history for the current (Hop 0) and immediately preceding (Hop 1) transactions involving their private commitments. For crucial 0-hop multi-list attestations, users employ Zero-Knowledge Proofs (ZKPs) of claims from selected Verifiable Credentials (VCs). Users achieve per-transaction efficiency with diverse VC types by pre-computing and caching proofs of their VC validity. This approach avoids mandated adherence to singular, fallible external standards. Opted-in lightweight updates create cryptographic accumulator summaries, verified by network infrastructure (e.g., Layer 2 scaling solutions using Zero-Knowledge Virtual Machines), and are paired with user-managed Intermediate Attestation Data Packets (IADPs) containing detailed evidence. For comprehensive verification, users can then generate full recursive proofs from these IADPs for their attestation-enabled funds, leveraging native zkVM recursion. The protocol facilitates optional attestation generation, not enforcement, allowing downstream policy application. Aiming to cultivate a permissionless ethos, we propose a user-centric balance between privacy and verifiable accountability, distinct from models compelling broader data access. Folding schemes are noted as potential future enhancements for recursive proof efficiency.
Last updated:  2025-05-28
On Proving Equivalence Class Signatures Secure from Non-interactive Assumptions
Balthazar Bauer, Georg Fuchsbauer, and Fabian Regen
Equivalence class signatures (EQS), introduced by Hanser and Slamanig (AC’14, J.Crypto’19), sign vectors of elements from a bilinear group. Their main feature is “adaptivity”: given a signature on a vector, anyone can transform it to a (uniformly random) signature on any multiple of the vector. A signature thus authenticates equivalence classes and unforgeability is defined accordingly. EQS have been used to improve the efficiency of many cryptographic applications, notably (delegatable) anonymous credentials, (round-optimal) blind signatures, group signatures and anonymous tokens. EQS security implies strong anonymity (or blindness) guarantees for these schemes which hold against malicious signers without trust assumptions. Unforgeability of the original EQS construction is proven directly in the generic group model. While there are constructions from standard assumptions, these either achieve prohibitively weak security notions (PKC’18) or they require a common reference string (AC’19, PKC’22), which reintroduces trust assumptions avoided by EQS. In this work we ask whether EQS schemes that satisfy the original security model can be proved secure under standard (or even non-interactive) assumptions with standard techniques. Our answer is negative: assuming a reduction that, after running once an adversary breaking unforgeability, breaks a non-interactive computational assumption, we construct efficient meta-reductions that either break the assumption or break class-hiding, another security requirement for EQS.
Last updated:  2025-05-28
Generalized BGV, BFV, and CKKS for Homomorphic Encryption over Matrix Rings
Bence Mali
Some of the most valuable applications of homomorphic encryption, such as encrypted machine learning inference, require efficient large-scale plaintext-ciphertext and ciphertext-ciphertext matrix multiplications. Current state-of-the-art techniques for matrix multiplication all build on the ability to pack many plaintext values into a single ciphertext and compute on them in a Single Instruction, Multiple Data (SIMD) manner. However, to fit matrix multiplication into this computational model, a large number of additional costly operations need to be performed, such as rotations of elements between the plaintext slots. In this work, we propose an orthogonal approach to performing encrypted matrix operations with BGV-like encryption schemes, where the plaintext and ciphertext spaces are generalized to a matrix ring of arbitrary dimension. To deal with the inherent problem of noncommutativity in the case of matrix rings, we present a new superoperator technique to better represent linear and quadratic expressions in the secret key, which allows for the relinearization of ciphertexts after multiplication. The security of the modified encryption schemes is based on Module-LWE with module rank equal to the dimension of the matrices. With this construction, we demonstrate that Ring-LWE, Module-LWE, and LWE are potentially equally efficient for homomorphic encryption, both in terms of useful information density and noise growth, only for different sizes of matrices.
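To illustrate the shape of the generalization, below is a hedged toy of symmetric-key LWE-style encryption whose plaintexts and ciphertext components are square matrices. It shows only encryption and decryption, not relinearization or the paper's superoperator technique, and all parameters are illustrative assumptions.

```python
# Toy matrix-ring LWE-style Enc/Dec: C = (A, B) with B = A*S + E + delta*M.
import numpy as np

d, q, t = 4, 2**16, 16          # matrix dim, ciphertext & plaintext moduli
delta = q // t

rng = np.random.default_rng(0)
S = rng.integers(0, q, (d, d))                  # secret matrix

def enc(M):
    A = rng.integers(0, q, (d, d))
    E = rng.integers(-2, 3, (d, d))             # small noise
    B = (A @ S + E + delta * M) % q
    return A, B

def dec(ct):
    A, B = ct
    noisy = (B - A @ S) % q                     # = delta*M + E (mod q)
    # Centered rounding to the nearest multiple of delta.
    return ((noisy + delta // 2) // delta) % t

M = rng.integers(0, t, (d, d))
assert np.array_equal(dec(enc(M)), M)
```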
Last updated:  2025-05-27
Sabot: Efficient and Strongly Anonymous Bootstrapping of Communication Channels
Christoph Coijanovic, Laura Hetz, Kenneth G. Paterson, and Thorsten Strufe
Anonymous communication is vital for enabling individuals to participate in social discourse without fear of marginalization or persecution. An important but often overlooked part of anonymous communication is the bootstrapping of new communication channels, generally assumed to occur out-of-band. However, if the bootstrapping discloses metadata, communication partners are revealed even if the channel itself is fully anonymized. We propose Sabot, the first anonymous bootstrapping protocol that achieves both strong cryptographic privacy guarantees and bandwidth-efficient communication. In Sabot, clients cooperatively generate a private relationship matrix, which encodes who wants to contact whom. Clients communicate with k ≥ 2 servers to obtain “their” part of the matrix and augment the received information using Private Information Retrieval (PIR) to learn about their prospective communication partners. Compared to previous solutions, Sabot achieves stronger privacy guarantees and reduces the bandwidth overhead by an order of magnitude.
Last updated:  2025-05-27
How to Verify that a Small Device is Quantum, Unconditionally
Giulio Malavolta and Tamer Mour
A proof of quantumness (PoQ) allows a classical verifier to efficiently test if a quantum machine is performing a computation that is infeasible for any classical machine. In this work, we propose a new approach for constructing PoQ protocols where soundness holds unconditionally assuming a bound on the memory of the prover, but otherwise no restrictions on its runtime. In this model, we propose two protocols: 1. A simple protocol with a quadratic gap between the memory required by the honest parties and the memory bound of the adversary. The soundness of this protocol relies on Raz's (classical) memory lower bound for matrix inversion (Raz, FOCS 2016). 2. A protocol that achieves an exponential gap, building on techniques from the literature on the bounded storage model (Dodis et al., Eurocrypt 2023). Both protocols are also efficiently verifiable. Despite having worse asymptotics, our first protocol is conceptually simple and relies only on arithmetic modulo 2, which can be implemented with one-qubit Hadamard and CNOT gates, plus a single one-qubit non-Clifford gate.
Last updated:  2025-05-27
Decentralized Data Archival: New Definitions and Constructions
Elaine Shi, Rose Silver, and Changrui Mu
We initiate the study of a new abstraction called incremental decentralized data archival (${\sf iDDA}$). Specifically, imagine that there is an ever-growing, massive database such as a blockchain, a comprehensive human knowledge base like Wikipedia, or the Internet archive. We want to build a decentralized archival of such datasets to ensure long-term robustness and sustainability. We identify several important properties that an ${\sf iDDA}$ scheme should satisfy. First, to promote heterogeneity and decentralization, we want to encourage even weak nodes with limited space (e.g., users' home computers) to contribute. The minimum space requirement to contribute should be approximately independent of the data size. Second, if a collection of nodes together receive rewards commensurate with contributing a total of $m$ blocks of space, then we want the following reassurances: 1) if $m$ is at least the database size, we should be able to reconstruct the entire dataset; and 2) these nodes should actually be committing roughly $m$ space in aggregate --- even when $m$ is much larger than the data size, the nodes should be storing redundant copies of the database rather than storing just one copy while impersonating arbitrarily many pseudonyms to collect unbounded rewards. We propose new definitions that mathematically formalize the aforementioned requirements of an ${\sf iDDA}$ scheme. We also devise an efficient construction in the random oracle model which satisfies the desired security requirements. Our scheme incurs only $\widetilde{O}(1)$ audit cost, as well as $\widetilde{O}(1)$ update cost for both the publisher and each node, where $\widetilde{O}(\cdot)$ hides polylogarithmic factors. Further, the minimum space provisioning required to contribute is as small as polylogarithmic. Our construction exposes several interesting technical challenges. Specifically, we show that a straightforward application of the standard hierarchical data structure fails, since both our security definition and the underlying cryptographic primitives we employ lack the desired compositional guarantees. We devise novel techniques to overcome these compositional issues, resulting in a construction with provable security while still retaining efficiency. Finally, our new definitions also make a conceptual contribution, and lay the theoretical groundwork for the study of ${\sf iDDA}$. We raise several interesting open problems along this direction.
Last updated:  2025-05-27
Learning with Alternating Moduli, Arora-Ge over Composite Moduli, and Weak PRFs
Yilei Chen, Liheng Ji, and Wenjie Li
In TCC 2018, Boneh, Ishai, Passelègue, Sahai, and Wu propose candidate weak and strong PRFs by alternately evaluating linear functions over coprime moduli. Such PRFs can be evaluated by low-depth circuits and are MPC-friendly. However, they have not been able to base the security of their PRFs on well-formed assumptions other than assuming that the PRF constructions themselves are secure. In this paper, we formalize a new assumption called Learning with Alternating Moduli (LAM). We show that over certain large moduli, the LAM assumption is as hard as the Learning with Errors (LWE) assumption. For LAM over constant moduli, we do not know how to base its hardness on the LWE assumption. Instead, we provide (i) polynomial-time attacks on LAM with constant prime-power moduli and certain constant non-prime-power moduli, and (ii) evidence of the sub-exponential hardness of LAM with other moduli by analyzing the effect of typical attacks. More specifically, we put forward two new attacks. The first attack is a recursive algorithm that solves LWE with certain constant composite moduli and error distributions. The algorithm extends the Arora-Ge algorithm for LWE from prime moduli to composite moduli, and it also solves LAM for certain parameters. The second attack is a polynomial-time attack that rules out the existence of weak PRFs in $\mathsf{NC}^0[p]$ for any prime $p$. Based on our studies, we propose candidate weak PRFs in $\mathsf{NC}^0[p_1,p_2]$ for some distinct primes $p_1,p_2$, based on LAM over constant moduli or on the Learning with Rounding (LWR) assumption over constant moduli. Compared to the weak PRF candidates of Boneh et al., our weak PRF candidates live in the same complexity class while having the advantage of being based on well-formed assumptions.
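For concreteness, below is a hedged sketch of the alternating-moduli ("mod-2/mod-3") weak-PRF shape underlying the LAM assumption, following the Boneh et al. (TCC 2018) candidate in outline only; the dimensions are toy values.

```python
# Alternating-moduli weak PRF sketch: a linear map over Z_2 followed by a
# sum over Z_3.
import secrets

n = 8  # toy dimension; real candidates use hundreds of bits

def keygen():
    # Secret binary matrix K in Z_2^{n x n}.
    return [[secrets.randbelow(2) for _ in range(n)] for _ in range(n)]

def weak_prf(K, x):
    # y = K x over Z_2 (the "mod 2" layer) ...
    y = [sum(K[i][j] * x[j] for j in range(n)) % 2 for i in range(n)]
    # ... then output the sum of the bits of y mod 3 (the "mod 3" layer).
    return sum(y) % 3

K = keygen()
x = [secrets.randbelow(2) for _ in range(n)]   # weak PRF: random inputs
print(weak_prf(K, x))
```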
Last updated:  2025-05-27
Registered Functional Encryption for Pseudorandom Functionalities from Lattices: Registered ABE for Unbounded Depth Circuits and Turing Machines, and More
Tapas Pal, Robert Schädlich, and Erkan Tairi
Registered functional encryption (RFE) is a generalization of public-key encryption that enables computation on encrypted data (like classical FE), but without needing a central trusted authority. Concretely, the users choose their own public keys and register their keys together with a function with an (untrusted) key curator. The key curator aggregates all of the individual public keys into a short master public key, which serves as the public key of the FE scheme. Currently, we only know RFE constructions for restricted functionalities using standard assumptions, or for all circuits using powerful tools such as indistinguishability obfuscation, and only in the non-uniform model. In this work, we make progress on this front by providing the first lattice-based constructions of RFE for pseudorandom functionalities, where the model of computation is either non-uniform (unbounded depth circuits) or uniform (Turing machines). Intuitively, we call a functionality pseudorandom if the output of the circuit is indistinguishable from uniform for every input seen by the adversary. Security relies on LWE and a recently introduced primitive called pseudorandom FE (prFE), which currently can be instantiated from evasive LWE. We illustrate the versatility of these new functionalities for RFE by leveraging them to achieve key-policy and ciphertext-policy registered attribute-based encryption and registered predicate encryption schemes (KP-RABE, CP-RABE and RPE) for both unbounded depth circuits and Turing machines. Existing RABE constructions support only bounded depth circuits, and prior to our work there existed neither RABE for uniform models of computation nor RPE. As an appealing feature, all our constructions enjoy asymptotic optimality in the sense that their parameters depend neither on the length of public attributes nor the size of policies. Along the way, we also improve on the state of the art for classical attribute-based encryption (ABE) and predicate encryption (PE). Specifically, we obtain new constructions for KP-ABE, CP-ABE and PE for Turing machines with optimal asymptotic parameters. For KP-ABE, this is an improvement in terms of efficiency, whereas for CP-ABE and PE we are not aware of any prior purely lattice-based construction supporting Turing machines.
Last updated:  2025-05-27
Multiparty Homomorphic Secret Sharing and More from LPN and MQ
Geoffroy Couteau, Naman Kumar, and Xiaxi Ye
We give the first constructions of multiparty pseudorandom correlation generators, distributed point functions, and (negligible-error) homomorphic secret sharing for constant-degree polynomials for any number of parties without using LWE or iO. Our constructions are proven secure under the combination of LPN with dimension $n$, $2n$ samples, and noise rate $n^{\varepsilon-1}$ for a small constant $\varepsilon$, and MQ with $n$ variables and $n^{1+\delta}$ equations. As applications of our results, we obtain from the same assumptions secure multiparty computation protocols with sublinear communication and silent preprocessing, as well as private information retrieval for $M$ servers and size-$\lambda^d$ databases with optimal download rate and client-to-server communication $M^d\cdot \lambda^3$.
Last updated:  2025-05-27
Multiparty FHE Redefined: A Framework for Unlimited Participants
Robin Jadoul, Barry van Leeuwen, and Oliver Zajonc
Multiparty fully homomorphic encryption (MPFHE) is a generalization of (multi-key) fully homomorphic encryption ((MK)FHE) that lives on the cusp between multiparty computation (MPC) and FHE, enabling a computation over encrypted data using multiple keys. However, contrary to MKFHE, it seeks to reduce the noise inflation caused by the number of parties by allowing the parties to first compute shared data in MPC before executing the computation in FHE. Generally, MPFHE protocols have required ad-hoc constructions and adaptations of already existing protocols. In this work we present a new framework that standardizes the approach to MPFHE, allowing the use of a broad spectrum of MPC and FHE protocols while eliminating the noise inflation caused by the number of participating parties. This yields the first multiparty FHE protocol that allows an arbitrary number of participants. We then show a case study using the FINAL scheme and show that we reduce the required key material by 40-99.9% compared to FINALLY, the MKFHE version of FINAL; by 8-71% compared to the AKÖ scheme; and by 65-70% compared to the Park-Rovira scheme. Moreover, we reduce the bootstrapping time for the AKÖ, Park-Rovira, and KMS schemes by 75-99.7%.
Last updated:  2025-06-03
TOOP: A transfer of ownership protocol over Bitcoin
Ariel Futoransky, Fadi Barbara, Ramses Fernandez, Gabriel Larotonda, and Sergio Demian Lerner
We present the Transfer of Ownership Protocol (TOOP). TOOP solves a limitation of all existing BitVM-like protocols (and UTxO blockchains at large) that restricts unlocking transfers to addresses known and preregistered during lock and setup. Accordingly, our protocol avoids the financially costly, regulatorily problematic, and congestion-prone front-and-reimburse paradigm. Furthermore, we note that one of the main applications of TOOP is as an enabler of secure transfer of assets between UTxO blockchains, and back. We showcase this by sketching a committee-based validation protocol that requires only 1-out-of-n honest security. This protocol operates in distinct phases: the lock phase, where the initial setup is performed and individual assets are locked on Bitcoin, and the unlocking with ownership transfer phase, where the asset is transferred to a possibly different legitimate owner. This cross-chain bridge protocol, in which TOOP plays a key role, is being formalized in concurrent work and has been implemented for the first time in Cardinal, a protocol for wrapping Bitcoin Unspent Transaction Outputs (UTxOs) onto the Cardano blockchain, with Bitcoin Ordinals represented as Cardano Non-Fungible Tokens (NFTs).
Last updated:  2025-05-26
Permutation-Based Hashing with Stronger (Second) Preimage Resistance - Application to Hash-Based Signature Schemes
Siwei Sun, Shun Li, Zhiyu Zhang, Charlotte Lefevre, Bart Mennink, Zhen Qin, and Dengguo Feng
The sponge is a popular construction for hash function design. It operates with a $b$-bit permutation on a $b$-bit state, which is split into a $c$-bit inner part and an $r$-bit outer part. However, the security bounds of the sponge are most often dominated by the capacity $c$: If the length of the digest is $n$ bits, the construction achieves $\min\{n/2,c/2\}$-bit collision resistance and $\min\{n,c/2\}$-bit second preimage resistance (and a slightly more complex but similar bound for preimage resistance). In certain settings, these bounds are too restrictive. For example, the recently announced Chinese call for a new generation of cryptographic algorithms expects hash functions with 1024-bit digests and 1024-bit preimage and second preimage resistance, rendering the classical sponge design basically unusable, except with an excessively large permutation. We present the SPONGE-DM construction to salvage the sponge in these settings. This construction differs from the sponge by evaluating the permutation during absorption in a Davies-Meyer mode. We also present SPONGE-EDM, that evaluates potentially round-reduced permutations during absorption in Encrypted Davies-Meyer mode, and SPONGE-EDM$^c$, that optimizes the amount of feed-forward data in this construction. We prove that these constructions generically achieve $\min\{n/2,c/2\}$-bit collision resistance as the sponge does, but they achieve $n$-bit preimage resistance and $\min\{n,c-\log_2(\alpha)\}$-bit second preimage resistance, where $\alpha$ is the maximum size of the first preimage in blocks. With such constructions, one could improve the security (resp., efficiency) without sacrificing the efficiency (resp., security) of hash-based signature schemes whose security relies solely on the (second) preimage resistance of the underlying hash functions. Also, one could use the $1600$-bit Keccak permutation with capacity $c=1088$ and rate $r=512$ to achieve $512$-bit collision resistance and $1024$-bit preimage and second preimage resistance, without making extra permutation calls. To encourage further cryptanalysis, we propose two concrete families of instances of SPONGE-EDM (expected to be weaker than SPONGE-DM), using SHA3 and Ascon. Moreover, we concretely demonstrate the security and performance advantages of these instances in the context of hashing and hash-based signing.
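A minimal sketch of the absorption phase with the Davies-Meyer-style feed-forward that distinguishes SPONGE-DM from the plain sponge; the permutation, rate, and capacity below are toy stand-ins (assumptions), not a proposed instantiation.

```python
import hashlib

R, C = 32, 32     # toy rate and capacity in bytes; state size b = R + C
B = R + C

def perm(state: bytes) -> bytes:
    # Toy stand-in for a fixed b-bit permutation (not actually a bijection;
    # a real instance would use, e.g., the Keccak permutation).
    return hashlib.shake_256(state).digest(B)

def absorb_dm(blocks):
    state = bytes(B)
    for m in blocks:                                   # each m is R bytes
        fed = bytes(a ^ b for a, b in zip(state[:R], m)) + state[R:]
        out = perm(fed)
        # Davies-Meyer feed-forward: XOR the permutation input back in.
        state = bytes(a ^ b for a, b in zip(out, fed))
    return state

# A plain sponge would instead set `state = out` above; the feed-forward
# XOR is the twist sketched here.
digest_state = absorb_dm([b"\x00" * 32, b"\x01" * 32])
```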
Last updated:  2025-05-26
An almost key-homomorphic post-quantum block cipher with key rotation and security update for long-term secret storage
Thomas Prévost, Bruno Martin, and Olivier Alibart
In this paper, we propose a new block cipher primitive, based on ring-LWE, which allows key rotation with a possible security update. This makes it possible to double the security of the ciphertext with each key rotation. Our scheme could therefore be used for long-term secret storage, allowing the security of the ciphertext to be adapted to the attacker's computing power, without the need for decryption. We propose an implementation of our cryptographic scheme and prove its security.
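For context, the standard "almost key-homomorphic" property the title refers to can be stated as follows (background notation assumed here, not taken from the paper):

```latex
% Almost key-homomorphism, in the sense of Boneh et al. (CRYPTO 2013),
% stated as background; notation is an assumption.
\[
  F(k_1 + k_2, x) \;=\; F(k_1, x) + F(k_2, x) + e,
  \qquad \|e\|_\infty \le \delta \ \text{for some small } \delta,
\]
% which is what lets a ciphertext under k_1 be rotated toward key k_2 by
% homomorphically adding F(k_2 - k_1, \cdot), without ever decrypting.
```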
Last updated:  2025-05-26
Addendum to How Small Can S-boxes Be?
Yu Sun, Lixuan Wu, Chenhao Jia, Tingting Cui, Kai Hu, and Meiqin Wang
In ToSC 2025(1), Jia et al. proposed an SAT-aided automatic search tool for S-box design. A part of the functionality of this tool is to search for implementations of an S-box with good area and gate-depth complexity. However, it is well known that gate-depth complexity cannot precisely reflect the latency of an implementation. To overcome this problem, Rasoolzadeh introduced the concept of latency complexity, a more precise metric for the real-world latency cost of implementing an S-box than gate-depth complexity. In this addendum, we adapt Jia et al.'s tool to prioritize latency as the primary metric and area as the secondary metric to search for good implementations of existing S-boxes. The results show that the combination of Jia et al.'s tool and Rasoolzadeh's latency complexity can lead to lower-latency S-box implementations. For the S-boxes used in LBlock, Piccolo, SKINNY-64, RECTANGLE, PRESENT and TWINE, which are popular targets in this research line, we find new implementations with lower latency. We conducted synthesis comparisons of area and latency under multiple standard libraries, where our results consistently outperformed previous ones in terms of latency. For example, for LBlock-S0, our solution reduces latency by around 50.0%∼73.8% compared to previous implementations in the TSMC 90nm library with the latency-optimized synthesis option.
Last updated:  2025-05-26
A Framework for Advanced Signature Notions
Patrick Struck and Maximiliane Weishäupl
The beyond unforgeability features formalize additional security properties for signature schemes. We develop a general framework of binding properties for signature schemes that encompasses existing beyond unforgeability features and reveals new notions. Furthermore, we give new results regarding various transforms: We show that the transform by Cremers et al. (SP'21) achieves all of our security notions and provide requirements such that this is also the case for the transform by Pornin and Stern (ACNS'05). Finally, we connect our framework to unforgeability notions.
Last updated:  2025-05-26
Zero-Trust Post-quantum Cryptography Implementation Using Category Theory
Ilias Cherkaoui, Ciaran Clarke, Jerry Horgan, and Indrakshi Dey
This paper blends post-quantum cryptography (PQC) and zero trust architecture (ZTA) to secure access to AI models, formalized through the abstract mathematical lens of category theory. In this work, lattice-based PQC primitives are assigned ZTA components that include micro-segmentation and context-aware authentication, leading to a visual compositional framework that describes cryptographic workflows as morphisms and trust policies as functors, showing how category theory allows for fine-grained policies and adaptive trust. This quantum-resistant algorithmic perspective eases protection against adversarial AI threats. The paper uses a concrete implementation to attest to the effectiveness of the theoretical contribution, rendering it a crypto-agile transition using categorical proofs for AI security.
Last updated:  2025-05-26
Efficient Pairings Final Exponentiation Using Cyclotomic Cubing for Odd Embedding Degrees Curves
Walid Haddaji, Loubna Ghammam, Nadia El Mrabet, and Leila Ben Abdelghani
In pairing-based cryptographic applications, final exponentiation with a large fixed exponent ensures distinct outputs for the Tate pairing and its derivatives. Despite notable advancements in optimizing elliptic curves with even embedding degrees, improvements for those with odd embedding degrees, particularly those divisible by \(3\), remain underexplored. This paper introduces three methods for applying cyclotomic cubing in the final exponentiation to enhance computational efficiency. The first allows the execution of one cyclotomic cubing based on the structure of the final exponentiation. The second leverages the structure of some existing seeds to enable the use of cyclotomic cubing, and extends this strategy to generate new seeds. The third generates seeds with sparse ternary representations so that cyclotomic cubing can be applied as an alternative to squaring. These optimizations improve performance by up to $19.3\%$ when computing the final exponentiation for the optimal Ate pairing on BLS15 and BLS27, the target elliptic curves of this study.
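As background for where cyclotomic cubing enters, recall the standard easy/hard split of the final exponentiation (a textbook fact, not specific to this paper):

```latex
% For a pairing on a curve over F_q with embedding degree k and subgroup
% order r, the Miller-loop output is raised to the fixed exponent
\[
  \frac{q^{k}-1}{r}
  \;=\;
  \underbrace{\frac{q^{k}-1}{\Phi_{k}(q)}}_{\text{easy part (Frobenius maps)}}
  \cdot
  \underbrace{\frac{\Phi_{k}(q)}{r}}_{\text{hard part}},
\]
% and after the easy part the result lies in the cyclotomic subgroup, where
% dedicated cyclotomic squarings/cubings are cheap; for 3 | k, cubings can
% replace squarings in addition chains driven by ternary seed representations.
```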
Last updated:  2025-05-26
Laurent Polynomial-Based Linear Transformations for Improved Functional Bootstrapping
San Ling, Benjamin Hong Meng Tan, Huaxiong Wang, and Allen Siwei Yang
Following Gentry's seminal work (STOC 2009), Fully Homomorphic Encryption (FHE) has made significant advancements and can even evaluate functions during the bootstrapping process, called functional bootstrapping. Recently, Liu and Wang (ASIACRYPT 2023) proposed a new approach to functional bootstrapping, which bootstraps ciphertexts in 7ms amortized time. Their method packs the secret key of the TFHE cryptosystem into a ciphertext of the BFV cryptosystem and then performs the functional bootstrapping of TFHE within BFV. However, while this yields high amortized efficiency, it suffers from high latency and a computational complexity of $\mathcal{O}(\sqrt{t})$ ciphertext-ciphertext multiplications, due to the use of a large prime BFV plaintext modulus, $t = 65537$, which serves as the TFHE ciphertext modulus and maximizes the number of SIMD slots. In this work, we adapt their techniques to achieve lower-latency functional bootstrapping by relaxing the requirement that the BFV plaintext modulus be prime to allow prime powers, $t = p^r$. We first introduce an improved linear transformation stage that multiplies the Laurent-polynomial-packed secret key and ciphertexts, $sk_j$ and $a_{ij}$, to evaluate a $\mathbb{Z}_{p^r}$-linear map. With this, we reduce the number of operations needed to evaluate the linear phase of bootstrapping. Finally, we generalize their functional bootstrapping procedure from plaintext space $\mathbb{Z}_t$ to $\mathbb{Z}_{p^r}$ by leveraging the digit extraction algorithm, achieving a theoretical complexity of $\mathcal{O}(r^2\sqrt{p})$ ciphertext-ciphertext multiplications. Additionally, we enable a multi-valued bootstrapping scheme that permits the evaluation of multiple functions over a shared input. To the best of our knowledge, this is the first demonstration of such a method for TFHE ciphertexts that relies predominantly on BFV-based techniques. In our experiments, we achieve overall runtimes as low as 49.873s, representing latency reductions of at least $26\times$, while noting a $19\times$ slowdown in amortized performance.