Blockchain Governance via Sharp Anonymous Multisignatures
Electronic voting has occupied a large part of the cryptographic protocols literature. The recent reality of blockchains---in particular, their need for online governance mechanisms---has brought new parameters and requirements to the problem. We identify the key requirements of a blockchain governance mechanism, namely correctness (including the elimination of double voting), voter anonymity, and traceability, and investigate mechanisms that can achieve them with minimal interaction and under assumptions that fit the blockchain setting.
First, we define a signature-like primitive, which we term \textit{sharp anonymous multisignatures} (in short, $\sharp$AMS) that tightly meets the needs of blockchain governance. In a nutshell, $\sharp$AMSs allow any set of parties to generate a signature, e.g., on a proposal to be voted upon, which, if posted on the blockchain, hides the identities of the signers/voters but reveals their number. This can be seen as a (strict) generalization of threshold ring signatures (TRS).
We next turn to constructing such $\sharp$AMSs and using them in various governance scenarios---e.g., single vote vs. multiple votes per voter. In this direction, although the definition of TRS does not imply $\sharp$AMS, one can compile some existing TRS constructions into $\sharp$AMS. This raises the question: What is the TRS structure that allows such a compilation?
To answer the above, we devise templates for TRSs. Our templates encapsulate and abstract the structure that allows for the above compilation---most of the TRS schemes that can be compiled into $\sharp$AMS are, in fact, instantiations of our template. This abstraction makes our template generic for instantiating TRSs and $\sharp$AMSs from different cryptographic assumptions (e.g., DDH, LWE, etc.). One of our templates is based on chameleon hashes, and we explore a framework of lossy chameleon hashes to understand their nature fully.
Finally, we turn to how $\sharp$AMS schemes can be used in our applications. We provide fast (in some cases non-interactive) $\sharp$AMS-based blockchain governance mechanisms for a wide spectrum of assumptions on the honesty (semi-honest vs. malicious) and availability of voters and proposers.
Multi-Holder Anonymous Credentials from BBS Signatures
The eIDAS 2.0 regulation aims to develop interoperable digital identities for European citizens, and it has recently become law. One of its requirements is that credentials be unlinkable. Anonymous credentials (AC) allow holders to prove statements about their identity in a way that does not reveal their identity and does not enable linking different usages of the same credential. As a result, they are likely to become the technology that provides digital identity for Europeans.
Any digital credential system, including anonymous credentials, needs to be secured against identity theft and fraud. In this work, we introduce the notion of a multi-holder anonymous credential scheme that allows issuing shares of credentials to different authentication factors (or "holders"). To present the credential, the user's authentication factors jointly run a threshold presentation protocol. Our definition of security requires that the scheme provide unforgeability: the adversary cannot succeed in presenting a credential with identity attributes of an identity for which it controls fewer than $t$ shares; this holds even if the adversary can obtain credentials of its choice and cause concurrent executions of the presentation protocol. Further, our definition requires that the presentation protocol provide security with identifiable abort. Finally, presentations generated by all honest holders must be unlinkable and must not reveal the user's secret identity attributes even to an adversary that controls some of the user's authentication factors.
We design and prove the (concurrent) security of a multi-holder version of the BBS anonymous credential scheme. In our construction, each holder is issued a secret share of a BBS credential. Using these shares, the holders jointly compute a credential presentation that is identical to (and therefore compatible with) the traditional, single-holder variant (due to Tessaro and Zhu, Eurocrypt'23) of a BBS credential presentation.
HI-CKKS: Is High-Throughput Neglected? Reimagining CKKS Efficiency with Parallelism
The proliferation of data outsourcing and cloud services has heightened privacy vulnerabilities. CKKS, among the most prominent homomorphic encryption schemes, allows computations on encrypted data, serving as a critical privacy safeguard. However, performance remains a central bottleneck, hindering widespread adoption. Existing optimization efforts often prioritize latency reduction over throughput. This paper presents HI-CKKS, a throughput-oriented High-performance Implementation of CKKS homomorphic encryption, addressing these challenges. HI-CKKS introduces a batch-supporting asynchronous execution scheme, effectively mitigating frequent data interactions and long waiting delays between hosts and servers in service-oriented scenarios. We analyze the fundamental (I)NTT primitive, which is critical in CKKS, and develop a hierarchical, hybrid high-throughput implementation. This includes efficient arithmetic module instruction set implementations, unified kernel fusion, and hybrid memory optimization strategies that significantly improve memory access efficiency and the performance of (I)NTT operations. Additionally, we propose a multi-dimensional parallel homomorphic multiplication scheme aimed at maximizing throughput and enhancing the performance of (I)NTT and homomorphic multiplication. Finally, we deploy our implementation on an RTX 4090 and conduct a thorough throughput evaluation of HI-CKKS, which lets us pinpoint the most effective parallel parameter settings. Compared to the CPU implementation, our system achieves throughput increases of $175.08\times$, $191.27\times$, and $679.57\times$ for NTT, INTT, and HMult, respectively, and improves on the latest GPU-based works by factors ranging from $1.54\times$ to $693.17\times$.
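As a point of reference for the primitive being optimized, the (I)NTT is the standard number-theoretic transform over $\mathbb{Z}_q$. Below is a minimal, single-threaded Python sketch of a cyclic forward NTT with toy parameters; HI-CKKS's contribution is the GPU-parallel, batched, kernel-fused implementation of this primitive, which is not reflected here.

```python
# Iterative Cooley-Tukey NTT over Z_q (toy parameters, single-threaded).
def ntt(a, q, root):
    """Forward cyclic NTT of a length-n list (n a power of two).
    `root` must be a primitive n-th root of unity modulo q."""
    n = len(a)
    a = list(a)
    j = 0
    for i in range(1, n):                    # bit-reversal permutation
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    length = 2
    while length <= n:                       # butterfly passes
        w_len = pow(root, n // length, q)
        for start in range(0, n, length):
            w = 1
            for k in range(start, start + length // 2):
                u, v = a[k], a[k + length // 2] * w % q
                a[k] = (u + v) % q
                a[k + length // 2] = (u - v) % q
                w = w * w_len % q
        length <<= 1
    return a

# q = 17, n = 4: the element 4 has order 4 mod 17, so it is a valid root.
print(ntt([1, 2, 3, 4], 17, 4))  # [10, 7, 15, 6]
```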
Separating Pseudorandom Codes from Local Oracles
Pseudorandom codes (PRCs) are error-correcting codes with the distinguishing feature that their codewords are computationally indistinguishable from random strings. Introduced by Christ and Gunn (CRYPTO 2024), PRCs have found applications in areas such as AI watermarking, where both robustness and pseudorandomness are essential. All known constructions of PRCs rely on coding-theoretic hardness assumptions. In this work, we study how inherent the use of coding-theoretic hardness is in the construction of pseudorandom codes.
We show that there is no black-box construction of PRCs with binary alphabets capable of decoding from a constant fraction of Bernoulli noise from a class of oracles we call local oracles. The class of local oracles includes random oracles and trapdoor permutation oracles, and can be interpreted as a meaningful notion of oracles that are not resilient against noise. Our separation result is cast in the Impagliazzo-Rudich framework and crucially relies on the Bonami-Beckner hypercontractivity theorem on the Boolean hypercube.
As a complementary result, we show that PRCs with large alphabets that can tolerate high error rates can indeed be constructed in a black-box manner from one-way functions.
SCIF: Privacy-Preserving Statistics Collection with Input Validation and Full Security
Secure aggregation is the distributed task of securely computing a sum of values (or a vector of values) held by a set of parties, revealing only the output (i.e., the sum) of the computation. Existing protocols, such as Prio (NSDI'17), Prio+ (SCN'22), Elsa (S&P'23), and Whisper (S&P'24), support secure aggregation with input validation to ensure inputs belong to a specified domain. However, when malicious servers are present, these protocols primarily guarantee privacy but not input validity. Also, malicious server(s) can cause the protocol to abort. We introduce SCIF, a novel multi-server secure aggregation protocol with input validation that remains secure even in the presence of malicious actors, provided fewer than one-third of the servers are malicious. Our protocol overcomes previous limitations by providing two key properties: (1) guaranteed output delivery, ensuring malicious parties cannot prevent the protocol from completing, and (2) guaranteed input inclusion, ensuring no malicious party can prevent an honest party's input from being included in the computation. Together, these guarantees provide strong resilience against denial-of-service attacks. Moreover, SCIF offers these guarantees without increasing client costs over Prio and keeps server costs moderate. We present a robust end-to-end implementation of SCIF and demonstrate the ease with which it can be instrumented by integrating it into a simulated Tor network for privacy-preserving measurement.
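To ground the task, stripped of validation and robustness the core of secure aggregation is just additive secret sharing over a modulus: each client splits its value into random shares, one per server, and only the share-sums are ever combined. A toy Python sketch under that simplification (none of SCIF's input validation, guaranteed output delivery, or malicious security is modeled):

```python
# Toy additive-secret-sharing aggregation (semi-honest, no validation).
import secrets

P = 2**61 - 1  # an illustrative prime modulus

def share(x, n_servers):
    """Split x into n_servers random additive shares mod P."""
    shares = [secrets.randbelow(P) for _ in range(n_servers - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def aggregate(all_shares):
    """Each server sums the shares it holds; publishing only these sums
    reveals the total and nothing about individual inputs."""
    per_server = [sum(col) % P for col in zip(*all_shares)]
    return sum(per_server) % P

inputs = [3, 14, 15]                         # one value per client
all_shares = [share(x, 3) for x in inputs]   # three servers
assert aggregate(all_shares) == sum(inputs) % P
```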
Key Exchange with Tight (Full) Forward Secrecy via Key Confirmation
Weak forward secrecy (wFS) of authenticated key exchange (AKE) protocols is a passive variant of (full) forward secrecy (FS). A natural mechanism to upgrade from wFS to FS is the use of key confirmation messages, which consist of a message authentication code (MAC) computed over the transcript. Unfortunately, Gellert, Gjøsteen, Jacobsen and Jager (GGJJ, CRYPTO 2023) show that this mechanism inherently incurs a loss proportional to the number of users, leading to an overall non-tight reduction, even if wFS was established using a tight reduction.
Inspired by GGJJ, we propose a new notion, called one-way verifiable weak forward secrecy (OW-VwFS), and prove that OW-VwFS can be transformed tightly to FS using key confirmation in the random oracle model (ROM). To implement our generic transformation, we show that several tightly wFS AKE protocols additionally satisfy our OW-VwFS notion tightly. We highlight that using the recent lattice-based protocol from Pan, Wagner, and Zeng (CRYPTO 2023) can give us the first lattice-based tightly FS AKE via key confirmation in the classical random oracle model. Besides this, we also obtain a Decisional-Diffie-Hellman-based protocol that is considerably more efficient than the previous ones.
Finally, we lift our study of FS via key confirmation to the quantum random oracle model (QROM). While our security reduction is overall non-tight, it matches the best existing bound for wFS in the QROM (Pan, Wagner, and Zeng, ASIACRYPT 2023), namely, it is square-root- and session-tight. Our analysis is in the multi-challenge setting, which is more realistic than the single-challenge setting considered by Pan et al.
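For concreteness, the key confirmation mechanism discussed above amounts to each party MACing the transcript under a key derived from the session key. A minimal Python sketch (the toy key separation and labels here are illustrative, not the derivation mandated by any of the cited protocols):

```python
# Key confirmation: MAC the transcript under a confirmation key derived
# from the session key (toy KDF and labels; for illustration only).
import hashlib, hmac

def confirmation_tag(session_key: bytes, transcript: bytes, role: bytes) -> bytes:
    mac_key = hashlib.sha256(b"confirm" + session_key).digest()  # toy KDF
    return hmac.new(mac_key, role + transcript, hashlib.sha256).digest()

k = b"\x01" * 32                  # session key from the wFS-secure AKE
tr = b"msg1 || msg2"              # protocol transcript
tag = confirmation_tag(k, tr, b"initiator")
# The responder recomputes and checks the tag before accepting the session.
assert hmac.compare_digest(tag, confirmation_tag(k, tr, b"initiator"))
```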
Highway to Hull: An Algorithm for Solving the General Matrix Code Equivalence Problem
The matrix code equivalence problem consists, given two matrix spaces $\mathcal{C},\mathcal{D} \subset \mathbb{F}_q^{m\times n}$ of dimension $k$, in finding invertible matrices $P\in\mathrm{GL}_m(\mathbb{F}_q)$ and $Q\in\mathrm{GL}_n(\mathbb{F}_q)$ such that $\mathcal{D}=P\mathcal{C} Q^{-1}$. Recent signature schemes such as MEDS and ALTEQ relate their security to the hardness of this problem. Recent works by Narayanan, Qiao and Tang on the one hand and by Ran and Samardjiska on the other tackle this problem. The former is restricted to the ``cubic'' case $k = m = n$ and succeeds in $\widetilde{\mathcal{O}}(q^{\frac k 2})$ operations. The latter is an algebraic attack on the general problem whose complexity is not fully understood and which succeeds only on $\mathcal{O}(1/q)$ instances. We present a novel algorithm which solves the problem in the general case. Our approach consists in reducing the problem to the matrix code conjugacy problem, \emph{i.e.}, the case $P=Q$. For the latter problem, similarly to the permutation code equivalence problem in the Hamming metric, a natural invariant based on the \emph{hull} of the code can be used. Next, the equivalence of codes can be deduced using a standard list-collision argument. For $k=m=n$, our algorithm achieves the same time complexity as Narayanan \emph{et al.} but with a lower space complexity. Moreover, our algorithm extends to a much broader range of parameters.
Achieving "beyond CCA1" security for linearly homomorphic encryption, without SNARKs?
In the wake of Manulis and Nguyen's Eurocrypt'24 paper, new CCA security notions, vCCA and vCCAD, and associated construction blueprints have been proposed to leverage either CPA or CPAD secure FHE beyond the CCA1 security barrier. These two notions are the strongest CCA security notions so far achievable, respectively, by correct and approximate homomorphic schemes. However, the only known construction strategies intimately require advanced SNARK machinery, undermining their practicality. In this context, this paper attempts to achieve these advanced CCA security notions in the restricted case of linearly homomorphic encryption, without resorting to SNARKs. To do so, we investigate the relationship between the Linear-Only Homomorphism (LOH) assumption, an assumption that has been used for more than a decade at the core of several proof-of-knowledge constructions, and these two recent security notions (vCCA and vCCAD). On the bright side, when working under the correctness assumption, we establish that the LOH property is sufficient to achieve vCCA security in both the private and public key settings. In the public key setting, we further show that a surprisingly simple and previously known Paillier-based construction also achieves this level of security, at only twice the cost of the baseline scheme. We then turn our attention to LWE-based schemes, for which the Pandora's box of decryption errors opens up. In the private key setting, we are able to achieve CPAD and vCCAD security, but only in a fairly restrictive non-adaptive setting, in which vCCAD collapses onto a weak relaxation of CCA1. Finally, we achieve adaptive vCCAD security provided that the number of ciphertexts given to the adversary is suitably restricted. While bridging the gap towards credible practicality requires further work, this is a first step towards obtaining linearly homomorphic schemes that achieve these recent CCA security notions by means of relatively lightweight machinery only.
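For readers unfamiliar with the baseline, textbook Paillier is the canonical linearly homomorphic scheme referred to above. A toy Python sketch of it (tiny primes for illustration; the paper's vCCA-secure wrapper around such a scheme is not reproduced here):

```python
# Textbook Paillier: linearly homomorphic encryption (toy parameters).
import secrets
from math import gcd

p, q = 999983, 1000003            # toy primes; real keys use ~1536-bit primes
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1)
g = n + 1                         # standard generator choice

def enc(m):
    r = secrets.randbelow(n - 2) + 1
    while gcd(r, n) != 1:
        r = secrets.randbelow(n - 2) + 1
    return pow(g, m, n2) * pow(r, n, n2) % n2

def dec(c):
    u = pow(c, lam, n2)
    return (u - 1) // n * pow(lam, -1, n) % n

c1, c2 = enc(42), enc(58)
assert dec(c1 * c2 % n2) == 100   # homomorphic addition of plaintexts
assert dec(pow(c1, 3, n2)) == 126 # homomorphic scalar multiplication
```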
Non-interactive Anonymous Tokens with Private Metadata Bit
Anonymous tokens with private metadata bit (ATPM) have received increased interest as a method for anonymous user authentication while also allowing the issuer to embed trust signals inside the token that are only readable by the authority who holds the secret key. A drawback of all existing ATPM constructions is that they require interaction between the client and the issuer during the issuance process. In this work, we build the first non-interactive anonymous tokens (NIAT) with private metadata bit, inspired by the recent work of Hanzlik (Eurocrypt '23) on non-interactive blind signatures. We discuss how the property of non-interactivity during issuance allows for more efficient protocols that avoid the need for online signing. We construct an efficient NIAT scheme based on Structure-preserving Signatures on Equivalence Classes (SPS-EQ) and experimentally evaluate its performance. We also present an extension to our NIAT construction that allows the identification of clients who attempt to double-spend a token (i.e., present the same token twice) and argue that non-interactive schemes are uniquely positioned to offer this essential feature.
Single-server Stateful PIR with Verifiability and Balanced Efficiency
Recent stateful private information retrieval (PIR) schemes have significantly improved amortized computation and amortized communication while aiming to keep client storage minimal. However, all the schemes in the literature still suffer from a poor tradeoff between client storage and computation.
We present BALANCED-PIR, a stateful PIR scheme that effectively balances computation and client storage. For a database of a million entries, each of 8 bytes, our scheme requires 0.2 MB of client storage, 0.2 ms of amortized computation, and 11.14 KB of amortized communication. Compared with the state-of-the-art scheme using a similar storage setting, our scheme is almost 9x better in amortized computation and 40x better in offline computation.
Verifiable private information retrieval has been gaining more attention recently. However, all existing schemes require linear amortized computation and huge client storage. We present Verifiable BALANCED-PIR, a verifiable stateful PIR scheme with sublinear amortized computation and small client storage. In fact, our Verifiable BALANCED-PIR adds modest computation, communication, and storage costs on top of BALANCED-PIR. Compared with the state-of-the-art verifiable scheme, the client storage of our scheme is 100x smaller, the amortized computation is 15x less, and the amortized communication is 2.5x better.
Private Signaling Secure Against Actively Corrupted Servers
Private signaling allows servers to identify a recipient's messages on a public bulletin board without knowing the recipient's metadata. It is a central tool for systems like privacy-preserving blockchains and anonymous messaging. However, apart from TEE-based designs, current constructions all assume that the servers are only passively corrupted, which significantly limits their practical relevance. In this work, we present a TEE-free simulation-secure private signaling protocol assuming two non-colluding servers, either of which can be actively corrupted.
Crucially, we convert signal retrieval into a problem similar to private set intersection and use custom-built zero-knowledge proofs to ensure consistency with the public bulletin board. As a result, our protocol achieves lower server-to-server communication overhead and a much smaller digest compared to the state-of-the-art semi-honest protocol. For example, for a board size of $2^{19}$ messages, the resulting digest size is only 33.57 KB. Our protocol is also computationally efficient: retrieving private signals takes only about 2 minutes, using 16 threads over a LAN.
Arc: Accumulation for Reed--Solomon Codes
Proof-Carrying Data (PCD) is a foundational tool for ensuring the correctness of incremental distributed computations that has found numerous applications in theory and practice. The state-of-the-art PCD constructions are obtained via accumulation or folding schemes. Unfortunately, almost all known constructions of accumulation schemes rely on homomorphic vector commitments (VCs), which results in relatively high computational costs and insecurity in the face of quantum adversaries. A recent work of Bünz, Mishra, Nguyen, and Wang removes the dependence on homomorphic VCs by relying only on the random oracle model, but introduces a bound on the number of consecutive accumulation steps, which in turn bounds the depth of the PCD computation graph and greatly affects prover and verifier efficiency.
In this work, we propose Arc, a novel hash-based accumulation scheme that overcomes this restriction and supports an unbounded number of accumulation steps. The core building block underlying Arc is a new accumulation scheme for claims about proximity of claimed codewords to the Reed--Solomon code. Our approach achieves near-optimal efficiency, requiring a small number of Merkle tree openings relative to the code rate, and avoids the efficiency loss associated with bounded accumulation depth. Unlike prior work, our scheme is also able to accumulate claims up to list-decoding radius, resulting in concrete efficiency improvements.
We use this accumulation scheme to construct two distinct accumulation schemes, again relying solely on random oracles. The first approach accumulates RS proximity claims and can be used as an almost-drop-in replacement in existing PCD deployments based on IOP-based SNARKs.
The second approach directly constructs an accumulation scheme for rank-1 constraint systems (and more generally polynomial constraint systems) that is simpler and more efficient than the former and prior approaches.
We introduce the notion of Interactive Oracle Reductions (IORs) to enable a modular and simple security analysis. These extend prior notions of Reductions of Knowledge to the setting of IOPs.
The Jacobi Factoring Circuit: Quantum Factoring with Near-Linear Gates and Sublinear Space and Depth
We present a compact quantum circuit for factoring a large class of integers, including some whose classical hardness is expected to be equivalent to RSA (but not including RSA integers themselves). Most notably, we factor $n$-bit integers of the form $P^2 Q$ with $\log Q = \Theta(n^a)$ for $a \in (2/3, 1)$ in space and depth sublinear in $n$ (specifically, $\widetilde{O}(\log Q)$) using $\widetilde{O}(n)$ quantum gates; for these integers, no known classical algorithms exploit the relatively small size of $Q$ to run asymptotically faster than general-purpose factoring algorithms. To our knowledge, this is the first polynomial-time circuit to achieve sublinear qubit count for a classically-hard factoring problem. We thus believe that factoring such numbers has the potential to be the most concretely efficient classically-verifiable proof of quantumness currently known.
Our circuit builds on the quantum algorithm for squarefree decomposition discovered by Li, Peng, Du, and Suter (Nature Scientific Reports 2012), which relies on computing the Jacobi symbol in quantum superposition. The technical core of our contribution is a new space-efficient quantum algorithm to compute the Jacobi symbol of $A$ mod $B$, in the regime where $B$ is classical and much larger than $A$. Our circuit for computing the Jacobi symbol generalizes to related problems such as computing the greatest common divisor and modular inverses, and thus could be of independent interest.
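For reference, the classical computation that the circuit evaluates in superposition is the standard Jacobi symbol algorithm based on quadratic reciprocity; a short Python version (the paper's contribution is the space-efficient quantum circuit for it, not this classical routine):

```python
# Classical Jacobi symbol (a/b) for odd b > 0, via quadratic reciprocity.
def jacobi(a, b):
    assert b > 0 and b % 2 == 1
    a %= b
    result = 1
    while a:
        while a % 2 == 0:            # strip factors of two from a
            a //= 2
            if b % 8 in (3, 5):      # (2/b) = -1 iff b = +-3 (mod 8)
                result = -result
        a, b = b, a                  # reciprocity step
        if a % 4 == 3 and b % 4 == 3:
            result = -result
        a %= b
    return result if b == 1 else 0   # symbol is 0 when gcd(a, b) > 1

assert jacobi(1001, 9907) == -1
```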
Rewardable Naysayer Proofs
Combining verifiable computation with optimistic approaches is a promising direction to scale blockchain applications.
The basic idea consists of saving computations by avoiding the verification of proofs unless there are complaints.
A key tool for designing systems in the above direction was recently proposed by Seres, Glaeser and Bonneau [FC'24], who formalized the concept of a naysayer proof: a proof that is efficient to verify and disproves another proof that is more demanding to verify.
In this work, we discuss the need to reward naysayer provers, the risks deriving from front-running attacks, and the failures of generic approaches that try to defeat them.
Next, we introduce the concept of verifiable delayed naysayer proofs and show a construction leveraging proofs of sequential work, without relying on any additional infrastructure.
Breaking the 1/λ-Rate Barrier for Arithmetic Garbling
Garbled circuits, introduced in the seminal work of Yao (FOCS 1986), have received considerable attention in the boolean setting due to their efficiency and application to round-efficient secure computation. In contrast, arithmetic garbling schemes have received much less scrutiny. The main efficiency measure of garbling schemes is their rate, defined as the bit size of each gate's output divided by the size of the (amortized) garbled gate. Despite recent progress, state-of-the-art garbling schemes for arithmetic circuits suffer from important limitations: all existing schemes are either restricted to $B$-bounded integer arithmetic circuits (a computational model where the arithmetic is performed over $\mathbb{Z}$ and correctness is only guaranteed if no intermediate computation exceeds the bound $B$) and achieve constant rate only for very large bounds $B = 2^{\Omega(\lambda^3)}$, or have a rate at most $O(1/\lambda)$ otherwise, where $\lambda$ denotes a security parameter. In this work, we improve this state of affairs in both settings.
- As our main contribution, we introduce the first arithmetic garbling scheme over modular rings $\mathbb{Z}_B$ with rate $O(\log\lambda/\lambda)$, breaking for the first time the $1/\lambda$-rate barrier for modular arithmetic garbling. Our construction relies on the power-DDH assumption.
- As a secondary contribution, we introduce a new arithmetic garbling scheme for $B$-bounded integer arithmetic that achieves a constant rate for bounds $B$ as low as $2^{O(\lambda)}$. Our construction relies on a new non-standard KDM-security assumption on Paillier encryption with small exponents.
How to Trace Viral Content in End-to-End Encrypted Messaging
We study the problem of combating *viral* misinformation campaigns in end-to-end encrypted (E2EE) messaging systems such as WhatsApp. We propose a new notion of Hop Tracking Signatures (HTS) that allows for tracing originators of messages that have been propagated on long forwarding paths (i.e., gone viral), while preserving anonymity of everyone else. We define security for HTS against malicious servers.
We present both negative and positive results for HTS: on the one hand, we show that HTS does not admit succinct constructions if tracing and anonymity thresholds differ by exactly one "hop". On the other hand, by allowing for a larger gap between tracing and anonymity thresholds, we can build succinct HTS schemes where the signature size does not grow with the forwarding path. Our positive result relies on streaming algorithms and strong cryptographic assumptions.
Prior works on tracing within E2EE messaging systems either do not achieve security against malicious servers or focus only on tracing originators of pre-defined banned content.
Foundations of Platform-Assisted Auctions
Today, many auctions are carried out with the help of intermediary platforms like Google and eBay. These platforms serve as a rendezvous point for buyers and sellers, and charge a fee for their services. We refer to such auctions as platform-assisted auctions. Traditionally, the auction theory literature mainly focuses on designing auctions that incentivize buyers to bid truthfully, assuming that the platform always faithfully implements the auction. In practice, however, platforms have been found to manipulate auctions to earn more profit, resulting in high-profile antitrust lawsuits.
We propose a new model for studying platform-assisted auctions in the permissionless setting, where anyone can register and participate in the auction. We explore whether it is possible to design a dream auction in this new model, such that honest behavior is the utility-maximizing strategy for each individual buyer, the platform, the seller, as well as platform-seller or platform-buyer coalitions. Through a collection of feasibility and infeasibility results, we carefully characterize the mathematical landscape of platform-assisted auctions.
Interestingly, our work reveals exciting connections between cryptography and mechanism design. We show how cryptography can aid the design of an efficient platform-assisted auction with dream properties. Although a line of works has also used multi-party computation (MPC) or the blockchain to remove the reliance on a trusted auctioneer, our work is distinct in nature in several dimensions. First, we initiate a systematic exploration of the game-theoretic implications when the service providers (e.g., nodes that implement the MPC or blockchain protocol) are strategic and can collude with sellers or buyers. Second, we observe that the full simulation paradigm used in the standard MPC literature is too stringent and leads to high asymptotic costs. Specifically, because every player has a different private outcome in an auction protocol, to the best of our knowledge, running any generic MPC protocol among the players would incur at least $n^2$ total cost, where $n$ is the number of buyers. We propose a new notion of simulation called {\it utility-dominated emulation} that is sufficient for guaranteeing the game-theoretic properties needed in an auction. Under this new notion of simulation, we show how to design auction protocols with quasilinear efficiency, which gives an $n$-fold improvement over any generic approach.
Truncation Untangled: Scaling Fixed-Point Arithmetic for Privacy-Preserving Machine Learning to Large Models and Datasets
Fixed Point Arithmetic (FPA) is widely used in Privacy-Preserving Machine Learning (PPML) to efficiently handle decimal values. However, repeated multiplications in FPA can lead to overflow, as the fractional part doubles in size with each multiplication. To address this, truncation is applied post-multiplication to maintain precision. Various truncation schemes based on Secure Multiparty Computation (MPC) exist, but trade-offs between accuracy and efficiency in PPML models and datasets remain underexplored. In this work, we analyze and consolidate different truncation approaches from the MPC literature.
We conduct the first large-scale systematic evaluation of PPML inference accuracy across truncation schemes, ring sizes, neural network architectures, and datasets. Our study provides clear guidelines for selecting the optimal truncation scheme and parameters for PPML inference. All evaluations are implemented in the open-source HPMPC MPC framework, facilitating future research and adoption.
Beyond our large-scale evaluation, we also present improved constructions for each truncation scheme, achieving up to a threefold reduction in communication and round complexity over existing schemes. Additionally, we introduce optimizations tailored for PPML, such as strategically fusing different neural network layers. This leads to a mixed-truncation scheme that balances truncation costs with accuracy, eliminating communication overhead in the online phase while matching the accuracy of plaintext floating-point PyTorch inference for VGG-16 on the ImageNet dataset.
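To make the truncation problem concrete: in fixed-point arithmetic a real $x$ is encoded as $\lfloor x\cdot 2^f \rceil$, a product of two encodings carries $2f$ fractional bits, and truncation shifts $f$ of them away. A plaintext Python sketch of this local operation (the paper's subject is its secure, secret-shared variants, whose probabilistic errors are not modeled here):

```python
# Plaintext fixed-point encode/multiply/truncate (what MPC truncation emulates).
F = 13                         # fractional bits, an illustrative PPML choice

def encode(x: float) -> int:
    return round(x * (1 << F))

def decode(v: int) -> float:
    return v / (1 << F)

def fp_mul(a: int, b: int) -> int:
    # The product has 2F fractional bits; shift F of them away (truncation).
    return (a * b) >> F

x, y = encode(1.5), encode(-2.25)
print(decode(fp_mul(x, y)))    # -3.375, up to ~2^-F truncation error
```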
Cryptographic Commitments on Anonymizable Data
Local Differential Privacy (LDP) mechanisms consist of (locally) adding controlled noise to data in order to protect the privacy of their owner. In this paper, we introduce a new cryptographic primitive called an LDP commitment. Usually, a commitment ensures that the committed value cannot be modified before it is revealed. In the case of an LDP commitment, however, the value is revealed after being perturbed by an LDP mechanism. Opening an LDP commitment therefore requires a proof that the mechanism has been correctly applied to the value, to ensure that the value is still usable for statistical purposes. We also present a security model for this primitive, in which we define the hiding and binding properties. Finally, we present a concrete scheme for an LDP staircase mechanism (generalizing the randomized response technique), based on classical cryptographic tools and standard assumptions. We provide an implementation in Rust that demonstrates its practical efficiency (the generation of a commitment requires just a few milliseconds). On the application side, we show how our primitive can be used to simultaneously ensure privacy, usability, and traceability of medical data used for statistical studies in an open science context. We consider a scenario where a hospital provides sensitive patients' data signed by doctors to a research center after it has been anonymized, so that the research center can verify both the provenance of the data (i.e., verify the doctors' signatures even though the data has been noised) and that the data has been correctly anonymized (i.e., is usable even though it has been anonymized).
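As background, the randomized response technique that the staircase mechanism generalizes is a one-line LDP mechanism: report the true bit with probability $e^\varepsilon/(e^\varepsilon+1)$, else flip it. A Python sketch of the perturbation alone (the commitment and opening proof, which are the paper's contribution, are not shown):

```python
# Randomized response: the basic LDP mechanism generalized by the staircase
# mechanism in the paper (perturbation only; no commitment layer).
import math, secrets

def randomized_response(bit: int, eps: float) -> int:
    p_true = math.exp(eps) / (math.exp(eps) + 1)   # probability of truth
    keep = secrets.randbelow(10**9) < int(p_true * 10**9)
    return bit if keep else 1 - bit

# An analyst debiases the reported frequency f_hat of 1s via
#   pi_hat = (f_hat - (1 - p_true)) / (2 * p_true - 1).
print(randomized_response(1, eps=1.0))
```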
State Machine Replication Among Strangers, Fast and Self-Sufficient
A set of unacquainted parties, some of which may misbehave, communicate with each other over an unauthenticated and unreliable gossip network. They wish to jointly replicate a state machine $\Pi$ so that each one of them has fair access to its operation. Specifically, assuming parties' computational power is measured as queries to an oracle machine $H(\cdot)$, parties can issue symbols to the state machine in proportion to their queries to $H(\cdot)$ at a given fixed rate. Moreover, if such access to the state machine is provided continuously in expected constant time installments we qualify it as fast fairness.
A state machine replication (SMR) protocol in this permissionless setting is expected to offer consistency across parties and reliably process all symbols that honest parties wish to add to it in a timely manner despite continuously fluctuating participation and in the presence of an adversary who commands less than half of the total queries to $H(\cdot)$ per unit of time.
A number of protocols strive to offer the above guarantee together with fast settlement --- notably, the Bitcoin blockchain offers a protocol that settles against Byzantine adversaries in polylogarithmic rounds, while fairness only holds in a fail-stop adversarial model (due to the fact that Byzantine behavior can bias access to the state machine in the adversary's favor). In this work, we put forth the first Byzantine-resilient protocol solving SMR in this setting with both expected-constant-time settlement and fast fairness. Furthermore, our protocol is self-sufficient in the sense of performing its own timekeeping while tolerating an adaptively fluctuating set of parties.
Breaktooth: Breaking Security and Privacy in Bluetooth Power-Saving Mode
With the increasing demand for Bluetooth devices, various Bluetooth devices support a power-saving mode to reduce power consumption. One of the features of the power-saving mode is that the Bluetooth sessions among devices are temporarily disconnected or are close to being disconnected. Prior works have shown that the power-saving mode is vulnerable to denial-of-sleep (DoSL) attacks that interfere with a Bluetooth device's transition to the power-saving mode, thereby increasing its power consumption. However, to the best of our knowledge, no prior work has analyzed vulnerabilities or attacks on the state after transitioning to the power-saving mode.
To address this issue, we present an attack that abuses two novel vulnerabilities in sleep mode, which is one of the Bluetooth power-saving modes, to break Bluetooth sessions. We name the attack Breaktooth. The attack is the first to abuse the vulnerabilities as an entry point to hijack Bluetooth sessions between victims. The attack also allows overwriting the link key between the victims using the hijacked session, enabling arbitrary command injection on the victims. Furthermore, while many prior attacks assume that attackers can forcibly disconnect the Bluetooth session using methods such as jamming to launch their attacks, our attack does not require such assumptions, making it more realistic.
In this paper, we present the root causes of the Breaktooth attack and their impact. We also provide the technical details of how attackers can secretly detect the sleep mode of their victims. The attackers can easily recognize the state of the victim's Bluetooth session remotely using a standard Linux command. Additionally, we develop a low-cost toolkit to perform our attack and confirm the effectiveness of our attack. Then, we evaluate the attack on 17 types of commodity Bluetooth keyboards, mice and audio devices that support the sleep mode and show that the attack poses a serious threat to Bluetooth devices supporting the sleep mode. To prevent our attack, we present defenses and their proof-of-concept. We responsibly disclosed our findings to the Bluetooth SIG. We also released the toolkit as open-source.
Synergy: A Lightweight Block Cipher with Variable Bit Rotation Feistel Network
Synergy is a lightweight block cipher designed for resource-constrained environments such as IoT devices, embedded systems, and mobile applications. Built around a 16-round Feistel network, Synergy uses 8 independent pseudorandom number generators (PRNGs) to ensure strong diffusion and confusion through the generation of per-block unique round keys. With a 1024-bit key and a 64-bit block size, Synergy mitigates vulnerabilities to ML-based cryptanalysis by using a large key size in combination with key- and data-dependent bit rotations, which reduce statistical biases and increase unpredictability. By utilizing 32-bit arithmetic for efficient processing, Synergy achieves high throughput, low latency, and low power consumption, providing performance and security for applications where both are critical.
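To illustrate the general mechanism described above (and only that), here is a toy Feistel round with a key- and data-dependent rotation in Python. This is not the Synergy specification: the round function, rotation rule, and key sizes below are invented for illustration.

```python
# Toy Feistel round with key- and data-dependent rotation (NOT Synergy itself).
MASK32 = 0xFFFFFFFF

def rotl32(x, r):
    r &= 31
    return ((x << r) | (x >> (32 - r))) & MASK32

def feistel_round(left, right, round_key):
    rot = (round_key ^ right) & 31                 # key/data-dependent amount
    f = rotl32((right + round_key) & MASK32, rot) ^ round_key
    return right, left ^ f                         # swap halves, mix F output

L, R = 0x01234567, 0x89ABCDEF
for rk in (0xDEADBEEF, 0xCAFEBABE):                # stand-ins for PRNG round keys
    L, R = feistel_round(L, R, rk)
# The Feistel structure stays invertible regardless of the round function F.
```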
Improved Resultant Attack against Arithmetization-Oriented Primitives
In the last decade, the introduction of advanced cryptographic protocols operating on large finite fields $\mathbb{F}_q$ has raised the need for efficient cryptographic primitives in this setting, commonly referred to as Arithmetization-Oriented (AO). The cryptanalysis of AO hash functions is essentially done through the study of the CICO problem on the underlying permutation. Two recent works at Crypto 2024 and Asiacrypt 2024 managed to solve the CICO problem much more efficiently than traditional Gröbner basis methods, using respectively advanced Gröbner basis techniques and resultants.
In this paper, we propose an attack framework based on resultants that applies to a wide range of AO permutations and improves significantly upon these two recent works. Our improvements mainly come from an efficient reduction procedure that we propose and rigorously analyze, taking advantage of fast multivariate multiplication. We present the most efficient attacks on Griffin, Arion, Anemoi, and Rescue. We show that most variants of Griffin, Arion and Anemoi fail to reach the claimed security level. For the first time, we successfully break a parameter set of Rescue, namely its $512$-bit security variant. The presented theory and complexity estimates are backed up with experimental attacks. Notably, we practically find CICO solutions for $8$ out of $10$ rounds of Griffin, $11$ out of $20$ rounds of Anemoi, and $6$ out of $18$ rounds of Rescue, improving on the previous best practical attacks by $1$, $3$, and $1$ rounds, respectively.
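For readers new to the tool, a resultant eliminates one variable from two polynomial equations, reducing system solving to univariate root-finding. A toy sympy example with a generic system (not the actual Griffin/Anemoi/Rescue equations):

```python
# Variable elimination via resultants (toy system, not an AO permutation).
from sympy import symbols, resultant, factor

x, y = symbols('x y')
f = x**2 + y**2 - 1            # two toy constraints in two unknowns
g = x - y**2
r = resultant(f, g, y)         # eliminate y: univariate polynomial in x
print(factor(r))               # (x**2 + x - 1)**2; its roots lift to solutions
```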
Integral Resistance of Block Ciphers with Key Whitening by Modular Addition
Integral attacks exploit structural weaknesses in symmetric cryptographic primitives by analyzing how subsets of inputs propagate to produce outputs with specific algebraic properties. For the case of (XOR) key-alternating block ciphers using (independent) round keys, at ASIACRYPT'21, Hebborn et al. established the first non-trivial lower bounds on the number of rounds required to ensure integral resistance in a quite general sense. For the case of adding keys by modular addition, no security arguments have been known so far. Here, we present a unified framework for analyzing the integral resistance of primitives using (word-wise) modular addition for key whitening, allowing us not only to fill this gap in security arguments, but also to overcome the heavy computational cost inherent in the XOR-whitening case.
XHMQV: Better Efficiency and Stronger Security for Signal’s Initial Handshake based on HMQV
The Signal protocol is the most widely deployed end-to-end-encrypted messaging protocol. Its initial handshake protocol X3DH allows parties to asynchronously derive a shared session key without the need to be online simultaneously, while providing implicit authentication, forward secrecy, and a form of offline deniability. The X3DH protocol has been extensively studied in the cryptographic literature and is acclaimed for its strong "maximum-exposure" security guarantees, hedging against compromises of users' long-term keys and medium-term keys but also the ephemeral randomness used in the handshake. This maximum-exposure security is achieved by deriving keys from the concatenation of 3–4 Diffie–Hellman (DH) secrets, each combining two long-term, medium-term, or ephemeral DH shares.
Remarkably, X3DH's approach of concatenating plain DH combinations is sub-optimal, both in terms of maximum-exposure security and performance. Indeed, Krawczyk's well-known HMQV protocol (Crypto '05) is a high-performance, DH-based key exchange that provides strong security against long-term and ephemeral key compromise. One might hence wonder: why not base Signal's initial handshake on HMQV?
In this work, we study this question and show that a carefully adapted variant of HMQV, which we call XHMQV, indeed enables stronger security and efficiency while matching the constraints of Signal's initial handshake. Most notably, HMQV does not work as a drop-in replacement for X3DH, as the latter's asynchronicity requires the protocol to handle cases where one party runs out of ephemeral keys (pre-uploaded to the Signal server). Our XHMQV design hence augments HMQV with medium-term keys analogous to those used in X3DH. We prove that XHMQV provides security in all 3–4 compromise scenarios where X3DH does and additionally in 1–2 further scenarios, strengthening the handshake's maximum-exposure guarantees while using more efficient group operations. We further confirm that our XHMQV design achieves deniability guarantees comparable to X3DH. Our security model is the first to capture Signal's long-term key reuse between DH key exchange and signatures, which may be of independent interest.
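To fix ideas, X3DH's "concatenating plain DH combinations" means the session key is a KDF applied to several raw DH secrets. A Python sketch of a three-DH variant using real X25519/HKDF APIs (protocol framing, prekey signatures, and the optional fourth DH are omitted; labels are illustrative):

```python
# X3DH-style key derivation: KDF over concatenated DH secrets (sketch).
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

ik_a, ek_a = X25519PrivateKey.generate(), X25519PrivateKey.generate()   # Alice
ik_b, spk_b = X25519PrivateKey.generate(), X25519PrivateKey.generate()  # Bob

dh1 = ik_a.exchange(spk_b.public_key())    # long-term (A) with medium-term (B)
dh2 = ek_a.exchange(ik_b.public_key())     # ephemeral (A) with long-term (B)
dh3 = ek_a.exchange(spk_b.public_key())    # ephemeral (A) with medium-term (B)

session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"toy-x3dh").derive(dh1 + dh2 + dh3)
```

XHMQV instead combines the shares MQV-style inside a single group computation, which is the source of the efficiency gain over plain concatenation that the abstract refers to.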
The Algebraic One-More MISIS Problem and Applications to Threshold Signatures
This paper introduces a new one-more computational problem for lattice-based cryptography, which we refer to as the Algebraic One-More MISIS problem, or AOM-MISIS for short. It is a modification of the AOM-MLWE problem recently introduced by Espitau et al. (CRYPTO '24) to prove security of new two-round threshold signatures.
Our first main result establishes that the hardness of AOM-MISIS is implied by the hardness of MSIS and MLWE (with suitable parameters), both of which are standard assumptions for efficient lattice-based cryptography. We prove this result via a new generalization of a technique by Tessaro and Zhu (EUROCRYPT '23) used to prove hardness of a one-more problem for linear hash functions assuming their collision resistance, for which no clear lattice analogue was known. Since the hardness of AOM-MISIS implies the hardness of AOM-MLWE, our result resolves the main open question from the work of Espitau et al., who only provided a similar result for AOM-MLWE restricted to selective adversaries, a class which does not cover the use in threshold signatures.
Furthermore, we show that our novel formulation of AOM-MISIS offers a better interface to develop tighter security bounds for state-of-the-art two-round threshold signatures. We exemplify this by providing new proofs of security, assuming the hardness of MLWE and MSIS, for two threshold signatures, the one proposed in the same work by Espitau et al., as well as a recent construction by Chairattana-Apirom et al. (ASIACRYPT 2024). For the former scheme, we also show that it satisfies the strongest security notion (TS-UF-4) in the security hierarchy of Bellare et al. (CRYPTO '22), as a result of independent interest.
The Nonlinear Filter Model of Stream Cipher Redivivus
The nonlinear filter model is an old and well understood approach to the design of secure stream ciphers. Extensive research over several decades has shown how to attack stream ciphers based on this model and has identified the security properties required of the Boolean function used as the filtering function to resist such attacks. This led to the problem of constructing Boolean functions which provide adequate security and at the same time are efficient to implement. Unfortunately, over the last two decades no good solutions to this problem have appeared in the literature. The lack of good solutions has effectively led to the nonlinear filter model becoming more or less obsolete. This is a big loss to the cryptographic design toolkit, since the great advantages of the nonlinear filter model are its simplicity, well understood security and the potential to provide low-cost solutions for hardware-oriented stream ciphers. In this paper we construct balanced functions on an odd number $n\geq 5$ of variables with the following provable properties: linear bias equal to $2^{-\lfloor n/2\rfloor -1}$, algebraic degree equal to $2^{\lfloor \log_2\lfloor n/2\rfloor \rfloor}$, algebraic immunity at least $\lceil (n-1)/4\rceil$, fast algebraic immunity at least $1+\lceil (n-1)/4\rceil$, and can be implemented using $O(n)$ NAND gates. The functions are obtained from a simple modification of the well known class of Maiorana-McFarland bent functions. By appropriately choosing $n$ and the length $L$ of the linear feedback shift register, we show that it is possible to obtain examples of stream ciphers which are $\kappa$-bit secure against known types of attacks for various values of $\kappa$. We provide concrete proposals for $\kappa=80,128,160,192,224$ and $256$. For the $80$-bit, $128$-bit, and $256$-bit security levels, the circuits for the corresponding stream ciphers require about 1743.5, 2771.5, and 5607.5 NAND gates respectively. For the $80$-bit and $128$-bit security levels, the gate count estimates compare quite well to the famous ciphers Trivium and Grain-128a respectively, while for the $256$-bit security level, we do not know of any other stream cipher design with such a low gate count.
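As context for the model itself, a nonlinear filter generator clocks an LFSR and outputs one keystream bit per step by applying a Boolean filtering function to selected state bits. A toy Python sketch (register length, taps, and the majority filter are illustrative; the paper's proposal uses modified Maiorana-McFarland functions with concrete $n$ and $L$):

```python
# Toy nonlinear filter generator: LFSR state filtered by a Boolean function.
def filter_keystream(state, taps, inputs, filter_fn, nbits):
    """state: list of 0/1 bits; taps: feedback positions;
    inputs: state positions fed to the filtering function."""
    out = []
    for _ in range(nbits):
        out.append(filter_fn([state[i] for i in inputs]))
        fb = 0
        for t in taps:                  # XOR of tapped positions
            fb ^= state[t]
        state = [fb] + state[:-1]       # shift register, insert feedback
    return out

maj = lambda v: (v[0] & v[1]) ^ (v[0] & v[2]) ^ (v[1] & v[2])  # toy filter
ks = filter_keystream([1, 0, 0, 1, 0, 1, 1], taps=[0, 6],
                      inputs=[1, 3, 5], filter_fn=maj, nbits=16)
print(ks)
```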
Designated-Verifier SNARGs with One Group Element
We revisit the question of minimizing the proof length of designated-verifier succinct non-interactive arguments (dv-SNARGs) in the generic group model. Barta et al. (Crypto 2020) constructed such dv-SNARGs with inverse-polynomial soundness in which the proof consists of only two group elements. For negligible soundness, all previous constructions required a super-constant number of group elements.
We show that one group element suffices for negligible soundness. Concretely, we obtain dv-SNARGs (in fact, dv-SNARKs) with $2^{-\tau}$ soundness where proofs consist of one element of a generic group $\mathbb G$ and $O(\tau)$ additional bits. In particular, the proof length in group elements is constant even with $1/|\mathbb G|$ soundness error.
In more concrete terms, compared to the best known SNARGs using bilinear groups, we get dv-SNARGs with roughly $2$x shorter proofs (with $2^{-80}$ soundness at a $128$-bit security level). We are not aware of any practically feasible proof systems that achieve similar succinctness, even fully interactive or heuristic ones.
Our technical approach is based on a novel combination of techniques for trapdoor hash functions and group-based homomorphic secret sharing with linear multi-prover interactive proofs.
A Complete Characterization of One-More Assumptions In the Algebraic Group Model
One-more problems like One-More Discrete Logarithm (OMDL) and One-More Diffie--Hellman (OMDH) have found wide use in cryptography, due to their ability to naturally model security definitions for interactive primitives like blind signatures and oblivious PRF. Furthermore, a generalization of OMDH called Threshold OMDH (TOMDH) has proven useful for building threshold versions of interactive protocols. However, due to their complexity it is often unclear how hard such problems actually are, leading cryptographers to analyze them in idealized models like the Generic Group Model (GGM) and Algebraic Group Model (AGM). In this work we give a complete characterization of known group-based one-more problems in the AGM, using the $Q$-DL hierarchy of assumptions defined in the work of Bauer, Fuchsbauer and Loss (CRYPTO '20).
1. Regarding (T)OMDH, we show (T)OMDH is part of the $Q$-DL hierarchy in the AGM; in particular, $Q$-OMDH is equivalent to $Q$-DL. Along the way we find and repair a flaw in the original GGM hardness proof of TOMDH, thereby giving the first correct proof that TOMDH is hard in the GGM.
2. Regarding OMDL, we show the $Q$-OMDL problems constitute an infinite hierarchy of problems in the AGM incomparable to the $Q$-DL hierarchy; that is, $Q$-OMDL is separate from $Q'$-OMDL if $Q' \neq Q$, and also separate from $Q'$-DL unless $Q = Q' = 0$.
Compiled Nonlocal Games from any Trapdoor Claw-Free Function
A recent work of Kalai et al. (STOC 2023) shows how to compile any multi-player nonlocal game into a protocol with a single computationally-bounded prover. Subsequent works have built on this to develop new cryptographic protocols, where a completely classical client can verify the validity of quantum computations done by a quantum server — classical verification of quantum computations, for short. Their compiler relies on the existence of quantum fully homomorphic encryption.
In this work, we propose a new compiler for transforming nonlocal games into single-prover protocols.
Our compiler is based on a novel blind classical delegation of quantum computation protocol, constructed within the framework of measurement-based quantum computation. The latter construction combines a new blind remote state preparation protocol with a generalization of the universal blind quantum computation protocol by Broadbent, Fitzsimons, and Kashefi (FOCS 2009).
These two constructions may also be of independent interest, as the techniques employed could enable the blind construction of more specific quantum states, and the generalized framework allows for the use of blind quantum computation in a broader range of applications, particularly in quantum cryptographic protocols.
The compiler can be instantiated assuming the existence of any plain trapdoor claw-free function.
Leveraging results by Natarajan and Zhang (FOCS 2023) on compiled nonlocal games, our work implies the existence of new protocols for classically verifying quantum computations based on a broader range of computational assumptions, potentially weaker than those previously known.
One-way multilinear functions of the second order with linear shifts
We introduce and analyze a novel class of binary operations on finite-dimensional vector spaces over a field $K$, defined by second-order multilinear expressions with linear shifts. These operations generate polynomials whose degree increases linearly with each iterated application, while the number of distinct monomials grows combinatorially. We demonstrate that, despite being non-associative and non-commutative in general, these operations exhibit power associativity and internal commutativity when iterated on a single vector. This allows for well-defined exponentiation $a^n$. Crucially, the absence of a simple closed-form expression for $a^n$ suggests a one-way property: computing $a^n$ from $a$ and $n$ is straightforward, but recovering $n$ from $a^n$ (the Discrete Iteration Problem) appears computationally hard. We propose a Diffie–Hellman-like key exchange protocol utilizing these properties over finite fields, defining an Algebraic Diffie–Hellman Problem (ADHP). The proposed structures are of interest for cryptographic primitives, algebraic dynamics, and computational algebra.
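Since power associativity makes $a^{i}\circ a^{j}=a^{i+j}$ hold on the powers of a single element, exponentiation can use ordinary square-and-multiply, which is all the proposed key exchange needs. A generic Python sketch with the operation left as a placeholder (the paper's concrete multilinear operation is not reproduced):

```python
# Square-and-multiply exponentiation under a power-associative operation `op`.
def power(a, n, op):
    assert n >= 1
    result, base = None, a
    while n:
        if n & 1:
            result = base if result is None else op(result, base)
        base = op(base, base)
        n >>= 1
    return result

# DH-like flow: Alice publishes power(a, m, op), Bob publishes power(a, k, op);
# each then computes power(<received>, <own exponent>, op) = a^(m*k).
```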
Orient Express: Using Frobenius to Express Oriented Isogenies
In this paper we study supersingular elliptic curves primitively oriented by an imaginary quadratic order, where the orientation is determined by an endomorphism that factors through the Frobenius isogeny. In this way, we partly recycle one of the main features of CSIDH, namely the fact that the Frobenius orientation can be represented for free. This leads to the most efficient family of ideal-class group actions in a range where the discriminant is significantly larger than the field characteristic $p$. Moreover, if we orient with a non-maximal order $\mathcal{O} \subset \mathbb{Q}(\sqrt{-p})$ and we assume that it is feasible to compute the ideal-class group of the maximal order, then also the ideal-class group of $\mathcal{O}$ is known and we recover the central feature of SCALLOP-like constructions.
We propose two variants of our scheme. In the first one, the orientation is by a suborder of the form $\mathbb{Z}[f\sqrt{-p}]$ for some $f$ coprime to $p$, so this is similar to SCALLOP. In the second one, inspired by the work of Chenu and Smith, the orientation is by an order of the form $\mathbb{Z}[\sqrt{-dp}]$ where $d$ is square-free and not a multiple of $p$. We give practical ways of generating parameters, together with a proof-of-concept SageMath implementation of both variants, which shows the effectiveness of our construction.
A Quasi-polynomial Time Algorithm for the Extrapolated Dihedral Coset Problem over Power-of-Two Moduli
The Learning With Errors (LWE) problem, introduced by Regev (STOC'05), is one of the fundamental problems in lattice-based cryptography, believed to be hard even for quantum adversaries. Regev (FOCS'02) showed that LWE reduces to the quantum Dihedral Coset Problem (DCP). Later, Brakerski, Kirshanova, Stehlé and Wen (PKC'18) showed that LWE reduces to a generalization known as the Extrapolated Dihedral Coset Problem (EDCP). We present a quasi-polynomial time quantum algorithm for EDCP over power-of-two moduli using a quasi-polynomial number of samples, which also applies to the SLWE problem defined by Chen, Liu, and Zhandry (Eurocrypt'22). Our EDCP algorithm can be viewed as a provable variant of the "Simon-meets-Kuperberg" algorithm introduced by Bonnetain and Naya-Plasencia (Asiacrypt'18), adapted to the EDCP setting. We stress that our algorithm does not affect the security of LWE with standard parameters, as the reduction from standard LWE to EDCP limits the number of samples to be polynomial.
Silent Circuit Relinearisation: Sublinear-Size (Boolean and Arithmetic) Garbled Circuits from DCR
We introduce a general template for building garbled circuits with low communication, under the decisional composite residuosity (DCR) assumption. For the case of layered Boolean circuits, we can garble a circuit of size $s$ with communication proportional to $O(s/\log\log s)$ bits, plus an additive factor that is polynomial in the security parameter. For layered arithmetic circuits with $B$-bounded integer computation, we obtain a similar result: the garbled arithmetic circuit has size $O(s/\log\log s) \cdot (\lambda + \log B)$ bits, where $\lambda$ is the security parameter. These are the first constructions of general-purpose, garbled circuits with sublinear size, without relying on heavy tools like indistinguishability obfuscation or attribute-based and fully homomorphic encryption.
To achieve these results, our main technical tool is a new construction of a form of homomorphic secret sharing where some of the inputs are semi-private, that is, known to one of the evaluating parties. Through a new relinearisation technique that allows performing arbitrary additions and multiplications on semi-private shares, we build such an HSS scheme that supports evaluating any function of the form $C(y) \cdot C'(x)$, where $C$ is any polynomially-sized circuit applied to the semi-private input $y$, and $C'$ is a restricted-multiplication (or NC1) circuit applied to the private input $x$. This significantly broadens the expressiveness of previously known HSS constructions.
Privacy and Security of FIDO2 Revisited
We revisit the privacy and security analyses of FIDO2, a widely deployed standard for passwordless authentication on the Web. We discuss previous works and conclude that each of them has at least one of the following limitations:
(i) impractical trusted setup assumptions,
(ii) security models that are inadequate in light of the state of the art in practical attacks,
(iii) not analyzing FIDO2 as a whole, especially for its privacy guarantees.
Our work addresses these gaps and proposes revised security models for privacy and authentication. Equipped with our new models, we analyze FIDO2 modularly and focus on its component protocols, WebAuthn and CTAP2, clarifying their exact security guarantees. In particular, our results, for the first time, establish privacy guarantees for FIDO2 as a whole. Furthermore, we suggest minor modifications that can help FIDO2 provably meet stronger privacy and authentication definitions and withstand known and novel attacks.
Fully Homomorphic Encryption for Cyclotomic Prime Moduli
This paper presents a Generalized BFV (GBFV) fully homomorphic encryption scheme that encrypts plaintext spaces of the form $\mathbb{Z}[x]/(\Phi_m(x), t(x))$ with $\Phi_m(x)$ the $m$-th cyclotomic polynomial and $t(x)$ an arbitrary polynomial. GBFV encompasses both BFV where $t(x) = p$ is a constant, and the CLPX scheme (CT-RSA 2018) where $m = 2^k$ and $t(x) = x - b$ is a linear polynomial. The latter can encrypt a single huge integer modulo $\Phi_m(b)$, has much lower noise growth than BFV, but it is not known to be efficiently bootstrappable.
We show that by a clever choice of $m$ and higher degree polynomial $t(x)$, our scheme combines the SIMD capabilities of BFV with the low noise growth of CLPX, whilst still being efficiently bootstrappable. Moreover, we present parameter families that natively accommodate packed plaintext spaces defined by a large cyclotomic prime, such as the Fermat prime $\Phi_2(2^{16}) = 2^{16} + 1$ and the Goldilocks prime $\Phi_6(2^{32}) = 2^{64} - 2^{32} + 1$. These primes are often used in homomorphic encryption applications and zero-knowledge proof systems.
Due to the lower noise growth, GBFV can evaluate much deeper circuits than native BFV in the same ring dimension. As a result, we can either evaluate larger circuits or work with smaller ring dimensions. In particular, we can natively bootstrap GBFV at 128-bit security already at ring dimension $n = 2^{14}$, which was impossible before. We implemented the GBFV scheme on top of the SEAL library and achieve a latency of only 2 seconds to bootstrap a ciphertext encrypting up to 8192 elements modulo $2^{16} + 1$.
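As a quick sanity check (plain Python, ours) of the cyclotomic prime examples above, the identities $\Phi_2(x) = x + 1$ and $\Phi_6(x) = x^2 - x + 1$ can be evaluated directly:

```python
# Verify the cyclotomic identities behind the two example primes.
def phi2(x):
    return x + 1

def phi6(x):
    return x * x - x + 1

assert phi2(2**16) == 2**16 + 1                # Fermat prime 65537
assert phi6(2**32) == 2**64 - 2**32 + 1        # Goldilocks prime
print(phi2(2**16), phi6(2**32))
```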
Finding and Protecting the Weakest Link - On Side-Channel Attacks on y in Masked ML-DSA
NIST standardized ML-KEM and ML-DSA as post-quantum key encapsulation and digital signature schemes, respectively. Both schemes have already seen analysis with respect to side-channels, and the first fully masked implementations of ML-DSA have been published. Previous attacks focused on unprotected implementations or assumed only hiding countermeasures to be in place. Thus, in contrast to ML-KEM, the threat of side-channel attacks against protected ML-DSA implementations is largely unclear.
In this work, we analyze the side-channel vulnerability of masked ML-DSA implementations. We first systematically assess the vulnerability of several potential points of attacks in different leakage models using information theory. Then, we explain how an adversary could launch first, second, and higher-order attacks using a recently presented framework for side-channel information in lattice-based schemes. In this context, we propose a filtering technique that allows the framework to solve for the secret key from a large number of hints; this had previously been prevented by numerical instabilities. We simulate the presented attacks and discuss the relation to the information-theoretic analysis.
Finally, we carry out relevant attacks on physical devices, discuss recent masked implementations, and instantiate a countermeasure against the most threatening attacks. The countermeasure mitigates the attacks with the highest noise-tolerance while having very little overhead. The results on the physical devices validate our simulations.
Unlocking Mix-Basis Potential: Geometric Approach for Combined Attacks
This paper explores the possibility of using different bases in Beyne's geometric approach, a flexibility that was theoretically proposed in Beyne's doctoral thesis but has not been adopted in real cryptanalytic attacks despite its potential to unify multiple attack paradigms.
We revisit three bases from previous geometric approach papers and extend them to four extra ones determined by simple rules. With the final seven bases, we can obtain $7^{2d}$ different basis-based attacks in the $d$-th-order spaces, where the \textit{order} is defined as the number of messages used in one sample during the attack. All these attacks can be studied in unified automatic search methods.
We provide several demonstrative applications of this framework.
First, we show that by choosing an alternative pair of bases, the divisibility property analyzed by Beyne and Verbauwhede with ultrametric integral cryptanalysis (ASIACRYPT 2024) can be interpreted as a single element of the transition matrix rather than as a linear combination of its elements; thus, the property can be studied in the same unified way as other geometric-approach applications.
Second, we revisit the multiple-of-$2^t$ property (EUROCRYPT 2017) under our new framework and present new multiple-of-$2^t$ distinguishers for \skinny-64 that surpass the state-of-the-art results from the perspectives of both first-order and second-order attacks.
Finally, we give a closed formula for differential-linear approximations without any assumptions, even confirming that the two differential-linear approximations of \simeck-32 and \simeck-48 found by Hadipour \textit{et al.} are deterministic regardless of the concrete key values.
At the Top of the Hypercube -- Better Size-Time Tradeoffs for Hash-Based Signatures
Hash-based signatures have been studied for decades and have recently gained renewed attention due to their post-quantum security. At the core of the most prominent hash-based signature schemes, XMSS and SPHINCS+, lies a one-time signature scheme based on hash chains due to Winternitz. In this scheme, messages are encoded into vectors of positions (i.e., vertices in a hypercube) in the hash chains, and the signature contains the respective chain elements. The encoding process is crucial for the efficiency and security of this construction. In particular, it determines a tradeoff between signature size and computational costs. Researchers have been trying to improve this size-time tradeoff curve for decades, but all improvements have been arguably marginal.
In this work, we revisit the encoding process with the goal of minimizing verification costs and signature sizes. As our first result, we present a novel lower bound for the verification cost given a fixed signature size. Our lower bound is the first to directly apply to general encodings including randomized, non-uniform, and non-injective ones.
Then, we present new encodings and prove their security. Inspired by our lower bound, these encodings follow a counterintuitive approach: we map messages non-uniformly into the top layers of a much bigger hypercube than needed, while the encoding itself has (hard-to-find) collisions. With this, we get a 20% to 40% improvement in the verification cost of the signature while keeping the same security level and the same size.
Our constructions can be directly plugged into any signature scheme based on hash chains, which includes SPHINCS+ and XMSS.
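To make the chain-walking tradeoff concrete, here is a toy Winternitz-style one-time signature of our own (simplified: no checksum, so chains can be walked forward and it is forgeable; the hash, $W$, and $L$ are arbitrary). Verification walks the remaining $W{-}1{-}d_i$ steps of each chain, so encodings biased toward high digits, i.e., the top of the hypercube, make verification cheaper:

```python
# Toy Winternitz-style hash-chain OTS (illustrative only; real schemes
# need a checksum or the encodings studied in the paper).
import hashlib, os

W = 16      # chain length (digits in 0..W-1)
L = 8       # number of chains

def H(x, steps):
    for _ in range(steps):
        x = hashlib.sha256(x).digest()
    return x

def keygen():
    sk = [os.urandom(32) for _ in range(L)]
    pk = [H(s, W - 1) for s in sk]     # end of each chain
    return sk, pk

def encode(msg):
    # toy encoding: hash the message into L digits in 0..W-1
    d = hashlib.sha256(msg).digest()
    return [b % W for b in d[:L]]

def sign(sk, msg):
    return [H(s, d) for s, d in zip(sk, encode(msg))]

def verify(pk, msg, sig):
    # walk the remaining W-1-d steps of each chain; large digits
    # (the "top of the hypercube") mean fewer steps here
    return all(H(s, W - 1 - d) == p
               for s, d, p in zip(sig, encode(msg), pk))

sk, pk = keygen()
sig = sign(sk, b"hello")
assert verify(pk, b"hello", sig)
```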
Constrained Verifiable Random Functions Without Obfuscation and Friends
Constrained verifiable random functions (CVRFs) are PRFs that unify the properties of verifiable and constrained PRFs. Since they were introduced concurrently by Fuchsbauer and Chandran-Raghuraman-Vinayagamurthy in 2014, it has been an open problem to construct CVRFs without heavy machinery such as multilinear maps, obfuscation, or functional encryption.
We solve this problem by constructing a prefix-constrained verifiable PRF that does not rely on the aforementioned assumptions. Essentially, our construction is a verifiable version of the Goldreich-Goldwasser-Micali (GGM) PRF. To achieve verifiability, we leverage degree-2 algebraic PRGs and bilinear groups. In short, proofs consist of group elements whose exponents are intermediate values of the GGM PRF. These can be verified using pairings since the underlying PRG has degree 2.
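For reference, the classical GGM PRF underlying the construction can be sketched in a few lines (our toy instantiation uses SHA-256 as the length-doubling PRG; the paper instead uses degree-2 algebraic PRGs so that the intermediate node values can be placed in group exponents and checked with pairings):

```python
# Minimal GGM PRF sketch: walk a binary tree, applying the left or
# right half of a length-doubling PRG G(k) = G0(k) || G1(k).
import hashlib

def G(key, bit):
    # left/right PRG half, modeled here with domain separation
    return hashlib.sha256(bytes([bit]) + key).digest()

def ggm_prf(key, x_bits):
    """Evaluate the GGM PRF on the input bits."""
    node = key
    for b in x_bits:
        node = G(node, b)   # these intermediate nodes are what the
                            # paper's proofs expose in the exponent
    return node

print(ggm_prf(b"\x00" * 32, [1, 0, 1, 1]).hex())
```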
We prove the selective security of our construction under the Decisional Square Diffie-Hellman (DSDH) assumption and a new assumption, which we dub recursive Decisional Diffie-Hellman (recursive DDH).
We prove that recursive DDH holds in the generic group model, assuming the hardness of the Multivariate Quadratic (MQ) problem and a new variant thereof, which we call MQ+.
Last, in terms of applications, we observe that our CVRF is also an exponent (C)VRF in the plain model. Exponent VRFs were recently introduced by Boneh et al. (Eurocrypt'25) with various applications to threshold cryptography in mind. In addition, we give further applications of prefix-CVRFs in the blockchain setting, namely stake-pooling and compressible randomness beacons.
When Threshold Meets Anamorphic Signatures: What is Possible and What is Not!
Anamorphic signatures allow covert communication through signatures in environments where encryption is restricted. They enable trusted recipients with a double key to extract hidden messages while the signature remains indistinguishable from a fresh and regular one. However, the traditional notion of anamorphic signatures suffers from vulnerabilities, particularly when a single recipient or sender is compromised, exposing all hidden messages and providing undeniable proof that citizens are part of the anamorphic exchange.
To address these limitations, we explore a threshold-based approach to distribute trust among multiple recipients, preventing adversaries from decrypting anamorphic messages even if some recipients are compromised. Our first contribution is the formalization of the notion of \emph{threshold-recipient anamorphic signatures}, where decryption is possible only through collaboration among a subset of recipients.
We then explore a \emph{stronger model} where the dictator controls the key generation process, through which it learns all secret keys and how citizens store cryptographic keys. A real-world example of this model is a dictator providing citizens with electronic identity documents (eIDs) and blocking all other usage of cryptography. We demonstrate that anamorphic communication is still possible even in such a scenario. Our construction is secure against post-quantum adversaries and relies on no computational assumptions beyond the random oracle model.
Finally, we show an \emph{impossibility result} for encoding anamorphic messages with a threshold-sender model when using many existing threshold signature schemes and the adversary is part of the signing group. Our work outlines both the possibilities and limitations of extending anamorphic signatures with threshold cryptography, offering new insights into improving the security and privacy of individuals under authoritarian regimes.
On Deniable Authentication against Malicious Verifiers
Deniable authentication allows Alice to authenticate a message to Bob, while retaining deniability towards third parties. In particular, not even Bob can convince a third party that Alice authenticated that message. Clearly, in this setting Bob should not be considered trustworthy. Furthermore, deniable authentication is necessary for deniable key exchange, as explicitly desired by Signal and off-the-record (OTR) messaging.
In this work we focus on (publicly verifiable) designated verifier signatures (DVS), which are a widely used primitive to achieve deniable authentication. We propose a definition of deniability against malicious verifiers for DVS. We give a construction that achieves this notion in the random oracle (RO) model. Moreover, we show that our notion is not achievable in the standard model with a concrete attack; thereby giving a non-contrived example of the RO heuristic failing.
All previous protocols that claim to achieve deniable authentication against malicious verifiers (like Signal's initial handshake protocols X3DH and PQXDH) rely on the Extended Knowledge of Diffie–Hellman (EKDH) assumption. We show that this assumption is broken and that these protocols do not achieve deniability against malicious verifiers.
Constant-Size zk-SNARKs in ROM from Falsifiable Assumptions
We prove that the seminal KZG polynomial commitment scheme (PCS) is black-box extractable under a simple falsifiable assumption, ARSDH. To create an interactive argument, we construct a compiler that combines a black-box extractable non-interactive PCS and a polynomial IOP (PIOP). The compiler incurs a minor cost per committed polynomial. Applying the Fiat-Shamir transformation, we obtain slightly less efficient variants of well-known PIOP-based zk-SNARKs, such as Plonk, that are knowledge-sound in the ROM under the ARSDH assumption. Importantly, there is no need for idealized group models or knowledge assumptions. This results in the first known zk-SNARKs in the ROM from falsifiable assumptions with both an efficient prover and a constant-size argument.
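For context, the opening check of the KZG PCS, to which the black-box extractability result applies, is the standard one: for a commitment $C = g^{p(\tau)}$ and a claimed evaluation $p(z) = v$, the prover sends $\pi = g^{q(\tau)}$ with $q(X) = \frac{p(X) - v}{X - z}$, and the verifier accepts iff
\[
e\big(C \cdot g^{-v},\, g\big) \;=\; e\big(\pi,\, g^{\tau} \cdot g^{-z}\big),
\]
which holds because $p(\tau) - v = q(\tau)\,(\tau - z)$.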
Designing QC-MDPC Public Key Encryption Schemes with Niederreiter's Construction and a Bit Flipping Decoder with Bounded DFR
Post-quantum public key encryption (PKE) schemes employing quasi-cyclic (QC) sparse parity-check matrix codes are enjoying significant success, thanks to their good performance profile and reductions to believed-hard problems from coding theory. However, using QC sparse parity-check matrix codes (i.e., QC-MDPC/LDPC codes) comes with a significant challenge: determining their decoding failure rate (DFR) in closed form, as decoding failures are known to leak information on the private key. Furthermore, there is no formal proof that changing the (constant) rate of the employed codes does not change the nature of the underlying hard problem, nor is the hardness of decoding random QC codes formally related to the decoding hardness of random codes.
In this work, we address and solve these challenges, providing a novel closed-form estimation of the decoding failure rate for three-iteration bit-flipping decoders, and proving computational equivalences among the aforementioned problems. This allows us to systematically design a Niederreiter-style QC-MDPC PKE that enjoys the flexibility granted by freely choosing the code rate and benefits from the significantly improved tightness of our DFR bound.
We report a $2\times$ improvement in public key and ciphertext size w.r.t. the previous best cryptosystem design with closed-form DFR bounds, LEDAcrypt-KEM. Furthermore, we show that our PKE parameters yield a 30% smaller public key and $2.6\times$ smaller ciphertexts w.r.t. HQC, the code-based key encapsulation method recently selected by the US NIST for standardization.
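To make the decoder family concrete, here is a toy bit-flipping decoder (our own miniature sketch, not the paper's three-iteration decoder or its parameters; the tiny parity-check matrix is crafted so that a single error is always corrected):

```python
# Toy bit-flipping decoder (illustrative only; real QC-MDPC decoders use
# large sparse circulant blocks and carefully tuned flipping thresholds).
import numpy as np

# 4 x 6 parity-check matrix: each column is supported on a distinct pair
# of rows, which guarantees the single-error correction asserted below.
cols = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
H = np.zeros((4, 6), dtype=int)
for j, rows in enumerate(cols):
    H[list(rows), j] = 1

def bit_flip(H, syndrome, iters=3, threshold=2):
    """Flip every bit whose unsatisfied-parity-check count reaches the
    threshold; iterate a fixed number of times (here three)."""
    e = np.zeros(H.shape[1], dtype=int)
    s = syndrome.copy()
    for _ in range(iters):
        if not s.any():
            break                      # decoding succeeded
        upc = H.T @ s                  # unsatisfied checks per bit
        e ^= (upc >= threshold).astype(int)
        s = (syndrome + H @ e) % 2     # residual syndrome
    return e, s

err = np.zeros(6, dtype=int)
err[4] = 1
e_hat, s = bit_flip(H, (H @ err) % 2)
assert not s.any() and (e_hat == err).all()
print("recovered error vector:", e_hat)
```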
MultiCent: Secure and Scalable Computation of Centrality Measures on Multilayer Graphs
As real-world networks such as social networks and computer networks are often complex and distributed, modeling them as multilayer graphs is gaining popularity. For instance, when studying social interactions across platforms like LinkedIn, Facebook, TikTok, and Bluesky, users may be connected on several of these platforms. To identify important nodes/users, the platforms might wish to analyze user interactions using, e.g., centrality measures that account for connections across all platforms. This raises the challenge for platforms of performing such computations while protecting their user data, both to safeguard their own business and to comply with data protection laws. Hence, solutions are needed that allow secure computation on a multilayer graph that is distributed among mutually distrusting parties, while keeping each party's data hidden.
The work of Asharov et al. (WWW'17) addresses this problem by designing secure solutions for centrality measures that involve computing the truncated Katz score and reach score on multilayer graphs. However, we identify several limitations in that work which render the solution inefficient or even infeasible for realistic networks with significantly more than $10$k nodes. We address these limitations by designing secure solutions that are significantly more efficient and scalable. In more detail, given that real-world graphs are known to be sparse, our solutions move away from an expensive matrix-based representation to a more efficient list-based representation. We design novel, secure, and efficient solutions for computing centrality measures and prove their correctness. Our solutions drastically reduce the asymptotic complexity from the prohibitive $\mathcal{O}(|\mathsf{V}|^2)$ of even the fastest solution by Asharov et al. down to $\mathcal{O}(|\mathsf{V}|\log |\mathsf{V}|)$, for $|\mathsf{V}|$ nodes. To design our solutions, we build upon the secure graph computation framework of Koti et al. (CCS'24), providing a novel framework with improved capabilities in multiple directions. Finally, we provide an end-to-end implementation of our secure graph analysis framework and establish concrete efficiency improvements over prior work, observing gains of several orders of magnitude.
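As a plaintext (entirely non-secure, our own) reference point, the truncated Katz score over an adjacency-list representation costs $\mathcal{O}(K \cdot |\mathsf{E}|)$ for $K$ truncation steps and $|\mathsf{E}|$ edges, which is what makes the list-based representation attractive for sparse graphs; the paper's contribution is computing such measures securely, which this sketch does not attempt:

```python
# Truncated Katz score on a sparse, list-based graph (plaintext sketch;
# alpha, K, and the toy graph are illustrative).
def truncated_katz(in_neighbors, alpha=0.1, K=3):
    """in_neighbors[v] lists the in-neighbors of v (multilayer edges
    merged into a single graph for simplicity)."""
    n = len(in_neighbors)
    walks = [1.0] * n            # weighted walks of length k ending at v
    score = [0.0] * n
    for _ in range(K):
        walks = [alpha * sum(walks[u] for u in in_neighbors[v])
                 for v in range(n)]
        score = [s + w for s, w in zip(score, walks)]
    return score

graph = [[1, 2], [2], [0], [0, 1, 2]]   # in-neighbor lists of 4 nodes
print(truncated_katz(graph))
```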
How to Recover the Full Plaintext of XCB
XCB, a tweakable enciphering mode, is part of IEEE Std. 1619.2 for shared storage media. We show that no version of XCB is secure, via three plaintext recovery attacks. A key observation is that XCB behaves like an LRW1-type tweakable block cipher for single-block messages, which lacks CCA security. The first attack targets one-block XCB, using three queries to recover the plaintext. The second one requires four queries to recover all but one block of the plaintext. The last one requires seven queries to recover the \emph{full} plaintext. The first attack applies to any scheme that follows the XCB structure, whereas the latter two attacks work on all versions of XCB, exploiting the separability of the underlying universal hash function. To address these flaws, we propose the XCB* structure, an improved version of XCB that adds only two XOR operations. We prove that XCB* is STPRP-secure when using AXU hash functions, SPRPs, and a secure random-IV-based stream cipher.
HiAE: A High-Throughput Authenticated Encryption Algorithm for Cross-Platform Efficiency
This paper addresses the critical challenges in designing cryptographic algorithms that achieve both high performance and cross-platform efficiency on ARM and x86 architectures, catering to the demanding requirements of next-generation communication systems, such as 6G and GPU/NPU interconnections. We propose HiAE, a high-throughput authenticated encryption algorithm optimized for performance exceeding 100 Gbps and designed to meet the stringent security requirements of future communication networks. HiAE leverages a stream cipher structure, integrating the AES round function for non-linear diffusion.
Our design achieves exceptional efficiency, with benchmark results from software implementations across various platforms showing over 340 Gbps on x86 processors and 180 Gbps on ARM devices in AEAD mode, making it the fastest AEAD solution on ARM chips and setting a new performance record on the latest x86 processors.
uKNIT: Breaking Round-alignment for Cipher Design -- Featuring uKNIT-BC, an Ultra Low-Latency Block Cipher
Automated cryptanalysis has gained a lot of traction and seen much success in the past decade, leading to new distinguishers or key-recovery attacks against various ciphers. We argue that the improved efficiency and usability of these new tools have been undervalued, especially for design processes. In this article, we break for the first time the classical iterative design paradigm for symmetric-key primitives, where constructions are built around the repetition of a round function. We instead propose a new design framework, called uKNIT, that allows a round-by-round, optimization-led, automated construction of primitives where each round can be entirely different from the others (the security/performance trade-off actually benefiting from this non-alignment).
Since this new design framework is non-trivial to instantiate, we further propose a method for SPN ciphers using a genetic algorithm that leverages advances in automated cryptanalysis: given a pool of good cipher candidates on $x$ rounds, our algorithm automatically generates and selects $(x+1)$-round candidates by evaluating their security and performance. We emphasize that our design pipeline is also the first to propose a fully automated design process, with completely integrated implementation and security analysis.
We finally exemplify our new design strategy on the important use-case of low-latency cryptography, by proposing the uKNIT-BC block cipher, together with a complete security analysis and benchmarks. Compared to the state-of-the-art in low-latency ciphers (PRINCEv2), uKNIT-BC improves on all crucial security and performance directions at the same time, reducing latency by 10%, while increasing resistance against classical differential/linear cryptanalysis by more than 10%. It also reduces area by 17% and energy consumption by 44% when fixing the latency of both ciphers. As a contribution of independent interest, we discovered a generalization of the Superposition-Tweakey (STK) construction for key schedules, unlocking its application to bit-oriented ciphers. We also discuss the benefits of uKNIT to many other possible use-cases.
Assessing the Impact of a Variant of MATZOV's Attack
The dual attacks on the Learning With Errors (LWE) problem are currently a subject of controversy. In particular, the results of [Matzov,2022], which claim to significantly lower the security level of CRYSTALS-Kyber, a lattice-based cryptosystem currently being standardized by NIST, are not widely accepted. The analysis behind their attack depends on a series of assumptions that, in certain scenarios, have been shown to contradict established theorems or well-tested heuristics [Ducas,Pulles,CRYPTO2023].
In this paper, we introduce a new dual lattice attack on LWE, drawing from ideas in coding theory. Our approach revisits the dual attack proposed by [Matzov,2022], replacing modulus switching with an efficient decoding algorithm. This decoding is achieved by generalizing polar codes over $\mathbb{Z}_{q}$, and we confirm their strong distortion properties through benchmarks. This modification enables a reduction from small-LWE to plain-LWE, with a notable decrease in the secret dimension. Additionally, we replace the enumeration step in the attack by assuming the secret is zero for the portion being enumerated, iterating this assumption over various choices for the enumeration part.
We analyze our attack without relying on the flawed independence assumptions used in [Matzov,2022], and we fully back up our analysis with experimental evidence.
Lastly, we assess the complexity of our attack on CRYSTALS-Kyber, showing that the security levels of Kyber-512/768/1024 are 3.5/11.9/12.3 bits below the NIST requirements (143/207/272 bits) in the same nearest-neighbor cost model as in the Kyber submission.
All in all, the cost of our attack matches, and in some cases even slightly beats, the complexities originally claimed by the attack of [Matzov,2022].
Crowhammer: Full Key Recovery Attack on Falcon with a Single Rowhammer Bit Flip
The Rowhammer attack is a fault-injection technique leveraging the density of RAM modules to trigger persistent hardware bit flips that can be used for probing or modifying protected data. In this paper, we show that Falcon, the hash-and-sign signature scheme over NTRU lattices selected by NIST for standardization, is vulnerable to an attack using Rowhammer.
Falcon's Gaussian sampler is the core component of its security, as it provably decorrelates the short basis used for signing from the generated signatures. Other schemes lacking this guarantee (such as NTRUSign, GGH, or more recently Peregrine) were proven insecure. However, performing efficient and secure lattice Gaussian sampling has proved to be a difficult task, fraught with numerous potential vulnerabilities. To avoid timing attacks, a common technique is to use distribution tables that are traversed to output a sample. The official Falcon implementation uses this technique, employing a hardcoded reverse cumulative distribution table (RCDT). Using Rowhammer, we target Falcon's RCDT to trigger a very small number of targeted bit flips, and prove that the resulting distribution is sufficiently skewed to mount a key recovery attack.
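For intuition, table-based Gaussian sampling in the spirit of an RCDT works roughly as follows (our toy table with made-up, monotonically decreasing values; Falcon's real hardcoded table is far more precise). Since each output is a count of comparisons against table entries, one flipped bit in one entry persistently skews the sampled distribution:

```python
# Toy RCDT-style sampler: z = number of table entries exceeding a
# uniform draw. Table values are illustrative, not Falcon's.
import random
from collections import Counter

rcdt = [0.62, 0.29, 0.10, 0.025, 0.004, 0.0004]  # ~ Pr[Z >= i+1]

def sample_z():
    u = random.random()
    return sum(1 for p in rcdt if u < p)

print(Counter(sample_z() for _ in range(100_000)))
```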
Namely, we show that a single targeted bit flip suffices to fully recover the signing key, given a few hundred million signatures, with more bit flips enabling key recovery with fewer signatures. Interestingly, the Nguyen–Regev parallelepiped learning attack that broke NTRUSign, GGH and Peregrine does not readily adapt to this setting unless the number of bit flips is very large. However, we show that combining it with principal component analysis (PCA) yields a practical attack.
This vulnerability can also be triggered with other types of persistent fault attacks on memory like optical faults. We suggest cheap countermeasures that largely mitigate it, including rejecting signatures that are unusually short.
On the Optimal Communication Complexity of Error-Correcting Multi-Server PIR
An $\ell$-server Private Information Retrieval (PIR) scheme enables a client to retrieve a data item from a database replicated among $\ell$ servers while hiding the identity of the item. It is called $b$-error-correcting if a client can correctly compute the data item even in the presence of $b$ malicious servers. It is known that $b$-error correction is possible if and only if $\ell>2b$. In this paper, we first prove that if error correction is perfect, i.e., the client always corrects errors, the minimum communication cost of $b$-error-correcting $\ell$-server PIR is asymptotically equal to that of regular $(\ell-2b)$-server PIR as a function of the database size $n$. Secondly, we formalize a relaxed notion of statistical $b$-error-correcting PIR, which allows a non-zero failure probability. We show that as a function of $n$, the minimum communication cost of statistical $b$-error-correcting $\ell$-server PIR is asymptotically equal to that of a regular $(\ell-b)$-server one, which is at most that of the $(\ell-2b)$-server one. Our main technical contribution is a generic construction of statistical $b$-error-correcting $\ell$-server PIR for any $\ell>2b$ from regular $(\ell-b)$-server PIR. We can therefore reduce the problem of determining the optimal communication complexity of error-correcting PIR to determining that of regular PIR. In particular, our construction, instantiated with state-of-the-art PIR schemes and combined with the known lower bound for single-server PIR, results in a separation in terms of communication cost between perfect and statistical error correction for any $\ell>2b$.
New Generic Constructions of Error-Correcting PIR and Efficient Instantiations
A $b$-error-correcting $m$-server Private Information Retrieval (PIR) protocol enables a client to privately retrieve a data item of a database from $m$ servers even in the presence of $b$ malicious servers. List-decodable PIR is a generalization of error-correcting PIR to achieve a smaller number of servers at the cost of giving up unique decoding. Previous constructions of error-correcting and list-decodable PIR have exponential computational complexity in $m$ or cannot achieve sub-polynomial communication complexity $n^{o(1)}$, where $n$ is the database size. Recently, Zhang, Wang and Wang (ASIACCS 2022) presented a non-explicit construction of error-correcting PIR with $n^{o(1)}$ communication and polynomial computational overhead in $m$. However, their protocol requires the number of servers to be larger than the minimum one $m=2b+1$ and they left it as an open problem to reduce it. As for list-decodable PIR, there is no construction with $n^{o(1)}$ communication.
In this paper, we propose new generic constructions of error-correcting and list-decodable PIR from any one-round regular PIR. Our constructions increase computational complexity only by a polynomial factor in $m$ while the previous generic constructions incur $\binom{m}{b}$ multiplicative overheads. Instantiated with the best-known protocols, our construction provides for the first time an explicit error-correcting PIR protocol with $n^{o(1)}$ communication, which reduces the number of servers of the protocol by Zhang, Wang and Wang (ASIACCS 2022). For sufficiently large $b$, we also show the existence of $b$-error-correcting PIR with $n^{o(1)}$ communication achieving the minimum number of servers, by allowing for two rounds of interaction. Furthermore, we show an extension to list-decodable PIR and obtain for the first time a protocol with $n^{o(1)}$ communication. Other instantiations improve the communication complexity of the state-of-the-art $t$-private protocols in which $t$ servers may collude. Along the way, we formalize the notion of \textit{locally surjective map families}, which generalize perfect hash families and may be of independent interest.
Side-Channel Power Trace Dataset for Kyber Pair-Pointwise Multiplication on Cortex-M4
We present a dataset of side-channel power measurements captured during pair-pointwise multiplication in the decapsulation procedure of the Kyber Key Encapsulation Mechanism (KEM). The dataset targets the pair-pointwise multiplication step in the NTT domain, a key computational component of Kyber. The dataset was collected using the reference implementation from the PQClean project. We hope the dataset supports research on ``classical'' power analysis and deep-learning-based side-channel attacks on post-quantum cryptography (PQC).
Multi-Party Private Set Operations from Predicative Zero-Sharing
Typical protocols in the multi-party private set operations (MPSO) setting enable $m > 2$ parties to perform certain secure computation on the intersection or union of their private sets, realizing a very limited range of MPSO functionalities. Most works in this field focus on just one or two specific functionalities, resulting in a large variety of isolated schemes and a lack of a unified framework in MPSO research. In this work, we present an MPSO framework, which allows $m$ parties, each holding a set, to securely compute any set formulas (arbitrary compositions of a finite number of binary set operations, including intersection, union and difference) on their private sets. Our framework is highly versatile and can be instantiated to accommodate a broad spectrum of MPSO functionalities. To the best of our knowledge, this is the first framework to achieve such a level of flexibility and generality in MPSO, without relying on generic secure multi-party computation (MPC) techniques.
Our framework exhibits favorable theoretical and practical performance. With computation and communication complexity scaling linearly with the set size $n$, it achieves optimal complexity that is on par with the naive solution for widely used functionalities, such as multi-party private set intersection (MPSI), MPSI with cardinality output (MPSI-card), and MPSI with cardinality and sum (MPSI-card-sum), for the first time in the standard semi-honest model. Furthermore, the instantiations of our framework, which primarily rely on symmetric-key techniques, provide efficient protocols for MPSI, MPSI-card, MPSI-card-sum, and multi-party private set union (MPSU), with online performance that either surpasses or matches the state of the art in the standard semi-honest model.
At the technical core of our framework is a newly introduced primitive called predicative zero-sharing. This primitive captures the universality of a number of MPC protocols and is composable. We believe it may be of independent interest.
Scalable Collaborative zk-SNARK and Its Application to Fully Distributed Proof Delegation
Collaborative zk-SNARK (USENIX'22) allows multiple parties to compute a proof over a distributed witness. It enables a promising application called proof delegation (USENIX'23), where a client delegates the tedious proof generation to many servers while ensuring no one can learn the witness. Unfortunately, existing works suffer from significant efficiency issues and face challenges when scaling to complex applications.
In this work, we introduce the first scalable collaborative zk-SNARK for general circuits, built upon HyperPlonk (Eurocrypt'23). Our result overcomes existing barriers, offering a fully distributed workload and small communication. For data-parallel circuits, the communication overhead is even sublinear.
We propose several efficient collaborative and distributed protocols for multivariate primitives, which form the main building blocks of our results and may be of independent interest.
In addition, we design a new permutation check protocol for Plonk arithmetization, which is MPC-friendly and suitable for collaborative zk-SNARKs.
With 128 servers jointly generating a proof for a circuit of $2^{21}$ gates, our experiments demonstrate an over $30\times$ speedup and reduced RAM requirements compared to a local prover, while the witness remains private. Previous works were unable to achieve such savings in both time and memory. Moreover, our protocol performs well under various network conditions, making it practical for real-world applications.
Rubato: Provably Post-Quantum Secure and Batched Asynchronous Randomness Beacon
Distributed Randomness Beacons (DRBs) provide secure, unbiased random numbers for decentralized systems. However, existing protocols face critical limitations. Most rely on cryptographic assumptions that are vulnerable to quantum attacks, risking long-term security in asynchronous networks where unbounded delays may give attackers time to exploit these weaknesses. Many achieve low beacon generation rates, often below 100 beacons per minute in moderate-scale networks (e.g., Spurt IEEE S&P'22), hindering their use in applications requiring high-throughput randomness. Additionally, traditional Verifiable Secret Sharing (VSS)-based DRBs, using a share-consensus-reconstruct paradigm, are unsuitable for asynchronous networks due to circular dependencies between beacon generation and consensus. Given these limitations, we propose Rubato, the first provably post-quantum secure DRB for asynchronous environments, incorporating a lattice-based batched Asynchronous Verifiable Secret Sharing scheme (bAVSS-PQ). Rubato supports batching of $\mathcal{O}(\lambda^2)$ secrets with communication complexity $\mathcal{O}(\lambda n^3 \log n)$ and tolerates Byzantine faults in up to one-third of the nodes. Integrated with DAG-based consensus protocols like Bullshark or Tusk, its epoch-staggered architecture resolves circular dependencies, enabling efficient and secure randomness generation. Evaluations across 10 to 50 nodes show that Rubato generates between 5200 and 350 beacons per minute (decreasing as the network grows), with per-beacon latencies of 11.60 to 96.37 milliseconds, achieving a consensus throughput of 186,088 transactions per second with a latency of 16.78 seconds at 30 nodes. Rubato offers robust post-quantum security and high performance for small-to-medium-scale decentralized systems.
New Exchanged Boomerang Distinguishers for 5-Round AES
A block cipher should be indistinguishable from a random permutation to an attacker; the existence of a distinguisher is therefore significant, and cryptanalysis of reduced-round variants of block ciphers is important in cryptographic design. AES is the most widely used block cipher, and currently the best-known distinguisher for 5-round AES has a data and time complexity of $2^{29.95}$ with a success probability of 55\%. In this paper, we propose the massive exchanged boomerang and multiple exchanged boomerang distinguishers for 5-round AES. The massive exchanged boomerang distinguisher utilizes the probability that the truncated difference of the returned plaintext pairs is such that, in each of its diagonals, the 4 bytes are either all active or all inactive. Although this probability is very high for a random permutation, we significantly reduce it using the friend-pairs technique, while keeping the boomerang probability unchanged. This enables us to distinguish a block cipher from a random permutation. The massive exchanged boomerang distinguisher for 5-round AES has a data and time complexity of $2^{31}$ with a success probability of 70\%. The multiple exchanged boomerang distinguisher is constructed by clustering four trails that have the same input and output truncated differences, enabling it to distinguish a block cipher from a random permutation with lower complexity and higher success probability. The multiple exchanged boomerang distinguisher for 5-round AES has a data and time complexity of $2^{27.1}$ and a success probability of 79.6\%, which is a new best-known result for secret-key distinguishers on 5-round AES.
Key-Homomorphic Computations for RAM: Fully Succinct Randomised Encodings and More
We propose a new method to construct a public-key encryption scheme, where one can homomorphically transform a ciphertext encrypted under a key $\mathbf{x}$ into a ciphertext under $(P, P(\mathbf{x}))$, for any polynomial-time RAM program $P: \mathbf{x} \mapsto \mathbf{y}$ with runtime $T$ and memory $L$. Combined with other lattice techniques, this allows us to construct:
1) Succinct randomised encodings for RAM programs with encoder complexity $(|\mathbf{x}| + |\mathbf{y}|)\cdot \text{poly}(\log T, \log L)$ and rate-1 encodings.
2) Laconic function evaluation for RAM programs, with encoder runtime bounded by $(|\mathbf{x}| + |\mathbf{y}|)\cdot\text{poly}(\log T, \log L)$ and rate-1 encodings.
3) Key-policy attribute-based encryption for RAM programs, with ciphertexts of size $O(T)$. The same scheme can be converted to the register setting, obtaining linear CRS size in the number of parties.
All of our schemes rely on the hardness of the \emph{decomposed learning with errors} (LWE) problem, along with other standard computational assumptions on lattices. The decomposed LWE problem can be interpreted as postulating the circular-security of a natural lattice-based public-key encryption scheme. To gain confidence in the assumption, we show that it is implied by the hardness of the succinct LWE problem of Wee (CRYPTO'24).
Delegatable ABE with $O(1)$ Delegations from Witness Encryption
Delegatable Attribute-Based Encryption (DABE) is a well-known generalization of ABE, proposed to mirror organizational hierarchies. We design a DABE scheme from witness encryption and other simple assumptions. Our construction does not rely on random oracles, and we provide a black-box reduction to the polynomial hardness of the underlying assumptions.
How to Share an NP Statement or Combiners for Zero-Knowledge Proofs
In Crypto'19, Goyal, Jain, and Sahai (GJS) introduced the elegant notion of *secret-sharing of an NP statement* (NPSS). Roughly speaking, a $t$-out-of-$n$ secret sharing of an NP statement is a reduction that maps an instance-witness pair to $n$ instance-witness pairs such that any subset of $(t-1)$ of them reveals no information about the original witness, while any subset of $t$ allows full recovery of it. Although the notion was formulated for general $t \leq n$, the only existing construction (due to GJS) applies solely to the case where $t = n$ and provides only computational privacy. In this paper, we further explore NPSS and present the following contributions.
1. **Definition.** We revisit the notion of NPSS by formulating a new definition of information-theoretically secure NPSS. This notion serves as a cryptographic analogue of standard NP-reductions and can be compiled into the GJS definition using any one-way function.
2. **Construction.** We construct information-theoretic $t$-out-of-$n$ NPSS for any values of $t\leq n$ with complexity polynomial in $n$. Along the way, we present a new notion of secure multiparty computation that may be of independent interest.
3. **Applications.** Our NPSS framework enables the *non-interactive combination* of $n$ instances of zero-knowledge proofs, where only $t_s$ of them are sound and only $t_z$ are zero-knowledge, provided that $t_s + t_z > n$. Our combiner preserves various desirable properties, such as the succinctness of the proof. Building on this, we establish the following results under the minimal assumption of one-way functions:
(i) *Standard NIZK implies NIZK in the Multi-String Model* (Groth and Ostrovsky, J. Cryptology, 2014), where security holds as long as a majority of the $n$ common reference strings were honestly generated. Previously, such a transformation was only known in the common random string model, where the reference string is uniformly distributed.
(ii) A *Designated-Prover NIZK in the Multi-String Model*, achieving a strong form of two-round Multi-Verifier Zero-Knowledge in the honest-majority setting.
(iii) A *three-round secure multiparty computation protocol* for general functions in the honest-majority setting. The round complexity of this protocol is optimal, resolving a line of research that previously relied on stronger assumptions (Asharov et al., Eurocrypt'12; Gordon et al., Crypto'15; Ananth et al., Crypto'18; Badrinarayanan et al., Asiacrypt'20; Applebaum et al., TCC'22).
Fully Adaptive Schnorr Threshold Signatures
We prove adaptive security of a simple three-round threshold Schnorr signature scheme, which we call Sparkle+. The standard notion of security for threshold signatures considers a static adversary, one who must declare which parties are corrupt at the beginning of the protocol. The stronger adaptive adversary can corrupt parties at any time and learn their state. This notion is natural and practical, yet most schemes in the literature have not been proven to meet it.
In this paper, we demonstrate that Sparkle+ achieves several levels of security based on different corruption models and assumptions. To begin with, Sparkle+ is statically secure under minimal assumptions: the discrete logarithm assumption (DL) and the random oracle model (ROM). If an adaptive adversary corrupts fewer than $t/2$ out of a threshold of $t+1$ signers, then Sparkle+ is adaptively secure under a weaker variant of the one-more discrete logarithm assumption (AOMDL) in the ROM. Finally, we prove that Sparkle+ achieves full adaptive security, with a corruption threshold of $t$, under AOMDL in the algebraic group model (AGM) with random oracles. Importantly, we show adaptive security without requiring secure erasures. Ours is the first proof achieving full adaptive security without exponential tightness loss for any threshold Schnorr signature scheme.
Blind Signatures from Proofs of Inequality
Blind signatures are an important primitive for privacy-preserving technologies. To date, highly efficient pairing-free constructions rely on the random oracle model and, additionally, on a strong assumption, such as an interactive assumption or the algebraic group model.
In contrast, for ordinary signatures we know many efficient constructions that rely on the random oracle model and standard assumptions. In this work, we develop techniques to close this gap. Compared to the most efficient pairing-free AGM-based blind signature by Crites et al. (Crypto 2023), our construction has a relative overhead of only a factor of $3\times$ in communication and $2\times$ in signature size, and it is provably secure in the random oracle model under the DDH assumption. With one additional move and one additional $\mathbb{Z}_p$ element, we also achieve one-more strong unforgeability.
Our construction is inspired by the recent works by Chairattana-Apirom, Tessaro, and Zhu (Crypto 2024) and Klooß, Reichle, and Wagner (Asiacrypt 2024), and we develop a tailored technique to circumvent the sources of inefficiency in their constructions. Concretely, we achieve signature and communication size of $192$ B and $608$ B, respectively.
Weave: Efficient and Expressive Oblivious Analytics at Scale
Many distributed analytics applications that are offloaded to the cloud operate on sensitive data. Even when the computations for such analytics workloads are confined to trusted hardware enclaves and all stored data and network communications are encrypted, several studies have shown that they are still vulnerable to access pattern attacks. Prior efforts towards preventing access pattern leakage often incur network and compute overheads that are logarithmic in dataset size, while also limiting the functionality of supported analytics jobs.
We present Weave, an efficient, expressive, and secure analytics platform that scales to large datasets. Weave employs a combination of noise injection and hardware memory isolation via enclave page caches to reduce the network and compute overheads of oblivious analytics to a constant factor. Weave also employs several optimizations and extensions that exploit dataset- and workload-specific properties to ensure performance at scale without compromising functionality. Our evaluations show that Weave reduces the end-to-end execution time of a wide range of analytics jobs on large real-world datasets by $4$--$10\times$ compared to the prior state of the art, while providing strong obliviousness guarantees.
Unbounded Distributed Broadcast Encryption and Registered ABE from Succinct LWE
We construct distributed broadcast encryption and registered attribute-based encryption (ABE) that support an arbitrary polynomial number of users from the succinct LWE assumption. Specifically, taking $\lambda$ to be the security parameter and $N$ the number of users, we obtain the following:
* We obtain a distributed broadcast encryption scheme where the size of the public parameters, user public/secret keys, and ciphertexts are optimal (i.e., have size $\mathsf{poly}(\lambda, \log N)$). Security relies on the $\mathsf{poly}(\lambda, \log N)$-succinct LWE assumption. Previously, this was only known from indistinguishability obfuscation or witness encryption. All constructions that did not rely on these general tools could only support an a priori bounded number of users.
* We obtain a key-policy registered ABE scheme that supports arbitrary bounded-depth Boolean circuit policies from the $\mathsf{poly}(\lambda, d, \log N)$-succinct LWE assumption in the random oracle model, where $d$ is the depth of the circuit computing the policy. The public parameters, user public/secret keys, and ciphertexts have size $\mathsf{poly}(\lambda, d, \log N)$, which are optimal up to the $\mathsf{poly}(d)$ factor. This is the first registered ABE scheme with nearly-optimal parameters. All previous schemes (including constructions based on indistinguishability obfuscation, witness encryption, or evasive LWE) either have ciphertexts that scale with the policy size and attribute length, or can only support a bounded number of users (with long public parameters and public keys that scale with the number of users).
Security of Operations on Random Numbers: A Review
Random numbers are often used in cryptographic algorithms, protocols, and several security and non-security applications. Such usages often apply arithmetic and Boolean operations to pseudorandom numbers, such as addition, XOR, NOT, bit shifts, and other operations, in order to achieve the desired amount of entropy and the desired level of security. In this paper, we review, study, and analyze the security properties of these operations on random numbers: do arithmetic, Boolean, and other related operations on cryptographically secure pseudorandom numbers yield cryptographically secure pseudorandom numbers, and do they preserve entropy or lead to its loss?
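As a small empirical illustration of the question above (our example, not the paper's analysis): XOR with an independent uniform bit preserves uniformity, whereas AND biases the output toward 0 and therefore loses entropy:

```python
# Compare the output distribution of XOR and AND on independent
# uniform bits (sample size is arbitrary).
import random
from collections import Counter

N = 100_000
xor_out = Counter(random.getrandbits(1) ^ random.getrandbits(1) for _ in range(N))
and_out = Counter(random.getrandbits(1) & random.getrandbits(1) for _ in range(N))
print("XOR:", xor_out)  # roughly 50/50 -> uniformity preserved
print("AND:", and_out)  # roughly 75/25 in favor of 0 -> entropy lost
```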
Rapidash: Atomic Swaps Secure under User-Miner Collusion
Cross-chain trading is fundamental to blockchains and Decentralized Finance (DeFi). A way to achieve such trading in a truly decentralized manner, i.e., without trusted third parties, is to use atomic swaps. However, recent works revealed that the Hashed Time-Lock Contract (HTLC), a key building block of existing atomic swaps, is entirely insecure in the presence of user-miner collusion. Specifically, a user can bribe the miners of the blockchain to help them cheat.
In this work, we give the first and rigorous formal treatment of fair trading on blockchains, where users and miners may enter arbitrary binding contracts on the side. We propose Rapidash, a new atomic swap protocol, and prove its incentive-compatibility in the presence of user-miner collusion. Specifically, we show that Rapidash satisfies a coalition-resistant Nash equilibrium absent external incentives. We give instantiations of Rapidash that are compatible with Bitcoin and Ethereum, and incur only minimal overheads in terms of costs for the users.
Committed Vector Oblivious Linear Evaluation and Its Applications
We introduce the notion of committed vector oblivious linear evaluation (C-VOLE), which allows a party holding a pre-committed vector to generate VOLE correlations with multiple parties on the committed value. It is a unifying tool that is useful in zero-knowledge proofs (ZKPs) of committed values, actively secure multi-party computation, private set intersection (PSI), and more.
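For reference, a VOLE correlation over a field $\mathbb{F}$ gives one party vectors $(\mathbf{u}, \mathbf{v})$ and the other a scalar $\Delta$ together with
\[
\mathbf{w} = \Delta \cdot \mathbf{u} + \mathbf{v} \in \mathbb{F}^n;
\]
in C-VOLE, per the notion described above, the role of $\mathbf{u}$ is played by the pre-committed vector, fixed across all executions.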
To achieve the best efficiency, we design a tailored commitment scheme and matching C-VOLE protocols, both based on the learning parity with noise assumption. In particular, exploiting the structures of the carefully designed LPN-based commitment minimizes the cost of ensuring consistency between the committed vector and VOLE correlation. As a result, we achieve a 28$\times$ improvement over the protocol proposed in prior work (Usenix 2021) that uses ZKP to prove the correct opening of the commitment. We also apply C-VOLE to design a PSI protocol that allows one server to run PSI repeatedly with multiple clients while ensuring that the same set is used across all executions. Compared with the state-of-the-art PSI (CCS 2024) with similar security requirements, our protocol reduces the communication overhead by a factor of 35$\times$.
Mix-Basis Geometric Approach to Boomerang Distinguishers
Differential cryptanalysis relies on assumptions like \textit{Markov ciphers} and \textit{hypothesis of stochastic equivalence}. The probability of a differential characteristic estimated by classical methods is the key-averaged probability under the two assumptions. However, the real probability can vary significantly between keys. Hence, tools for differential cryptanalysis in the fixed-key model are desirable. Recently, Beyne and Rijmen applied the geometric approach to differential cryptanalysis and proposed a systematic framework called \textit{quasi-differential} (CRYPTO 2022).
As a variant of differential cryptanalysis, boomerang attacks rely on similar assumptions, so it is important to study their probability in the fixed-key model as well. A direct extension of the quasi-differential framework to boomerang attacks leads to the quasi-$3$-differential framework (TIT 2024). However, such a straightforward approach is difficult to use in practical applications because there are too many quasi-$3$-differential trails.
We tackle this problem by applying the mix-basis style geometric approach (CRYPTO 2025) to the boomerang attacks and construct the quasi-boomerang framework. By choosing a suitable pair of bases, the boomerang probability can be computed by summing correlations of \textit{quasi-boomerang characteristics}. The transition matrix of the key-XOR operation is also a diagonal matrix; thus, the influence of keys can be analyzed in a similar way to the quasi-differential framework.
We apply the quasi-boomerang framework to \skinny-64 and \gift-64. For \skinny-64, we check and confirm 4 boomerang distinguishers with high probability (2 with probability 1 and 2 with probability $2^{-4}$) generated by Hadipour, Bagheri, and Song's tool (ToSC 2021/1), through an analysis of key dependencies and the probability calculation from \textit{quasi-boomerang characteristics}. For boomerangs with small probability or covering long rounds, we also propose a divide-and-conquer approach following the sandwich framework so that the quasi-boomerang framework can still be applied. After checking two boomerang distinguishers of \skinny-64 and one of \gift-64, we find that the 19-round distinguisher of \gift-64 that was previously considered invalid is in fact valid.
In addition, as a contribution of independent interest, we revisit Boura, Derbez, and Germon's work by extending the quasi-differential framework to the related-key scenario (ToSC 2025/1), and show an alternative way to derive the same formulas in their paper by regarding the key-XOR as a normal cipher component.
AprèsSQI: Extra Fast Verification for SQIsign Using Extension-Field Signing
We optimise the verification of the SQIsign signature scheme. By using field extensions in the signing procedure, we are able to significantly increase the amount of rational $2$-power torsion available in verification, which yields a substantial speed-up and, moreover, enables several further speed-ups at the level of curve arithmetic. We show that the synergy between these high-level and low-level improvements makes verification $2.07$ times faster, or up to $3.41$ times faster when using size-speed trade-offs, compared to the state of the art, without majorly degrading the performance of signing.
A Critique on Average-Case Noise Analysis in RLWE-Based Homomorphic Encryption
Homomorphic encryption schemes based on the Ring-Learning-with-Errors problem require accurate ciphertext noise analysis to ensure correctness and security. However, ring multiplications during homomorphic computations make the noise in the resulting ciphertexts difficult to characterize. Existing average-case noise analyses derive a bound on the noise either by assuming it follows a Gaussian distribution or by giving empirical formulae, with strong independence assumptions and the Central Limit Theorem applied extensively. In this work, we question the validity of these methods by showing that the noise exhibits a heavy-tailed distribution, via exact calculation of its variance and kurtosis for both independent and dependent noises. The heavy-tailedness suggests that the failure probability of bounds derived from these methods may not be negligible, and we experimentally demonstrate several cases where the noise growth is underestimated.
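A quick simulation (ours, illustrative) shows the effect at the heart of this critique: the product of two independent centered Gaussians, the basic building block of multiplication noise, has kurtosis 9 rather than the Gaussian value of 3, so Gaussian modeling underestimates its tails:

```python
# Empirical kurtosis of a product of two independent standard normals.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1_000_000) * rng.normal(size=1_000_000)
kurt = np.mean(x**4) / np.mean(x**2) ** 2
print(f"kurtosis ~ {kurt:.1f} (a Gaussian has kurtosis 3)")
```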
A Tool for Fast and Secure LWE Parameter Selection: the FHE case
The field of Fully Homomorphic Encryption (FHE) has seen many theoretical and computational advances in recent years, bringing the technology closer to practicality than ever before. For this reason, practitioners in related fields, such as machine learning, are increasingly interested in using FHE to provide privacy to their applications.
Despite this progress, selecting secure and efficient parameters for FHE remains a complex and challenging task due to the intricate interdependencies between parameters. In this work, we address this issue by providing a rigorous theoretical foundation for parameter selection for any LWE-based scheme, with a specific focus on FHE. Our approach starts with an in-depth analysis of lattice attacks on the LWE problem, deriving precise expressions for the most effective ones. Building on this, we introduce closed-form formulas that establish the relations among the LWE parameters.
In addition, we introduce a numerical method to enable the accurate selection of any configurable parameter to meet a desired security level.
Finally, we use our results to build a practical and efficient tool for researchers and practitioners deploying FHE and other LWE-based schemes in real-world applications, ensuring that our approach is both rigorous and efficient.
Tighter Adaptive IBEs and VRFs: Revisiting Waters' Artificial Abort
One of the most popular techniques for proving adaptive security of identity-based encryption (IBE) and verifiable random functions (VRF) is the partitioning technique. Currently, there are only two methods to relate the adversary's advantage and runtime $(\epsilon, {\sf T})$ to those of the reduction $(\epsilon_{\sf proof}, {\sf T}_{\sf proof})$ using this technique: One originates with Waters (Eurocrypt 2005), who introduced the famous artificial abort step to prove his IBE, achieving $(\epsilon_{\sf proof}, {\sf T}_{\sf proof}) = (O(\epsilon/Q), {\sf T} + O(Q^2/\epsilon^2))$, where $Q$ is the number of key queries. Bellare and Ristenpart (Eurocrypt 2009) provide an alternative analysis of the same scheme, removing the artificial abort step and resulting in $(\epsilon_{\sf proof}, {\sf T}_{\sf proof}) = (O(\epsilon^2/Q), {\sf T} + O(Q))$. Importantly, the current reductions all lose quadratically in $\epsilon$.
In this paper, we revisit this two-decade-old problem and analyze proofs based on the partitioning technique through a new lens. For instance, the Waters IBE can now be proven secure with $(\epsilon_{\sf proof}, {\sf T}_{\sf proof}) = (O(\epsilon^{3/2}/Q), {\sf T} + O(Q))$, breaking the quadratic dependence on $\epsilon$. At the core of our improvement is a finer estimation of the failure probability of the reduction in Waters' original proof relying on artificial abort. We use Bonferroni's inequality, a tunable inequality obtained by cutting off higher-order terms from the equality derived from the inclusion-exclusion principle.
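For concreteness, the Bonferroni inequalities referred to here state that for events $A_1, \dots, A_Q$ with $S_k = \sum_{i_1 < \cdots < i_k} \Pr[A_{i_1} \cap \cdots \cap A_{i_k}]$, truncating the inclusion-exclusion expansion after an odd number of terms gives an upper bound and after an even number a lower bound:
\[
\sum_{k=1}^{2j} (-1)^{k-1} S_k \;\le\; \Pr\Big[\bigcup_{i=1}^{Q} A_i\Big] \;\le\; \sum_{k=1}^{2j-1} (-1)^{k-1} S_k .
\]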
Our analysis not only improves the reduction of known constructions but also opens the door for new constructions. While a similar improvement to Waters IBE is possible for the lattice-based IBE by Agrawal, Boneh, and Boyen (Eurocrypt 2010), we can slightly tweak the so-called partitioning function in their construction, achieving $(\epsilon_{\sf proof}, {\sf T}_{\sf proof}) = (O(\epsilon/Q), {\sf T} + O(Q))$. This is a much better reduction than the previously known $ (O(\epsilon^3/Q^2), {\sf T} + O(Q))$. We also propose the first VRF with proof and verification key sizes sublinear in the security parameter under the standard $d$-LIN assumption, while simultaneously improving the reduction cost compared to all prior constructions.
Continuous Group-Key Agreement: Concurrent Updates without Pruning
Continuous Group Key Agreement (CGKA) is the primitive underlying secure group messaging.
It allows a large group of $N$ users to maintain a shared secret key that is frequently rotated by the group members in order to achieve forward secrecy and post-compromise security.
The group messaging scheme Messaging Layer Security (MLS) standardized by the IETF makes use of a CGKA called TreeKEM which arranges the $N$ group members in a binary tree.
Here, each node is associated with a public key, each user is assigned one of the leaves, and each user knows the corresponding secret keys on the path from their leaf to the root.
To update the key material known to them, a user need only replace the keys at $\log(N)$ nodes, which requires them to create and upload $\log(N)$ ciphertexts. Such updates must be processed sequentially by all users, which is impractical for large groups.
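To make the $\log(N)$ count concrete, here is a toy sketch (illustrative only; MLS uses left-balanced trees with their own indexing) of the "direct path" a member rewrites, in a complete binary tree stored heap-style:

# Toy sketch: nodes a member must re-key, in a complete binary tree stored
# heap-style (root at index 0, children of node i at 2i+1 and 2i+2).
def direct_path(leaf_index: int, num_leaves: int) -> list[int]:
    node = (num_leaves - 1) + leaf_index  # heap index of the member's leaf
    path = []
    while node > 0:
        path.append(node)
        node = (node - 1) // 2  # move to the parent
    path.append(0)  # the root
    return path

# For N = 8 members, an update touches 4 nodes: leaf, two inner nodes, root.
print(direct_path(leaf_index=5, num_leaves=8))  # [12, 5, 2, 0]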
To allow for concurrent updates, TreeKEM uses the ``propose and commit'' paradigm, where multiple users can concurrently propose to update (by just sampling a fresh leaf key), and a single user can then commit to all proposals at once.
Unfortunately, this process destroys the binary tree structure: the tree gets pruned, and some nodes must be ``blanked'' at the cost of increasing the in-degree of others, which makes the commit operation, as well as future commits, more costly.
In the worst case, the update cost (in terms of uploaded ciphertexts) per user can grow from $\log(N)$ to $\Omega(N)$.
In this work we provide two main contributions.
First, we show that MLS' communication complexity is bad not only in the worst case but also if the proposers and committers are chosen at random:
even if there is just one update proposal for every commit, the expected cost is already over $\sqrt{N}$, and it approaches $N$ as this ratio shifts towards more proposals.
Our second contribution is a new variant of propose and commit for TreeKEM which for moderate amounts of update proposals per commit provably achieves an update cost of $\Theta(\log(N))$ assuming the proposers and committers are chosen at random.
JANUS: Enhancing Asynchronous Common Subset with Trusted Hardware
Asynchronous common subset (ACS) has been extensively studied since the asynchronous Byzantine fault tolerance (BFT) framework was introduced by Ben-Or, Kelmer, and Rabin (BKR).
This line of work (e.g., HoneyBadgerBFT, BEAT, EPIC) uses parallel reliable broadcast (RBC) and asynchronous binary agreement (ABA) instances to reach agreement on a subset of proposed transactions.
In this paper, we advance the BKR paradigm by presenting Janus, the first hybrid ACS protocol leveraging trusted hardware components.
Janus is the first ACS protocol that tolerates a minority of Byzantine processes while achieving $O(n^2)$ message complexity.
Supported by trusted hardware components, we introduce a provable broadcast primitive to replace RBC and develop a resilient binary agreement protocol. Messages for concurrent agreement instances are aggregated into vectors. Our experimental results demonstrate significant performance improvements over predominant ACS constructions: a 92%+ increase compared to HoneyBadgerBFT and a 47%+ increase compared to BEAT.
Additionally, we provide a comparison with open-source hybrid BFT protocols that operate under a partially synchronous network, highlighting the performance gains over previous hybrid protocols that also tolerate a Byzantine minority (e.g., MinBFT and Damysus, by 49%+).
Trusted Hardware-Assisted Leaderless Byzantine Fault Tolerance Consensus
Byzantine Fault Tolerance (BFT) consensus protocols with trusted hardware assistance have been extensively explored for their improved resilience, tolerating more faulty processes. Nonetheless, the potential of trusted hardware has scarcely been investigated in leaderless BFT protocols. RedBelly is considered the first blockchain network whose consensus is based on a truly leaderless BFT algorithm. This paper proposes a trusted hardware-assisted leaderless BFT consensus protocol, offering a hybrid solution to the set BFT problem defined in the RedBelly blockchain. Drawing on previous studies, we present two crucial trusted services: the counter and the collector. Based on these two services, we introduce two primitives to formulate our leaderless BFT protocol: a hybrid verified reliable broadcast (VRB) protocol and a hybrid binary agreement. The hybrid VRB protocol enhances hybrid reliable broadcast by integrating a verification function, ensuring that a broadcast message is validated not only for authentication but also for the correctness of its content. Our hybrid BFT consensus integrates these broadcast protocols to deliver binary decisions on all proposals. We prove the correctness of the proposed hybrid protocol and demonstrate its enhanced performance in comparison to a prior trusted BFT protocol.
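The classic role of such trusted counter services (used, e.g., by MinBFT's USIG, mentioned above) is to make equivocation detectable: the trusted component binds every outgoing message to a fresh, monotonically increasing counter value. A toy sketch of this idea follows (illustrative only, not this paper's design; a real implementation would use signatures inside an enclave so that verifiers need only a public key):

import hmac, hashlib

# Toy trusted counter: certifies (counter, message) pairs with a key kept
# inside the trusted component and never reuses a counter value, so a faulty
# host cannot assign the same counter to two different messages.
class TrustedCounter:
    def __init__(self, secret_key: bytes):
        self._key = secret_key   # held inside the trusted hardware
        self._counter = 0        # monotonic, incremented on every call

    def certify(self, message: bytes) -> tuple[int, bytes]:
        self._counter += 1
        tag = hmac.new(self._key, self._counter.to_bytes(8, "big") + message,
                       hashlib.sha256).digest()
        return self._counter, tag

def verify(key: bytes, counter: int, message: bytes, tag: bytes) -> bool:
    expected = hmac.new(key, counter.to_bytes(8, "big") + message,
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

tc = TrustedCounter(b"enclave-key")
c, tag = tc.certify(b"proposal-1")
assert verify(b"enclave-key", c, b"proposal-1", tag)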
Error floor prediction with Markov models for QC-MDPC codes
Quasi-cyclic moderate-density parity-check (QC-MDPC) code-based encryption schemes under iterative decoders offer highly competitive performance in quantum-resistant cryptography, but their IND-CCA2 security is an open question because the decoding failure rate (DFR) of these algorithms is not well understood. The DFR decreases extremely rapidly as the blocklength increases, then decreases much more slowly, in regimes known as the waterfall and the error floor, respectively. The waterfall behavior is rather well predicted by a Markov model introduced by Sendrier and Vasseur [SV19], but that model does not capture the error-floor behavior. Assessing precisely at which blocklength the error floor begins is crucial for the low DFRs sought in the context of cryptography.
By enriching the Markov model [SV19] with information about near codewords, we are able to capture this error-floor behavior for a step-by-step decoder. This decoder displays worse decoding performance than the parallel decoders used in practice but is more amenable to a Markov chain analysis. Even a simplified model already captures the error floor. A refined model, taking into account certain structural features of the secret key, is even able to give accurate key-dependent predictions in both the waterfall and error-floor regimes. We show that error-floor behavior is governed by convergence to a near codeword when decoding fails.
We ran this model for the BIKE cryptosystem with the simpler step-by-step decoder to better ascertain whether the DFR is low enough to achieve IND-CCA2 security. Our model gives a DFR below $2^{-131.2}$ using a block length $r=13477$ instead of the BIKE parameter $r=12323$. This gives strong evidence that the IND-CCA2 requirement can be met at the cost of a modest increase, of less than 10%, in the key size.
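For intuition on how such a model yields concrete DFR figures (a toy computation, not the paper's chain): once decoding is modeled as a Markov chain with absorbing "success" and "failure" states, the DFR is an absorption probability, obtained by solving a small linear system.

import numpy as np

# Toy absorbing Markov chain (illustrative numbers): states 0 and 1 are
# transient decoder states; absorption happens in "success" or "failure".
# The DFR from a start state solves (I - Q) x = R[:, failure_column].
Q = np.array([[0.20, 0.05],    # transient-to-transient transitions
              [0.10, 0.30]])
R = np.array([[0.749, 1e-3],   # transitions to [success, failure]
              [0.599, 1e-3]])  # each full row of the chain sums to 1
x = np.linalg.solve(np.eye(2) - Q, R[:, 1])
print(x[0])  # failure probability (DFR) starting from state 0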
Polylogarithmic Proofs for Multilinears over Binary Towers
The use of small fields has come to typify the design of modern, efficient SNARKs. In recent work, Diamond and Posen (EUROCRYPT '25) break a key trace-length barrier by treating multilinear polynomials even over tiny fields—fields with fewer elements than the polynomial has coefficients. In this work, we make that advance applicable globally by generically reducing the problem of tiny-field commitment to that of large-field commitment. We introduce a sumcheck-based technique—called "ring-switching"—which, on input a multilinear polynomial commitment scheme over a large extension field, yields a further scheme over that field's ground field. The resulting tiny-field scheme, like Diamond and Posen's, lacks "embedding overhead", in the sense that its commitment cost is identical to that of the large-field scheme on each input size (measured in bits). Its evaluation protocol's overhead is linear for the prover and logarithmic for the verifier, which is essentially optimal.
Instantiating our compiler on the BaseFold (CRYPTO '24) large-field multilinear polynomial commitment scheme—or more precisely, on a characteristic-2 adaptation of that scheme, which we develop at length—we obtain an extremely efficient scheme for multilinears over tiny binary fields. Our scheme outperforms Diamond–Posen and Hashcaster ('24) and represents a new state-of-the-art.
Constant-Round Asynchronous MPC with Optimal Resilience and Linear Communication
In this work, we consider secure multiparty computation (MPC) in the asynchronous network setting. MPC allows $n$ parties to compute a public function on their private inputs against an adversary corrupting at most $t$ of them. We consider both communication complexity and round complexity of asynchronous MPC (AMPC) with the optimal resilience $n=3t+1$.
Without fully homomorphic encryption, the best-known result in this setting is achieved by Coretti, Garay, Hirt, and Zikas (ASIACRYPT 2016), which requires $O(|C|n^3\kappa)$ bits of communication assuming one-way functions, where $\kappa$ is the security parameter. On the other hand, the best-known non-constant-round AMPC by Goyal, Liu, and Song (CRYPTO 2024) can achieve $O(|C|n)$ communication even in the information-theoretic setting. In this work, we give the first construction of a constant-round AMPC with $O(|C|n\kappa)$ bits of communication that achieves malicious security with abort assuming random oracles. We provide new techniques for adapting the MPC-in-the-head framework in the asynchronous network to compute a constant-size garbled circuit.
Quasidifferential Saves Infeasible Differential: Improved Weak-Key Key-Recovery Attacks on Round-Reduced GIFT
\gift, including \gift-64 and \gift-128, is a family of lightweight block ciphers with outstanding implementation performance and high security, and it is a popular underlying primitive chosen by many AEADs such as \sundae. Currently, differential cryptanalysis yields the best key-recovery attacks on both ciphers, but these attacks have been stuck at 21 rounds for \gift-64 and 27 rounds for \gift-128. Recently, Beyne and Rijmen proposed the quasidifferential transition matrix for differential cryptanalysis at CRYPTO 2022 and showed that the fixed-key probability of a differential (characteristic) can be expressed as the sum of the correlations of all quasidifferential trails corresponding to this differential (characteristic). As pointed out by Beyne and Rijmen, the quasidifferential methodology is useful for identifying weak-key differential attacks.
In this paper, we apply Beyne and Rijmen's method to \gift. Some differential characteristics with small (average) probabilities can have much larger probabilities when weak-key conditions hold. Improved weak-key differential attacks on \gift-64 and \gift-128 are thus obtained. For \gift-64, the probability of a 13-round differential is improved from $2^{-62.06}$ to $2^{-57.82}$ with 4 bits of weak-key conditions, then an improved differential key-recovery attack on 21-round \gift-64 is obtained with $2^{117.42}/2^{64}$ time/data complexities; the probability of a 13-round multiple differential (containing 33 characteristics) is improved from $2^{-58.96}$ to $2^{-55.67}$ with 4 bits of weak-key conditions, then an improved multiple differential key-recovery attack on 21-round \gift-64 is obtained with $2^{123.27}/2^{64}$ time/data complexities. For \gift-128, the probability of a 20-round differential is improved from $2^{-121.83}$ to $2^{-114.77}$ with 6 bits of weak-key conditions; the probability of a 21-round multiple differential (containing 2 differentials) is improved from $2^{-128.38}$ to $2^{-122.77}$ with 4 bits of weak-key conditions. Improved (multiple) differential weak-key key-recovery attacks are obtained for 27 and 28 rounds of \gift-128 with $2^{115.77}$/$2^{115.77}$ and $2^{123.77}$/$2^{123.77}$ time/data complexities, respectively. As far as we know, this is the first time that a (weak-key) key-recovery attack can reach 28 rounds of \gift-128.
Additionally, as an independent interest, we perform the first differential attack on \sundae. The differential used in this attack is checked with quasidifferential trails, thus the probability is reliable. Our attack is nonce-respecting and has significantly better complexities than the currently best attack.
TOOP: A transfer of ownership protocol over Bitcoin
We present the Transfer of Ownership Protocol (TOOP). TOOP removes a limitation of all existing BitVM-like protocols (and UTxO blockchains at large) that restricts unlocking transfers to addresses known and preregistered during lock and setup. Accordingly, our protocol avoids the financially costly, regulatorily problematic, and congestion-prone front-and-reimburse paradigm.
Furthermore, we note that one of the main applications of TOOP is as an enabler of secure transfer of assets between UTxO blockchains and back. We showcase this by sketching a committee-based validation protocol that requires only 1-out-of-$n$ honesty. This protocol operates in two distinct phases: the lock phase, where the initial setup is performed and individual assets are locked on Bitcoin, and the unlock phase, where the asset and its ownership are transferred to a possibly different legitimate owner.
This cross-chain bridge protocol, where TOOP plays a key role, is being formalized in concurrent work, and has been implemented for the first time in Cardinal, a protocol for wrapping Bitcoin Unspent Transaction Outputs (UTxOs) onto the Cardano blockchain, with Bitcoin Ordinals represented as Cardano Non-Fungible Tokens (NFTs).
Everlasting Anonymous Rate-Limited Tokens
Anonymous rate-limited tokens are a special type of credential that can be used to improve the efficiency of privacy-preserving authentication systems like Privacy Pass. In such a scheme, a user obtains a "token dispenser" by interacting with an issuer, and the dispenser allows the user to create up to a pre-determined number $k$ of unlinkable and publicly verifiable tokens. Unlinkability means both that one should not be able to tell that two tokens originate from the same dispenser and that tokens cannot be linked to the interaction that generated the dispenser. Furthermore, we can limit the rate at which these tokens are created by linking each token to a context (e.g., the service we are authenticating to) and imposing a limit $N \leq k$ such that seeing more than $N$ tokens for the same context will reveal the identity of the user.
Constructions of such tokens were first given by Camenisch, Hohenberger and Lysyanskaya (EUROCRYPT '05) and Camenisch, Hohenberger, Kohlweiss, Lysyanskaya, and Meyerovich (CCS '06).
In this work, we present the first construction of \emph{everlasting} anonymous rate-limited tokens, for which unlinkability holds against computationally unbounded adversaries, whereas other security properties (e.g., unforgeability) remain computational. Our construction relies on pairings. While several parameters in our construction unavoidably grow with $k$, the key challenge we resolve is ensuring that the complexity of dispensing a token is independent of the parameter $k$.
We are motivated here by the goal of providing solutions that are robust to potential future quantum attacks against the anonymity of previously stored tokens. A construction based on post-quantum secure assumptions (e.g., based on lattices) would be rather inefficient---instead, we take a pragmatic approach dispensing with post-quantum security for properties not related to privacy.
Fine-Grained Non-Interactive Key-Exchange without Idealized Assumptions, and Lower Bounds
In this paper, we study multi-party non-interactive key exchange (NIKE) in the fine-grained setting. More precisely, we propose three multi-party NIKE schemes in three computation models, namely, the bounded parallel-time, bounded time, and bounded storage models. Their security is based on a very mild assumption (e.g., NC1 ⊊ ⊕L/poly) or even holds without any complexity assumption. This improves upon the recent work of Afshar, Couteau, Mahmoody, and Sadeghi (EUROCRYPT 2023), which requires idealized assumptions such as random oracles or generic groups.
Additionally, we show that all our constructions satisfy a natural desirable property that we refer to as extendability, and we give generic transformations from extendable multi-party NIKE to multi-party identity-based NIKE in the fine-grained setting.
Furthermore, we generalize the lower bound on users’ storage consumption in the bounded storage model by Dziembowski and Maurer (Eurocrypt 2004) to encompass any multi-party NIKE with extendability. This new lower bound suggests that the users’ storage consumption of our multi-party NIKE in the bounded storage model is optimal.
Dynamic zk-SNARKs
In this work, we introduce the notion of dynamic zk-SNARKs. A dynamic zk-SNARK is a zk-SNARK that has an additional update algorithm. The update algorithm takes as input a valid source statement-witness pair $(x,w)\in R$ along with a verifying proof $\pi$, and a valid target statement-witness pair $(x',w')\in R$. It outputs a verifying proof $\pi'$ for $(x',w')$ in sublinear time (for $(x,w)$ and $(x',w')$ with small Hamming distance), potentially with the help of a data structure. To the best of our knowledge, none of the commonly used zk-SNARKs are dynamic---a single update in $(x,w)$ can be handled only by recomputing the proof, which requires at least linear time. After formally defining dynamic zk-SNARKs, we present three constructions.
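Schematically, the syntax just defined can be captured by the following interface sketch (illustrative naming only; `aux` plays the role of the auxiliary data structure):

from typing import Any, Protocol

# Schematic interface for a dynamic zk-SNARK, mirroring the syntax above.
class DynamicSNARK(Protocol):
    def prove(self, x: Any, w: Any) -> tuple[bytes, Any]:
        """Produce a proof for (x, w) in R, plus the auxiliary structure."""
        ...

    def verify(self, x: Any, proof: bytes) -> bool:
        """Standard zk-SNARK verification."""
        ...

    def update(self, x: Any, w: Any, proof: bytes, aux: Any,
               x_new: Any, w_new: Any) -> tuple[bytes, Any]:
        """Refresh the proof for (x_new, w_new); runs in time sublinear in
        the witness size when (x, w) and (x_new, w_new) are Hamming-close."""
        ...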
The first one, Dynarec, is based on recursive zk-SNARKs, has $O(\log n)$ update time and is folklore, in the sense that it shares similarities (and limitations such as small number of compositions and heuristic security) with existing tree-based Incremental Verifiable Computation (IVC) schemes.
Our second contribution is Dynaverse, a dynamic zk-SNARK based on a new dynamic permutation argument that we propose---Dynamo. Dynaverse has $O(\sqrt{n}\log n)$ update time and proofs of $O(1)$ size. The security of Dynaverse rests upon the $q$-DLOG assumption in the Algebraic Group Model (AGM)---the same assumption used to prove security of the celebrated Groth16 zk-SNARK.
Our final construction, Dynalog, further advances Dynaverse via a data structure of exponentially increasing levels and maintainable vector commitments, finally achieving polylogarithmic update time and proofs of $O(\log n)$ size, again under $q$-DLOG in the AGM. (With a single level of SNARK composition, which does not affect security, Dynalog's proofs can be further compressed down to $O(1)$.)
As a central application of dynamic zk-SNARKs, we build a compiler from any dynamic zk-SNARK to bounded-step and recursion-free IVC, a model introduced by Tyagi et al. in 2022. Interestingly, when this transformation is instantiated with a slight modification of Dynaverse, we derive the first bounded-step and recursion-free IVC with sublinear space, removing the need for maintaining a linear-time server for retrieving proofs. We also detail additional applications of dynamic zk-SNARKs such as dynamic state proofs and keyless authentication.
Our preliminary evaluation shows that Dynaverse outperforms baseline PLONK proof recomputation by up to approximately 500x, as well as the heuristically secure and asymptotically superior Dynarec by up to one order of magnitude.
Improved Key Recovery Attacks of Ascon
Ascon, a family of algorithms that support hashing and Authenticated Encryption with Associated Data (AEAD), is the final winner of the NIST Lightweight Cryptography Project.
As a research hotspot, Ascon has received substantial third-party security evaluation. Among all the results on Ascon-128 (the primary AEAD recommendation), key recovery can only be achieved by reducing the initialization phase to 7 rounds or fewer, regardless of whether the attack violates the security claims made by the designers (i.e., nonce misuse or exceeding the $2^{64}$ data limit).
In this paper, we improve the key recovery attacks on Ascon-128 using the cube attack method, considering two settings: misuse-free and nonce-misuse.
First, we present a faster method to recover the superpolies of a 64-dimensional cube in the output bits of the 7-round initialization, enabling us to recover the secret key with a time complexity of $2^{95.96}$ and a data complexity of $2^{64}$.
Our 7-round key recovery attack, based on the full key space, greatly improves the time complexity, making it the best result to date.
Additionally, we utilize several techniques to extend state recovery to key recovery, answering the open problem of transitioning from full state recovery in the encryption phase to key recovery for Ascon-128 (ToSC 2022, Issue 4). By combining encryption-phase state recovery with initialization-phase key recovery, we achieve 8-round and 9-round initialization-phase key recovery in the nonce-misuse scenario, with time complexities of $2^{101}$ and $2^{123.92}$, respectively.
This represents an improvement of two rounds over previous results in the misused setting.
Our first key recovery attack is also applicable to Ascon-128a, achieving the same result. In cases where the full state prior to the encryption phase can be recovered in other Ascon AEAD modes, our second key recovery attack will also be useful. It is worth noting that this work does not threaten the security of the full 12-round Ascon, but we expect that our results provide new insights into its security.
AsyRand: asynchronous distributed randomness beacon with reconfiguration
Distributed randomness beacon protocols, which continuously generate publicly verifiable randomness values, are crucial for many applications. Recently, there have been many approaches based on publicly verifiable secret sharing (PVSS) to implement beacon protocols, such as HydRand (S\&P'20), SPURT (S\&P'22), OptRand (NDSS'23), and GRandLine (CCS'24). However, two key challenges remain unresolved: asynchrony and reconfiguration. In this paper, we propose the $\mathsf{AsyRand}$ beacon protocol to address these challenges. We adopt a producer-consumer model to decouple the distribution and reconstruction of PVSS secrets. Parties continuously produce and distribute new PVSS commitments, which consist of the encrypted shares and the corresponding proofs. Meanwhile, all parties store received commitments in first-in-first-out queues and collectively consume each commitment to recover the corresponding secret for beacon generation. To achieve asynchronous consensus, we employ reliable broadcast for distribution and apply $t$-validated asynchronous Byzantine agreement for reconstruction. To achieve reconfiguration, honest parties can collectively remove a faulty party if its queue remains empty for an extended duration, and a new party can join the system using reliable broadcast. We also introduce a novel PVSS scheme based on Sigma protocols and the Fiat-Shamir heuristic, which is of independent interest. Consequently, $\mathsf{AsyRand}$ maintains state-of-the-art complexity, with $O(n^2)$ communication complexity, $O(n)$ computation complexity, and $O(n)$ verification complexity, while achieving asynchrony and reconfiguration. Experimental results highlight the performance of $\mathsf{AsyRand}$ compared to existing works.
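The producer-consumer decoupling can be pictured with a small schematic (illustrative only, not the paper's pseudocode): each party buffers the PVSS commitments received from every peer in a FIFO queue, each beacon round consumes one commitment per non-empty queue, and persistently empty queues mark removal candidates.

from collections import deque

# Schematic of the producer-consumer queues (names and timeout illustrative).
queues: dict[str, deque] = {p: deque() for p in ("P1", "P2", "P3", "P4")}
empty_rounds = {p: 0 for p in queues}
TIMEOUT = 3  # rounds a queue may stay empty before proposing removal

def produce(party: str, commitment: bytes) -> None:
    queues[party].append(commitment)        # delivered via reliable broadcast

def consume_round() -> tuple[list[bytes], list[str]]:
    batch, removal_candidates = [], []
    for party, q in queues.items():
        if q:
            batch.append(q.popleft())       # consumed for secret reconstruction
            empty_rounds[party] = 0
        else:
            empty_rounds[party] += 1
            if empty_rounds[party] >= TIMEOUT:
                removal_candidates.append(party)
    return batch, removal_candidates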
Group Key Progression: Strong Security for Shared Persistent Data
End-to-end encryption allows data to be outsourced and stored on an untrusted server, such as in the cloud, without compromising data privacy. In the setting where this data is shared between a group of users, members also all share access to the same static key material used for data encryption. When the group membership changes, access control is enforced only by the server: a security breach or compelled disclosure would allow even a removed member to decrypt the current shared data.
We propose to move away from static keys and instead use a group key progression (GKP) scheme, a novel primitive that enables a dynamic group of users to agree on a persistent sequence of keys while keeping a compact local state. GKP ensures that group members can only derive keys within a certain interval of the sequence, a notion that we call interval access control (IAC), and also provides post-compromise security. Our GKP construction, called Grappa, combines continuous group key agreement (CGKA, by Alwen et al., 2020) with a new abstraction called interval scheme. The latter is a symmetric-key primitive that can derive a sequence of keys from a compact state while preserving IAC. We explore different interval scheme constructions and simulate their storage and communication costs when used in group settings. The most efficient of them is a generalization of dual key regression (Shafagh et al., 2020), which we formalize and prove secure. Overall, our protocols offer a practical and robust solution to protect persistent data shared by a group.
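For intuition, here is a minimal sketch of the two-hash-chain idea behind dual key regression (a simplified illustration, not Grappa's actual interval scheme): a forward chain can only be walked towards later epochs and a backward chain only towards earlier ones, so handing out the pair (fwd_i, bwd_j) grants exactly the keys of the interval [i, j].

import hashlib

# Minimal dual-key-regression sketch with a fixed maximum sequence length.
H = lambda tag, x: hashlib.sha256(tag + x).digest()
MAX = 16

fwd = [b"fwd-seed"]                     # fwd[t+1] = H(fwd[t])
for _ in range(MAX - 1):
    fwd.append(H(b"f", fwd[-1]))
bwd_rev = [b"bwd-seed"]                 # built so that bwd[t-1] = H(bwd[t])
for _ in range(MAX - 1):
    bwd_rev.append(H(b"b", bwd_rev[-1]))
bwd = bwd_rev[::-1]

def key(t: int) -> bytes:               # the epoch key mixes both tokens
    return H(b"k", fwd[t] + bwd[t])

def derive_interval(fwd_i: bytes, i: int, bwd_j: bytes, j: int) -> list[bytes]:
    b_chain = {j: bwd_j}
    for t in range(j - 1, i - 1, -1):   # walk the backward chain down to i
        b_chain[t] = H(b"b", b_chain[t + 1])
    f, keys = fwd_i, []
    for t in range(i, j + 1):           # walk the forward chain up to j
        keys.append(H(b"k", f + b_chain[t]))
        f = H(b"f", f)
    return keys

# A member holding (fwd[3], bwd[7]) derives exactly the keys for epochs 3..7.
assert derive_interval(fwd[3], 3, bwd[7], 7) == [key(t) for t in range(3, 8)]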
A Note on Adaptive Security in Hierarchical Identity-Based Encryption
We present the first construction of adaptively secure HIBE that does not rely on bilinear pairings or random-oracle heuristics. Notably, we design an adaptively secure HIBE from any selectively secure IBE system in the standard model. Combined with known results, this gives the first adaptively secure HIBE systems from a wide variety of standard assumptions, such as CDH/Factoring/LWE/LPN. We also extend our adaptively secure HIBE system to satisfy full anonymity, giving the first adaptively secure anonymous HIBE under the CDH/LWE assumptions. All our HIBE systems support unbounded-length identities as well as an unbounded number of recursive delegation operations.
Parallel Repetition for Post-Quantum Arguments
In this work, we show that parallel repetition of public-coin interactive arguments reduces the soundness error at an exponential rate even in the post-quantum setting. Moreover, we generalize this result to hold for threshold verifiers, where the parallel repeated verifier accepts if and only if at least $t$ of the executions are accepted (for some threshold $t$). Prior to this work, these results were known only when the cheating prover was assumed to be classical.
We also prove a similar result for three-message private-coin arguments. Previously, Bostanci, Qian, Spooner, and Yuen (STOC 2024) proved such a parallel repetition result in the more general setting of quantum protocols, where the verifier and communication may be quantum. We consider only protocols where the verifier is classical, but we obtain a simplified analysis that also covers the more general setting of threshold verifiers.
Malicious Security in Collaborative zk-SNARKs: More than Meets the Eye
Collaborative zk-SNARKs (Ozdemir and Boneh, USENIX’22) are a multiparty variant of zk-SNARKs where multiple, mutually distrustful provers, each holding a private input, jointly compute a zk-SNARK using their combined inputs. A sequence of works has proposed efficient constructions of collaborative zk-SNARKs using a common template that involves designing secure multiparty computation (MPC) protocols to emulate a zk-SNARK prover without making non-black-box use of cryptography. To achieve security against malicious adversaries, these works adopt compilers from the MPC literature that transform semi-honest MPC into malicious-secure MPC.
In this work, we revisit this design template.
• Pitfalls: We demonstrate two pitfalls in the template, which can lead to a loss of input privacy. We first show that it is possible to compute collaborative proofs on invalid witnesses, which in turn can leak the inputs of honest provers. Next, we show that using state-of-the-art malicious security compilers as-is for proof computation is insecure, in general. Finally, we discuss mitigation strategies.
• Malicious Security Essentially for Free: As our main technical result, we show that in the honest-majority setting, one can forgo the malicious security checks performed by state-of-the-art malicious security compilers during collaborative proof generation for several widely used zk-SNARKs. In other words, we can avoid the overheads of malicious security compilers, enabling faster proof generation.
To the best of our knowledge, this is the first example of non-trivial computations where semi-honest MPC protocols achieve malicious security. The observations underlying our positive results are general and may have applications beyond collaborative zkSNARKs.
Secure Noise Sampling for Differentially Private Collaborative Learning
Differentially private stochastic gradient descent (DP-SGD) trains machine learning (ML) models with formal privacy guarantees for the training set by adding random noise to gradient updates. In collaborative learning (CL), where multiple parties jointly train a model, noise addition occurs either (i) before or (ii) during secure gradient aggregation. The first option is deployed in distributed DP methods, which require greater amounts of total noise to achieve security, resulting in degraded model utility. The second approach preserves model utility but requires a secure multiparty computation (MPC) protocol. Existing methods for MPC noise generation require tens to hundreds of seconds of runtime per noise sample because of the number of parties involved. This makes them impractical for collaborative learning, which often requires thousands or more samples of noise in each training step.
We present a novel protocol for MPC noise sampling tailored to the collaborative learning setting. It works by constructing an approximation of the distribution of interest that can be efficiently sampled via a series of table lookups. Our method achieves significant runtime improvements and requires much less communication than previous work, especially as the number of parties grows. It is also highly flexible: while previous MPC sampling methods tend to be optimized for specific distributions, we prove that our method can generically sample noise from statistically close approximations of arbitrary discrete distributions. This makes it compatible with a wide variety of DP mechanisms. Our experiments demonstrate the efficiency and utility of our method applied to a discrete Gaussian mechanism for differentially private collaborative learning. For 16 parties, we achieve a runtime of 0.06 seconds and 11.59 MB total communication per sample, a 230× runtime improvement and 3× less communication compared to the prior state of the art for sampling from the discrete Gaussian distribution in MPC.
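The table-lookup idea is easy to see in the clear (sketch below; the paper's contribution is performing these steps obliviously inside MPC, which is not shown here): tabulate a fixed-point CDF of the target distribution, then convert uniform random bits into a sample by an inverse-CDF lookup. All parameters are illustrative.

import math, secrets

# Cleartext analogue of CDF-table sampling for a truncated discrete Gaussian.
SIGMA, TAIL, PREC = 3.2, 6, 32  # std dev, truncation (in sigmas), CDF bits

support = range(-int(TAIL * SIGMA), int(TAIL * SIGMA) + 1)
weights = [math.exp(-x * x / (2 * SIGMA * SIGMA)) for x in support]
total = sum(weights)
cdf, acc = [], 0.0
for w in weights:
    acc += w
    cdf.append(int(acc / total * (1 << PREC)))  # fixed-point CDF table

def sample() -> int:
    u = secrets.randbits(PREC)
    for x, c in zip(support, cdf):   # in MPC this becomes an oblivious lookup
        if u < c:
            return x
    return support[-1]

print([sample() for _ in range(10)])

Note that truncation and fixed-point rounding are what make the sampled distribution only statistically close to the ideal one, which is exactly the kind of approximation the security analysis must account for.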
Oracle Separation Between Quantum Commitments and Quantum One-wayness
We show that there exists a unitary quantum oracle relative to which quantum commitments exist but no (efficiently verifiable) one-way state generators exist. Both have been widely considered candidates for replacing one-way functions as the minimal assumption for cryptography—the weakest cryptographic assumption implied by all of computational cryptography. Recent work has shown that commitments can be constructed from one-way state generators, but the other direction has remained open. Our results rule out any black-box construction, and thus settle this crucial open problem, suggesting that quantum commitments (as well as their equivalence class of EFI pairs, quantum oblivious transfer, and secure quantum multiparty computation) appear to be strictly the weakest among all known cryptographic primitives.
Towards Trustless Provenance: A Privacy-Preserving Framework for On-chain Media Verification
As generative models continue to evolve, verifying the authenticity, provenance, and integrity of digital media has become increasingly critical—particularly for domains like journalism, digital art, and scientific documentation.
In this work, we present a decentralized verifiable media ecosystem for managing, verifying, and transacting authentic digital media using zero-knowledge proofs (ZKPs).
Building on VIMz (Dziembowski et al., PETS'25), we extend the framework in three key directions. First, we generalize the model to support arbitrary image regions, enabling selective transformations such as redaction and regional blurring—features commonly required in privacy-preserving applications. Second, we introduce performance optimizations that yield up to an 18% improvement in off-chain proof generation, and we enhance the framework to support cost-efficient on-chain verification. Third, we design and implement a modular smart contract architecture to support a wide range of decentralized media applications.
As a flagship use case, we develop a decentralized media marketplace that enables permissionless licensing, ownership transfer, and verifiable attribution. In this setting, users can share transformed media—such as cropped, blurred, or resized previews—alongside ZKPs that prove derivation from a signed original, eliminating the need to trust the seller.
Unlike prior fair exchange protocols, which rely on trusted descriptions or encrypted payload delivery, our system enables verifiable public previews and origin-bound proofs without revealing the full content. This approach unlocks new applications beyond marketplaces, including automated infringement dispute resolution and photography contests with verifiable criteria.
Universal Channel Rebalancing: Flexible Coin Shifting in Payment Channel Networks
Payment Channel Networks (PCNs) enhance blockchain scalability by enabling off-chain transactions. However, repeated unidirectional multi-hop payments often cause channel imbalance or depletion, limiting scalability and usability. Existing rebalancing protocols, such as Horcrux [NDSS’25] and Shaduf [NDSS’22], rely on on-chain operations, which hinders efficiency and broad applicability.
We propose Universal Channel Rebalancing (UCRb), a blockchain-agnostic, fully off-chain framework that ensures correct behavior among untrusted participants without on-chain interaction.
UCRb incorporates the following core innovations:
(1) a fair and reliable incentive-compatible mechanism that encourages voluntary user participation in off-chain channel rebalancing,
(2) integration of Pedersen commitments to achieve atomic off-chain payments and rebalancing operations while ensuring balance security (see the sketch after this list), and
(3) zero-knowledge proofs to enable privacy-preserving channel initialization and coin shifting, ensuring that user identities and fund allocations remain hidden throughout the process.
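For readers unfamiliar with the primitive, the following toy shows why Pedersen commitments suit off-chain balance accounting (an illustrative sketch with stand-in parameters, not UCRb's actual instantiation): commitments are hiding, binding, and additively homomorphic, so balance updates can be checked without revealing amounts.

import secrets

# Toy Pedersen commitment over the multiplicative group mod a prime.
# Commit(v, r) = g^v * h^r mod p. Parameters below are stand-ins; real
# deployments use proper groups with generators whose mutual dlog is unknown.
P = 2**127 - 1
G, H_ = 3, 7  # assumed independent generators (illustration only)

def commit(value: int, blinding: int) -> int:
    return pow(G, value, P) * pow(H_, blinding, P) % P

r1, r2 = secrets.randbelow(P - 1), secrets.randbelow(P - 1)
c1, c2 = commit(40, r1), commit(2, r2)
# Additive homomorphism: the product of commitments commits to the sum.
assert c1 * c2 % P == commit(42, (r1 + r2) % (P - 1))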
We evaluate UCRb on a real-world Lightning Network dataset and compare its performance against state-of-the-art solutions, including Horcrux, Shaduf, and Revive [CCS'17].
UCRb improves the payment success ratio by 15%--50% while reducing required user deposits by 72%--92%, and it maintains an almost negligible rate of channel depletion. Additionally, the long-term performance of UCRb is roughly 1.5 times its short-term performance, suggesting that continuous operation improves efficiency. We implement a prototype of the UCRb smart contracts and demonstrate its practicality through extensive evaluation. As \texttt{CoinShift} operations require no on-chain interaction, the protocol incurs minimal gas costs. For instance, opening and closing channels with 10 neighbors costs only 130K--160K gas, significantly lower than comparable solutions.
Burn Your Vote: Decentralized and Publicly Verifiable Anonymous Voting at Scale
Secure and trustworthy electronic voting requires more than correctness and censorship resistance: it must also ensure voter privacy, vote confidentiality, and protection against coercion. Prior work attempts to address these challenges using heavyweight cryptographic primitives such as homomorphic encryption, time-lock puzzles, or multi-party computation. These approaches often involve complex computations, depend on trusted parties, and typically do not scale well. We propose a lightweight, fully on-chain anonymous voting protocol based on a novel application of the proof-of-burn (PoB) mechanism. Voters anonymously commit to their votes by burning tokens to pseudorandom addresses and later submit zero-knowledge proofs attesting to their valid participation. Our design achieves vote integrity, coercion resistance, and unlinkability without relying on encrypted ballots, trusted third parties, or centralized tallying. The tallying process is public and operates on plaintext votes that are authenticated yet unlinkable to voters. This enables flexible voting models—including token-weighted and quadratic voting—with minimal on-chain overhead. We formally analyze the protocol's security guarantees and demonstrate support for a broad range of voting models. We implement the protocol as an open-source library fully compatible with the Ethereum Virtual Machine (EVM), and our experimental evaluation confirms its high scalability and improved efficiency compared to the state-of-the-art.
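The burn-address mechanics can be sketched as follows (a schematic of the general PoB idea, not this paper's exact circuit): deriving the address as a hash of the vote and a secret nonce makes it pseudorandom and keyless, so tokens sent there are verifiably destroyed, while the voter can later prove correct derivation in zero knowledge.

import hashlib, secrets

# Schematic proof-of-burn commitment: the burn address is a hash-derived,
# keyless address, so funds sent to it are unspendable; (vote, nonce) is kept
# by the voter as the witness for a later zero-knowledge proof.
def burn_address(vote: bytes, nonce: bytes) -> str:
    commitment = hashlib.sha256(b"burn|" + vote + b"|" + nonce).digest()
    return "0x" + commitment[:20].hex()   # EVM-style 20-byte address

nonce = secrets.token_bytes(32)
addr = burn_address(b"proposal-A", nonce)
print(addr)  # burn tokens to this address; keep (vote, nonce) secret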
Black-Box Crypto is Useless for Pseudorandom Codes
A pseudorandom code is a keyed error-correction scheme with the property that any polynomial number of encodings appear random to any computationally bounded adversary. We show that the pseudorandomness of any code tolerating a constant rate of random errors cannot be based on black-box reductions to almost any generic cryptographic primitive: for instance, anything that can be built from random oracles, generic multilinear groups, and virtual black-box obfuscation. Our result is optimal, as Ghentiyala and Guruswami (2024) observed that pseudorandom codes tolerating any sub-constant rate of random errors exist using a black-box reduction from one-way functions.
The key technical ingredient in our proof is the hypercontractivity theorem for Boolean functions, which we use to prove our impossibility in the random oracle model. It turns out that this easily extends to an impossibility in the presence of "crypto oracles," a notion recently introduced---and shown to be capable of implementing all the primitives mentioned above---by Lin, Mook, and Wichs (EUROCRYPT 2025).
Silent Splitter: Privacy for Payment Splitting via New Protocols for Distributed Point Functions
In a world where financial transactions are primarily performed or recorded online, protecting sensitive transaction details has become crucial. Roommates sharing housing costs or friends splitting travel expenses may use applications such as Splitwise to easily track debts and minimize the number of individual repayments. However, these apps reveal potentially sensitive financial transaction activity to their operators. In this paper, we present Silent Splitter, a privacy-preserving payment splitting system which enables users to securely set up groups, perform transactions within those groups, and "settle up" without revealing group membership or any sensitive transaction details (such as the users involved or the amount of money exchanged) to the system itself. Silent Splitter operates in the two-server setting and uses Distributed Point Functions (DPFs) to securely record transactions. Of independent interest, we also present new protocols for proving knowledge of properties of DPFs as part of our system.
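To see why DPFs fit this setting, consider the following toy two-server DPF (correct but deliberately naive: its keys are as large as the domain, whereas real DPFs, e.g., GGM-tree based ones, have keys logarithmic in the domain size). Each key alone is uniformly random, so neither server learns which entry was touched or by how much, yet the XOR of the two evaluations reconstructs a point function.

import secrets

# Toy two-server DPF for f(alpha) = beta and f(x) = 0 elsewhere
# (1-byte outputs, XOR sharing).
def gen(domain: int, alpha: int, beta: int) -> tuple[bytes, bytes]:
    k0 = bytearray(secrets.token_bytes(domain))
    k1 = bytearray(k0)
    k1[alpha] ^= beta            # the keys differ only at the secret point
    return bytes(k0), bytes(k1)

def eval_share(key: bytes, x: int) -> int:
    return key[x]

k0, k1 = gen(domain=64, alpha=13, beta=0x2A)
# XORing the two servers' evaluations reconstructs the point function.
assert all((eval_share(k0, x) ^ eval_share(k1, x)) == (0x2A if x == 13 else 0)
           for x in range(64))

In a payment-splitting system, the hidden point can encode which balance entry to adjust, letting the two servers jointly update a ledger without learning who paid whom.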
Adding Feeding Forward Back to the Sponge Construction
Avoiding feeding forward seems to be a major goal of the sponge construction. We take a step back and investigate adding feeding forward back to the sponge. The resulting sponge-with-feeding-forward construction has a number of benefits: (1) In the random permutation model, its preimage and second preimage security bounds are much better than those of the standard sponge with the same capacity, while its collision and indifferentiability security bounds are comparable; (2) Its collision and (second) preimage security can be reduced to well-defined properties of the underlying permutation, i.e., correlation intractability w.r.t. a certain family of evasive relations.
We further incorporate several somewhat new ideas to form the detailed hash and XOF constructions SpongeFwd: (1) Feeding forward is applied only to the capacity part, and the final output(s) is the capacity part (with the rate part discarded). In this way, when $c$ equals the primary hash output size $h$, a single permutation call suffices for squeezing. This also simplifies the underlying evasive relations for the reduction-based security proof. (2) We replace the hash IV with the first message block for the best possible efficiency. (3) We employ a parallel structure to define an XOF variant. (4) We use HAIFA-style counter inputs to achieve both length-independent second-preimage security and domain separation for the XOF.
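The data flow of ideas (1) and (2) can be pictured with a toy sketch (schematic only: a stand-in permutation, simplified padding, and none of the counter inputs or XOF machinery; not the actual SpongeFwd specification):

import hashlib

RATE, CAP = 32, 32  # byte sizes, illustrative only

def perm(state: bytes) -> bytes:
    # Stand-in for the underlying public permutation.
    return hashlib.shake_256(state).digest(RATE + CAP)

def sponge_fwd_toy(msg: bytes) -> bytes:
    msg += b"\x80" + b"\x00" * (-(len(msg) + 1) % RATE)  # simple padding
    blocks = [msg[i:i + RATE] for i in range(0, len(msg), RATE)]
    rate_part, cap_part = blocks[0], bytes(CAP)  # first block replaces the IV
    for block in blocks[1:] + [bytes(RATE)]:     # final call squeezes
        out = perm(rate_part + cap_part)
        # Feed-forward on the capacity only: XOR the old capacity back in.
        new_cap = bytes(a ^ b for a, b in zip(out[RATE:], cap_part))
        rate_part = bytes(a ^ b for a, b in zip(out[:RATE], block))
        cap_part = new_cap
    return cap_part  # output the capacity part; the rate part is discarded

print(sponge_fwd_toy(b"hello").hex())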
The better (second) preimage security enables constructing a 512-bit-output hash function from Keccak-p[800]: with 512-bit capacity, its collision and (second) preimage security bounds are the same as those of the standard SHA3-512, while its hardware area is reduced by roughly 38% (according to our preliminary estimate).
Towards Scalable YOSO MPC via Packed Secret-Sharing
The YOSO (You Only Speak Once) model, introduced by Gentry et al. (CRYPTO 2021), helps achieve strong security guarantees in cryptographic protocols for distributed settings, such as blockchains, with a large number of parties. YOSO protocols typically employ smaller anonymous committees to execute individual rounds of the protocol instead of having all parties execute the entire protocol. After completing their tasks, parties encrypt protocol messages for the next anonymous committee and erase their internal state before publishing ciphertexts, thereby enhancing security in dynamically changing environments.
In this work, we consider the problem of secure multi-party computation (MPC), a fundamental problem in cryptography and distributed computing. We assume honest majority among the committee members, and work in the online-offline, i.e., preprocessing, setting.
In this context, we present the first YOSO MPC protocol where efficiency---measured as communication complexity---improves as the number of parties increases. Specifically, for $0<\epsilon<1/2$ and an adversary corrupting $t<n(\frac{1}{2}-\epsilon)$ out of $n$ parties, our MPC protocol exhibits enhanced scalability as $n$ increases: the online-phase communication becomes independent of $n$.
Prior YOSO MPC protocols considered $t$ as large as $(n-1)/2$, but a significant hurdle persisted in obtaining YOSO MPC with communication that does not scale linearly with the number of committee members, a challenge that is exacerbated by the large committee sizes YOSO requires.
We show that, by considering a small ``gap'' of $\epsilon>0$, the sizes of the committees are only marginally increased, while online communication is significantly reduced.
Furthermore, we explicitly consider fail-stop adversaries, i.e., honest participants who may inadvertently fail due to reasons such as denial of service or software/hardware errors. In prior YOSO work, these adversaries were grouped with fully malicious parties. Adding explicit support for them allows us to achieve even better scalability.
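The packed secret-sharing technique in the title is the engine behind these savings; a minimal sketch follows (illustrative field and parameters, not the protocol itself). One degree-$(t+k-1)$ polynomial carries $k$ secrets at fixed evaluation points, so a single sharing of $n$ field elements amortizes the communication of $k$ parallel sharings.

import secrets

# Minimal packed Shamir sharing over a prime field (illustration only).
P = 2**61 - 1  # stand-in prime field modulus

def interpolate(points: list[tuple[int, int]], x: int) -> int:
    # Lagrange interpolation at x of the unique polynomial through `points`.
    acc = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * ((x - xj) % P) % P
                den = den * ((xi - xj) % P) % P
        acc = (acc + yi * num * pow(den, P - 2, P)) % P
    return acc

def share(secret_vec: list[int], n: int, t: int) -> list[int]:
    k = len(secret_vec)
    # Pin the polynomial at k secret points and t random points: degree
    # t+k-1, so any t shares reveal nothing about the k packed secrets.
    anchors = [(-(j + 1) % P, s) for j, s in enumerate(secret_vec)]
    anchors += [(n + 1 + j, secrets.randbelow(P)) for j in range(t)]
    return [interpolate(anchors, i + 1) for i in range(n)]  # shares at x=1..n

def reconstruct(shares: list[tuple[int, int]], k: int) -> list[int]:
    return [interpolate(shares, -(j + 1) % P) for j in range(k)]

shares = share([11, 22, 33], n=8, t=3)      # degree 5: any 6 shares suffice
subset = list(zip(range(1, 9), shares))[:6]
assert reconstruct(subset, k=3) == [11, 22, 33]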
The Large Block Cipher Family Vistrutah
Vistrutah is a large block cipher with block sizes of 256 and 512 bits. It iterates a step function that applies two AES rounds to each 128-bit block of the state, followed by a state-wide cell permutation. Like Simpira, Haraka, Pholkos, and ASURA, Vistrutah leverages AES instructions to achieve high performance.
For each component of Vistrutah, we conduct a systematic evaluation of functions that can be efficiently implemented on both Intel and Arm architectures. We therefore expect them to perform efficiently on any recent vector instruction set architecture (ISA) with AES support. Our evaluation methodology combines latency estimation on an abstracted vector ISA with security analysis. The goal is to maximize the ratio of "bits of security per unit of time", i.e., to achieve the highest security for a given performance target or, equivalently, the best performance for a given security level within this class of designs. Implementations confirm the accuracy of our latency model. Vistrutah even performs significantly better than Rijndael-256-256.
We support our security claims with comprehensive ad hoc cryptanalysis. An isomorphism between Vistrutah-512 (the 512-bit-wide variant) and the AES allows us to also leverage the extensive cryptanalysis of the AES and apply it to Vistrutah-512.
A core design principle is the use of an inline key schedule: all round keys are computed during each encryption or decryption operation without requiring memory storage; in fact, rekeying has no associated overhead. Key schedules like the AES's must precompute and store round keys in memory for acceptable performance; however, as Kamal and Youssef showed in 2010, this makes cold boot attacks more effective. Vistrutah's approach minimizes leakage to at most one value during context switches. Furthermore, expensive key schedules reduce key agility, limiting the design of modes of operation.
Vistrutah is particularly well suited for birthday-bound modes of operation, including synthetic-IV modes and accordion modes for 256-bit block ciphers. It can serve as a building block for compression functions (such as Matyas-Meyer-Oseas) in wide Merkle-Damgård hash functions. Additionally, it can implement "ZIP" wide pseudo-random functions, as recently proposed by Florez-Gutierrez et al. in 2024. We include a related-key security analysis for two critical reasons. First, strong related-key security demonstrates the robustness of both the key schedule and the cipher as a whole. Second, Vistrutah's key agility enables mode designers to place values from counters (or other update functions) in the key input rather than the plaintext input, which simplifies achieving beyond-birthday-bound security.
Finally, we present short, i.e., reduced-round, versions of Vistrutah that are analyzed taking into account the restrictions that specific modes of operation place on attackers. In particular, we model the use of the block ciphers in hash-encrypt-hash (HEH) constructions such as HCTR2, as well as in forkciphers. These short versions of Vistrutah can be used to accelerate modes of operation without sacrificing security.
MT-TMVP: Modular Tiled TMVP-based Polynomial Multiplication for Post-Quantum Cryptography on FPGAs
As quantum technology advances, developing cryptographic solutions resistant to quantum attacks is crucial. Post-Quantum Cryptography (PQC) provides a practical approach by running on classical computers. They rely on hard mathematical problems, with lattice-based being one of the National Institute of Standards and Technology (NIST)-recognized schemes known for its small key sizes. Hardware implementation of these schemes faces challenges due to the computational intensity of operations like polynomial multiplication, especially for resource-constrained devices. This paper proposes a novel Modular Tiled Toeplitz Matrix-Vector Polynomial Multiplication (MT-TMVP) for lattice-based PQC algorithms and presents a resource-optimized Field Programmable Gate Array (FPGA) architecture. The proposed implementation significantly reduces resource utilization and Area-Delay Product (ADP) compared to state-of-the-art polynomial multipliers. It utilizes 99.68% and 84.22% fewer Look-Up Tables (LUTs) on Artix-7 and Zynq Ultrascale+ FPGAs, respectively, and achieves 99.94% and 80.02% ADP improvements on these FPGAs compared to the best results in the literature. By leveraging Block RAM (BRAM), the proposed architecture offers robustness against timing-based Side-Channel Attacks (SCAs), and the design is modular and scalable to any polynomial degree.