The advent of quantum key distribution (QKD) has revolutionized secure communication by providing unconditional security, unlike classical cryptographic methods. However, its effectiveness relies on robust identity authentication, as vulnerabilities in the authentication process can compromise the security of the entire communication system. Over the past three decades, numerous quantum identity authentication (QIA) protocols have been proposed. This thesis first presents a chronological review of these protocols, categorizing them based on the quantum resources and computational tasks involved while analyzing their strengths and limitations. Subsequently, by recognizing inherent symmetries present in the existing protocols, we design novel QIA schemes based on secure computational and communication tasks. Specifically, this work introduces a set of new QIA protocols that utilize controlled secure direct quantum communication. The proposed scheme facilitates mutual authentication between two users, Alice and Bob, with assistance from a third party, Charlie, using Bell states. A comprehensive security analysis demonstrates its robustness against impersonation and intercept-resend attacks.
With the advent of advanced technology, the IoT introduces a vast number of devices connecting with each other and collecting a sheer volume of data. The demand for IoT security is therefore paramount. Cryptography is used to secure networks, providing authentication, confidentiality, data integrity, and access control. However, due to the resource-constrained nature of IoT devices, traditional cryptographic protocols may not be suitable for all IoT environments. Researchers, as a result, have proposed various lightweight cryptographic algorithms and protocols. In this paper, we discuss state-of-the-art lightweight cryptographic protocols for IoT networks and present a comparative analysis of the existing protocols. In doing so, this paper classifies the most current algorithms into two categories: symmetric and asymmetric lightweight cryptography. Additionally, we consider several recently developed block cipher and stream cipher algorithms. Furthermore, various research challenges of lightweight cryptography are addressed.
One relevant aspect in the development of the Semantic Web framework is the achievement of a real inter-agent communication capability at the semantic level. Agents should be able to communicate and understand each other using standard communication protocols freely, that is, without needing laborious a priori preparation before the communication takes place. For that setting, we present in this paper a proposal for describing standard communication protocols using Semantic Web technology (specifically, OWL-DL and SWRL). Those protocols are composed of communication acts. In our proposal, those communication acts are described as terms belonging to a communication-acts ontology that we have developed, called CommOnt. The intended semantics associated with the communication acts in the ontology is expressed through social commitments that are formalized as fluents in the Event Calculus. In summary, OWL-DL reasoners and rule engines support reasoning about protocols in our proposal. We define comparison relationships (dealing with notions of equivalence and specialization) between protocols used by agents from different systems.
Quantum cryptography is now considered a promising technology due to its promise of unconditional security. In recent years, rigorous work has been done on the experimental realization of quantum key distribution (QKD) protocols to realize secure networks. Among various QKD protocols, the coherent one-way and differential phase shift QKD protocols have undergone rapid experimental development due to the ease of implementation with presently available technology. In this work, we have experimentally realized optical-fiber-based coherent one-way and differential phase shift QKD protocols at telecom wavelength. Both protocols belong to a class of protocols known as distributed-phase-reference protocols, in which weak coherent pulses are used to encode the information. Further, we have analyzed the key rates with respect to different parameters such as distance, disclose rate, compression ratio, and detector dead time.
We introduce state-of-the-art protocols to distill indistinguishable photons, reducing distinguishability error rates by a factor of $n$ while using a modest amount of resources scaling only linearly in $n$. Our protocols require significantly fewer resources and impose fewer hardware requirements than previous works, making large-scale distillation experimentally feasible for the first time. This efficient reduction of distinguishability error rates has direct applications to fault-tolerant linear optical quantum computation, potentially leading to improved thresholds for photon loss errors and allowing smaller code distances, thus reducing overall resource costs. Our protocols are based on Fourier transforms on finite abelian groups, special cases of which include the discrete Fourier transform and Hadamard matrices. This general perspective allows us to unify previous results on distillation protocols and introduce a large family of efficient schemes. We utilize the rich mathematical structure of Fourier transforms, including symmetries and related suppression laws, to quantify the performance of these distillation protocols both analytically and numerically. Finally, our work
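The group-theoretic viewpoint in the abstract above can be made concrete: the discrete Fourier transform is the Fourier transform of the cyclic group Z/nZ, while Hadamard matrices arise as Fourier transforms of (Z/2Z)^m. A minimal numerical sketch (the group choices and matrix sizes here are illustrative, not taken from the paper):

```python
import numpy as np

def fourier_zn(n):
    """Unitary discrete Fourier transform on the cyclic group Z/nZ."""
    omega = np.exp(2j * np.pi / n)
    j, k = np.meshgrid(np.arange(n), np.arange(n))
    return omega ** (j * k) / np.sqrt(n)

def fourier_z2_power(m):
    """Fourier transform on (Z/2Z)^m: the m-fold Kronecker power of the
    2x2 DFT, i.e. a normalized Hadamard matrix of size 2^m."""
    F2 = fourier_zn(2)                 # equals [[1,1],[1,-1]] / sqrt(2)
    H = np.array([[1.0]])
    for _ in range(m):
        H = np.kron(H, F2)
    return H

F4 = fourier_zn(4)
H4 = fourier_z2_power(2)
# Both are unitary, but they are Fourier transforms of *different*
# abelian groups of order 4 (Z/4Z vs Z/2Z x Z/2Z), so they differ.
print(np.allclose(F4.conj().T @ F4, np.eye(4)))  # True
print(np.allclose(H4.conj().T @ H4, np.eye(4)))  # True
print(np.allclose(F4, H4))                        # False
```

Every finite abelian group is a product of cyclic factors, so the same Kronecker-product construction yields the whole family of transforms the abstract refers to.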
One relevant aspect in the development of the Semantic Web framework is the achievement of a real inter-agent communication capability at the semantic level. Agents should be able to communicate with each other freely using different communication protocols, which are composed of communication acts. For that scenario, we introduce in this paper an efficient mechanism with the following main features: - It promotes describing the communication acts of protocols as classes belonging to a communication-acts ontology, and associates with those acts a social-commitment semantics formalized through predicates in the Event Calculus. - It rests on the idea that different protocols can be compared semantically by examining the sets of fluents associated with each branch of the protocols; those sets are generated using Semantic Web technology rules. - It discovers the following types of protocol relationships: equivalence, specialization, restriction, prefix, suffix, infix, and complement_to_infix.
Observable currents are spacetime-local objects that induce physical observables when integrated on an auxiliary codimension-one surface. Since the resulting observables are independent of local deformations of the integration surface, the currents themselves carry most of the information about the induced physical observables. I introduce observable currents in a multisymplectic framework for Lagrangian field theory over discrete spacetime. One family of examples consists of Noether currents. A much larger family consists of currents, spacetime-local objects, that encode the symplectic product between two arbitrary vectors tangent to the space of solutions. A weak version of observable currents, which in general are nonlocal, is also introduced. Weak observable currents can be used to separate points in the space of physically distinct solutions. It is shown that a large class of weak observable currents can be "improved" to become local. A Poisson bracket gives the space of observable currents the structure of a Lie algebra. The Peierls bracket for bulk observables gives an algebra homomorphism mapping equivalence classes of bulk observables to weak observable currents.
As conversational AI-based dialogue management has increasingly become a trending topic, the need for a standardized and reliable evaluation procedure grows even more pressing. The current state of affairs features various evaluation protocols for assessing chat-oriented dialogue management systems, making it difficult to conduct fair comparative studies across different approaches and to gain an insightful understanding of their value. To foster this research, a more robust evaluation protocol must be put in place. This paper presents a comprehensive synthesis of both automated and human evaluation methods for dialogue systems, identifying their shortcomings while accumulating evidence toward the most effective evaluation dimensions. A total of 20 papers from the last two years are surveyed to analyze three types of evaluation protocols: automated, static, and interactive. Finally, the evaluation dimensions used in these papers are compared against our expert evaluation on the system-user dialogue data collected from the Alexa Prize 2020.
The cost of distributed quantum operations such as the telegate and teledata protocols is high due to latencies from distributing entangled photons and classical information. This paper proposes an extension to the telegate and teledata protocols that allows for asynchronous classical communication, which hides the cost of distributed quantum operations. We then discuss the benefits and limitations of these asynchronous protocols and propose a potential way to improve them using nonunitary operators. Finally, a quantum network card is described as an example of how asynchronous quantum operations might be used.
Transparency protocols are protocols whose actions can be publicly monitored by observers (such observers may include regulators, rights advocacy groups, or the general public). The observed actions are typically usages of private keys, such as decryptions and signings. Examples of transparency protocols include certificate transparency, cryptocurrency, transparent decryption, and electronic voting. These protocols usually pose a challenge for automatic verification because they involve sophisticated data types with strong properties, such as Merkle trees, which allow compact proofs of data presence and tree extension. We address this challenge by introducing new features in ProVerif, along with a methodology for using them. With our methodology, it is possible to describe the data type quite abstractly, using ProVerif axioms, and prove the correctness of the protocol using those axioms as assumptions. Then, in separate steps, one can define one or more concrete implementations of the data type, and again use ProVerif to show that the implementations satisfy the assumptions that were coded as axioms. This helps make compositional proofs, splitting the proof burden into several manageable parts.
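Merkle trees enable the compact presence proofs mentioned above: membership of a leaf is certified by a logarithmic number of sibling hashes. A minimal sketch, assuming a power-of-two number of leaves (the helper names and hashing layout are illustrative and unrelated to the paper's ProVerif encoding):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root of a Merkle tree over a power-of-two number of leaves."""
    level = [h(x) for x in leaves]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def presence_proof(leaves, index):
    """Sibling hashes from leaf `index` up to the root (log-size proof)."""
    level, proof = [h(x) for x in leaves], []
    while len(level) > 1:
        sibling = index ^ 1                        # paired node at this level
        proof.append((level[sibling], index % 2))  # (hash, 0 if we are left)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    acc = h(leaf)
    for sibling, pos in proof:
        acc = h(acc + sibling) if pos == 0 else h(sibling + acc)
    return acc == root

leaves = [b"a", b"b", b"c", b"d"]
root = merkle_root(leaves)
print(verify(b"c", presence_proof(leaves, 2), root))  # True
print(verify(b"x", presence_proof(leaves, 2), root))  # False
```

The proof is two hashes for four leaves, and grows only logarithmically; this is exactly the "compact proof of data presence" property that the abstract's axioms capture abstractly.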
We present two abstractions for designing modular state machine replication (SMR) protocols: trees and turtles. A tree captures the set of possible state machine histories, while a turtle represents a subprotocol that tries to find agreement in this tree. We showcase the applicability of these abstractions by constructing crash-tolerant SMR protocols out of abstract tree turtles and providing examples of tree turtle implementations. Tree turtles can also be extended to be made Byzantine fault-tolerant (BFT). The modularity of tree turtles allows a generic approach for adding a leader for liveness. We expect that these abstractions will simplify reasoning and formal verification of SMR protocols as well as facilitate innovation in protocol designs.
Our digital technology depends on mathematics to compute current flow and design its devices. Mathematics describes current flow by an idealization, Kirchhoff's current law: all the electrons that flow into a node flow out. This idealization describes real circuits only when stray capacitances are included in the circuit design. Motivated by Maxwell's equations, we propose that the current in Kirchhoff's law be defined as \[\varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} + \widetilde{\mathbf{J}},\] the sum of (1) the displacement current and (2) the flux of charge associated with mass. The flux of charge associated with mass, $\widetilde{\mathbf{J}}$, includes, for example, the polarization of dielectrics as well as the movement of electrons. Kirchhoff's law becomes exact and universal when current is defined this way. This current is the source of the magnetic field; it is the source of $\operatorname{curl}\mathbf{B}$ in Maxwell's equations. Kirchhoff's laws and Maxwell's equations can use the same definition of current.
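A textbook worked example (not from the paper) shows why including the displacement current makes the law exact: at a charging parallel-plate capacitor, no charge crosses the gap, yet the total current is continuous.

```latex
% Charging parallel-plate capacitor with plate area A and charge Q(t):
% the wire carries conduction current I = dQ/dt, the gap carries none,
% but the displacement current through the gap equals I exactly.
\[
  E = \frac{Q}{\varepsilon_0 A}
  \quad\Longrightarrow\quad
  \varepsilon_0 \frac{\partial E}{\partial t}\, A
  = \frac{dQ}{dt} = I ,
\]
% so the combined current \varepsilon_0\,\partial_t\mathbf{E} + \widetilde{\mathbf{J}}
% satisfies Kirchhoff's law at the plate without any stray-capacitance patch.
```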
Differential privacy (DP) is widely employed to provide privacy protection for individuals by limiting information leakage from aggregated data. Two well-known models of DP are the central model and the local model. The former requires a trustworthy server for data aggregation, while the latter requires individuals to add noise themselves, significantly decreasing the utility of the aggregated results. Recently, many studies have proposed achieving DP with Secure Multi-party Computation (MPC) in distributed settings, namely the distributed model, which has utility comparable to the central model while, under specific security assumptions, preventing parties from obtaining others' information. One challenge of realizing DP in the distributed model is efficiently sampling noise with MPC. Although many secure sampling methods have been proposed, they rest on different security assumptions and isolated theoretical analyses, and there is a lack of experimental evaluations measuring and comparing their performance. We fill this gap by benchmarking existing sampling protocols in MPC and performing comprehensive measurements of their efficiency. First, we present a taxonomy of the underlying techniques of these sampling protocols.
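One simple instance of distributed noise sampling exploits the infinite divisibility of the Gaussian: each party contributes a partial noise sample, and only the aggregate, which carries the full noise, is ever revealed. A minimal simulation sketch (parameters are illustrative; in a real deployment the summation happens inside the MPC protocol, not in the clear):

```python
import numpy as np

rng = np.random.default_rng(0)
k, sigma, trials = 5, 2.0, 200_000

# Each of k parties adds a *partial* Gaussian sample N(0, sigma^2 / k)
# to its secret-shared input; the protocol reveals only the sum, whose
# total noise is exactly N(0, sigma^2) by infinite divisibility.
partial = rng.normal(0.0, sigma / np.sqrt(k), size=(k, trials))
total_noise = partial.sum(axis=0)

print(round(total_noise.std(), 2))  # ~ 2.0, the central-model noise level
```

This is why the distributed model can match central-model utility: the revealed aggregate carries the same total noise a trusted server would have added, while no single party's partial sample suffices to remove it.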
While the existing literature on Differential Privacy (DP) auditing predominantly focuses on the centralized model (e.g., auditing the DP-SGD algorithm), we advocate extending this approach to audit Local DP (LDP). To achieve this, we introduce the LDP-Auditor framework for empirically estimating the privacy loss of locally differentially private mechanisms. This approach leverages recent advances in designing privacy attacks against LDP frequency estimation protocols. More precisely, through the analysis of numerous state-of-the-art LDP protocols, we extensively explore the factors influencing the privacy audit, such as the impact of different encoding and perturbation functions. Additionally, we investigate the influence of the domain size and the theoretical privacy loss parameters $ε$ and $δ$ on local privacy estimation. In-depth case studies are also conducted to explore specific aspects of LDP auditing, including distinguishability attacks on LDP protocols for longitudinal studies and multidimensional data. Finally, we present a notable achievement of our LDP-Auditor framework: the discovery of a bug in a state-of-the-art LDP Python package. Overall, our LDP-Auditor framework provides a practical tool for empirically assessing the privacy guarantees of LDP mechanisms.
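The flavor of such an audit can be illustrated with k-ary generalized randomized response (GRR), a standard LDP frequency-estimation mechanism: run the mechanism on two neighboring inputs, compare how often a target output appears under each, and take the log-ratio as an empirical epsilon. A minimal sketch (the function names and parameters here are illustrative, not the LDP-Auditor API):

```python
import math, random

random.seed(1)

def grr(value, k, eps):
    """k-ary generalized randomized response with privacy budget eps:
    report the true value with probability p, else a uniform other value."""
    p = math.exp(eps) / (math.exp(eps) + k - 1)
    if random.random() < p:
        return value
    other = random.randrange(k - 1)
    return other if other < value else other + 1

def audit_eps(k, eps, trials=200_000):
    """Empirical privacy loss: how much likelier is output 0 when the
    true input is 0 versus when it is 1? Take the log of that ratio."""
    hit = sum(grr(0, k, eps) == 0 for _ in range(trials))
    miss = sum(grr(1, k, eps) == 0 for _ in range(trials))
    return math.log(hit / miss)

print(round(audit_eps(k=4, eps=1.0), 2))  # close to the nominal eps = 1.0
```

For GRR the worst-case likelihood ratio is attained at exactly this pair of events, so the estimate converges to the nominal ε; a mechanism whose empirical estimate exceeded its claimed ε would be exhibiting the kind of bug the abstract describes.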
Creativity and strategic foresight have been extensively studied through descriptive theories -- Koestler's bisociation (1964), de Bono's lateral thinking (1967), and Ansoff's weak signals (1975) explain why creative and strategic insights occur, but offer limited guidance on how to produce them on demand. This paper presents two executable protocols that bridge this theory-practice gap: GHOSTY COLLIDER, a 5-step protocol for cross-domain creative emergence through structural de-labeling and collision, and PRECOG PROTOCOL, a 5-step protocol for signal-based strategic foresight with multi-axis timing judgment. We formalize established theories into repeatable, step-by-step procedures with explicit quality criteria, anti-pattern detection, and measurable outputs. We evaluate the protocols through three complementary methods: (1) five detailed case studies across distinct domains, (2) controlled comparisons against standard methods using identical inputs, and (3) a batch experiment across eight random domain pairings (N=8, success rate 87.5%, failure rate 12.5%) with one blind evaluation. Preliminary evidence suggests that protocol-driven outputs exhibit greater structural novelty, hi
In this article, we investigate the convergence behavior of two classes of gathering protocols with fixed circulant topologies using tools from dynamical systems. Given a fixed number of mobile entities moving in the Euclidean plane, we model a gathering protocol as a system of (linear) ordinary differential equations whose equilibria are exactly the possible gathering points. Then, for a circulant topology, we derive a decomposition of the state space into stable invariant subspaces with different convergence rates by utilizing tools from dynamical systems theory. It turns out that this decomposition is identical for every linear circulant gathering protocol, whereas the convergence rates depend only on the weights in the interaction graph itself. In the second part, we consider a normalized nonlinear version of the equation of motion that is obtained by scaling the speed of each entity. Again, we find a similar decomposition of the state space based on our findings in the linear case. Finally, we also consider visibility-preservation properties of the two classes of systems.
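The weight-independence of the decomposition can be seen numerically: every circulant matrix is diagonalized by the discrete Fourier transform, so the invariant subspaces (Fourier modes) are the same for all circulant interaction matrices, while the eigenvalues, and hence the convergence rates, follow the chosen weights. A small sketch with a hypothetical weight vector:

```python
import numpy as np

# A linear protocol x' = -L x with a circulant interaction matrix L built
# from a (hypothetical) weight vector w: row i of L is w cyclically
# shifted by i. The eigenvectors of every such L are the Fourier modes;
# the eigenvalues are simply the DFT of the first row.
w = np.array([1.0, -0.5, 0.0, -0.5])     # illustrative weights, row sum 0
n = len(w)
L = np.array([np.roll(w, i) for i in range(n)])

rates = np.fft.fft(w)                     # eigenvalues of the circulant L
print(np.allclose(sorted(np.linalg.eigvals(L).real),
                  sorted(rates.real)))    # True
```

Changing `w` changes only `rates`, never the Fourier-mode subspaces, which is exactly the statement that the decomposition is identical for every linear circulant gathering protocol.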
We design and analyze new protocols to verify the correctness of various computations on matrices over the ring F[x] of univariate polynomials over a field F. For the sake of efficiency, and because many of the properties we verify are specific to matrices over a principal ideal domain, we cannot simply rely on previously developed linear algebra protocols for matrices over a field. Our protocols are interactive, often randomized, and feature a constant number of rounds of communication between the Prover and Verifier. We seek to minimize the communication cost so that the amount of data sent during the protocol is significantly smaller than the size of the result being verified, which can be useful when combining protocols or in some multi-party settings. The main tools we use are reductions to existing linear algebra verification protocols and a new protocol to verify that a given vector is in the F[x]-row space of a given matrix.
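A standard ingredient of such verification protocols is evaluation at a random point: evaluation is a ring homomorphism F[x] -> F, so a claimed identity between polynomial matrices can be spot-checked cheaply, with soundness bounded by Schwartz-Zippel. A minimal sketch of the generic product check over F_P[x] for a word-sized prime (this is the textbook technique, not the paper's row-space protocol):

```python
import random

P = 2**61 - 1           # a Mersenne prime; we work over F_P[x]
random.seed(0)

def poly_eval(f, a):
    """Horner evaluation of f (low-degree-first coefficients) at a mod P."""
    acc = 0
    for c in reversed(f):
        acc = (acc * a + c) % P
    return acc

def poly_mul(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            out[i + j] = (out[i + j] + fi * gj) % P
    return out

def poly_add(f, g):
    n = max(len(f), len(g))
    f, g = f + [0] * (n - len(f)), g + [0] * (n - len(g))
    return [(x + y) % P for x, y in zip(f, g)]

def matmul_scalar(X, Y):
    """Ordinary matrix product over F_P."""
    return [[sum(X[i][t] * Y[t][j] for t in range(len(Y))) % P
             for j in range(len(Y[0]))] for i in range(len(X))]

def matmul_poly(X, Y):
    """Matrix product where every entry is a polynomial over F_P."""
    C = [[[0] for _ in range(len(Y[0]))] for _ in range(len(X))]
    for i in range(len(X)):
        for j in range(len(Y[0])):
            for t in range(len(Y)):
                C[i][j] = poly_add(C[i][j], poly_mul(X[i][t], Y[t][j]))
    return C

def eval_matrix(M, a):
    return [[poly_eval(f, a) for f in row] for row in M]

def verify_product(A, B, C):
    """Probabilistic check of the claim C = A*B over F_P[x]: evaluate all
    three matrices at one random point. Since evaluation is a ring
    homomorphism, a true claim always passes; a false one survives with
    probability at most deg/P by Schwartz-Zippel."""
    a = random.randrange(P)
    return matmul_scalar(eval_matrix(A, a), eval_matrix(B, a)) == eval_matrix(C, a)

A = [[[1, 2], [0, 1]], [[3], [1, 1]]]   # e.g. A[0][0] = 1 + 2x
B = [[[0, 1], [1]], [[2], [0, 0, 1]]]
C = matmul_poly(A, B)
print(verify_product(A, B, C))          # True
C[0][0][0] = (C[0][0][0] + 1) % P       # corrupt one constant term
print(verify_product(A, B, C))          # False: evaluations now differ
```

Only field elements cross the channel at the evaluation point, which is how such protocols keep communication far below the size of the polynomial result being verified.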
A classical $E_{d(d)}$-invariant Hamiltonian formulation of world-volume theories of half-BPS p-branes in type IIB and eleven-dimensional supergravity is proposed, extending known results to $d \leq 6$. It consists of a Hamiltonian, characterised by a generalised metric, and a current algebra constructed such that it reproduces the $E_{d(d)}$ generalised Lie derivative. $E_{d(d)}$-covariance necessitates the introduction of so-called charges, specifying the type of p-brane and the choice of section. For p>2, currents of p-branes are generically non-geometric due to the imposition of U-duality; e.g., the M5-currents contain coordinates associated to the M2-momentum. A derivation of the $E_{d(d)}$-invariant current algebra from a canonical Poisson structure is in general not possible. At most, one can derive a current algebra associated to para-Hermitian exceptional geometry. The membrane in the SL(5)-theory is studied in detail. It is shown that in a generalised frame the current algebra is twisted by the generalised fluxes. As a consistency check, the double dimensional reduction from membranes in M-theory to strings in type IIA string theory is performed. Many features generalise to
Although DRL (deep reinforcement learning) has emerged as a powerful tool for making better decisions than existing hand-crafted communication protocols, it faces significant limitations: 1) selecting an appropriate neural network architecture and setting hyperparameters are crucial for achieving desired performance levels and require domain expertise; 2) the decision-making process in DRL models is often opaque, commonly described as a 'black box'; 3) DRL models are data-hungry. In response, we propose CP-AgentNet, the first framework designed to use generative agents for developing communication network protocols. This approach addresses these challenges by creating an autonomous system for protocol design, significantly reducing human effort. We developed LLMA (LLM-agents-based multiple access) and CPTCP (CP-Agent-based TCP) for heterogeneous environments. Our comprehensive simulations have demonstrated the efficient coexistence of LLMA and CPTCP with nodes using different types of protocols, as well as enhanced explainability.
AI agents, autonomous digital actors, need agent-native protocols; existing methods include GUI automation and MCP-based skills, which suffer from high token consumption, fragmented interaction, and inadequate security, owing to the lack of a unified top-level framework and key components, with each independent module flawed. To address these issues, we present ANX, an open, extensible, verifiable agent-native protocol and top-level framework integrating CLI, Skill, and MCP, resolving these pain points via protocol innovation, architectural optimization, and tool supplementation. Its four core innovations are: 1) an agent-native design (ANX Config, Markup, CLI) with high information density, flexibility, and strong adaptability to reduce tokens and eliminate inconsistencies; 2) human-agent interaction combining Skill's flexibility for dual rendering as agent-executable instructions and human-readable UI; 3) MCP-supported on-demand lightweight apps without pre-registration; 4) ANX Markup-enabled machine-executable SOPs that eliminate ambiguity for reliable long-horizon tasks and multi-agent collaboration. As the first in a series, we focus on ANX's design, presenting its 3EX decoupled architecture with ANXHub and preliminary results.