AU2020336124A1 - Decentralized techniques for verification of data in transport layer security and other contexts


Info

Publication number
AU2020336124A1
Authority
AU
Australia
Prior art keywords
client device
verifier
server
secure session
session
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
AU2020336124A
Inventor
Steven Goldfeder
Ari Juels
Harjasleen Malvai
Sai Krishna Deepak MARAM
Fan Zhang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cornell University
Original Assignee
Cornell University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cornell University filed Critical Cornell University
Publication of AU2020336124A1
Legal status: Pending

Classifications

    • H ELECTRICITY; H04 ELECTRIC COMMUNICATION TECHNIQUE; H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 63/126 Applying verification of the received information: the source of the received data
    • H04L 63/166 Implementing security features at a particular protocol layer: at the transport layer
    • H04L 63/0272 Virtual private networks
    • H04L 63/0281 Proxies
    • H04L 67/141 Setup of application sessions
    • H04L 9/0825 Key transport or distribution, i.e. key establishment techniques where one party creates or otherwise obtains a secret value, and securely transfers it to the other(s), using asymmetric-key encryption or public key infrastructure [PKI], e.g. key signature or public key certificates
    • H04L 9/0841 Key agreement, i.e. key establishment technique in which a shared key is derived by parties as a function of information contributed by, or associated with, each of these, involving Diffie-Hellman or related key agreement protocols
    • H04L 9/085 Secret sharing or secret splitting, e.g. threshold schemes
    • H04L 9/14 Cryptographic mechanisms or cryptographic arrangements for secret or secure communications using a plurality of keys or algorithms
    • H04L 9/3218 Cryptographic mechanisms including means for verifying the identity or authority of a user of the system or for message authentication, using proof of knowledge, e.g. Fiat-Shamir, GQ, Schnorr, or non-interactive zero-knowledge proofs
    • H04L 9/3247 Cryptographic mechanisms including means for verifying the identity or authority of a user of the system or for message authentication, involving digital signatures
    • H04L 2209/46 Secure multiparty computation, e.g. millionaire problem
    • H04L 2209/76 Proxy, i.e. using an intermediary entity to perform cryptographic operations
    • H04L 63/045 Confidential data exchange wherein the sending and receiving network entities apply hybrid encryption, i.e. combination of symmetric and asymmetric encryption

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Hardware Design (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Storage Device Security (AREA)
  • Computer And Data Communications (AREA)

Abstract

A verifier device in one embodiment is configured to communicate over one or more networks with a client device and a server device. The verifier device participates in a three-party handshake protocol with the client device and the server device in which the verifier device and the client device obtain respective shares of a session key of a secure session with the server device. The verifier device receives from the client device a commitment relating to the secure session with the server device, and responsive to receipt of the commitment, releases to the client device additional information relating to the secure session that was not previously accessible to the client device. The verifier device verifies correctness of at least one characterization of data obtained by the client device from the server device as part of the secure session, based at least in part on the commitment and the additional information.

Description

DECENTRALIZED TECHNIQUES FOR VERIFICATION OF DATA IN TRANSPORT LAYER SECURITY AND OTHER CONTEXTS
Cross-Reference to Related Application(s)
The present application claims priority to U.S. Provisional Patent Application Serial No. 62/894,052, filed August 30, 2019 and entitled “DECO: Decentralized Oracles for TLS,” which is incorporated by reference herein in its entirety.
Statement of Government Support
This invention was made with U.S. government support under Grant Nos. CNS-1514163, CNS-1564102, CNS-1704615 and CNS-1933655 of the National Science Foundation (NSF), and Army Research Office (ARO) Grant No. W911NF161-0145. The U.S. government has certain rights in the invention.
Field
The field relates generally to information security, including, for example, techniques for proving that data is from a particular source or otherwise verifying correctness of data obtained in the context of transport layer security (TLS) and in other contexts.
Background
Thanks to the widespread deployment of TLS, users can access private data over channels with end-to-end confidentiality and integrity. What they cannot do, however, is prove to third parties the provenance of such data, i.e., that it genuinely came from a particular website. Existing approaches either introduce undesirable trust assumptions or require server-side modifications. As a result, the value of users’ private data is locked up in its point of origin.
Summary
Illustrative embodiments provide decentralized oracles for TLS and in numerous other applications in which it is necessary or otherwise desirable to prove that data is from a particular source.
For example, some embodiments overcome the above-described disadvantages of existing approaches by providing a decentralized oracle, illustratively referred to herein as DECO, that allows users to prove that a piece of data accessed via TLS came from a particular website, and to optionally prove statements about such data in zero-knowledge, keeping the data itself secret. DECO can thus liberate data from centralized web-service silos, making it accessible to a rich spectrum of applications. Advantageously, DECO in illustrative embodiments works without trusted hardware or server-side modifications.
In one embodiment, an apparatus comprises a verifier device that includes a processor and a memory coupled to the processor. The verifier device is configured to communicate over one or more networks with a client device and a server device. The verifier device participates in a three-party handshake protocol with the client device and the server device in which the verifier device and the client device obtain respective shares of a session key of a secure session with the server device. The verifier device receives from the client device a commitment relating to the secure session with the server device, and responsive to receipt of the commitment, releases to the client device additional information relating to the secure session that was not previously accessible to the client device. The verifier device verifies correctness of at least one characterization of data obtained by the client device from the server device as part of the secure session, based at least in part on the commitment and the additional information.
It is to be appreciated that the foregoing arrangements are only examples, and numerous alternative arrangements are possible.
These and other embodiments of the invention include but are not limited to systems, methods, apparatus, processing devices, integrated circuits, and processor-readable storage media having software program code embodied therein.
Brief Description of the Figures
FIG. 1 shows an information processing system implementing decentralized oracles for TLS in an illustrative embodiment.
FIG. 2 is a flow diagram of a process for implementing a decentralized oracle for TLS in an illustrative embodiment.
FIG. 3 shows an example of decentralized oracle functionality in an illustrative embodiment.
FIG. 4 illustrates multiple phases of device interaction in a decentralized oracle implementation comprising a server device, a prover device and a verifier device in an illustrative embodiment.
FIG. 5 shows example data comprising bank statement information used to demonstrate selective opening and context-integrity attacks in an illustrative embodiment.
FIG. 6 shows a detailed view of one possible implementation of a decentralized oracle protocol in an illustrative embodiment.
FIG. 7 shows a detailed example of a three-party handshake protocol utilized in implementing a decentralized oracle in an illustrative embodiment.
FIG. 8 shows a protocol carried out between a prover device and a verifier device in establishing key shares in an illustrative embodiment.
FIGS. 9 and 10 show additional examples of protocols utilized in conjunction with implementation of decentralized oracles in illustrative embodiments.
FIG. 11 shows a smart contract embodiment involving two parties in which one of the parties utilizes a decentralized oracle to obtain information that is provided to the smart contract.
FIGS. 12 and 13 show example data processed using respective reveal and redact modes of one or more decentralized oracles in illustrative embodiments.
FIG. 14 shows pseudocode for two-stage parsing of unique-key grammars in an illustrative embodiment.
Detailed Description
Embodiments of the invention can be implemented, for example, in the form of information processing systems comprising computer networks or other arrangements of networks, clients, servers, processing devices and other components. Illustrative embodiments of such systems will be described in detail herein. It should be understood, however, that embodiments of the invention are more generally applicable to a wide variety of other types of information processing systems and associated networks, clients, servers, processing devices or other components. Accordingly, the term “information processing system” as used herein is intended to be broadly construed so as to encompass these and other arrangements.
FIG. 1 shows an information processing system 100 implementing decentralized oracles for TLS in an illustrative embodiment. The system 100 comprises a plurality of client devices 102-1, 102-2, ... 102-N and a verifier device 104 which are configured to communicate over a network 105. A given one of the client devices 102 can comprise, for example, a laptop computer, tablet computer or desktop personal computer, a mobile telephone, or another type of computer or processing device, as well as combinations of multiple such devices. The verifier device 104 can similarly comprise various types of processing devices each including at least one processor and at least one memory coupled to the at least one processor.
The network 105 can illustratively include, for example, a global computer network such as the Internet, a wide area network (WAN), a local area network (LAN), a satellite network, a telephone or cable network, a cellular network such as a 3G, 4G or 5G network, a wireless network implemented using a wireless protocol such as Bluetooth, WiFi or WiMAX, or various portions or combinations of these and other types of communication networks.
The system 100 further comprises a plurality of TLS servers 106 associated with respective protected data sources 108. The TLS servers 106 are illustratively configured to control access to their respective protected data sources 108. At least a subset of the protected data sources 108 illustratively comprise respective HTTPS-enabled websites, where HTTPS denotes Hypertext Transfer Protocol Secure, an extension of the Hypertext Transfer Protocol (HTTP). It is to be appreciated that a wide variety of additional or alternative data sources can be used in other embodiments. The protected data sources 108 of the system 100 are protected in the sense that they can be securely accessed via HTTPS through at least one of the TLS servers 106. Other data sources need not be accessible in this particular manner, and can implement additional or alternative access control mechanisms, and/or could be publicly accessible using other types of secure protocols.
Also, although illustrative embodiments herein utilize one or more servers of particular types, such as TLS servers 106, it is to be appreciated that other types of secure protocols can be used in other embodiments, and can be implemented in other types of server devices. The term “server device” as used herein is therefore intended to be broadly construed, and should not be viewed as being limited in any way to TLS servers.
The client devices 102 in some embodiments operate as respective prover devices relative to the verifier device 104 or other zero-knowledge proof (ZKP) oracle nodes 110 of system 100. The verifier device 104 may therefore be viewed as a ZKP oracle node of the system 100. Accordingly, a given “verifier device” as that term is broadly used herein can comprise, for example, a particular oracle node of a set of oracle nodes of a decentralized oracle system such as system 100.
The particular numbers, types and arrangements of devices and other components in system 100 are presented by way of illustrative example only, and can be varied in other embodiments. For example, although only a single verifier device 104 is shown in this embodiment, other embodiments can include multiple verifier devices, as in an arrangement in which the other ZKP oracle nodes 110 operate as respective verifier devices. Also, one or more of the TLS servers 106 can each be configured to control access to multiple ones of the protected data sources 108. Numerous other decentralized oracle arrangements are possible.
The verifier device 104 comprises a three-party handshake module 112, a post-handshake interaction module 114, and a proof verification module 116. These modules implement respective distinct protocols of a decentralized oracle protocol, also referred to herein as a DECO protocol, as described in more detail below.
The verifier device 104 in the present embodiment further comprises a processor 120, a memory 122 and a network interface 124. The processor 120 is assumed to be operatively coupled to the memory 122 and to the network interface 124 via the simplified illustrative interconnections shown in the figure.
The processor 120 may comprise, for example, a microprocessor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a central processing unit (CPU), a graphics processing unit (GPU), an arithmetic logic unit (ALU), a digital signal processor (DSP), or other similar processing device component, as well as other types and arrangements of processing circuitry, in any combination. At least a portion of the functionality of a decentralized oracle provided by a given processing device as disclosed herein can be implemented using such circuitry.
The memory 122 stores software program code for execution by the processor 120 in implementing portions of the functionality of the processing device. For example, at least portions of the functionality of the modules 112, 114 and 116 can be implemented using program code stored in memory 122.
A given such memory that stores such program code for execution by a corresponding processor is an example of what is more generally referred to herein as a processor-readable storage medium having program code embodied therein, and may comprise, for example, electronic memory such as SRAM, DRAM or other types of random access memory, read-only memory (ROM), flash memory, magnetic memory, optical memory, or other types of storage devices in any combination.
Articles of manufacture comprising such processor-readable storage media are considered embodiments of the invention. The term “article of manufacture” as used herein should be understood to exclude transitory, propagating signals.
Other types of computer program products comprising processor-readable storage media can be implemented in other embodiments.
In addition, embodiments of the invention may be implemented in the form of integrated circuits comprising processing circuitry configured to implement processing operations associated with one or more of the client devices 102, verifier device 104, TLS servers 106 and other ZKP oracle nodes 110. The network interface 124 is configured to allow the verifier device 104 to communicate over the network 105 with other system elements, and may comprise one or more conventional transceivers.
In operation, the verifier device 104 in illustrative embodiments is configured to participate in a three-party handshake protocol with a given one of the client devices 102, such as client device 102-1, and a given one of the TLS servers 106. Such participation is illustratively controlled by the three-party handshake module 112 of the verifier device 104. In conjunction with execution of the three-party handshake protocol, verifier device 104 and the client device 102-1 obtain respective shares of a session key of a secure session with the given TLS server. The secure session illustratively comprises a TLS session.
The verifier device 104 receives from the client device 102-1 a commitment relating to the secure session with the given TLS server. Responsive to receipt of the commitment, the verifier device 104 releases to the client device 102-1 additional information relating to the secure session that was not previously accessible to the client device 102-1. These operations relating to the commitment are illustratively performed under the control of the post-handshake interaction module 114.
The verifier device 104 verifies correctness of at least one characterization of data obtained by the client device 102-1 from the given TLS server as part of the secure session, based at least in part on the commitment and the additional information. Such verification is illustratively performed under the control of the proof verification module 116.
In some embodiments, the verifier device 104 is further configured to initiate one or more automated actions responsive to the verification of the correctness of the at least one characterization of the data obtained by the client device 102-1 from the given TLS server. For example, the verifier device 104 can return verification information or other related information to the client device 102-1.
By way of example, the commitment relating to the secure session may comprise a commitment to query response data obtained by the client device 102-1 from the given TLS server as part of the secure session. As another example, the commitment relating to the secure session may comprise a commitment to a prover key established by the client device 102-1 in conjunction with the three-party handshake protocol but not previously accessible to the verifier device 104. Other types of commitments can be used in other embodiments.
The additional information released to the client device 102-1 responsive to receipt of the commitment in some embodiments comprises a verifier key established by the verifier device 104 in conjunction with the three-party handshake protocol but not previously accessible to the client device 102-1.
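By way of illustration only, the following Python sketch shows this commit-then-release pattern using a simple hash commitment. The function names and the byte strings standing in for the prover's committed session data and the verifier's key share are hypothetical and do not correspond to any particular embodiment.

```python
# Minimal sketch of the commit-then-release pattern: the prover commits before
# the verifier releases its additional information. Names are illustrative only.
import hashlib
import hmac
import os

def commit(data: bytes) -> tuple[bytes, bytes]:
    """Prover side: commit to session data before seeing the verifier's share."""
    nonce = os.urandom(32)
    digest = hashlib.sha256(nonce + data).digest()
    return digest, nonce

def open_commitment(digest: bytes, nonce: bytes, data: bytes) -> bool:
    """Verifier side: check that opened data matches the earlier commitment."""
    return hmac.compare_digest(digest, hashlib.sha256(nonce + data).digest())

# Prover commits to (for example) its key share and the recorded ciphertexts ...
session_data = b"prover-key-share || recorded TLS ciphertexts"
digest, nonce = commit(session_data)

# ... only after receiving the commitment does the verifier release its share.
verifier_key_share = os.urandom(16)

# Later, the verifier checks any opening against the commitment it holds.
assert open_commitment(digest, nonce, session_data)
```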
In other embodiments, the verifier device 104 is further configured to operate as a proxy for the client device 102-1 in conjunction with interactions between the client device 102-1 and the given TLS server, such that the verifier device 104 automatically obtains ciphertexts exchanged between the client device 102-1 and the given TLS server as part of the secure session via the verifier device 104 operating as the proxy. Such an embodiment is referred to herein as a “proxy mode” arrangement.
The verifier device 104 in some embodiments is further configured to receive from the client device 102-1 one or more statements characterizing the data obtained by the client device 102-1 from the given TLS server as part of the secure session.
For example, a given one of the one or more statements illustratively comprises a selectively-revealed substring of query response data obtained by the client device 102-1 from the given TLS server as part of the secure session.
As another example, a given one of the one or more statements is illustratively configured to provide context integrity through utilization of a multi-stage parsing protocol in which query response data obtained by the client device 102-1 from the given TLS server as part of the secure session is preprocessed by the client device 102-1 to generate reduced data that is subsequently parsed by the client device 102-1 in conjunction with generation of the given statement to be sent by the client device 102-1 to the verifier device 104.
A wide variety of other types of statements characterizing the data obtained by the client device 102-1 from the given TLS server as part of the secure session can be used in other embodiments. In some embodiments, in conjunction with the three-party handshake protocol, the client device 102-1 and the verifier device 104 jointly establish one or more shared session keys with the given TLS server, with the client device 102-1 having a first share of a given one of the one or more shared session keys, the verifier device 104 having a second share of the given shared session key, and the given TLS server having a composite session key combining the first and second shares.
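For intuition only, the following toy sketch uses classic finite-field Diffie-Hellman with non-standard parameters, rather than the ECDHE actually negotiated in TLS, to illustrate how two secret shares held by the client device and the verifier device can combine into a single composite secret that only the server computes in full; neither share-holder alone learns that secret.

```python
# Toy illustration (insecure parameters, finite-field DH rather than ECDHE):
# the server sees one combined client public value, but the corresponding
# secret exponent is split between prover and verifier.
import secrets

p = 2**255 - 19   # a prime; toy modulus, not a standardized DH group
g = 2

s = secrets.randbelow(p - 2) + 1           # server's ephemeral secret
y_P = secrets.randbelow(p - 2) + 1         # prover's share of the client secret
y_V = secrets.randbelow(p - 2) + 1         # verifier's share of the client secret

S = pow(g, s, p)                           # server public value
Y = (pow(g, y_P, p) * pow(g, y_V, p)) % p  # combined client value = g^(y_P + y_V)

Z_server = pow(Y, s, p)                    # server's composite shared secret
Z_P = pow(S, y_P, p)                       # prover's share of the secret
Z_V = pow(S, y_V, p)                       # verifier's share of the secret
assert Z_server == (Z_P * Z_V) % p         # shares combine to the server's value
```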
Additionally or alternatively, in conjunction with the three-party handshake protocol, the client device 102-1 receives from the given TLS server an encryption key that is not accessible to the verifier device 104.
In some embodiments, the verifier device 104 and the client device 102-1 collaborate using their respective shares of the session key of the secure session with the given TLS server to generate a query that is provided by the client device 102-1 to the given TLS server to request that the given TLS server send the data to the client device 102-1.
The verifier device 104 and the client device 102-1 can similarly collaborate using their respective shares of the session key of the secure session with the given TLS server to validate a response that is provided by the given TLS server to the client device 102-1 responsive to the query.
In some embodiments, in conjunction with the three-party handshake protocol, the client device 102-1 and the verifier device 104 establish respective prover and verifier keys. In such an embodiment, verifying correctness of at least one characterization of data obtained by the client device 102-1 from the given TLS server as part of the secure session illustratively comprises verifying a proof provided by client device 102-1 to the verifier device 104. The proof is illustratively generated by the client device 102-1 based at least in part on (i) the prover key established by the client device 102-1 in conjunction with the three-party handshake protocol, (ii) the verifier key established by the verifier device 104 in conjunction with the three-party handshake protocol, and (iii) secret information of the client device 102-1, such as a password or passcode.
In some embodiments, verifying correctness of at least one characterization of data obtained by the client device 102-1 from the given TLS server as part of the secure session illustratively comprises obtaining data derived from at least a portion of at least one ciphertext of the secure session, and verifying correctness of at least one characterization of that data by the client device 102-1. The term “ciphertext” as used in this context and elsewhere herein is intended to be broadly construed, and should not be viewed as requiring the use of any particular cryptographic protocol.
It is to be appreciated that the particular arrangement of components and other system elements shown in FIG. 1, and their associated processing operations as described above, are presented by way of illustrative example only, and numerous alternative embodiments are possible.
For example, although the verifier device 104 in the system 100 is illustratively shown as a single processing device comprising a processor coupled to a memory, the verifier device in other embodiments can comprise a distributed verifier device in which functionality of the verifier device 104 is distributed across multiple distinct processing devices. In such embodiments, the role of the verifier in various protocols disclosed herein can be distributed across multiple distinct parties executing a multi-party protocol, with each such party being associated with a different one of the multiple processing devices of the distributed verifier device. The term “device” as used herein is therefore intended to be broadly construed, so as to encompass at least one processing device comprising a processor coupled to a memory, and therefore multiple such processing devices as in the case of the distributed verifier device.
FIG. 2 shows an exemplary process, illustratively implemented at least in part by the verifier device 104 interacting with one of the client devices 102 as a prover with respect to data controlled by one of the TLS servers 106. It is to be understood that this particular process is only an example, and additional or alternative processes can be performed at least in part by prover, verifier and server entities in other embodiments.
In this embodiment, the process illustratively comprises steps 200 through 208. As noted above, at least portions of these steps are assumed to be performed at least in part by verifier device 104 interacting with one of the client devices 102 and further involving one of the TLS servers 106. These components are also referred to in the context of the FIG. 2 process as verifier, prover and server, respectively.
In step 200, the verifier participates in a three-party handshake protocol with the client and the server, with the client acting as the prover.
In step 202, in conjunction with the three-party handshake protocol, the verifier and the prover obtain respective shares of a session key of a secure session with the server.
In step 204, the verifier receives from the prover a commitment relating to the secure session with the server.
In step 206, responsive to receipt of the commitment, the verifier releases to the prover additional information relating to the secure session that was not previously accessible to the prover.
In step 208, the verifier verifies correctness of at least one characterization of data obtained by the prover from the server as part of the secure session, based at least in part on the commitment and the additional information.
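Purely as an illustrative aid, the following sketch compresses steps 200 through 208 into verifier-side pseudocode; all function names and checks are hypothetical placeholders for the operations described above, stubbed so the sketch runs.

```python
# Illustrative verifier-side flow for FIG. 2 (steps 200-208); placeholder logic only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SessionState:
    verifier_key_share: bytes
    commitment: Optional[bytes] = None

def three_party_handshake() -> SessionState:                         # steps 200-202
    return SessionState(verifier_key_share=b"verifier-share")

def receive_commitment(state: SessionState, commitment: bytes) -> None:  # step 204
    state.commitment = commitment

def release_additional_info(state: SessionState) -> bytes:           # step 206
    assert state.commitment is not None, "never release before the commitment"
    return state.verifier_key_share

def verify_characterization(state: SessionState, proof: bytes) -> bool:  # step 208
    return bool(state.commitment) and bool(proof)                    # placeholder check

state = three_party_handshake()
receive_commitment(state, commitment=b"prover-commitment")
extra = release_additional_info(state)
print(verify_characterization(state, proof=b"zk-proof-bytes"), extra)
```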
Numerous other techniques can be used in association with implementation of decentralized oracles as disclosed herein.
Accordingly, the particular processing operations and other functionality described in conjunction with the flow diagram of FIG. 2 are presented by way of illustrative example only, and should not be construed as limiting the scope of the invention in any way. Alternative embodiments can use other types of processing operations involving verifier devices, prover devices and server devices. For example, the ordering of the process steps may be varied in other embodiments, or certain steps may be performed concurrently with one another rather than serially. Also, multiple instances of the process may be performed in parallel with one another within system 100 for different sets of respective prover, verifier and server devices. Accordingly, system 100 can simultaneously implement a large number of decentralized oracles using the techniques disclosed herein.
Additional aspects of illustrative embodiments will now be described with reference to FIGS. 3 through 14.
The widespread deployment of TLS allows users to access private data over channels with end-to-end confidentiality and integrity. What they cannot readily do under conventional practice, however, is prove to third parties the provenance of such data, i.e., that it genuinely came from a particular website. Existing approaches either introduce undesirable trust assumptions or require server-side modifications.
As a result, users’ private data is locked up at its point of origin. Users cannot export their data in an integrity-protected way to other applications without help and permission from the current data holder.
Illustrative embodiments herein provide techniques referred to as DECO (short for decentralized oracle) to address these and other problems. DECO allows users to prove that a piece of data accessed via TLS came from a particular website and optionally prove statements about such data in zero-knowledge, keeping the data itself secret. Advantageously, DECO in illustrative embodiments can be implemented without the need for trusted hardware or server-side modifications.
DECO can liberate data from centralized web-service silos, making it accessible to a rich spectrum of applications. To demonstrate the power of DECO, we implement three applications that are hard to achieve without it: a private financial instrument using smart contracts, converting legacy credentials to anonymous credentials, and verifiable claims against price discrimination.
It is to be appreciated that these and other references to DECO herein refer to illustrative embodiments, and the particular features, functionality, advantages and other details of those embodiments should not be construed as limiting in any way. Alternative embodiments can implement additional or alternative techniques for verifying data sources in TLS and other contexts.
As indicated above, TLS is a powerful, widely deployed protocol that allows users to access web data over confidential, integrity-protected channels. But TLS has a serious limitation: it doesn’t allow a user to prove to third parties that a piece of data she has accessed authentically came from a particular website. As a result, data use is often restricted to its point of origin, curtailing data portability by users, a right acknowledged by recent regulations such as GDPR.
Specifically, when a user accesses data online via TLS, she cannot securely export it, without help (hence permission) from the current data holder. Vast quantities of private data are thus intentionally or unintentionally locked up in the “deep web” — the part of the web that isn’t publicly accessible. To better appreciate the problem, consider an example in which Alice wants to prove to Bob that she’s over 18. Currently, age verification services typically require users to upload IDs and detailed personal information, which raises privacy concerns. But various websites, such as company payroll records or DMV websites, in principle store and serve verified birth dates. Alice could send a screenshot of her birth date from such a site, but this is easily forged. And even if the screenshot could somehow be proven authentic, it would leak information — revealing her exact birth date, not just that she’s over 18.
Initially proposed to prove provenance of online data to smart contracts, oracles are a step towards exporting TLS-protected data to other systems with provenance and integrity assurances. Existing schemes, however, have serious technical limitations. They either only work with deprecated TLS versions and offer no privacy from the oracle (e.g., TLSNotary) or rely on trusted hardware (e.g., Town Crier), against which various attacks have recently emerged. Another class of oracle schemes assumes server-side cooperation, mandating that servers install TLS extensions or change application-layer logic. Server-facilitated oracle schemes suffer from two fundamental problems. First, they break legacy compatibility, causing a significant barrier to wide adoption. Moreover, such solutions only provide conditional exportability because the web servers have the sole discretion to determine which data can be exported, and can censor export attempts at will. A mechanism that allows users to export any data they have access to would enable a whole host of currently unrealizable applications.
To address the above problems, illustrative embodiments disclosed herein provide an arrangement referred to as DECO, a decentralized oracle for TLS. Unlike oracle schemes that require per-website support, DECO is illustratively source-agnostic and supports any website running standard TLS. Unlike solutions that rely on websites’ participation, DECO requires no server-side cooperation. Thus a single instance of DECO could enable anyone to become an oracle for any website.
DECO makes rich Internet data accessible with authenticity and privacy assurances to a wide range of applications, including ones that cannot access the Internet such as smart contracts. DECO could fundamentally shift today’s model of web data dissemination by providing private data delivery with an option for transfer to third parties or public release. This technical capability highlights potential future legal and regulatory challenges, but also anticipates the creation and delivery of appealing new services. Importantly, DECO does not require trusted hardware, unlike some alternative approaches.
In some embodiments, at a high level, the prover commits to a piece of data D and proves to the verifier that D came from a TLS server S, and optionally proves a statement π_D about D. With reference again to the example of proving age, the statement π_D could be the predicate “D = y/m/d is Alice’s date of birth and the current date - D is at least 18 years.”
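Expressed as code purely for illustration (the function name is hypothetical and not part of the protocol), such a predicate could be evaluated as follows, with the verifier learning only the boolean result rather than the date of birth itself.

```python
# Example predicate: given a date of birth D = y/m/d, check that the current
# date minus D is at least 18 years.
from datetime import date

def stmt_over_18(dob: date, today: date) -> bool:
    eighteenth_birthday = dob.replace(year=dob.year + 18)
    return today >= eighteenth_birthday

assert stmt_over_18(date(2000, 1, 1), today=date(2020, 1, 1))
assert not stmt_over_18(date(2005, 6, 30), today=date(2020, 1, 1))
```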
Informally, DECO achieves authenticity: the verifier is convinced only if the asserted statement about D is true and D is indeed obtained from TLS server S. DECO also provides privacy in that the verifier only learns that the statement π_D holds for some D obtained from S.
Designing DECO with the required security and practical performance, while using legacy TLS compatible primitives, introduces several important technical challenges. One challenge stems from the fact that TLS generates symmetric encryption and authentication keys that are shared by the client (e.g., prover in DECO) and web server. Thus, the client can forge arbitrary TLS session data, in the sense of signing the data with valid authentication keys.
To address this challenge, DECO introduces a novel three-party handshake protocol among the prover, verifier, and web server that creates an unforgeable commitment by the prover to the verifier on a piece of TLS session data D. The verifier can check that D is authentically from the TLS server. From the prover’s perspective, the three-party handshake preserves the security of TLS in the presence of a malicious verifier.
Efficient selective opening. After committing to D, the prover proves statements about the commitment. Although arbitrary statements can be supported in theory, we optimize for what are likely to be the most popular applications: revealing only substrings of the response to the verifier. We call such statements selective opening. Fine-grained selective opening allows users to hide sensitive information and reduces the input length to the subsequent proofs.
A naive solution would involve expensive verifiable decryption of TLS records using generic zero-knowledge proofs (ZKPs), but illustrative embodiments herein achieve an orders-of-magnitude efficiency improvement by exploiting the TLS record structure. For example, a direct implementation of verifiable decryption of a TLS record would involve proving correct execution of a circuit of 1024 AES invocations in zero-knowledge, whereas by leveraging the MAC-then-encrypt structure of CBC-HMAC, illustrative embodiments herein can accomplish the same with only 3 AES invocations.
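The gain rests on the block-locality of CBC decryption: plaintext block i depends only on ciphertext blocks i-1 and i. The following sketch, which assumes the third-party Python cryptography package, demonstrates that a single block of a CBC ciphertext can be recovered with one AES invocation plus an XOR; it illustrates the underlying observation rather than the optimized proof circuit itself.

```python
# CBC block-locality: P_i = AES-Dec(k, C_i) XOR C_{i-1}, so a prover can open a
# single record block without re-decrypting the whole record inside a ZK circuit.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = os.urandom(16)
iv = os.urandom(16)
plaintext = os.urandom(64)          # four 16-byte blocks, already block-aligned

# Encrypt normally with AES-CBC.
enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
ciphertext = enc.update(plaintext) + enc.finalize()

# Recover only plaintext block 3 using one AES call plus an XOR.
blocks = [iv] + [ciphertext[i:i + 16] for i in range(0, len(ciphertext), 16)]
ecb = Cipher(algorithms.AES(key), modes.ECB()).decryptor()
p3 = xor(ecb.update(blocks[3]) + ecb.finalize(), blocks[2])   # Dec(C_3) XOR C_2

assert p3 == plaintext[32:48]
```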
Context integrity. Selective opening allows the prover to only reveal a substring D' of the server’s response D. However, a substring may mean different things depending on when it appears, and a malicious prover could cheat by quoting out of context. Therefore we need to prove not just that D' appears in D, but that it appears in the expected context, i.e., D' has context integrity with respect to D. (Note that this differs from “contextual integrity” in privacy theory.)
Context-integrity attacks can be thwarted if the session content is structured and can be parsed. Fortunately, most web data takes this form (e.g., in JavaScript Object Notation (JSON) or Hypertext Markup Language (HTML)). A generic solution is to parse the entire session and prove that the revealed part belongs to the necessary branch of a parse tree. But, under certain constraints that web data generally satisfies, parsing the entire session is not necessary. Some embodiments disclosed herein provide a novel two-stage parsing scheme where the prover pre-processes the session content, and only parses the outcome, which is usually much smaller. We draw from the definition of equivalence of programs, as used in programming language theory, to build a formal framework to reason about the security of two-stage parsing schemes. Illustrative embodiments disclosed herein provide several practical realizations for specific grammars. Our definitions and constructions generalize to other oracles too. For example, they could prevent a generic version of a content-hidden attack.
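For illustration, the following simplified sketch shows the two-stage idea for a JSON response under a unique-key constraint: a cheap preprocessing stage trims the response to a small fragment around the key of interest, and only that fragment is parsed. The helper name two_stage_extract and the example response are hypothetical, and the real scheme operates on committed session data rather than plaintext strings.

```python
# Two-stage parsing sketch for a "unique-key" JSON grammar. The uniqueness check
# in stage 1 is what rules out quoting a value from the wrong context.
import json

def two_stage_extract(response: str, key: str) -> str:
    needle = f'"{key}"'
    # Stage 1: preprocessing. Unique-key constraint: the key must appear once.
    if response.count(needle) != 1:
        raise ValueError("key is not unique; full parsing would be required")
    start = response.index(needle)
    end = response.index(",", start) if "," in response[start:] else response.index("}", start)
    fragment = "{" + response[start:end] + "}"
    # Stage 2: parse only the small extracted fragment.
    return json.loads(fragment)[key]

resp = '{"account": "alice", "balance": "2243.07", "currency": "USD"}'
assert two_stage_extract(resp, "balance") == "2243.07"
```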
With regard to implementation and evaluation of illustrative embodiments, we designed and implemented DECO as a complete end-to-end system. To demonstrate the system’s power, we implemented three applications: 1) a confidentiality-preserving financial instrument using smart contracts; 2) converting legacy credentials to anonymous credentials; and 3) verifiable claims against price discrimination.
Our experiments with these applications show that DECO is highly efficient. For example, for TLS 1.2 in the WAN setting, online time is 2.85s to perform the three-party handshake and 2.52s for 2PC query execution. It takes about 3s to 13s to generate zero-knowledge proofs for the applications described above. More details are provided elsewhere herein.
DECO as disclosed in conjunction with illustrative embodiments to be described in detail below advantageously provides a provably secure decentralized oracle scheme. DECO in some embodiments provides an oracle scheme for modern TLS versions that doesn’t require trusted hardware or server-side modifications.
We also describe in detail below a broad class of statements for TLS records that can be proven efficiently in zero-knowledge using DECO. Such statements allow users to open only substrings of a session-data commitment. The optimizations achieve substantial efficiency improvement over generic ZKPs.
With regard to context-integrity attacks and mitigation, we identify a new class of context-integrity attacks universal to privacy-preserving oracles, and we describe our mitigation approach involving a novel, efficient two-stage parsing scheme.
Transport Layer Security (TLS)
We now provide some background on TLS handshake and record protocols on which DECO builds in illustrative embodiments.
TLS is a family of protocols that provides privacy and data integrity between two communicating applications. Roughly speaking, it consists of two protocols: a handshake protocol that sets up the session using asymmetric cryptography, establishing shared client and server keys for the next protocol, the record protocol, in which data is transmitted with confidentiality and integrity protection using symmetric cryptography.
Handshake. In the handshake protocol, the server and client first agree on a set of cryptographic algorithms (also known as a cipher suite). They then authenticate each other (client authentication optional), and finally securely compute a shared secret to be used for the subsequent record protocol.
DECO in illustrative embodiments utilizes elliptic curve Diffie-Hellman (DH) key exchange with ephemeral secrets (ECDHE), although this is by way of example rather than limitation.
Record Protocol. To transmit application-layer data (e.g., HTTP messages) in TLS, the record protocol first fragments the application data D into fixed-size plaintext records D = (D_1, ..., D_n). Each record is usually padded to a multiple of the block size (e.g., 128 bits). The record protocol then optionally compresses the data, applies a MAC, encrypts, and transmits the result. Received data is decrypted, verified, decompressed, reassembled, and then delivered to higher-level protocols. The specific cryptographic operations depend on the negotiated cipher suite. DECO supports the Advanced Encryption Standard (AES) cipher in two commonly used modes: CBC-HMAC and GCM, where CBC-HMAC denotes Cipher Block Chaining - Hash-based Message Authentication Code, and GCM denotes Galois/Counter Mode. Additional details regarding these and other aspects of TLS can be found in, for example, T. Dierks and E. Rescorla, “The Transport Layer Security (TLS) Protocol Version 1.2,” RFC 5246, 2008, which is incorporated by reference herein. Again, other protocols can be used in other embodiments.
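As a rough illustration of the MAC-then-encrypt structure used by CBC-HMAC cipher suites (assuming the third-party Python cryptography package; key sizes and padding are simplified relative to TLS 1.2):

```python
# Simplified MAC-then-encrypt record construction: MAC the plaintext record,
# append the tag, pad to the AES block size, then encrypt under CBC.
import hashlib
import hmac
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

k_mac, k_enc, iv = os.urandom(32), os.urandom(16), os.urandom(16)
record = b'HTTP/1.1 200 OK\r\n\r\n{"balance": "2243.07"}'

tag = hmac.new(k_mac, record, hashlib.sha256).digest()      # MAC ...
body = record + tag
pad = 16 - len(body) % 16
body += bytes([pad] * pad)                                  # ... pad ...
enc = Cipher(algorithms.AES(k_enc), modes.CBC(iv)).encryptor()
ciphertext = enc.update(body) + enc.finalize()              # ... then encrypt
```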
Differences Between TLS 1.2 and 1.3. In some embodiments herein we focus on TLS 1.2, and later describe how to generalize our techniques to TLS 1.3. Here we briefly note the major differences between these two TLS versions. TLS 1.3 removes the support for legacy non-AEAD ciphers. The handshake flow has also been restructured. All handshake messages after the ServerHello are now encrypted. Finally, a different key derivation function is used. Additional details can be found in E. Rescorla, “The Transport Layer Security (TLS) Protocol Version 1.3,” RFC 8446, 2018, which is incorporated by reference herein.
Multi-Party Computation
Consider a group of n parties P_1, ..., P_n, each of whom holds some secret s_i. Secure multi-party computation (MPC) allows them to jointly compute f(s_1, ..., s_n) without leaking any information other than the output of f, i.e., P_i learns nothing about s_j for j ≠ i. Security for MPC protocols generally considers an adversary that corrupts t players and attempts to learn the private information of an honest player. Two-party computation (2PC) refers to the special case of n = 2 and t = 1.
There are two general approaches to 2PC protocols. Garbled-circuit protocols encode f as a boolean circuit, an approach best-suited for bitwise operations (e.g., SHA-256). Other protocols leverage threshold secret sharing and are best suited for arithmetic operations. The functions we compute in some embodiments using 2PC, though, include both bitwise and arithmetic operations. We separate them into two components, and use an optimized garbled-circuit protocol for the bitwise operations and a secret-sharing based MtA protocol for the arithmetic operations. Additional details regarding an example optimized garbled-circuit protocol used in illustrative embodiments can be found in Xiao Wang, Samuel Ranellucci, and Jonathan Katz, “Authenticated Garbling and Efficient Maliciously Secure Two-Party Computation,” in ACM CCS, 2017, which is incorporated by reference herein. Additional details regarding an example secret-sharing based MtA protocol used in illustrative embodiments can be found in Rosario Gennaro and Steven Goldfeder, “Fast multiparty threshold ECDSA with fast trustless setup,” in ACM CCS, 2018, which is incorporated by reference herein. These are only examples, and other types of protocols can be used in other embodiments.
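For intuition, the following toy sketch shows the secret-sharing side of such protocols: two parties hold additive shares of their inputs modulo a prime and can compute shares of a sum locally without revealing the inputs; protocols such as the MtA protocol referenced above extend this to multiplication. The modulus and function names are illustrative only.

```python
# Additive secret sharing over a toy prime modulus: local addition of shares
# yields shares of the sum; only the recombined result is ever revealed.
import secrets

P = 2**61 - 1  # toy prime modulus

def share(secret: int) -> tuple[int, int]:
    r = secrets.randbelow(P)
    return r, (secret - r) % P

a1, a2 = share(1234)     # party 1 holds a1, party 2 holds a2
b1, b2 = share(5678)

c1, c2 = (a1 + b1) % P, (a2 + b2) % P   # each party adds its own shares
assert (c1 + c2) % P == (1234 + 5678) % P
```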
We now state a problem that is solved by illustrative embodiments of DECO and present a high-level overview of its architecture.
Problem Statement: Decentralized Oracles
Illustrative embodiments herein provide protocols for building “oracles,” i.e., entities that can prove provenance and properties of online data. The goal is to allow a prover P to prove to a verifier V that a piece of data came from a particular website S and optionally prove statements about such data in zero-knowledge, keeping the data itself secret. Accessing the data may require private input (e.g., a password) from P, and such private information should be kept secret from V as well.
We focus in illustrative embodiments on servers running TLS, a widely deployed security protocol suite on the Internet. However, TLS alone does not prove data provenance. Although TLS uses public-key signatures for authentication, it uses symmetric-key primitives to protect the integrity and confidentiality of exchanged messages, using a shared session key established at the beginning of each session. Hence P, who knows this symmetric key, cannot prove statements about cryptographically authenticated TLS data to a third party.
A web server itself could assume the role of an oracle, e.g., by simply signing data. However, server-facilitated oracles would not only incur a high adoption cost, but also put users at a disadvantage: the web server could impose arbitrary constraints on the oracle capability. We are interested in a scheme where anyone can prove provenance of any data she can access, without needing to rely on a single, central point of control, such as the web server providing the data.
We address these and other challenges in illustrative embodiments by introducing what we refer to herein as “decentralized oracles” that don’t rely on trusted hardware or cooperation from web servers. The problem is much more challenging than for previous oracles, as it precludes solutions that require servers to modify their code or deploy new software, or use of prediction markets, while at the same time going beyond these previous approaches by supporting proofs on arbitrary predicates over data.
Authenticated data feeds for smart contracts. An important application of illustrative embodiments disclosed herein is in constructing authenticated data feeds (ADFs), i.e., data with verifiable provenance and correctness, for smart contracts. A wide variety of other applications are advantageously supported by the techniques disclosed herein.
In the context of ADFs, since smart contracts can’t participate in 2PC protocols, they must rely on oracle nodes to participate as V on their behalf. Therefore, in some embodiments, we deploy DECO in a decentralized oracle network, where a set of independently operated oracles are available for smart contracts to use. Note that oracles running DECO are trusted only for integrity, not for privacy. Smart contracts can further hedge against integrity failures by querying multiple oracles and requiring, e.g., majority agreement. We emphasize that DECO’s privacy is preserved even if all oracles are compromised. Thus DECO enables users to provide ADFs derived from private data to smart contracts while hiding private data from oracles.
Notation and Definitions
In some embodiments, we use P to denote the prover, V the verifier and S the TLS server. We use letters in boldface (e.g., M) to denote vectors, and M_i to denote the ith element of M.
We model the essential properties of an oracle using an ideal functionality F_oracle as illustrated in FIG. 3. To separate parallel runs of F_oracle, all messages are tagged with a unique session id denoted sid. Additional or alternative oracle properties can be used in other embodiments. As indicated in FIG. 3, F_oracle in this embodiment accepts a secret parameter θ_s (e.g., a password) from P, a query template Query and a statement Stmt from V. A query template is a function that takes P’s secret θ_s and returns a complete query, which contains public parameters specified by V. An example query template would be Query(θ_s) = “stock price of GOOG on Jan. 1st, 2020 with API key = θ_s”. The prover P can later prove that the query sent to the server is well-formed, i.e., built from the template, without revealing the secret. The statement Stmt is a function that V wishes to evaluate on the server’s response. Following the previous example, as the response R is a number, the following statement would compare it with a threshold: Stmt(R) = "R > $1,000".
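Expressed as code purely for illustration (the function names are hypothetical), the query template and statement from this example might look as follows:

```python
# Query template embedding the prover's secret (here an API key) and a public
# statement evaluated on the server's response.
def query_template(theta_s: str) -> str:
    return ("stock price of GOOG on Jan. 1st, 2020 "
            f"with API key = {theta_s}")

def stmt(response: float) -> bool:
    return response > 1000.0

query = query_template("s3cr3t-api-key")   # built by P; the key stays hidden from V
assert stmt(1250.0) and not stmt(999.99)
```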
After P acknowledges the query template and the statement (by sending “ok” and θ_s), F_oracle retrieves a response R from S using a query built from the template. We assume an honest server, so R is the ground truth. F_oracle sends Stmt(R) and the data source to V.
We are interested in this embodiment in decentralized oracles that don’t require any server-side modifications or cooperation, i.e., S follows the unmodified TLS protocol. More particularly, a decentralized oracle protocol for TLS is a three-party protocol Prot = (Prot_S, Prot_P, Prot_V) such that 1) Prot realizes F_oracle and 2) Prot_S is the standard TLS protocol, possibly along with an application-layer protocol.
Adversarial Model and Security Properties. In illustrative embodiments, we consider a static, malicious network adversary A. Corrupted parties may deviate arbitrarily from the protocol and reveal their states to A. As a network adversary, A learns the message length from F_oracle since TLS is not length-hiding. We assume P and V choose and agree on an appropriate query (e.g., it should be idempotent for most applications) and statement according to the application-layer protocol run by S.
For a given query Q, denote the server’s honest response by S(Q). We require that security holds when either P or V is corrupted. The functionality F_Oracle reflects the following security guarantees:
• Prover-integrity: A malicious P cannot forge content provenance, nor can she cause S to accept invalid queries or respond incorrectly to valid ones. Specifically, if the verifier inputs (Query, Stmt) and outputs (b, S), then P must have sent Q = Query(θ_s) to S in a TLS session, receiving response R = S(Q) such that b = Stmt(R).
• Verifier-integrity: A malicious V cannot cause P to receive incorrect responses. Specifically, if P outputs (Q, R) then R must be the server’s response to query Q submitted by P, i.e., R = S(Q).
• Privacy: A malicious V learns only public information (Query, S) and the evaluation of Stmt(R).
A Strawman Protocol
We focus in illustrative embodiments on two widely used representative TLS cipher suites: CBC-HMAC and AES-GCM. Our technique generalizes to other ciphers (e.g., ChaCha20-Poly1305, etc.) as well. We initially use CBC-HMAC to illustrate certain embodiments, and later describe the techniques for AES-GCM.
TLS uses separate keys for each direction of communication. Unless explicitly specified, we don’t distinguish between the two and use k_Enc and k_MAC to denote session keys for both directions.
In presenting illustrative embodiments of DECO, we start with a strawman protocol and incrementally build up to the full protocol.
A strawman protocol that realizes F_Oracle between (P, V) is as follows. P queries the server S and records all (encrypted) messages sent to and received from the server in Q̂ = (Q̂_1, ..., Q̂_n) and R̂ = (R̂_1, ..., R̂_n), respectively. Let M̂ = (Q̂, R̂) and (k_MAC, k_Enc) be the session keys.
She then proves in zero-knowledge that 1) each R̂_i decrypts to R_i || σ_i, a plaintext record and a MAC tag; 2) each MAC tag σ_i for R_i verifies against k_MAC; and 3) the desired statement evaluates to b on the response, i.e., b = Stmt(R). Using the standard notation, P computes π_r = ZK-PoK{k_Enc, R : ∀ i ∈ [n], Dec(k_Enc, R̂_i) = R_i || σ_i ∧ Verify(k_MAC, R_i || σ_i) = 1 ∧ Stmt(R) = b}. She also proves that Q is well-formed, i.e., Q = Query(θ_s), similarly in a proof π_q, and sends (π_q, π_r, k_MAC, M̂, b) to V.
Given that M̂ is an authentic transcript of the TLS session, the prover-integrity property seems to hold. Intuitively, CBC-HMAC ciphertexts bind to the underlying plaintexts, thus M̂ can be treated as a secure commitment to the session data. That is, a given M̂ can only be opened (i.e., decrypted and MAC checked) to a unique message. The binding property prevents P from opening M̂ to a different message other than the original session with the server.
Unfortunately, this intuition is flawed. The strawman protocol fails completely because it cannot ensure the authenticity of M̂. The prover P has the session keys, and thus she can include the encryption of arbitrary messages in M̂.
Moreover, the zero-knowledge proofs that P needs to construct involve decrypting and hashing the entire transcript, which can be prohibitively expensive. For the protocol to be practical, we need to significantly reduce the cost.
Overview of DECO
The critical failing of the above-described strawman approach is that P learns the session key before she commits to the session. In illustrative embodiments of DECO, the MAC key is withheld from P until after she commits.
The TLS session between P and the server S must still provide confidentiality and integrity. Moreover, the protocol must not degrade performance below the requirements of TLS (e.g., triggering a timeout).
FIG. 4 shows an example DECO implementation comprising a server device, a prover device and a verifier device in an illustrative embodiment. DECO in this embodiment is implemented as a three-phase protocol. The first phase is a novel three-party handshake protocol in which the prover P, the verifier V, and the TLS server S establish session keys that are secret-shared between P and V. After the handshake is a query execution phase during which P accesses the server following the standard TLS protocol, but with help from V. After P commits to the query and response, V reveals her key share. Finally, P proves statements about the response in a proof generation phase. Three-party handshake. Essentially, P and V jointly act as a TLS client. They negotiate a shared session key with S in a secret-shared form. We emphasize that this phase, like the rest of DECO, is completely transparent to S, requiring no server-side modifications.
For the CBC-HMAC cipher suite, at the end of the three-party handshake, P and V receive MAC key shares k_MAC^P and k_MAC^V respectively, while S receives k_MAC = k_MAC^P + k_MAC^V. As with the standard handshake, both P and S get the encryption key k_Enc.
The three-party handshake can make the aforementioned session-data commitment unforgeable as follows. At the end of the session, P first commits to the session in M̂ as before, then V reveals her share k_MAC^V. From V’s perspective, the three-party handshake protocol ensures that a fresh MAC key (for each direction) is used for every session, despite the influence of a potentially malicious prover, and that the keys are unknown to P until she commits. Without knowledge of the MAC key, P cannot forge or tamper with session data before committing to it. The unforgeability of the session-data commitment in DECO thus reduces to the unforgeability of the MAC scheme used in TLS.
Other cipher suites such as GCM can be supported similarly. In GCM, a single key (for each direction) is used for both encryption and MAC. The handshake protocol similarly secret-shares the key between P and V. The handshake protocol for GCM is described in more detail elsewhere herein.
Query execution. Since the session keys are secret-shared, as noted, P and V execute an interactive protocol to construct a TLS message encrypting the query. P then sends the message to S as a standard TLS client. For CBC-HMAC, they compute the MAC tag of the query, while for GCM they perform authenticated encryption. Note that the query is private to P and should not be leaked to V. Generic 2PC would be expensive for large queries, so we instead introduce custom 2PC protocols that are orders-of-magnitude more efficient than generic solutions, as described elsewhere herein.
As explained previously, P commits to the session data M̂ before receiving V’s key share, making the commitment unforgeable. Then P can verify the integrity of the response, and prove statements about it, as will now be described. Proof generation. With unforgeable commitments, if P opens the commitment M̂ completely (i.e., reveals the encryption key) then V could easily verify the authenticity of M by checking MACs on the decryption.
Revealing the encryption key for M, however, would breach privacy: it would reveal all session data exchanged between P and S. In theory, P could instead prove any statement Stmt over M in zero knowledge (i.e., without revealing the encryption key). Generic zero-knowledge proof techniques, though, would be prohibitively expensive for many natural choices of Stmt.
DECO instead introduces two techniques to support efficient proofs for a broad, general class of statements, namely what is referred to herein as “selective opening” of a TLS session transcript. Selective opening involves either revealing a substring to V or redacting, i.e., excising, a substring, concealing it from V.
FIG. 5 shows as an illustrative example a simplified JSON bank statement for a user Bob acting as a prover. This example will be used to demonstrate selective opening and context-integrity attacks. Suppose Bob (P) wants to reveal his checking account balance to V. Revealing the decryption key for his TLS session would be undesirable: it would also reveal the entire statement, including his transactions. Instead, using techniques disclosed herein, Bob can efficiently reveal only the substring in lines 5-7. Alternatively, if he doesn’t mind revealing his savings account balance, he might redact his transactions after line 7.
The two selective opening modes, revealing and redacting substrings, are useful privacy protection mechanisms. They can also serve as pre-processing for a subsequent zero-knowledge proof. For example, Bob might wish to prove that he has an account with a balance larger than $1000, without revealing the actual balance. He would then prove in zero knowledge a predicate (“balance > $1000”) over the substring that includes his checking account balance.
Selective opening alone, however, is not enough for many applications. This is because the context of a substring affects its meaning. Without what we call context integrity, P could cheat and reveal a substring that falsely appears to prove a claim to V. For example, Bob might not have a balance above $1000. After viewing his bank statement, though, he might in the same TLS session post a message to customer service with the substring “balance”: $5000 and then view his pending messages (in a form of reflection attack). He could then reveal this substring to fool V .
Various sanitization heuristics on prover-supplied inputs to V , e.g., truncating session transcripts, could potentially prevent some such attacks, but, like other forms of web application input sanitization, are fragile and prone to attack.
Instead, we introduce a rigorous technique by which session data are explicitly but confidentially parsed. We call this technique “zero-knowledge two-stage parsing.” In accordance with this technique, P parses M locally in a first stage and then proves to V a statement in zero knowledge about constraints on a resulting substring. For example, in our banking example, if bank-supplied key-value stores are always escaped with a distinguished character, then Bob could prove a correct balance by extracting via local parsing and revealing to V a substring “balance”: $5000 preceded by that character. It can be shown that for a very common class of web API grammars (unique keys) this two-stage approach yields much more efficient proofs than more generic techniques.
FIG. 6 shows a more detailed example implementation of the DECO protocol introduced above. In this embodiment, the DECO protocol comprises a three-party handshake phase, followed by 2PC protocols for a query execution phase, and a proof generation phase. Each of these phases will be described in more detail below. It is to be appreciated that the particular details of this embodiment, like other embodiments disclosed herein, are presented by way of example only, and should not be construed as limiting in any way. Those skilled in the art will recognize that additional or alternative techniques may be used.
Three-Party Handshake
The goal of the three-party handshake (3P-HS) in some embodiments is to secret-share between the prover P and verifier V the session keys used in a TLS session with server S, in a way that is completely transparent to S. We first focus on CBC-HMAC for exposition, then adapt the protocol to support GCM.
FIG. 7 shows a formal specification of the three-party handshake protocol in an illustrative embodiment. FIG. 8 shows an example ECtF protocol that is part of the three-party handshake protocol in some embodiments.
Again, the particular details of these protocols are examples only, and not limiting in any way. For example, a wide variety of other types of three-party handshake protocols involving a prover, a verifier and a server can be used in other embodiments. The term “three-party handshake protocol” as used herein is therefore intended to be broadly construed.
As with the standard TLS handshake, 3P-HS includes two steps: first, P and V compute additive shares of a secret Z ∈ EC(F_p) shared with the server through a TLS-compatible key exchange protocol (ECDHE is recommended and the focus here, although other techniques can be used to compute shares); second, P and V derive secret-shared session keys by securely evaluating the TLS-PRF with their shares of Z as inputs, where PRF denotes pseudorandom function. Below we give text descriptions, so the formal specifications are not required for understanding.
Step 1: key exchange. Let EC(F_p) denote the EC group used in ECDHE and G its generator.
The prover P initiates the handshake by sending a regular TLS handshake request and a random nonce r_c to S (in the ClientHello message). On receiving a certificate, the server nonce r_s, and a signed ephemeral DH public key Y_S = s_s · G from S (in the ServerHello and ServerKeyExchange messages), P checks the certificate and the signature and forwards them to V. After performing the same check, V samples a secret s_V and sends her part of the DH public key Y_V = s_V · G to P, who then samples another secret s_P and sends the combined DH public key Y_P = s_P · G + Y_V to S.
Since the server S runs the standard TLS, S will compute a DH secret as Z = s_s · Y_P. P (and V) computes its share of Z as Z_P = s_P · Y_S (and Z_V = s_V · Y_S). Note that Z = Z_P + Z_V, where + is the group operation of EC(F_p). Assuming the discrete logarithm problem is hard in the chosen group, Z is unknown to either party.
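The secret-sharing relation Z = Z_P + Z_V can be checked with the following Python sketch on a toy short-Weierstrass curve. The curve parameters and the scalars are illustrative stand-ins for the real TLS group (e.g., P-256), not values from the source.

# Toy curve y^2 = x^3 + 2x + 3 over F_97, with generator G = (0, 10); illustration only.
p_, a_ = 97, 2
G = (0, 10)

def ec_add(P, Q):
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p_ == 0:
        return None                                   # point at infinity
    if P == Q:
        if y1 == 0:
            return None                               # doubling a point of order 2
        lam = (3 * x1 * x1 + a_) * pow(2 * y1, -1, p_) % p_
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p_) % p_
    x3 = (lam * lam - x1 - x2) % p_
    return (x3, (lam * (x1 - x3) - y1) % p_)

def ec_mul(k, P):
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

s_s, s_V, s_P = 5, 7, 11                 # server, verifier and prover secrets (toy values)
Y_S = ec_mul(s_s, G)
Y_V = ec_mul(s_V, G)
Y_P = ec_add(ec_mul(s_P, G), Y_V)        # combined client DH public key sent to S
Z   = ec_mul(s_s, Y_P)                   # DH secret computed by the server
Z_P = ec_mul(s_P, Y_S)                   # P's share
Z_V = ec_mul(s_V, Y_S)                   # V's share
assert Z == ec_add(Z_P, Z_V)             # Z = Z_P + Z_V in the EC group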
Step 2: key derivation. Now that P and V have established additive shares of Z (in the form of EC points), they proceed to derive session keys by evaluating the TLS-PRF keyed with the x coordinate of Z. A technical challenge here is to harmonize arithmetic operations (i.e., addition in EC(F_p)) with bitwise operations (i.e., TLS-PRF) in 2PC. It is well-known that boolean circuits are not well-suited for arithmetic in large fields. As a concrete estimate, an EC point addition resulting in just the x coordinate involves 4 subtractions, one modular inversion, and 2 modular multiplications. An estimate of the AND complexity based on highly optimized circuits results in over 900,000 AND gates just for the subtractions, multiplications, and modular reductions, not even including inversion, which would require running the extended Euclidean algorithm inside a circuit.
Due to the prohibitive cost of adding EC points in a boolean circuit, P and V convert the additive shares of an EC point in EC(F_p) to additive shares of its x-coordinate in F_p, using the ECtF protocol shown in FIG. 8. Then the boolean circuit just involves adding two numbers in F_p, which can be done with only ~3|p| AND gates, that is ~768 AND gates in our implementation where p is 256 bits.
Share conversion using ECtF. The ECtF protocol converts shares in EC(F_p) to shares in F_p. The inputs to the ECtF protocol are two EC points P_1, P_2 ∈ EC(F_p), denoted P_i = (x_i, y_i). Suppose (x_s, y_s) = P_1 * P_2, where * is the EC group operation; the output of the protocol is α, β ∈ F_p such that α + β = x_s. Specifically, for the curves we consider, x_s = λ² − x_1 − x_2 where λ = (y_2 − y_1)/(x_2 − x_1). Shares of y_s can be computed similarly, but we omit that since TLS only uses x_s.
ECtF uses a Multiplicative-to-Additive (MtA) share-conversion protocol as a building block. We use α, β := MtA(a, b) to denote a run of MtA between Alice and Bob with inputs a and b respectively. At the end of the run, Alice and Bob receive α and β such that a · b = α + β. The protocol can be generalized to handle vector inputs without increasing the communication complexity. Namely, for vectors a, b ∈ F_p^n, if α, β := MtA(a, b), then ⟨a, b⟩ = α + β.
Now we describe the ECtF protocol. ECtF has two main ingredients. Let [a] denote a 2-out-of-2 sharing of a, i.e., [a] = (a_1, a_2) such that party i holds a_i for i ∈ {1, 2} while a = a_1 + a_2. The first ingredient is share inversion: given [a], compute [a^-1]. This can be done as follows: party i samples a random value r_i and the parties execute MtA to compute δ_1, δ_2 := MtA((a_1, r_1), (r_2, a_2)). Note that δ_1 + δ_2 = a_1 · r_2 + a_2 · r_1. Party i publishes v_i = δ_i + a_i · r_i, and thus both parties learn v = v_1 + v_2. Finally, party i outputs β_i = r_i · v^-1. The protocol computes a correct sharing of a^-1 because β_1 + β_2 = a^-1. Moreover, the protocol doesn’t leak a to any party assuming MtA is secure. In fact, party i’s view consists of v = (a_1 + a_2)(r_1 + r_2), which is uniformly random since r_i is uniformly random.
The second ingredient is share multiplication: compute [ab] given [a], [b]. [ab] can be computed using MtA as follows: the parties execute MtA to compute α_1, α_2 such that α_1 + α_2 = a_1 · b_2 + a_2 · b_1. Then, party i outputs m_i = α_i + a_i · b_i. The security and correctness of the protocol can be argued similarly as above.
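A minimal Python sketch of these two ingredients follows, using a trusted-dealer stand-in for MtA (the real MtA would be built from Paillier encryption or oblivious transfer); the prime and function names are illustrative.

import secrets

p = 2**255 - 19          # any large prime; the real protocol uses the TLS curve's field

def mta(a_vec, b_vec):
    # Dealer stand-in for MtA: additive shares of the inner product <a_vec, b_vec> mod p.
    prod = sum(a * b for a, b in zip(a_vec, b_vec)) % p
    alpha = secrets.randbelow(p)
    return alpha, (prod - alpha) % p

def share_inverse(a1, a2):
    # Given [a] = (a1, a2), return (b1, b2) with b1 + b2 = a^-1 mod p.
    r1, r2 = secrets.randbelow(p), secrets.randbelow(p)
    d1, d2 = mta((a1, r1), (r2, a2))            # d1 + d2 = a1*r2 + a2*r1
    v = (d1 + a1 * r1 + d2 + a2 * r2) % p       # v = (a1 + a2)(r1 + r2), published
    v_inv = pow(v, -1, p)
    return r1 * v_inv % p, r2 * v_inv % p

def share_mul(a1, a2, b1, b2):
    # Given [a], [b], return (m1, m2) with m1 + m2 = a*b mod p.
    al1, al2 = mta((a1, b1), (b2, a2))          # al1 + al2 = a1*b2 + a2*b1
    return (al1 + a1 * b1) % p, (al2 + a2 * b2) % p

a = secrets.randbelow(p); a1 = secrets.randbelow(p); a2 = (a - a1) % p
b = secrets.randbelow(p); b1 = secrets.randbelow(p); b2 = (b - b1) % p
i1, i2 = share_inverse(a1, a2)
m1, m2 = share_mul(a1, a2, b1, b2)
assert (i1 + i2) % p == pow(a, -1, p)
assert (m1 + m2) % p == a * b % p

The full ECtF combines these ingredients: invert the shares of (x_2 − x_1), multiply by shares of (y_2 − y_1) to obtain shares of λ, square λ via another share multiplication, and subtract x_1 and x_2 locally to obtain shares of x_s.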
Secure evaluation of the TLS-PRF. Having computed shares of the x-coordinate of Z, the so-called premaster secret in TLS, using ECtF, P and V evaluate the TLS-PRF in 2PC to derive session keys. Using a known SHA-256 circuit, we hand-optimized the TLS handshake circuit, resulting in a circuit with total AND complexity of 779,213.
Adapting to support GCM. For GCM, a single key (for each direction) is used for both encryption and MAC. Adapting the above protocol to support GCM in TLS 1.2 is straightforward. The first step remains identical, while the output of the second step needs to be truncated, as GCM keys are shorter.
Adapting to TLS 1.3. To support TLS 1.3, the 3P-HS protocol must be adapted to a new handshake flow and a different key derivation circuit. Notably, all handshake messages after the ServerHello are now encrypted. A naive strategy would be to decrypt them in 2PC, which would be costly as certificates are usually large. However, thanks to the key independence property of TLS 1.3, we can construct a 3P-HS protocol of similar complexity to that for TLS 1.2, as described elsewhere herein.
Query Execution
After the handshake, the prover P sends her query Q to the server S as a standard TLS client, but with help from the verifier V. Specifically, since session keys are secret-shared, the two parties need to interact and execute a 2PC protocol to construct TLS records encrypting Q. Although generic 2PC would in theory suffice, it would be expensive for large queries. We instead introduce custom 2PC protocols that are orders-of-magnitude more efficient. We first focus on one-round sessions where P sends all queries to S before receiving any response. Most applications of DECO, e.g., proving provenance of content retrieved via HTTP GET, are one-round. Extending DECO to support multi-round sessions is described elsewhere herein.
CBC-HMAC. Recall that P and V hold shares of the MAC key, while P holds the encryption key. To construct TLS records encrypting Q, which is potentially private to P, the two parties first run a 2PC protocol to compute the HMAC tag τ of Q, and then P encrypts Q || τ locally and sends the ciphertext to S.
Let H denote SHA-256. Recall that the HMAC of message m with key k is
HMAC_H(k, m) = H((k ⊕ opad) || H((k ⊕ ipad) || m)), where the inner invocation H((k ⊕ ipad) || m) is referred to as the inner hash.
The terms ipad and opad denote respective “inner” and “outer” padding values utilized in the HMAC algorithm. A direct 2PC implementation would be expensive for large queries, as it requires hashing the entire query in 2PC to compute the inner hash. This is advantageously avoided in illustrative embodiments by making the computation of the inner hash local to P (i.e., without 2PC). If P knew k ⊕ ipad, she could compute the inner hash. We cannot, though, simply give k ⊕ ipad to P, as she could then learn k and forge MACs.
Our optimization exploits the Merkle-Damgard structure of SHA-256. Suppose m_1 and m_2 are two correctly sized blocks. Then H(m_1 || m_2) is computed as f_H(f_H(IV, m_1), m_2), where f_H denotes the one-way compression function of H, and IV the initial vector.
FIG. 9 shows the post-handshake protocols for CBC-HMAC.
After the three-party handshake, P and V execute a simple 2PC protocol to compute s_0 = f_H(IV, k_MAC ⊕ ipad), and reveal it to P. To compute the inner hash of a message m, P just uses s_0 as the IV to compute a hash of m. Revealing s_0 does not reveal k_MAC, as f_H is assumed to be one-way. Computing HMAC(k, m) then involves computing the outer hash in 2PC on the inner hash, a much shorter message. Thus, we manage to reduce the amount of 2PC computation to a few blocks regardless of query length, as opposed to up to 256 SHA-2 blocks in each record with generic 2PC.
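The nested-hash structure exploited here can be checked directly in Python against the standard hmac module. This is a sanity check only: the actual optimization resumes SHA-256 from the midstate s_0, which hashlib does not expose, and the key and message below are illustrative.

import hashlib, hmac, os

k = os.urandom(32)                      # stand-in for k_MAC; real keys come from 3P-HS
m = b"GET /balance HTTP/1.1"            # illustrative query
k_pad = k.ljust(64, b"\x00")            # pad key to SHA-256's 64-byte block size
ipad = bytes(x ^ 0x36 for x in k_pad)
opad = bytes(x ^ 0x5c for x in k_pad)

inner = hashlib.sha256(ipad + m).digest()     # the "inner hash", computable locally by P
tag = hashlib.sha256(opad + inner).digest()   # the "outer hash", one short 2PC computation
assert tag == hmac.new(k, m, hashlib.sha256).digest()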
AES-GCM. For GCM, P and V perform authenticated encryption of Q. 2PC-AES is straightforward with optimized circuits, but computing tags for large queries is expensive as it involves evaluating long polynomials in a large field for each record. Our optimized protocol makes polynomial evaluation local via precomputation, as described in more detail elsewhere herein. Since 2PC-GCM involves not only tag creation but also AES encryption, it incurs higher computational cost and latency than CBC-HMAC.
Other embodiments disclosed herein utilize a highly efficient alternative protocol, referred to as a proxy mode protocol, that avoids post-handshake 2PC protocols altogether, with additional trust assumptions.
As illustrated in the full DECO protocol shown in FIG. 6, after querying the server and receiving a response, P commits to the session by sending the ciphertexts to V, and receives V’s MAC key share. Then P can verify the integrity of the response, and prove statements about it. FIG. 6 specifies the full DECO protocol for CBC-HMAC. The DECO protocol for GCM is similar and described elsewhere herein.
For clarity, we abstract away the details of zero-knowledge proofs in an ideal functionality F_ZK. On receiving (“prove”, x, w) from P, where x and w are the private and public witnesses respectively, F_ZK sends w and the relation ρ(x, w) ∈ {0, 1} (defined below) to V. Specifically, for CBC-HMAC, x, w, ρ are defined as follows: x = (k_Enc, θ_s, Q, R) and w = (Q̂, R̂, k_MAC, b). The relation ρ(x, w) outputs 1 if and only if (1) Q̂ (and R̂) is the CBC-HMAC ciphertext of Q (and R) under keys k_Enc, k_MAC; (2) Query(θ_s) = Q; and (3) Stmt(R) = b. Otherwise it outputs 0.
Assuming functionalities for secure 2PC and ZKPs, it can be shown that Prot_DECO as illustrated in FIG. 6 UC-securely realizes F_Oracle of FIG. 3 for malicious adversaries.
More particularly, assuming the discrete log problem is hard in the group used in the three-party handshake, and that f (the compression function of SHA-256) is a random oracle, Prot_DECO UC-securely realizes F_Oracle in the (F_2PC, F_ZK)-hybrid world, against a static malicious adversary with abort.
The protocol for GCM has a similar flow. The GCM variants of the three-party handshake and query construction protocols were described above. FIG. 10 shows the 2PC protocols for verifying tags and decrypting records in the GCM variants. These are also referred to as post-handshake protocols for GCM.
Unlike CBC-HMAC, GCM is not committing: for a given ciphertext C encrypted with key k, one knowing k can efficiently find k' ≠ k that decrypts C to a different plaintext while passing the integrity check. To prevent such attacks, we require P to commit to her key share k_P before learning V’s key share. In the proof generation phase, in addition to proving statements about Q and R, P needs to prove that the session keys used to decrypt the query and response ciphertexts are valid against the commitment to k_P. Proof of the security of the GCM variant is like that for CBC-HMAC.
Proof Generation
Recall that the prover P commits to the ciphertext M̂ of a TLS session and proves to V that the plaintext M satisfies certain properties. Without loss of generality, we assume M̂ and M contain only one TLS record, and henceforth call them the ciphertext record and the plaintext record. Multi-record sessions can be handled by repeating the protocol for each record.
Proving only the provenance of M is easy: just reveal the encryption keys. But this sacrifices privacy. Alternatively, P could prove any statement about M using general zero-knowledge techniques. But such proofs are often expensive.
In the following description, we present two classes of statements optimized for example applications: revealing only a substring of the response while proving its provenance (“selective opening”), or further proving that the revealed substring appears in a context expected by V (“context integrity by two-stage parsing”). Selective Opening. Illustrative embodiments implement what is referred to herein as “selective opening,” techniques that allow P to efficiently reveal or redact substrings in the plaintext. Suppose the plaintext record is composed of chunks M = (B_1, ..., B_n) (details of chunking are discussed below). Selective opening allows P to prove that the ith chunk of M is B_i without revealing the rest of M; we refer to this as Reveal mode. It can also prove that a redacted string is the same as M but with certain chunks removed; we call this Redact mode. Both modes are simple, but useful for practical privacy goals. The granularity of selective opening depends on the cipher suite, which we now discuss.
CBC-HMAC. Recall that for proof generation, P holds both the encryption and MAC keys k_Enc and k_MAC, while V only has the MAC key k_MAC. Our performance analysis assumes a cipher suite with SHA-256 and AES-128, which matches our implementation, but the techniques are applicable to other parameters. Recall that MAC-then-encrypt is used: a plaintext record M contains up to 1024 AES blocks of data and 3 blocks of MAC tag σ, which we denote as M = (B_1, ..., B_1024, σ) where σ = (B_1025, B_1026, B_1027). M̂ is a CBC encryption of M, consisting of the same number of blocks.
Revealing a TLS record. A naive way to prove that M̂ encrypts M without revealing k_Enc is to prove correct encryption of each AES block in ZKP. However, this would require up to 1027 invocations of AES in ZKP, resulting in impractical performance.
Leveraging the MAC-then-encrypt structure, the same can be done using only 3 invocations of AES in ZKP. This illustratively involves proving that the last few blocks of M̂ encrypt the tag σ and revealing the plaintext directly. Specifically, P computes π = ZK-PoK{k_Enc : B̂_1025 || B̂_1026 || B̂_1027 = CBC(k_Enc, σ)} and sends (M, π) to V. Then V verifies π and checks the MAC tag over M (note that V knows the MAC key). Its security relies on the collision-resistance of the underlying hash function in HMAC, i.e., P cannot find M' ≠ M with the same tag σ.
Revealing a record with redacted blocks. Suppose the ith block contains sensitive information that P wants to redact. A direct strategy is to prove that B_{i-} = (B_1, ..., B_{i-1}) and B_{i+} = (B_{i+1}, ..., B_n) form the prefix and suffix of the plaintext encrypted by M̂. This is expensive, though, as it would involve 3 AES and 256 SHA-256 compressions in ZKP.
Leveraging the Merkle-Damgard structure of SHA-256, several optimizations are possible. Let f denote the compression function of SHA-256, and s_{i-1} the state after applying f on B_{i-}. First, if both s_{i-1} and s_i can be revealed, e.g., when B_i contains high-entropy data such as API keys, the above goal can be achieved using just 1 SHA-256 compression in ZKP. To do so, P computes π = ZK-PoK{B_i : f(s_{i-1}, B_i) = s_i} and sends (π, s_{i-1}, s_i, B_{i-}, B_{i+}) to V, who then 1) checks s_{i-1} by recomputing it from B_{i-}; 2) verifies π; and 3) checks the MAC tag σ by recomputing it from s_i and B_{i+}. Assuming B_i is high entropy, revealing s_{i-1} and s_i doesn’t leak B_i since f is one-way.
On the other hand, if s_{i-1} and s_i cannot be revealed to V (e.g., when a brute-force attack against B_i is feasible), we can still reduce the cost by having P redact a prefix (or suffix) of the record containing the block B_i. The cost incurred then is 256 - i SHA-2 hashes in ZKP. Additional details are provided elsewhere herein. Generally, ZKP cost is proportional to record size, so TLS fragmentation can also lower the cost by a constant factor.
Redacting a suffix. When a suffix B_{i+} is to be redacted, P computes π = ZK-PoK{B_{i+}, k_Enc : f(s_i, B_{i+}) = ih ∧ H(k_MAC ⊕ opad || ih) = σ ∧ B̂_1025 || B̂_1026 || B̂_1027 = CBC(k_Enc, σ)}, where s_i is the state after applying f on B_{i-} || B_i. P sends (π, B_{i-} || B_i) to V. The verifier then 1) checks s_i by applying f on B_{i-} || B_i, and 2) verifies π. Essentially, the security of this follows from the pre-image resistance of f. Moreover, V doesn’t learn the redacted suffix since ih = f(s_i, B_{i+}) is kept secret from V. The total cost is 3 AES and 256 - i SHA-2 hashes in ZKP.
Redacting a prefix. P computes two proofs: π_1, proving in zero knowledge that s_{i-1} is the hash state obtained by applying f over the redacted prefix B_{i-}, and π_2 = ZK-PoK{k_Enc : H(k_MAC ⊕ opad || ih) = σ ∧ B̂_1025 || B̂_1026 || B̂_1027 = CBC(k_Enc, σ)}. P sends (π_1, π_2, s_{i-1}, B_i || B_{i+}) to V. The verifier checks that 1) s_{i-1} is correct using π_1 and then computes f(s_{i-1}, B_i || B_{i+}) to obtain the inner hash ih, and 2) π_2 is verified using the computed ih. The cost incurred is 3 AES and 256 - i SHA-2 hashes in ZKP. Note that redacting a prefix/suffix only makes sense if the revealed portion does not contain any private user data. Otherwise, P would have to find the smallest substring containing all the sensitive blocks and redact the prefix/suffix similarly to above.
GCM. Unlike CBC-HMAC, revealing a block is very efficient in GCM. First, P reveals AES(k, IV) and AES(k, 0), with proofs of correctness in ZK, to allow V to verify the integrity of the ciphertext. Then, to reveal the ith block, P just reveals the encryption of the ith counter, AES(k, inc^i(IV)), with a correctness proof. V can then decrypt the ith block, as IV is the public initial vector for the session, and inc^i(IV) denotes incrementing IV i times (the exact format of inc is immaterial). To reveal a TLS record, P repeats the above protocol for each block. Again, additional details are provided elsewhere herein.
In summary, CBC-HMAC allows efficient selective revealing at the TLS record-level and redaction at block level in DECO, while GCM allows efficient revealing at block level. Selective opening can also serve as pre-processing to reduce the input length for a subsequent zero- knowledge proof.
Context Integrity by Two-Stage Parsing. For many applications, the verifier V may need to verify that the revealed substring appears in the right context. We refer to this property as “context integrity.” In the following we present techniques for V to specify contexts and for P to prove context integrity efficiently.
For ease of exposition, our description below initially focuses on the revealing mode, i.e., P reveals a substring of the server’s response to V. The redaction mode will then be described.
Specification of contexts. Our techniques for specifying contexts assume that the TLS-protected data sent to and from a given server S has a well-defined context-free grammar G, known to both P and V. In a slight abuse of notation, we let G denote both a grammar and the language it specifies. Thus, R ∈ G denotes a string R in the language given by G. We assume that G is unambiguous, i.e., every R ∈ G has a unique associated parse-tree T_R. JSON and HTML are examples of two widely used languages that satisfy these requirements, and are our focus here.
When P then presents a substring R_open of some response R from S, we say that R_open has context integrity if R_open is produced in a certain way expected by V. Specifically, V specifies a set S of positions in which she might expect to see a valid substring R_open in R. In our definition, S is a set of paths from the root of a parse-tree defined by G to internal nodes. Thus s ∈ S, which we call a permissible path, is a sequence of non-terminals. Let ρ_R denote the root of T_R (the parse-tree of R in G). We say that a string R_open has context integrity with respect to (R, S) if T_R has a subtree whose leaves yield (i.e., concatenate to form) the string R_open, and there is a path s ∈ S from ρ_R to the root of the said subtree.
Formally, we define context integrity in terms of a predicate CTX_G. More particularly, given a grammar G on TLS responses, R ∈ G, a substring R_open of R, and a set S of permissible paths, we define a context function CTX_G as a boolean function such that CTX_G(S, R, R_open) = true if and only if there exists a sub-tree T_{R_open} of T_R with a path s ∈ S from ρ_R to the root of T_{R_open}, and T_{R_open} yields R_open. R_open is said to have context integrity with respect to (R, S) if CTX_G(S, R, R_open) = true.
Referring again to the example of FIG. 5, consider a JSON string J in accordance with that example. JSON contains (roughly) the following rules:
Start → object
object → { pairs }
pair → “key” : value
pairs → pair | pair, pairs
key → chars
value → chars | object
In that example, V was interested in learning the derivation of the pair P_balance with key “balance” in the object given by the value of the pair P_checking with key “checking a/c”. Each of these non-terminals is the label for a node in the parse-tree T_J. The path from the root Start of T_J to P_checking requires traversing a sequence of nodes of the form Start, object, pairs*, P_checking, where pairs* denotes a sequence of zero or more pairs. So S is the set of such sequences and R_open is the string “checking a/c”: {“balance”: $2000}.
Two-stage parsing. Generally, proving R_open has context integrity, i.e., CTX_G(S, R, R_open) = true, without directly revealing R would be expensive, since computing CTX_G may require computing T_R for a potentially long string R. However, we observed that under certain assumptions that TLS-protected data generally satisfies, much of the overhead can be removed by having P preprocess R by applying a transformation Trans agreed upon by P and V, and prove that R_open has context integrity with respect to R' (a usually much shorter string) and S' (a set of permissible paths specified by V based on S and Trans).
Based on this observation, we introduce a two-stage parsing scheme for efficiently computing R_open and proving CTX_G(S, R, R_open) = true. Suppose P and V agree upon G, the grammar used by the web server, and a transformation Trans. Let G' be the grammar of strings Trans(R) for all R ∈ G. Based on Trans, V specifies permissible paths S' and a constraint-checking function cons_{G,G'}. In the first stage, P: (1) computes a substring R_open of R by parsing R (such that CTX_G(S, R, R_open) = true) and (2) computes another string R' = Trans(R). In the second stage, P proves to V in zero-knowledge that (1) cons_{G,G'}(R, R') = true and (2) CTX_{G'}(S', R', R_open) = true. Note that in addition to the public parameters G, G', S, S', Trans, cons_{G,G'}, the verifier only sees a commitment to R and, finally, R_open.
This protocol makes the zero-knowledge computation significantly less expensive by deferring actual parsing to a non-verifiable computation. In other words, the computation of CTX_{G'}(S', R', R_open) and cons_{G,G'}(R, R') can be much more efficient than that of CTX_G(S, R, R_open).
We formalize the correctness condition for the two-stage parsing in an operational semantics rule given below. Here, ⟨f, s⟩ denotes applying a function f on input s, while the rule P / C denotes that if the premise P is true, then the conclusion C is true.
Given a grammar G, a context function and permissible paths CTX_G(S, ·, ·), a transformation Trans, a grammar G' = {R' : R' = Trans(R), R ∈ G} with context function and permissible paths CTX_{G'}(S', ·, ·), and a function cons_{G,G'}, we say (cons_{G,G'}, S') are correct with respect to S if, for all (R, R', R_open) such that R ∈ G and all booleans b, the following rule holds: if ⟨cons_{G,G'}, (R, R')⟩ ⇒ true and ⟨CTX_{G'}, (S', R', R_open)⟩ ⇒ b, then ⟨CTX_G, (S, R, R_open)⟩ ⇒ b.
Below, we focus on an example grammar suitable for use in DECO applications, and present concrete constructions of two-stage parsing schemes.
Key-value grammars. A broad class of data formats, such as JSON, has a notion of key-value pairs. Thus, they are our focus in some embodiments of DECO.
A key-value grammar G produces key-value pairs according to the rule “pair → start key middle value end”, where start, middle and end are delimiters. For such grammars, an array of optimizations can greatly reduce the complexity of proving context. We discuss a few such optimizations below, with other details provided elsewhere herein.
Revelation for a globally unique key. For a key-value grammar G and a set of paths S, if, for an R ∈ G, a substring R_open satisfying context integrity must parse as a key-value pair with a globally unique key K, then R_open simply needs to be a substring of R that correctly parses as such a pair. Specifically, Trans(R) outputs a substring R' of R containing the desired key, i.e., a substring of the form “start K middle value end”, and P can output R_open = R'. S' can be defined by the rule S_{G'} → pair, where S_{G'} is the start symbol in the production rules for G'. Then (1) cons_{G,G'}(R, R') checks that R' is a substring of R, and (2) CTX_{G'} checks that (a) R' ∈ G' and (b) R_open = R'. Globally unique keys arise in some applications herein, such as when selectively opening the response for age.
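The following Python sketch illustrates these checks for a JSON response with a globally unique key. The helper names, the regular expression, and the example response are illustrative only; a deployed system would match the server's actual grammar.

import json, re

def trans(response: str, key: str) -> str:
    # First stage, run locally by P: extract the substring forming the pair for `key`.
    m = re.search(r'"%s"\s*:\s*("[^"]*"|[-0-9.]+)' % re.escape(key), response)
    return m.group(0) if m else ""

def cons(response: str, r_prime: str) -> bool:
    # cons_{G,G'}: R' must literally be a substring of R.
    return r_prime != "" and r_prime in response

def ctx(r_prime: str, r_open: str, key: str) -> bool:
    # CTX_{G'}: R' parses as a single pair with the expected (unique) key, and R_open = R'.
    try:
        parsed = json.loads("{" + r_prime + "}")
    except ValueError:
        return False
    return set(parsed) == {key} and r_open == r_prime

resp = '{"name": "Bob", "checking a/c": {"balance": 2000}}'
r_open = trans(resp, "balance")          # yields the substring '"balance": 2000'
assert cons(resp, r_open) and ctx(r_open, r_open, "balance")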
Redaction in key-value grammars. Thus far, our description of two-stage parsing assumes the Reveal mode in which P reveals a substring R_open of R to V and proves that R_open has context integrity with respect to the set of permissible paths specified by V. In the Redact mode, the process is similar, but instead of revealing R_open in the clear, P generates a commitment to R_open using techniques described previously and reveals R, with R_open removed, for example, by replacing its position with a dummy character.
Applications
DECO as disclosed herein can be used for any oracle-based application. To illustrate its versatility, we have implemented and evaluated three example applications that leverage its various capabilities: 1) a confidential financial instrument realized by smart contracts; 2) converting legacy credentials to anonymous credentials; and 3) privacy-preserving price discrimination reporting.
Confidential Financial Instruments. Financial derivatives are among the most commonly cited smart contract applications, and exemplify the need for authenticated data feeds (e.g., stock prices). For example, one popular financial instrument that is easy to implement in a smart contract is a binary option. This is a contract between two parties betting on whether, at a designated future time, e.g., the close of day D, the price P* of some asset N will equal or exceed a predetermined target price P, i.e., P* ≥ P. A smart contract implementing this binary option can call an oracle O to determine the outcome.
In principle, O can conceal the underlying asset N and target price P for a binary option on chain. It simply accepts the option details off chain, and reports only a bit specifying the outcome Stmt := P* ≥? P. This approach is referred to as a Mixicle.
A limitation of a basic Mixicle construction is that O itself learns the details of the financial instrument. Prior to DECO, only oracle services that use trusted execution environments (TEEs) could conceal queries from O. We now show how DECO can support execution of the binary option without O learning the details of the financial instrument, i.e., N or P. It should be noted in this regard that the predicate direction (≥? or <?) can be randomized. Also, winner and loser identities and payment amounts can be concealed. Additional steps can be taken to conceal other metadata, e.g., the exact settlement time.
In this example application, the option winner plays the role of P, and obtains a signed result of Stmt from O, which plays the role of V. We now describe the protocol and its implementation. Let (sk_O, pk_O) denote the oracle’s key pair. In this embodiment, a binary option is specified by an asset name N, threshold price P, and settlement date D. We denote the commitment of a message M by C_M = com(M, r_M) with a witness r_M.
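The document does not fix a particular commitment scheme; purely for illustration, the following Python lines use a hash-based placeholder for com(M, r_M):

import hashlib, os

def com(message: bytes, witness: bytes) -> bytes:
    # Placeholder commitment: binding under collision resistance, hiding for a random
    # witness; a deployed scheme would be whatever the ZKP circuit supports efficiently.
    return hashlib.sha256(message + witness).digest()

r_N = os.urandom(32)
C_N = com(b"GOOG", r_N)   # commitment to the asset name N, later opened inside a ZKP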
FIG. 11 illustrates two parties Alice and Bob executing a confidential binary option. Alice uses DECO to access a stock price API and convince O she has won. Examples of the request and response are shown to the right, and shaded text in this portion of the figure is sensitive information to be redacted.
The binary option process illustrated in FIG. 11 includes the following steps:
1) Setup: Alice and Bob agree on the binary option {N, P, D} and create a smart contract SC with identifier ID_SC. The contract contains pk_O, addresses of the parties, and commitments to the option (C_N, C_P, C_D) with witnesses known to both parties. They also agree on public parameters Q_R (e.g., the URL to retrieve asset prices).
2) Settlement: Suppose Alice wins the bet. To claim the payout, she uses DECO to generate a ZKP that the current asset price retrieved matches her position. Alice and O execute the DECO protocol (with O acting as the verifier) to retrieve the asset price from Q_R (the target URL). We assume the response contains (N*, P*, D*). In addition to the ZKP in DECO to prove origin Q_R, Alice proves the following statement: C_N = com(N*, r_N) ∧ C_P = com(P, r_P) ∧ C_D = com(D*, r_D).
Upon successful proof verification, the oracle returns a signed statement with the contract ID, S = Sig(sk_O, ID_SC).
3) Payout: Alice provides the signed statement S to the contract, which verifies the signature and pays the winning party.
Alice and Bob need to trust O for integrity, but not for privacy. They can further hedge against integrity failure by using multiple oracles, as explained elsewhere herein. Decentralizing trust over oracles is a standard and already deployed technique. We emphasize that DECO ensures privacy even if all the oracles are malicious.
As indicated above, FIG. 11 shows the request and response of a stock price API. The user (P) also needs to reveal enough of the HTTP GET request to the oracle (V) to convince it that the correct API endpoint was accessed. The GET request contains several parameters, some to be revealed like the API endpoint, and others with sensitive details like the stock name and a private API key. P redacts the sensitive parameters using techniques disclosed herein and reveals the rest to V. The API key provides enough entropy to prevent V from learning the sensitive parameters. Without additional care, though, a cheating P could alter the semantics of the GET request and conceal the cheating by redacting extra parameters. To ensure this does not happen, P needs to prove that the delimiter “&” and separator “=” do not appear in the redacted text.
Let R̂ and R denote the response ciphertext and the plaintext respectively. To settle an option, P proves to V that R contains evidence that she won the option, using the two-stage parsing scheme described previously. In the first stage, P parses R locally and identifies the smallest substring of R that can convince V. In the FIG. 11 embodiment, involving stock prices, R_price = "05. price": "1157.7500" suffices. In the second stage, P proves knowledge of R_price in ZK such that 1) R_price is a substring of the decryption of R̂; 2) R_price starts with "05. price"; 3) the subsequent characters form a floating point number P* and that P* > P; and 4) com(P, r_P) = C_P.
This two-stage parsing is secure assuming the keys are unique and the key "05. price" is followed by the price, making the grammar of this response a key-value grammar with unique keys, as described above. Similarly, P proves that the stock name and date contained in R match the commitments. With the CBC-HMAC cipher suite, the zero-knowledge proof circuit involves redacting an entire record (408 bytes), computing commitments, and string processing.
HTTP GET requests (and HTML) have a special restriction: the demarcation between a key and a value (i.e., middle) and the start of a key-value pair (i.e., start) are never substrings of a key or a value. This means that to redact more than a single contiguous key or value, P must redact characters in {middle, start}. So we have cons_{G,G'}(R, R') check that: (1) |R| = |R'| and (2) for every position i, either R'[i] = D ∧ R[i] ∉ {middle, start}, or R[i] = R'[i] (D is a dummy character used to do in-place redaction). Checking CTX_{G'} is then unnecessary.
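A small Python sketch of this in-place redaction check follows; the dummy character, parameter names and example request are illustrative only.

DUMMY = "#"                     # the dummy character D
START, MIDDLE = "&", "="        # pair delimiter and key/value separator in GET parameters

def cons_redaction(r: str, r_prime: str) -> bool:
    # (1) same length; (2) redacted positions never cover a delimiter, others unchanged.
    if len(r) != len(r_prime):
        return False
    for a, b in zip(r, r_prime):
        if b == DUMMY:
            if a in (START, MIDDLE):
                return False
        elif a != b:
            return False
    return True

r = "GET /q?symbol=GOOG&apikey=SECRET123"
assert cons_redaction(r, "GET /q?symbol=####&apikey=#########")
assert not cons_redaction(r, "GET /q?symbol#####&apikey=#########")   # tries to hide "="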
Legacy Credentials to Anonymous Credentials: Age Proof. User credentials are often inaccessible outside a service provider’s environment. Some providers offer third-party API access via OAuth tokens, but such tokens reveal user identifiers. DECO allows users holding credentials in existing systems (what we call “legacy credentials”) to prove statements about them to third parties (verifiers) anonymously. Thus, DECO in some embodiments allows users to convert any web-based legacy credential into an anonymous credential without server-side support or trusted hardware.
FIG. 12 shows an example of this application, in which a student proves her/his age is over 18 using credentials (demographic details) stored on a University website. A student can provide this proof of age to any third party, such as a state issuing a driver’s license or a hospital seeking consent for a medical test. We implement this example using the AES-GCM cipher suite and two-stage parsing with optimizations based on unique keys.
In the FIG. 12 example, the demographic details of a student stored on a University website include the name, birth date, student ID among others. Highlighted text contains student age. Reveal mode is used together with two-stage parsing. The prover parses 6-7 AES blocks that contain the birth date and proves her age is above 18 in ZK to the verifier. Like other examples, due to the unique HTML tags surrounding the birth date, this is also a key-value grammar with unique keys. Similar to the binary option application, this example requires additional string processing to parse the date and compute age.
Price Discrimination. Price discrimination refers to selling the same product or service at different prices to different buyers. Ubiquitous consumer tracking enables online shopping and booking websites to employ sophisticated price discrimination, e.g., adjusting prices based on customer zip codes. Price discrimination can lead to economic efficiency, and is thus widely permissible under existing laws.
In the U.S., however, the FTC forbids price discrimination if it results in competitive injury, while new privacy-focused laws in Europe, such as the GDPR, are bringing renewed focus to the legality of the practice. Consumers in any case generally dislike being subjected to price discrimination. Currently, however, there is no trustworthy way for users to report online price discrimination.
FIG. 13 shows an example of this application, in which DECO allows a buyer to make a verifiable claim about perceived price discrimination by proving the advertised price of a good is higher than a threshold, while hiding sensitive information such as name and address. We implement this example using the AES-GCM cipher suite for the TLS session and reveal 24 AES blocks containing necessary order details and the request URL.
As illustrated in FIG. 13, parts of an order invoice page in HTML on a shopping website (e.g., Amazon) include personal details such as the name and address of the buyer. The buyer wants to convince a third-party (verifier) about the charged price of a particular product on a particular date. In this example, we use AES-GCM ciphersuite and Reveal mode to reveal the necessary text in the upper portion of the order invoice page, while the shaded sensitive text in the lower portion, including the shaded buyer name, address and city, is hidden. The number of AES blocks revealed from the response is 20 (due to a long product name). In addition, 4 AES blocks from the request are revealed to prove that the correct endpoint is accessed. Context integrity is guaranteed by revealing unique strings around, e.g., the string " < tr > Order Total : " near the item price appears only once in the entire response.
Implementation and Evaluation
We now describe implementation details and evaluation results for DECO and the three applications.
Three-Party Handshake and Query Execution. We implemented the three-party handshake protocol (3P-HS) for TLS 1.2 and query execution protocols (2PC-HMAC and 2PC-GCM) in about 4700 lines of C++ code. We built a hand-optimized TLS-PRF circuit with total AND complexity of 779,213. We also used variants of a known AES circuit. Our implementation uses Relic for the Paillier cryptosystem and the EMP toolkit for the maliciously secure 2PC protocol.
We integrated the three-party handshake and 2PC-HMAC protocols with mbedTLS, a popular TLS implementation, to build an end-to-end system. 2PC-GCM can be integrated to TLS similarly with more engineering effort. We evaluated the performance of 2PC-GCM separately. The performance impact of integration should be negligible. We did not implement 3P-HS for TLS 1.3, but it is believed that the performance should be comparable to that for TLS 1.2, since the circuit complexity is similar.
We evaluated the performance of DECO in both the LAN and WAN settings. Both the prover and verifier run on a c5.2xlarge AWS node with 8 vCPU cores and 16GB of RAM. We located the two nodes in the same region (but different availability zones) for the LAN setting, and in two distinct data centers (in Ohio and Oregon) for the WAN setting. The round-trip time between the two nodes in the LAN and WAN is about 1 ms and 67 ms, respectively, and the bandwidth is about 1 Gbps.
TABLE 1 below summarizes the runtime of DECO protocols during a TLS session. 50 samples were used to compute the mean and standard error of the mean (in parenthesis). The MPC protocol we used relies on offline preprocessing to improve performance. Since the offline phase is input- and target-independent, it can be done prior to the TLS session. Only the online phase is on the critical path.
TABLE 1: Run time of 3P-HS and query execution protocols. All times are in milliseconds.
As shown in TABLE 1, DECO protocols are very efficient in the LAN setting. It takes 0.37 seconds to finish the three-party handshake. For query execution, 2PC-HMAC is efficient (0.13s per record) as it only involves one SHA-2 evaluation in 2PC, regardless of record size. 2PC-GCM is generally more expensive and the cost depends on the query length, as it involves 2PC-AES over the entire query. We evaluated its performance with queries ranging from 256B to 2KB, the typical sizes seen in HTTP GET requests. In the LAN setting, the performance is efficient and comparable to 2PC-HMAC.
In the WAN setting, the runtime is dominated by the network latency because MPC involves many rounds of communication. Nonetheless, the performance is still acceptable, given that DECO is likely to see only periodic use for most applications we consider.
Proof Generation. We instantiated zero-knowledge proofs with a standard proof system in libsnark. We have devised efficiently provable statement templates, but users of DECO need to adapt them to their specific applications. SNARK compilers enable such adaptation in a high-level language, concealing low-level details from developers. We used xjsnark and its Java-like high-level language to build statement templates and libsnark compatible circuits.
Our rationale in choosing libsnark is its relatively mature tooling support. The proofs generated by libsnark are constant-size and very efficient to verify, the downside being the per-circuit trusted setup. With more effort, DECO can be adapted to use, e.g., Bulletproofs, which requires no trusted setup but has larger proofs and verification time.
We measure five performance metrics for each example — prover time (the time to generate the proofs), verifier time (the time to verify proofs), proof size, number of arithmetic constraints in the circuit, and the peak memory usage during proof generation.
TABLE 2 below summarizes the results. 50 samples were used to compute the mean and its standard error. Through the use of efficient statement templates and two-stage parsing, DECO achieves very practical prover performance. Since libsnark optimizes for low verification overhead, the verifier time is negligible. The number of constraints (and prover time) is highest for the binary option application due to the extra string parsing routines. We use multiple proofs in each application to reduce peak memory usage. For the most complex application, the memory usage is 1.78GB. As libsnark proofs are of a constant size 287B, the proof sizes shown are multiples of that.
TABLE 2: Costs of generating and verifying ZKPs in the proof-generation phase of DECO for applications.
End-to-End Performance. DECO end-to-end performance depends on the available TLS cipher suites, the size of private data, and the complexity of application-specific proofs. Here we present the end-to-end performance of the most complex application of the three we implemented, the binary option. It takes about 13.77s to finish the protocol, which includes the time taken to generate unforgeable commitments (0.50s), to run the first stage of two-stage parsing (0.30s), and to generate zero-knowledge proofs (12.97s). These numbers are computed in the LAN setting; in the WAN setting, MPC protocols are more time-consuming (5.37s), pushing the end-to-end time up to 18.64s.
In comparison, Town Crier uses TEEs to execute a similar application in about 0.6s, i.e., around 20x faster than DECO, but with added trust assumptions. Since DECO is likely to be used only periodically for most applications, its overhead in achieving cryptographic-strength security assurances seems reasonable.
Legal and Compliance Issues
Although users can already retrieve their data from websites, DECO allows users to export the data with integrity proofs without their explicit approval or even awareness. We now briefly discuss the resulting legal and compliance considerations.
Critically, however, DECO users cannot unilaterally export data to a third party with integrity assurance, but rely on oracles as verifiers for this purpose. While DECO keeps user data private, oracles learn what websites and types of data a user accesses. Thus oracles can enforce appropriate data use, e.g., denying transactions that may result in copyright infringement.
Both users and oracles bear legal responsibility for the data they access. Recent case law on the Computer Fraud and Abuse Act (CFAA), however, shows a shift away from criminalization of web scraping, and federal courts have ruled that violating websites’ terms of service is not a criminal act per se. Users and oracles that violate website terms of service, e.g., “click wrap” terms, instead risk civil penalties. DECO compliance with a given site’s terms of service is a site- and application-specific question.
Oracles have an incentive to establish themselves as trustworthy within smart-contract and other ecosystems. We expect that reputable oracles will provide users with menus of the particular attestations they issue and the target websites they permit, vetting these options to maximize security and minimize liability and perhaps informing or cooperating with target servers.
The legal, performance, and compliance implications of incorrect attestations based on incorrect (and potentially subverted) data are also important. Internet services today have complex, multi-site data dependencies, though, so these issues aren’t specific to DECO. Oracle services already rely on multiple data sources to help ensure correctness. Oracle services in general could ultimately spawn infrastructure like that for certificates, including online checking and revocation capabilities and different tiers of security.
DECO in illustrative embodiments disclosed herein is a privacy-preserving, decentralized oracle scheme for modern TLS versions that requires no trusted hardware or server-side modifications. DECO allows a prover to generate unforgeable commitments to TLS sessions and efficiently prove statements about session content. Some embodiments mitigate context-integrity attacks that are universal to privacy-preserving oracles, utilizing a novel two-stage parsing scheme. DECO can liberate data from centralized web-service silos, making it accessible to a rich spectrum of applications. The practicality of DECO is demonstrated herein through a fully functional implementation along with three example applications.
Protocol Details for GCM
GCM is an authenticated encryption with additional data (AEAD) cipher. To encrypt, the GCM cipher takes as inputs a tuple (k, IV, M, A): a secret key, an initial vector, a plaintext of multiple AES blocks, and additional data to be included in the integrity protection; it outputs a ciphertext C and a tag T. Decryption reverses the process. The decryption cipher takes as input (k, IV, C, A, T ) and first checks the integrity of the ciphertext by comparing a recomputed tag with T , then outputs the plaintext.
The ciphertext is computed in counter mode: C_i = AES(k, inc^i(IV)) ⊕ M_i, where inc^i denotes incrementing IV i times (the exact format of inc is immaterial).
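This counter-mode structure can be checked directly against a standard AES-GCM implementation. The Python sketch below uses the cryptography package and relies on the standard GCM counter layout for a 96-bit IV (IV followed by a 32-bit counter starting at 2 for the first data block); that layout is a standard-GCM assumption stated here rather than taken from the source.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, iv = os.urandom(16), os.urandom(12)
pt = b"A" * 32                                   # two 16-byte plaintext blocks
ct = AESGCM(key).encrypt(iv, pt, b"")[:-16]      # drop the appended 16-byte tag

def aes_block(k, block):
    enc = Cipher(algorithms.AES(k), modes.ECB()).encryptor()
    return enc.update(block) + enc.finalize()

for i in range(2):
    counter = iv + (i + 2).to_bytes(4, "big")    # counter block masking plaintext block i
    pad = aes_block(key, counter)
    assert ct[16*i:16*(i+1)] == bytes(a ^ b for a, b in zip(pt[16*i:16*(i+1)], pad))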
The tag Tag(k, IV, C, A) is computed as follows. Given a vector X ∈ F_{2^128}^m, the associated GHASH polynomial P_X : F_{2^128} → F_{2^128} is defined as P_X(h) = Σ_{i=1}^{m} X_i · h^{m-i+1}, with addition and multiplication done in F_{2^128}. Without loss of generality, suppose A and C are properly padded. Let ℓ_A and ℓ_C denote their lengths. A GCM tag is
Tag(k, IV, C, A) = AES(k, IV) ⊕ P_{A || C || (ℓ_A || ℓ_C)}(h),   (1)
where h = AES(k, 0).
When GCM is used in TLS, each plaintext record D is encrypted as follows. A unique nonce n is chosen and the additional data A is computed as a concatenation of the sequence number, version, and length of D. GCM encryption is invoked to generate the payload record as M̂ = n || GCM(k, n, D, A).
Additional details regarding GCM can be found in, for example, Morris J Dworkin, SP 800-38d, “Recommendation for block cipher modes of operation: Galois/counter mode (GCM) and GMAC,” Technical Report, 2007, which is incorporated by reference herein.
Query Execution
Tag creation/verification. Computing or verifying a GCM tag involves evaluating Equation (1) above in 2PC. A challenge is that Equation (1) involves both arithmetic computation (e.g., polynomial evaluation in F_{2^128}) and binary computation (e.g., AES). Performing multiplication in a large field in a binary circuit is expensive, while computing AES (defined over GF(2^8)) in an arithmetic circuit over F_{2^128} incurs high overhead. Even if the computation could somehow be separated into two circuits, evaluating the polynomial alone, which takes approximately 1,000 multiplications in F_{2^128} for each record, would be unduly expensive.
Our protocol removes the need for polynomial evaluation. The actual 2PC protocol involves only binary operations and thus can be done in a single circuit. Moreover, the per-record computation is reduced to only one invocation of 2PC-AES.
This is achieved by computing shares of {h^i} (in a 2PC protocol) in a preprocessing phase at the beginning of a session. The overhead of preprocessing is amortized over the session because the same h is used for all records that follow. With shares of {h^i}, P and V can compute shares of the polynomial evaluation P_{A||C||(ℓ_A||ℓ_C)}(h) locally. They also compute AES(k, IV) in 2PC to get a share of Tag(k, IV, C, A). In total, only one invocation of 2PC-AES is needed to check the tag for each record.
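The following Python sketch illustrates why the preprocessing suffices: because the blocks of A and C are known to both parties and P_X is linear in the powers h^i, each party can evaluate its share of the polynomial locally from its XOR shares of {h^i}, reusing gf128_mul from the sketch following Equation (1). The share layout and variable names are assumptions made for illustration only.

```python
def local_poly_share(x_blocks, h_power_shares):
    # x_blocks: public blocks X_1..X_m of A || C || (lenA || lenC).
    # h_power_shares[i]: this party's XOR share of h^(m-i), i.e. ordered h^m, ..., h.
    # XOR-ing the two parties' outputs yields P_X(h), since
    # X * (h_P ^ h_V) = (X * h_P) ^ (X * h_V) in GF(2^128).
    share = 0
    for block, h_share in zip(x_blocks, h_power_shares):
        share ^= gf128_mul(block, h_share)
    return share
```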
It is critical that V never responds to the same IV more than once; otherwise P would learn h. Specifically, in each response, V reveals a blinded linear combination of her shares {h_V^i} in the form L_{IV,x} = AES(k, IV) ⊕ Σ_i x_i·h_V^i. It is important that the value is blinded by AES(k, IV) because a single unblinded linear combination of {h_V^i} would allow P to solve for h. Therefore, if V responds to the same IV twice, the blinding can be removed by adding the two responses (in F_{2^128}): L_{IV,x} ⊕ L_{IV,x'} = Σ_i (x_i + x_i')·h_V^i. This follows from the nonce uniqueness requirement of GCM.
Encrypting/decrypting records. Once tags are properly checked, decryption of records is straightforward. P and V simply compute the AES encryption of inc^i(IV) with 2PC-AES. A subtlety to note is that V must check that the counters to be encrypted have not been used as an IV previously. Otherwise P would learn h in a manner like that outlined above.
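A minimal sketch of the bookkeeping implied by this subtlety is shown below: V simply refuses to participate in a 2PC-AES evaluation on any counter block that has already been consumed as an IV. The set-based tracking is an assumption made purely for illustration.

```python
used_ivs = set()

def authorize_counter(counter_block: int) -> bool:
    # V's check before a 2PC-AES evaluation: never encrypt the same counter/IV twice,
    # since a repeated response would let the blinding be stripped as described above.
    if counter_block in used_ivs:
        return False
    used_ivs.add(counter_block)
    return True
```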
Proof Generation
Revealing a block. P wants to convince V that an AES block B_i is the i-th block in the encrypted record rec. The proof strategy is as follows: 1) prove that the plaintext block B_i encrypts to the i-th ciphertext block, and 2) prove that the tag is correct. Proving the correct encryption requires only 1 AES in ZKP. Naively done, proving the correct tag incurs evaluating the GHASH polynomial of degree 512 and 2 AES block encryptions in ZKP. We manage to achieve a much more efficient proof by allowing P to reveal two encrypted messages AES(k, IV) and AES(k, 0) to V, thus allowing V to verify the tag (see Equation (1)). P only needs to prove the correctness of encryption in ZK and that the key used corresponds to the commitment, requiring 2 AES and 1 SHA-2 (P commits to k_P by revealing a hash of the key). Thus, the total cost is 3 AES and 1 SHA-2 in ZKP.
Revealing a TLS record. The proof techniques are a simple extension of the above case. P reveals the entire record rec and proves correct AES encryption of all the AES blocks, resulting in a total of 514 AES and 1 SHA-2 in ZKP.
Revealing a TLS record except for a block. Similar to the above case, P proves encryption of all the blocks in the record except one, resulting in a total of 513 AES and 1 SHA-2 in ZKP.
Protocol Extensions
Adapting to Support TLS 1.3. To support TLS 1.3, the 3P-HS protocol must be adapted to a new handshake flow and a different key derivation circuit. Notably, all handshake messages after the ServerHello are now encrypted. A naive strategy would be to decrypt them in 2PC, which would be costly as certificates are usually large. However, thanks to the key independence property of TLS 1.3, P and V can securely reveal the handshake encryption keys without affecting the secrecy of final session keys. Handshake integrity is preserved because the Finished message authenticates the handshake using yet another independent key.
Therefore the optimized 3P-HS works as follows. P and V perform ECDHE the same as before. Then they derive handshake and application keys by executing 2PC-HKDF, and reveal the handshake keys to P, allowing P to decrypt handshake messages locally (i.e., without 2PC). The 2PC circuit involves roughly 30 invocations of SHA-256, totaling approximately 70k AND gates, comparable to that for TLS 1.2. Finally, since CBC-HMAC is not supported by TLS 1.3, DECO can only be used in GCM mode with TLS 1.3.
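For reference, the key derivation evaluated inside 2PC-HKDF follows the TLS 1.3 key schedule. The following Python sketch shows a plaintext (non-2PC) HKDF-Expand-Label in the style of RFC 8446, assuming a SHA-256 cipher suite; it is included only to indicate the shape of the circuit and is not the 2PC implementation itself.

```python
import hashlib
import hmac
import struct

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    out, block, counter = b"", b"", 1
    while len(out) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        out += block
        counter += 1
    return out[:length]

def hkdf_expand_label(secret: bytes, label: str, context: bytes, length: int) -> bytes:
    # HkdfLabel = length (2 bytes) || "tls13 " + label || context, per RFC 8446.
    full_label = b"tls13 " + label.encode()
    info = (struct.pack(">H", length) + bytes([len(full_label)]) + full_label
            + bytes([len(context)]) + context)
    return hkdf_expand(secret, info, length)

# e.g., handshake traffic keys are expanded from the handshake traffic secrets:
# key = hkdf_expand_label(hs_traffic_secret, "key", b"", 16)
# iv  = hkdf_expand_label(hs_traffic_secret, "iv", b"", 12)
```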
Query Construction is Optional. For applications that bind responses to queries, e.g., when a stock ticker is included with the quote, 2PC query construction protocols can be avoided altogether. Since TLS uses separate keys for each direction of communication, the client-to-server keys can be revealed to P after the handshake so that P can query the server without interacting with V.
Supporting Multi-Round Sessions. DECO can be extended to support multi-round sessions where P sends further queries depending on previous responses. After each round, P executes similar 2PC protocols as above to verify MAC tags of incoming responses, since MAC verification and creation are symmetric. However, an additional commitment is required to prevent P from abusing MAC verification to forge tags.
In TLS, different MAC keys are used for server-to-client and client-to-server communication. To support multi-round sessions, P and V run 2PC to verify tags for the former, and create tags on fresh messages for the latter. Previous description herein specified the protocols to create (and verify) MAC tags. Now we discuss additional security considerations for multi-round sessions.
When checking tags for server-to-client messages, we must ensure that P cannot forge tags on messages that are not originally from the server. Suppose P wishes to verify a tag T on message M. We have P first commit to T, then P and V run a 2PC protocol to compute a tag T' on message M. P is asked to open the commitment to V, and if T ≠ T', V aborts the protocol. Since P does not know the MAC key, P cannot compute and commit to a tag on a message that is not from the server.
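A minimal sketch of this commit-then-check flow is given below. The salted SHA-256 commitment and the abstraction of the 2PC tag computation into a pre-computed value are assumptions made solely for illustration.

```python
import hashlib
import hmac
import os

def prover_commit(tag: bytes):
    # P commits to the claimed tag T before the 2PC tag computation runs.
    nonce = os.urandom(16)
    return hashlib.sha256(nonce + tag).digest(), nonce

def verifier_check(commitment: bytes, nonce: bytes, opened_tag: bytes,
                   two_pc_tag: bytes) -> bool:
    # V accepts only if the opening matches the commitment and equals the 2PC result T'.
    if hashlib.sha256(nonce + opened_tag).digest() != commitment:
        return False
    return hmac.compare_digest(opened_tag, two_pc_tag)
```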
When creating tags for client-to-server messages, V makes sure MAC tags are created on messages with increasing sequence numbers, as required by TLS. This prevents a malicious P from creating two messages with the same sequence number, as otherwise there would be no way for V to distinguish which one was actually sent to the server.
An Alternative DECO Protocol: Proxy Mode. As shown in TABLE 1, the HMAC mode of DECO is highly efficient and the runtime of creating and verifying HMAC tags in 2PC is independent of record size. The GCM mode is efficient for small requests with preprocessing, but can be expensive for large records. We now present a highly efficient alternative that avoids post-handshake 2PC protocols altogether. In this alternative, the verifier V acts as a proxy between the prover P and the TLS server S, i.e., P sends/receives messages to/from S through V. The modified flow of the DECO protocol is as follows: after the three-party handshake, P commits to her key share k_P, then V reveals k_V to P. Therefore P now has the entire session key k = k_P + k_V. As P uses k to continue the session with the server, V records the proxy traffic. After the session concludes, P proves statements about the recorded session the same as before.
In such an embodiment, the three-party handshake provides unforgeability. Unlike CBC-HMAC, GCM is not committing: for a given ciphertext and tag (C, T) encrypted with key k, one can find k' ≠ k that decrypts C to a different plaintext while computing the same tag, as the GCM MAC is not collision-resistant. To prevent such attacks, the above protocol requires P to commit to her key share before learning the session key.
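A minimal sketch of the proxy-mode key handover follows. The salted hash commitment and the byte-wise XOR used to combine the key shares are assumptions made for illustration; the actual form of the commitment and of the share combination may differ in a given embodiment.

```python
import hashlib
import os

def prover_commit_share(k_p: bytes):
    # P commits to its key share k_P before V reveals k_V.
    nonce = os.urandom(16)
    return hashlib.sha256(nonce + k_p).digest(), nonce

def prover_combine(k_p: bytes, k_v: bytes) -> bytes:
    # After V reveals k_V, P reconstructs the full session key k.
    return bytes(a ^ b for a, b in zip(k_p, k_v))

def verifier_check_opening(commitment: bytes, nonce: bytes, k_p: bytes) -> bool:
    # When P later proves statements, V checks the opened key share against the commitment.
    return hashlib.sha256(nonce + k_p).digest() == commitment
```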
Security properties and network assumptions relating to the proxy mode protocol will now be described. The verifier-integrity and privacy properties are clear, as a malicious V cannot break the integrity and privacy of TLS (by assumption).
For prover integrity, though, we need to assume that the proxy can reliably connect to S throughout the session. First, we assume the proxy can ascertain that it is indeed connected with S. Moreover, we assume messages sent between the proxy and S cannot be tampered with by P, who knows the session keys and thus could modify the session content.
Note that during the three-party handshake, V can ascertain the server's identity by checking the server's signature over a fresh nonce (in standard TLS). After the handshake, however, V has to rely on network-layer indicators, such as IP addresses. In practice, V must therefore have correct, up-to-date DNS records, and the network between V and the server (e.g., their ISP and the backbone network) must be properly secured against traffic injection, e.g., through Border Gateway Protocol (BGP) attacks. Eavesdropping is generally not problematic in illustrative embodiments.
These assumptions have been embraced by other systems in a similar proxy setting, as BGP attacks are challenging to mount in practice. We can further enhance our protocol against traffic interception by distributing verifier nodes geographically. Moreover, various known detection techniques can be deployed by verifiers. BGP attacks are often documented after the fact; therefore, when applicable, applications of DECO can be enhanced to support revocation of affected sessions (for example, when DECO is used to issue credentials in an identity system).
This alternative protocol represents a different performance-security tradeoff. It’s highly efficient because no intensive cryptography occurs after the handshake, but it requires additional assumptions about the network and therefore only withstands a weaker network adversary.
Key-Value Grammars and Two-Stage Parsing
Preliminaries and Notation. We denote context-free grammars (CFGs) as G = (V, Σ, P, S), where V is a set of non-terminal symbols, Σ a set of terminal symbols, P : V → (V ∪ Σ)* a set of productions or rules, and S ∈ V the start symbol. We define production rules for CFGs in standard notation, using “−” to denote a set minus and “...” to denote a range. For a string w, a parser determines if w ∈ G by constructing a parse tree for w. The parse tree represents a sequence of production rules which can then be used to extract semantics.
Key-Value Grammars. These are grammars with the notion of key-value pairs. These grammars are particularly interesting for DECO since most API calls and responses are, in fact, key-value grammars.
G is said to be a key-value grammar if there exists a grammar K such that, given any s ∈ G, s ∈ K, and K can be defined by the following rules:
S → object
object → noPairsString open pair pairs close
pair → start key middle value end
pairs → pair pairs | ""
key → chars
value → chars | object
chars → char chars | ""
char → Unicode − escaped | escape escaped | addedChars
special → startSpecial | middleSpecial | endSpecial
start → unescaped_s startSpecial
middle → unescaped_m middleSpecial
end → unescaped_e endSpecial
escaped → special | escape | ...
In the above, S is the start non-terminal (representing a sentence in K), the non-terminals open and close demarcate the opening and closing of the set of key-value pairs, and start, middle, end are special strings demarcating the start of a key-value pair, the separation between a key and a value, and the end of the pair, respectively.
In order to remove ambiguity in parsing special characters, i.e., characters which have special meaning in parsing a grammar, a special non-terminal, escape, is used. For example, in JSON, keys are parsed when preceded by whitespace and double quotes (") and succeeded by double quotes. If a key or value expression itself must contain double quotes, they must be preceded by a backslash (\), i.e., escaped. In the above rules, the non-terminal unescaped before special characters means that they can be parsed as special characters. Moving forward, we can therefore assume that the production of a key-value pair is unambiguous: if a substring R' of a string R in the key-value grammar G parses as a pair, R' must correspond to a pair in the parse tree of R.
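By way of a concrete (and purely illustrative) instantiation, JSON can be viewed as a key-value grammar under roughly the following token assignments; the exact treatment of whitespace and of the final pair before a closing brace is elided.

```python
# Approximate mapping of the abstract key-value grammar onto JSON tokens.
JSON_INSTANTIATION = {
    "open":   "{",     # opens a set of key-value pairs (a JSON object)
    "close":  "}",     # closes the object
    "start":  '"',     # a key begins with a double quote
    "middle": '":',    # closing quote plus colon separates key from value
    "end":    ",",     # a comma ends a key-value pair
    "escape": "\\",    # backslash escapes special characters inside strings
}
```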
Note that in the above key-value grammar, middle cannot derive an empty string, i.e., a non-empty string must mark middle to allow parsing keys from values. However, one of start and end can have an empty derivation, since they only demarcate the separation between the value in one pair and the key in the next. Finally, we note that in the two-stage parsing for key-value grammars in some embodiments, we only consider permissible paths with the requirement that the selectively opened string, R_open, corresponds to a pair.
Two-Stage Parsing for a Locally Unique Key. Many key-value grammars enforce key uniqueness within a scope. For example, in JSON, it can be assumed that keys are unique within a JSON object, even though there might be duplicated keys across objects. The two-stage parsing for such grammars can be reduced to parsing a substring. Specifically, Trans extracts from R a continuous substring R' such that the scope of a pair can be correctly determined, even within R'. For instance, in JSON, if cons_G(R, R') returns true if and only if R' is a prefix of R, then parsing only R' as JSON, up to generating the sub-tree yielding R_open, is sufficient for determining whether a string R_open corresponds to the correct context in R.
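The following Python sketch conveys the idea for a JSON response with globally unique keys: a transformation step extracts a short substring around the key of interest, and the context check then only needs to confirm that the opened string is a well-formed pair with that key and occurs within the extracted substring. The regular-expression extraction and the use of the standard json module stand in for the LL(1) machinery and are assumptions made for illustration only.

```python
import json
import re

def trans(response: str, key: str) -> str:
    # Stage 1: cheap extraction of the substring that can contain the unique pair
    # "key": value (here: from the key up to the next comma or closing brace).
    match = re.search(r'"%s"\s*:\s*[^,}]+' % re.escape(key), response)
    return match.group(0) if match else ""

def context_check(response: str, r_prime: str, r_open: str, key: str) -> bool:
    # Stage 2: R_open must be a well-formed pair with the right key and must occur
    # in the extracted substring (hence, by key uniqueness, in the full response R).
    try:
        parsed = json.loads("{" + r_open + "}")
    except ValueError:
        return False
    return list(parsed.keys()) == [key] and r_open in r_prime and r_prime in response
```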
Grammars with Unique Keys. Given a key-value grammar G, we define a function which checks for uniqueness of keys, denoted U_G. Given a string s ∈ G and another string k, U_G(s, k) = true if and only if there exists at most one substring of s that can be parsed as start k middle. Since s ∈ G, this means that, in any parse tree of s, there exists at most one branch with node key and derivation k. Let Parser_G be a function that returns true if its input is in the grammar G. We say a grammar G is a key-value grammar with unique keys if, for all s ∈ G and all possible keys k, U_G(s, k) = true.
Concrete Two-Stage Parsing for Unique-Key Grammars. Let U be a unique-key grammar as given above. We assume that U is LL(1). This is the case for example grammars of interest described previously. General LL(1) parsing algorithms are known.
We instantiate a context function CTX_U for a set T, such that T contains the permissible paths to a pair for strings in U. We additionally allow CTX_U to take as input an auxiliary restriction, a key k (the specified key in the prover's output R_open). The tuple (T, k) is denoted S, and CTX_U(S, ·, ·) is abbreviated CTX_{U,S}.
Let P be a grammar given by the rule S_P → pair, where pair is the non-terminal in the production rules for U and S_P is the start symbol of P. We define Parser_{P,k} as a function that decides whether a string s is in P and, if so, whether the key in s equals k. On input R, R_open, CTX_{U,S} checks that: (a) R_open is a valid key-value pair with key k, by running Parser_{P,k}; and (b) R_open parses as a key-value pair in R, by running an LL(1) parsing algorithm to parse R.
FIG. 14 shows example pseudocode for the function CTX_{U,S} to parse a string R in a key-value grammar U, and to search for the production of a particular key-value pair R_open. Here, PTable, the LL(1) parse table for U, is hard-coded into CTX_{U,S}.
To avoid expensive computation of CTX_{U,S} on a long string R, we introduce the transformation Trans to extract the substring R' of R such that R' = R_open, as per the requirements. For strings s, t, we also define functions substring(s, t), which returns true if t is a substring of s, and equal(s, t), which returns true if s = t. We define cons_{U,P} with the rule:
substring(R, R') = true ∧ Parser_P(R') = true ⟹ cons_{U,P}(R, R') = true,
and S' = {Λ}. Meaning, CTX_U(S', R', R_open) = true whenever equal(R', R_open) = true, and the rule holds for all strings R', R_open.
It can be shown that Trans and cons_{U,P} are correct with respect to S. More particularly, if R' is a substring of R and a key-value pair R_open is parsed by Parser_P, then the same pair must have been a substring of R. Due to global uniqueness of keys in U, there exists only one such pair R_open, and CTX_U(S, R, R_open) must be true.
The additional protocol details above, like those described elsewhere herein, are presented by way of illustrative example only, and are not intended to be limiting in any way. Other embodiments can utilize alternative protocol arrangements in implementing decentralized oracles as disclosed herein.
As indicated above, illustrative embodiments of decentralized oracles as disclosed herein can be implemented in a wide variety of different applications.
For example, DECO can be used to implement a personal data marketplace, where users control and sell their personal data. It is well known that web services profit from monetizing user data. A personal data marketplace implemented using the techniques disclosed herein can disrupt this data monopoly by enabling users to sell their data in an open marketplace. DECO is a key enabler to a personal data marketplace in illustrative embodiments because DECO enables buyers to verify the origin and integrity of the data from websites. DECO also allows sellers to preprocess the data, e.g., redacting sensitive information, for privacy protection while preventing sellers from cheating. Some implementations utilize DECO to provide verifiable claims against price discrimination.
As another example, DECO can be used to provide proof of financial solvency. As a more particular illustration of an arrangement of this type, with DECO, Alice can prove to Bob that her balance with a particular bank is more than $5,000. This simple proof not only shows Alice's financial solvency but also her ability to open an account (e.g., that Alice is not on any sanctions lists, as banks perform anti-money laundering (AML) screening). Importantly, DECO protects Alice's privacy by revealing only the fact that her balance is higher than $5,000, not her actual balance or identity.
As a further example, DECO can be used to provide proof of account ownership. In one illustration of such an arrangement, with DECO, one can prove ownership of accounts, e.g., email accounts, social media accounts, etc., anonymously. For example, Alice can prove to Bob that she owns an email account ending with @example.org without revealing what the account name is. This proves Alice's affiliation with a certain organization, which is useful for, e.g., whistleblowing, anonymous complaints, etc.
Additional examples of applications of decentralized oracles as disclosed herein include credential recovery and decentralized identity. As an illustration of the former, DECO can enable a user to prove, in a privacy-preserving manner that avoids use of OAuth, that she has access to a particular web resource, e.g., a Facebook account. This can enable a user to leverage an existing service to prove her identity for, e.g., key recovery. As an illustration of the latter, DECO can also enable a user to prove, in a privacy-preserving manner, that she has certain characteristics as asserted by a third-party provider (e.g., that she is over 18). This is an example of what is also referred to herein as an anonymous age proof. Such a proof can be used to construct a credential in a decentralized identity system.
The foregoing decentralized oracle applications are examples only, and should not be construed as limiting in any way. Additional details regarding implementation of these and other example applications of decentralized oracles as disclosed herein can be found elsewhere herein. Communications between the various elements of an information processing system configured to implement one or more decentralized oracles as disclosed herein are assumed to take place over one or more networks. A given such network can illustratively include, for example, a global computer network such as the Internet, a WAN, a LAN, a satellite network, a telephone or cable network, a cellular network such as a 3G, 4G or 5G network, a wireless network implemented using a wireless protocol such as Bluetooth, WiFi or WiMAX, or various portions or combinations of these and other types of communication networks.
A given processing device implementing at least a portion of the functionality of a decentralized oracle as disclosed herein can include components such as a processor, a memory and a network interface. The processor is assumed to be operatively coupled to the memory and to the network interface. The memory stores software program code for execution by the processor in implementing portions of the functionality of the processing device.
It is to be appreciated that the particular arrangements shown and described in conjunction with FIGS. 1 through 14 herein are presented by way of illustrative example only, and numerous alternative embodiments are possible. The various embodiments disclosed herein should therefore not be construed as limiting in any way. Numerous alternative arrangements for implementing decentralized oracles can be utilized in other embodiments. For example, those skilled in the art will recognize that alternative processing operations and associated system entity configurations can be used in other embodiments. It is therefore possible that other embodiments may include additional or alternative system entities, relative to the entities of the illustrative embodiments. Also, the particular system and device configurations and associated decentralized oracles can be varied in other embodiments.
It should also be noted that the above-described information processing system arrangements are exemplary only, and alternative system arrangements can be used in other embodiments.
A given client, server, processor or other component in an information processing system as described herein is illustratively configured utilizing a corresponding processing device comprising a processor coupled to a memory. The processor executes software program code stored in the memory in order to control the performance of processing operations and other functionality. The processing device also comprises a network interface that supports communication over one or more networks.
The processor may comprise, for example, a microprocessor, an ASIC, an FPGA, a CPU, a GPU, an ALU, a DSP, or other similar processing device component, as well as other types and arrangements of processing circuitry, in any combination. For example, at least a portion of the functionality of a decentralized oracle provided by a given processing device as disclosed herein can be implemented using such circuitry.
The memory stores software program code for execution by the processor in implementing portions of the functionality of the processing device. A given such memory that stores such program code for execution by a corresponding processor is an example of what is more generally referred to herein as a processor-readable storage medium having program code embodied therein, and may comprise, for example, electronic memory such as SRAM, DRAM or other types of RAM, ROM, flash memory, magnetic memory, optical memory, or other types of storage devices in any combination.
Articles of manufacture comprising such processor-readable storage media are considered embodiments of the invention. The term “article of manufacture” as used herein should be understood to exclude transitory, propagating signals.
Other types of computer program products comprising processor-readable storage media can be implemented in other embodiments.
In addition, embodiments of the invention may be implemented in the form of integrated circuits comprising processing circuitry configured to implement processing operations associated with decentralized oracles as well as other related functionality.
Processing devices in a given embodiment can include, for example, laptop, tablet or desktop personal computers, mobile telephones, or other types of computers or communication devices, in any combination. For example, a computer or mobile telephone can be utilized as a processing device for implementing at least portions of the functionality associated with a decentralized oracle as disclosed herein. These and other communications between the various elements of an information processing system comprising processing devices associated with respective system entities may take place over one or more networks. An information processing system as disclosed herein may be implemented using one or more processing platforms, or portions thereof.
For example, one illustrative embodiment of a processing platform that may be used to implement at least a portion of an information processing system comprises cloud infrastructure including virtual machines implemented using a hypervisor that runs on physical infrastructure. Such virtual machines may comprise respective processing devices that communicate with one another over one or more networks.
The cloud infrastructure in such an embodiment may further comprise one or more sets of applications running on respective ones of the virtual machines under the control of the hypervisor. It is also possible to use multiple hypervisors each providing a set of virtual machines using at least one underlying physical machine. Different sets of virtual machines provided by one or more hypervisors may be utilized in configuring multiple instances of various components of the information processing system.
Another illustrative embodiment of a processing platform that may be used to implement at least a portion of an information processing system as disclosed herein comprises a plurality of processing devices which communicate with one another over at least one network. Each processing device of the processing platform is assumed to comprise a processor coupled to a memory.
Again, these particular processing platforms are presented by way of example only, and an information processing system may include additional or alternative processing platforms, as well as numerous distinct processing platforms in any combination, with each such platform comprising one or more computers, servers, storage devices or other processing devices.
For example, other processing platforms used to implement embodiments of the invention can comprise different types of virtualization infrastructure in place of or in addition to virtualization infrastructure comprising virtual machines. Thus, it is possible in some embodiments that system components can run at least in part in cloud infrastructure or other types of virtualization infrastructure, including virtualization infrastructure utilizing Docker containers or other types of Linux containers implemented using operating system level virtualization based on Linux control groups or other similar mechanisms. It should therefore be understood that in other embodiments different arrangements of additional or alternative elements may be used. At least a subset of these elements may be collectively implemented on a common processing platform, or each such element may be implemented on a separate processing platform.
Also, numerous other arrangements of computers, servers, storage devices or other components are possible in an information processing system. Such components can communicate with other elements of the information processing system over any type of network or other communication media.
As indicated previously, components of the system as disclosed herein can be implemented at least in part in the form of one or more software programs stored in memory and executed by a processor of a processing device. For example, certain functionality associated with decentralized oracle entities or related components of a system can be implemented at least in part in the form of software.
The particular configurations of information processing systems described herein are exemplary only, and a given such system in other embodiments may include other elements in addition to or in place of those specifically shown, including one or more elements of a type commonly found in a conventional implementation of such a system.
For example, in some embodiments, an information processing system may be configured to utilize the disclosed techniques to provide additional or alternative functionality in other contexts.
Thus, techniques illustrated in some embodiments herein in the context of providing decentralized oracles for TLS can be adapted in a straightforward manner for use in other contexts. Accordingly, illustrative embodiments of the invention should not be viewed as limited to TLS or its associated processing contexts.
It is also to be appreciated that the particular process steps used in the embodiments described herein are exemplary only, and other embodiments can utilize different types and arrangements of processing operations. For example, certain process steps shown as being performed serially in the illustrative embodiments can in other embodiments be performed at least in part in parallel with one another. It should again be emphasized that the embodiments of the invention as described herein are intended to be illustrative only. Other embodiments of the invention can be implemented utilizing a wide variety of different types and arrangements of information processing systems, networks and devices than those utilized in the particular illustrative embodiments described herein, and in numerous alternative processing contexts. In addition, the particular assumptions made herein in the context of describing certain embodiments need not apply in other embodiments. These and numerous other alternative embodiments will be readily apparent to those skilled in the art.

Claims (25)

What is claimed is:
1. An apparatus comprising: a verifier device comprising a processor coupled to a memory; the verifier device being configured to communicate over one or more networks with a client device and a server device; wherein the verifier device is further configured: to participate in a three-party handshake protocol with the client device and the server device in which the verifier device and the client device obtain respective shares of a session key of a secure session with the server device; to receive from the client device a commitment relating to the secure session with the server device; responsive to receipt of the commitment, to release to the client device additional information relating to the secure session that was not previously accessible to the client device; and to verify correctness of at least one characterization of data obtained by the client device from the server device as part of the secure session, based at least in part on the commitment and the additional information.
2. The apparatus of claim 1 wherein the verifier device is further configured to initiate one or more automated actions responsive to the verification of the correctness of the at least one characterization of the data obtained by the client device from the server device.
3. The apparatus of claim 1 wherein the verifier device comprises a particular oracle node of a set of oracle nodes of a decentralized oracle system.
4. The apparatus of claim 1 wherein the verifier device comprises a distributed verifier device in which functionality of the verifier device is distributed across multiple distinct processing devices.
5. The apparatus of claim 1 wherein the server device comprises a transport layer security (TLS) enabled server device and the secure session comprises a TLS session.
6. The apparatus of claim 1 wherein the commitment relating to the secure session comprises a commitment to query response data obtained by the client device from the server device as part of the secure session.
7. The apparatus of claim 1 wherein the commitment relating to the secure session comprises a commitment to a prover key established by the client device in conjunction with the three-party handshake protocol but not previously accessible to the verifier device.
8. The apparatus of claim 1 wherein the additional information released to the client device responsive to receipt of the commitment comprises a verifier key established by the verifier device in conjunction with the three-party handshake protocol but not previously accessible to the client device.
9. The apparatus of claim 1 wherein the verifier device is further configured to operate as a proxy for the client device in conjunction with interactions between the client device and the server device such that the verifier device automatically obtains ciphertexts exchanged between the client device and the server device as part of the secure session via the verifier device operating as the proxy.
10. The apparatus of claim 1 wherein the verifier device is further configured to receive from the client device one or more statements characterizing the data obtained by the client device from the server device as part of the secure session.
11. The apparatus of claim 10 wherein a given one of the one or more statements comprises a selectively-revealed substring of query response data obtained by the client device from the server device as part of the secure session.
12. The apparatus of claim 10 wherein a given one of the one or more statements is configured to provide context integrity through utilization of a multi-stage parsing protocol in which query response data obtained by the client device from the server device as part of the secure session is preprocessed by the client device to generate reduced data that is subsequently parsed by the client device in conjunction with generation of the given statement to be sent by the client device to the verifier device.
13. The apparatus of claim 1 wherein in conjunction with the three-party handshake protocol, the client device and the verifier device jointly establish one or more shared session keys with the server device, with the client device having a first share of a given one of the one or more shared session keys, the verifier device having a second share of the given shared session key, and the server device having a composite session key combining the first and second shares.
14. The apparatus of claim 1 wherein in conjunction with the three-party handshake protocol, the client device receives from the server device an encryption key that is not accessible to the verifier device.
15. The apparatus of claim 1 wherein the verifier device and the client device collaborate using their respective shares of the session key of the secure session with the server device to generate a query that is provided by the client device to the server device to request that the server device send the data to the client device.
16. The apparatus of claim 15 wherein the verifier device and the client device collaborate using their respective shares of the session key of the secure session with the server device to validate a response that is provided by the server device to the client device responsive to the query.
17. The apparatus of claim 1 wherein in conjunction with the three-party handshake protocol, the client device and the verifier device establish respective prover and verifier keys.
18. The apparatus of claim 17 wherein verifying correctness of at least one characterization of data obtained by the client device from the server device as part of the secure session comprises verifying a proof provided by the client device to the verifier device, wherein the proof is generated by the client device based at least in part on (i) the prover key established by the client device in conjunction with the three-party handshake protocol, (ii) the verifier key established by the verifier device in conjunction with the three-party handshake protocol, and (iii) secret information of the client device.
19. The apparatus of claim 1 wherein verifying correctness of at least one characterization of data obtained by the client device from the server device as part of the secure session comprises: obtaining data derived from at least a portion of at least one ciphertext of the secure session; and verifying correctness of at least one characterization of that data by the client device.
20. A method performed by a verifier device configured to communicate over one or more networks with a client device and a server device, the method comprising: participating in a three-party handshake protocol with the client device and the server device in which the verifier device and the client device obtain respective shares of a session key of a secure session with the server device; receiving from the client device a commitment relating to the secure session with the server device; responsive to receipt of the commitment, releasing to the client device additional information relating to the secure session that was not previously accessible to the client device; and verifying correctness of at least one characterization of data obtained by the client device from the server device as part of the secure session, based at least in part on the commitment and the additional information; wherein the verifier device performing the method comprises a processor coupled to a memory.
21. The method of claim 20 wherein verifying correctness of at least one characterization of data obtained by the client device from the server device as part of the secure session comprises: obtaining data derived from at least a portion of at least one ciphertext of the secure session; and verifying correctness of at least one characterization of that data by the client device.
22. The method of claim 20 wherein the verifier device is further configured to operate as a proxy for the client device in conjunction with interactions between the client device and the server device such that the verifier device automatically obtains ciphertexts exchanged between the client device and the server device as part of the secure session via the verifier device operating as the proxy.
23. A computer program product comprising a non-transitory processor-readable storage medium having stored therein program code of one or more software programs, wherein the program code when executed by a verifier device configured to communicate over one or more networks with a client device and a server device, the verifier device comprising a processor coupled to a memory, causes the verifier device: to participate in a three-party handshake protocol with the client device and the server device in which the verifier device and the client device obtain respective shares of a session key of a secure session with the server device; to receive from the client device a commitment relating to the secure session with the server device; responsive to receipt of the commitment, to release to the client device additional information relating to the secure session that was not previously accessible to the client device; and to verify correctness of at least one characterization of data obtained by the client device from the server device as part of the secure session, based at least in part on the commitment and the additional information.
24. The computer program product of claim 23 wherein verifying correctness of at least one characterization of data obtained by the client device from the server device as part of the secure session comprises: obtaining data derived from at least a portion of at least one ciphertext of the secure session; and verifying correctness of at least one characterization of that data by the client device.
25. The computer program product of claim 23 wherein the verifier device is further configured to operate as a proxy for the client device in conjunction with interactions between the client device and the server device such that the verifier device automatically obtains ciphertexts exchanged between the client device and the server device as part of the secure session via the verifier device operating as the proxy.