GB2588083A - Imagery acquisition method and apparatus - Google Patents

Imagery acquisition method and apparatus

Info

Publication number
GB2588083A
GB2588083A
Authority
GB
United Kingdom
Prior art keywords
imagery
server
source device
encrypted
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
GB1912254.8A
Other versions
GB201912254D0 (en)
Inventor
Mühlhölzl Alex
Robinson Will
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alesa Services Ltd
Original Assignee
Alesa Services Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alesa Services Ltd filed Critical Alesa Services Ltd
Priority to GB1912254.8A priority Critical patent/GB2588083A/en
Publication of GB201912254D0 publication Critical patent/GB201912254D0/en
Priority to PCT/GB2020/052045 priority patent/WO2021058936A2/en
Publication of GB2588083A publication Critical patent/GB2588083A/en

Classifications

    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/183: Closed-circuit television [CCTV] systems for receiving images from a single remote source
    • G06F21/445: Program or device authentication by mutual authentication, e.g. between devices or programs
    • G06F21/602: Protecting data by providing cryptographic facilities or services
    • G06F21/64: Protecting data integrity, e.g. using checksums, certificates or signatures
    • H04N21/23418: Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics
    • H04N21/2347: Processing of video elementary streams involving video stream encryption
    • H04N21/2356: Processing of additional data involving reformatting operations by altering the spatial resolution
    • H04N21/25816: Management of client data involving client authentication
    • H04N21/4223: Cameras as input-only peripherals connected to specially adapted client devices
    • H04N21/4408: Processing of video elementary streams involving video stream encryption, e.g. re-encrypting a decrypted video stream for redistribution in a home network

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Bioethics (AREA)
  • General Health & Medical Sciences (AREA)
  • Information Transfer Between Computers (AREA)
  • Computer Graphics (AREA)
  • Computer And Data Communications (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

Collecting imagery in a distributed system, such as a Closed Circuit Television (CCTV) system. The distributed system has a source device for capturing images, which encrypts the images and transmits them to a server for storage. Further data indicative of the captured imagery is generated, and the captured imagery and encrypted imagery are deleted from the source device whilst the encrypted imagery on the server is retained. The further data may be generated from the captured imagery at the source device, and may include, for example, thumbnails, explicit content, protected information or a number plate. When requested, a copy of the encrypted imagery may be transmitted to a review device and decrypted. The system may be used for recording or enforcement of road traffic violations. The source device may be a mobile phone operated by an end user; the server may be operated by a company providing a service; and the review device may be operated by the client or body responsible for enforcement.

Description

Imagery Acquisition Method and Apparatus

[0001] This invention relates to a method of collecting data, in particular imagery, using a distributed system, such as for road traffic violation recording and/or enforcement.
BACKGROUND
[0002] CCTV (Closed Circuit Television) is a known technology used for collecting imagery. However, both the meaning of the terminology and the technology itself have moved on: while the original writers of the relevant legislation likely had in mind a large camera attached to a cable (a kind of system still used in security rigs in shops all over the world), the advent of "mobile CCTV" (attached to a vehicle or other moving object) began to break down that definition, and wireless/internet-based CCTV systems have broken it down further still.
[0003] In the current era, cameras have become both drastically more common in use and drastically more connected, being attached to the mobile phones which are effectively universal in the Western world.
[0004] In a parallel consideration, with more people using cars every year, the number of illegally-parked vehicles has increased continuously, and the ability of local authorities to deal with this situation has steadily fallen behind.
[0005] Traditional UK methods of dealing with illegal parking have centred around traffic officers (traffic wardens) that have been variously answerable to police forces and to local authorities. While recent years have seen the application of technology to support traffic officers, such as tablets and miniprinters to create instant tickets along with picture capture, the overall system has not changed.
[0006] It is important at this point to distinguish between illegal and unauthorized parking.
Many areas of towns and cities are marked as only being available for parking during specified times of the day, most commonly outside of working hours or for short stays. Traffic officers are responsible for policing such rules, but they are not based in UK law. Illegal parking, on the other hand, is parking in such a fashion as to breach applicable vehicle statutes; examples would include blocking junctions, crossings or entryways, and blocking legally-protected access features such as dropped kerbs for the disabled.
[0007] While the majority of illegal parking cases are handled by traffic officers, legislation permits the use of evidence gathered via non-officer means to be used in the management and prosecution of illegal parking, since it is a statutory crime. This can include stationary CCTV, and there have been numerous cases of such footage being used in evidence in illegal parking cases. However, the nature of traditional CCTV indicates that it is of limited scope in pursuing such cases.
[0008] It is in this context that the present disclosure has been devised.

BRIEF SUMMARY OF THE DISCLOSURE

[0009] The present disclosure provides a method of collecting data in a distributed system. The distributed system comprises a source device and a server. The method comprises: generating source data at the source device; encrypting the source data; transmitting the encrypted data to the server for storage; and generating further data indicative of the source data.
[0010] Thus, there is provided a method for ensuring data integrity of source data by storing the source data, as encrypted. Only if the encrypted data is unchanged can the source data subsequently be decrypted correctly. By also generating further data indicative of the source data, for example based on the source data, characteristics of the source data can be determined without needing to decrypt and re-encrypt the encrypted data (which would disrupt the integrity of the encrypted data).
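For illustration only: with an authenticated cipher such as AES-GCM (a choice assumed here, not prescribed by the disclosure at this point), any modification of the stored ciphertext causes decryption to fail, which is precisely the integrity property described above. A minimal sketch using the Python `cryptography` package:

```python
import os

from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # per-device or per-context key in practice
aesgcm = AESGCM(key)

source_data = b"captured image bytes"
nonce = os.urandom(12)
ciphertext = aesgcm.encrypt(nonce, source_data, None)  # what the server would store

# Unchanged ciphertext decrypts back to the original source data.
assert aesgcm.decrypt(nonce, ciphertext, None) == source_data

# Any modification of the stored data makes decryption fail outright.
tampered = bytes([ciphertext[0] ^ 0x01]) + ciphertext[1:]
try:
    aesgcm.decrypt(nonce, tampered, None)
except InvalidTag:
    print("tampering detected: decryption refused")
```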
[0011] The above-described method may be combined with any of the features referred to below, apart from those specifically excluded by the context. In particular, the data to be collected can be any data generated by the source device, including imagery. The imagery can be stored on the source device. The imagery may be captured by a camera of the source device.
[0012] For data security, as described elsewhere herein, the source data can be deleted from the source device after the encrypted data has been generated. Thus, the source data is only available on the server in an encrypted form.
[0013] It will be understood that imagery, as used herein, can include any data intended or suitable to be displayed visually, for example as one or more images, including as video. The imagery can depict one or more real objects (such as an image of a vehicle, or a person), or can instead be entirely artificially generated.
[0014] Although it is described herein that the encrypted data is transmitted to the server, it will be understood that in some examples, the source data can be transmitted to the server, and the encrypted data can be generated by encrypting the source data on the server.
[0015] Viewed from another aspect, the present disclosure also provides a method of collecting imagery in a distributed system. The distributed system comprises a source device and a server. The method comprises: capturing imagery using a camera of the source device, operating in an imagery capture mode; encrypting the captured imagery on the source device; transmitting the encrypted imagery to the server for storage; generating further data indicative of the captured imagery; and deleting the captured imagery and the encrypted imagery from the source device.
[0016] Thus, the data integrity of captured imagery can be preserved, whilst also enabling data analysis to be performed on the further data indicative of the captured imagery without needing to modify or even access the encrypted data, for example to decrypt the encrypted data. Furthermore, data security of the captured imagery can be improved by deleting the captured imagery and the encrypted imagery from the source device. It will be understood that the captured imagery can only be deleted after the captured imagery has been encrypted to generate the encrypted imagery. Similarly, the encrypted imagery can only be deleted from the source device once the encrypted imagery has been transmitted to the server.
[0017] The further data may be generated using the captured imagery. In other words, the further data may be generated directly from the captured imagery. The further data is typically not generated by decrypting a copy of the encrypted data, though this is possible in some examples. Thus, the further data can be at least partially generated on the source device. The further data may contain less information than the encrypted imagery. In other words, a file size of the further data may be less than the file size of the captured imagery used to generate the encrypted imagery. Thus, the further data may contain only the information from the captured imagery necessary for analysis of the captured imagery.
[0018] Generating the further data may comprise generating at least one thumbnail of the captured imagery. It will be understood that a thumbnail, sometimes referred to as thumbnail data, is data including only a portion of the captured imagery, e.g. a subset of the images in the captured imagery, or only a portion of one or more images in the captured imagery. In other words, the thumbnail may be data indicative of the captured imagery.
[0019] The further data may be generated on the source device. The further data may be transmitted to the server from the source device. The further data may be transmitted to the server with the encrypted imagery. The further data may be combined with the encrypted imagery in a combined data format. Thus, analysis of the further data can be performed on the server. The further data can be removed from the source device.
[0020] The method may further comprise storing the further data unencrypted on the server. The method may further comprise decrypting the further data on the server. Thus, the further data can be analysed quickly on the server without needing to be decrypted each time.
[0021] The method may further comprise: receiving a request to operate the source device in the imagery capture mode; authenticating between the source device and the server in response to the received request to operate the source device in the imagery capture mode; and operating the source device in the imagery capture mode in response to a successful authentication between the source device and the server.
[0022] It will be understood that authenticating between the source device and the server includes the server authenticating the source device. Furthermore, authenticating between the source device and the server also includes the source device authenticating the server. Authentication can use any suitable method, including the provision of a username and a password (or any suitable equivalent) by the device to be authenticated to the authenticating device. In some examples, the server may authenticate the source device and the source device may authenticate the server. Authentication may be performed automatically without intervention of the user, or may instead require at least one user input of a username and/or password or similar.
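A minimal sketch of one way such mutual authentication could proceed; the /auth endpoint, the pinned server certificate fingerprint and the token response are all hypothetical details, not prescribed by the disclosure:

```python
import hashlib
import socket
import ssl

import requests

# Hypothetical: SHA-256 fingerprint of the server certificate, shipped with the app.
PINNED_FINGERPRINT = "replace-with-real-fingerprint"

def server_is_authentic(host: str, port: int = 443) -> bool:
    """The source device authenticates the server by pinning its TLS certificate."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            der_cert = tls.getpeercert(binary_form=True)
    return hashlib.sha256(der_cert).hexdigest() == PINNED_FINGERPRINT

def authenticate_device(host: str, username: str, password: str) -> str:
    """The server authenticates the source device from its credentials."""
    if not server_is_authentic(host):
        raise ConnectionError("server failed authentication")
    # Hypothetical /auth endpoint returning a session token for later calls.
    response = requests.post(f"https://{host}/auth", auth=(username, password))
    response.raise_for_status()
    return response.json()["token"]
```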
[0023] The distributed system may further comprise a review device. Typically, the review device is in data communication with the server. The review device may not be in direct data communication with the source device. The method may further comprise receiving an imagery access request from a review device.
[0024] The method may further comprise transmitting a copy of the encrypted imagery from the server to the review device, whilst retaining the encrypted imagery on the server.
The copy of the encrypted imagery may be transmitted from the server to the review device in response to receipt of the imagery access request.
[0025] The method may further comprise decrypting the copy of the encrypted imagery on the review device. Thus, a user of the review device can access the encrypted imagery as necessary. It will be understood that decrypting the encrypted imagery may require transmission of an encryption key from the server to the review device.
[0026] The method may further comprise displaying the decrypted imagery on a display of the review device.
[0027] The method may further comprise processing the further data to determine one or more characteristics of the captured imagery. In some examples, processing the further data may be referred to as filtering the further data. The further data may be processed substantially on any device on which the further data is located. In some examples, the further data may be partially processed to determine a first subset of the one or more characteristics and may be further processed to determine a second subset of the one or more characteristics. The processing can thus occur on multiple devices. The further data may be processed on the server to determine the one or more characteristics of the captured imagery.
[0028] The method may further comprise deleting the encrypted imagery from the server, for example in dependence on the determined one or more characteristics of the encrypted imagery. Thus, the encrypted imagery can be deleted automatically, without ever being reviewed by a human user, for example a user of a review device.
[0029] The one or more characteristics may include at least one characteristic indicative of the imagery portraying explicit content. Explicit content may include pornographic content, or abusive content. The one or more characteristics may include at least one characteristic indicative of the imagery portraying protected information. Protected information may include personal information, such as one or more faces, and/or other identity information.
[0030] The one or more characteristics may include at least one characteristic indicative of the imagery including a number plate of a vehicle.
[0031] The method may further comprise transmitting the one or more characteristics to the review device in response to receiving the imagery access request. Thus, when a user of the review device requests a review of the captured imagery, one or more characteristics of the captured imagery can be provided. In some examples, the one or more characteristics can be provided before a copy of the encrypted imagery is provided from the server to the review device. Thus, the user of the review device can assess the captured imagery based on at least one characteristic in the one or more characteristics without decrypting the encrypted imagery, therefore improving data integrity and data security.
[0032] The method may further comprise deleting the encrypted imagery on the server in dependence on a user input received on the review device in relation to at least one of the one or more characteristics. Thus, if a user of the review device deems the captured imagery to be unsuitable based on the one or more characteristics, the data can be deleted from the server.
[0033] The method may further comprise transmitting a notification message from the server to the source device, to inform a user of the source device whether the encrypted imagery has been retained on the server. In other words, the user can be informed of the outcome of a user's review of the captured imagery.
[0034] The method may further comprise outputting one or more imagery capture guidance messages when the source device is operating in the imagery capture mode.
Thus, the user of the source device can be given guidance on the imagery that should be captured for the particular application, sometimes referred to as a context. For example, the imagery capture guidance messages may convey to the user that a number plate of a vehicle should be included in the captured imagery.
[0035] Any decryption of the encrypted imagery stored on the server may occur off the server. In other words, the encrypted imagery may never be decrypted on the server.
[0036] The encrypted imagery may be stored on the server with metadata indicative of one or more characteristics of the captured imagery. The one or more characteristics may be generated automatically based on processing of the further data. At least one of the one or more characteristics may be generated based on user-input data.
[0037] The captured imagery may comprise at least one image. The at least one image may be a plurality of still images. The captured imagery may comprise one or more videos. As used herein, a video may include a continuous capture of related still images. In other examples, a video may be a continuous capture of still images each depicting a time-varying record of substantially the same scene.
[0038] The distributed system may form a closed circuit television, CCTV, system. In other words, the captured imagery can only be accessed by authorised parties, such as the user of the review device.
[0039] Viewed from another aspect, the present disclosure provides apparatus for collecting imagery. The apparatus comprises: a source device comprising one or more first processors, a first memory and a camera for capturing imagery; and a server in data communication with the source device and comprising one or more second processors and a second memory. The first memory comprises instructions which, when executed by the one or more first processors, cause the source device to: operate the source device in an imagery capture mode to capture imagery using the camera of the source device; encrypt the captured imagery; transmit the encrypted imagery to the server for storage; and delete the captured imagery and the encrypted imagery from the source device. The second memory comprises instructions which, when executed by the one or more second processors, cause the server to: receive the encrypted imagery from the source device; store the encrypted imagery in the second memory. The apparatus is further configured to generate further data indicative of the captured imagery.
[0040] The further data can be generated on the source device.
[0041] The source device can be substantially any electronic device configured to perform the described functions. The one or more first processors can be any data processing unit, such as an application processor, as is commonly known in electronic devices. The first memory can be any form of non-transitory computer readable storage, or indeed any suitable computer readable storage for storing the instructions and/or any data generated and/or processed by the source device.
[0042] The server can be substantially any electronic device configured to perform the described functions. The one or more second processors can be any data processing unit, such as an application processor, as is commonly known in electronic devices. The second memory can be any form of non-transitory computer readable storage, or indeed any suitable computer readable storage for storing the instructions and/or any data generated and/or processed by the server.
[0043] Similarly, the review device can be substantially any electronic device configured to perform the described functions. The review device may comprise one or more third processors and a third memory comprising instructions which, when executed by the one or more third processors, cause the review device to operate as described herein. The one or more third processors can be any data processing unit, such as an application processor, as is commonly known in electronic devices. The third memory can be any form of non-transitory computer readable storage, or indeed any suitable computer readable storage for storing the instructions and/or any data generated and/or processed by the review device.
[0044] It will be understood that the apparatus can be configured in substantially any of the ways described hereinbefore, for example to carry out any parts of the method as described hereinbefore.
BRIEF DESCRIPTION OF THE DRAWINGS
[0045] Embodiments of the invention are further described hereinafter with reference to the accompanying drawings, in which:

Figure 1 shows a general schematic view of the disclosed system.
Figure 2 shows a schematic view of the system, viewed from the perspective of a single client.
Figure 3 shows a flowchart illustrating a method of operating a source device.
Figure 4 shows a flowchart illustrating a method of transmitting collected data on a source device to a server.
Figure 5 shows a flowchart illustrating a method of processing requests from source devices on the server.
Figure 6 shows a flowchart illustrating two methods of filtering collected data on the server in different ways.
Figure 7 shows a flowchart illustrating the use case of a client accessing the system via a Web application.
DETAILED DESCRIPTION

[0046] Clarification of Terms
[0047] As the project has developed over the last couple of years, we have used several names for the various forms of the project. With this phase of development, we use two of these names to describe specific things:

[0048] eFine is the name of the product being developed by Alesa Services Ltd to provide Local Authorities with a source of evidence in order to improve parking enforcement. It consists of three primary components: a mobile App, a cloud Backend used to receive and store data, and a Webapp that can be used by Local Authorities to receive enforcement data, track cases and enable payments. Several elements of eFine combine to form a secure distributed video system.
[0049] Double Yellow was the original working title for the eFine project but was abandoned in favour of the above due to non-Commonwealth countries not generally using double yellow lines to indicate parking restrictions (therefore reducing potential for internationalization).
[0050] In this document and going forwards, we will use Double Yellow as the name of the secure distributed video system invented in order to fulfil the specific legal and operational requirements of the eFine product. Although it was originally invented for eFine, it has much broader potential application (both through other Alesa products and potentially as a Software-as-a-Service-oriented API-type system for use by other groups).
[0051] This document focuses on explaining the Double Yellow system itself. From here, "the system" refers to the Double Yellow system, while eFine is used as a reference implementation.
[0052] There are 3 particular roles defined in this system. The first is the End User, which is the operator of the device which is used to capture data. The second is the Company, which is the operator of the system used to store and transmit the data. The third is the Client, which is the body which receives the data.
[0053] Development

[0054] While developing eFine, it became clear that relevant legislation demanded that we provide parking enforcement data in the form of CCTV (Closed Circuit Television) footage. However, both the meaning of the terminology and the technology itself have moved on: while the original writers of the legislation likely had in mind a large camera attached to a cable (a kind of system still used in security rigs in shops all over the world), the advent of "mobile CCTV" (attached to a vehicle or other moving object) began to break down that definition, and wireless/internet-based CCTV systems have broken it down further still.
[0055] In the current era, cameras have become both drastically more common in use and drastically more connected, being attached to the mobile phones which are effectively universal in the Western world. This gives us an opportunity to drastically expand the reach and range of CCTV devices, by giving users the ability to voluntarily turn their own devices into CCTV capture devices. However, doing so presents specific technological and organisational challenges.
[0056] Double Yellow Core Functional Requirements

[0057] To address the legislative issues with eFine, we first looked to identify the underlying properties of a CCTV system. These constitute the first Core Functional Requirements of our system's specification:

[0058] 1. Flow-oriented
[0059] Images flow from a source device to a receiving device. They can be optionally recorded along the way.
[0060] 2. Enclosed

[0061] The system does not send the images to a mass broadcast system, but instead presents the images only to the intended recipient.
[0062] 3. Video Oriented

[0063] The system is intended to transmit visual data (either moving or frequent still images) to the receiving device. Audio is optional (and frequently absent).
[0064] 4. Reviewable

[0065] The purpose of a CCTV system is for the resulting visual data to be reviewed by a human for whatever operational purposes are required. As such, it must be possible to actively view (or review) the visual data.
[0066] We therefore began to develop a system which sustains these requirements, while still allowing us to fulfil our intended use case (a mobile app user sending enforcement data to a mobile app user).
[0067] To achieve this, we decided to build a Distributed CCTV System named specifically to distinguish it from "mobile" systems. This would be a system which produces video data (item 3) from devices, which is transmitted securely (item 2) over cellular internet, to a server which stores it before displaying and/or sending to (item 4) a specific recipient (item 1).
[0068] This gives us several additional Core Functional Requirements:

[0069] 5. Secure

[0070] The system is transmitting over the open Internet. As a result, it is necessary for data protection regulations that the system be fully secured to protect the identities of all involved.
[0071] 6. Provable

[0072] The system must make it possible to demonstrate, as far as possible, a chain of provenance leading from the original capture device to the client.
[0073] 7. Filtered

[0074] Since the capture devices are not directly controlled by either us or the customer, we need to ensure that captured images are suitable for purpose. To do this, we must be able to extract meta-data on the visual data flow.
[0075] 8. Contextual

[0076] Because Distributed CCTV capture devices could operate almost anywhere that sufficient cellular signal is available, we need to restrict operation to those contexts that are specifically relevant to the recipient.
[0077] Items 5 and 6 serve to create the Enclosed (item 2) nature of the system, while items 7 and 8 are intended to support the Flow-Oriented and Reviewable (items 1 and 4) parts of the system. These new items add numerous technical challenges to the design of the system. Allowing the system to be filtered and reviewed without breaching security is a particular challenge, which we will go into further in later chapters.
[0078] In addition to these Core Functional Requirements which extend from the necessities of the project, it has become clear that there is a final Core Functional Requirement necessary if we wish to use our technology for any other projects:
[0079] 9. Adaptable
[0080] The system needs to be constructed as a generalised method rather than a specific unique product, in order to allow different implementations of the system for differing contexts.
[0081] Figure 1 shows a general schematic view of the disclosed system.
[0082] The encrypted video feed is depicted by a black arrow on this diagram, while the thumbnail / "filter" feed is depicted by a dashed arrow. The typical Source Device is a mobile app, while the typical Client accesses the system via the WebApp. Clients receive the (decrypted) video feeds as a download on their devices, but the thumbnail is used only for indexing in the Web App. The pattern of the clients and sources indicates that particular source devices are locked to particular clients; one important element of this system is that whatever means is used to distinguish clients (region, etc.) is exclusive but not fixed, as it can change over time. The colour coding is referred to as a Context, and there is one Context for each Client.
[0083] Figure 2 shows a schematic view of the system, viewed from the perspective of a single client.
[0084] This is similar to the usual structure of a CCTV network. Indeed, the nature of the Source Devices is largely irrelevant to the Client, who sees the system only as collections of captured images; while a Mobile App is the version described here, it could alternatively be a traditional CCTV camera adapted with a suitable encoding/encryption system.
[0085] Output Methods

[0086] The system is designed to output video data. This can be either in the form of actual captured video, or in the form of a sequence of images, or in the form of a video constructed from the image sequence; the difference is largely irrelevant to the Double Yellow system, since the data output to the Client is decrypted in the Web App and not manipulated by the system. The type of data can thus be adapted as needed according to Client needs.
[0087] Method of Operation

[0088] The primary purpose of the Double Yellow system is to collect footage from Source Devices, run said footage through various Filters, and allow Clients to review and use this footage as appropriate.
[0089] The process is described in the following steps:

[0090] 1. The Source Device is activated, either by the user or by specified circumstances.
[0091] 2. The Source Device retrieves the current Context and Client information.
[0092] 3. The Source Device retrieves (application-dependent) configuration and user login information for the current Context.
[0093] 4. The Source Device retrieves the encryption key for the current Context.
[0094] 5. (Optional) The Source Device guides the user in taking footage.
[0095] 6. The Source Device captures images (and optional video).
[0096] 7. The Source Device encodes images and/or video using the encryption key for the current Context (the Video Data) at the same time as processing the images into Thumbnail Data.
[0097] 8. The Source Device transmits the Video and Thumbnail data to the Backend server as a Packet along with application-relevant data.
[0098] 9. The Source Device does not retain the data once successfully sent.
[0099] 10. The Backend stores the Packet and runs the Strong Filter plugins using the Thumbnail Data.
[00101] 11. The Backend sends a message to the Source Device, indicating whether the Packet has been Accepted or Rejected.
[00101] 12. The Client User logs into the Webapp and reviews the Packets assigned to their Context.
[00102] 13. The Client User either accepts or rejects the Packet.

[00103] 14. If rejected at either the Strong Filter or Client Review, the data is deleted.
[00104] 15. If accepted, the Packet can be downloaded. This decrypts the Video Data and saves it to the Client User's local machine.
[00105] 16. Once the Client User accepts the Packet, the Backend runs the Metadata Filter plugins using the Thumbnail Data.
[00106] 17. The Client User reviews the Metadata and adds additional info where needed.

[00107] 18. The Packet can be reviewed and redownloaded when needed.
[00108] This process can be used for many different applications. Although eFine is the reference implementation, it could be used almost anywhere CCTV is used in order to provide simultaneous provenance and filter matching for CCTV footage.
[00109] Image Capture and Generation

[00110] Two classes of image are produced by the Source Device. The first is the original set of full-size images and/or video footage, which is used unchanged and immediately encrypted (see below). The second is a set of thumbnail images, smaller copies of the original images, which are kept unencrypted and used to perform filtering, indexing and other operations without needing to decrypt the original data.
[00111] Image formats are of relatively little consequence in this system, since the original data is immediately encrypted without alteration. However, it may be desirable to allow the Source Device to capture original data in different formats where possible; this is largely an application-specific detail.
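A sketch of the thumbnail class using Pillow; the size and format chosen here are illustrative, application-specific details:

```python
from io import BytesIO

from PIL import Image

def make_thumbnail(original_bytes: bytes, max_size=(256, 256)) -> bytes:
    """Produce the small, unencrypted copy used for filtering and indexing."""
    image = Image.open(BytesIO(original_bytes)).convert("RGB")
    image.thumbnail(max_size)  # shrinks in place, preserving aspect ratio
    buffer = BytesIO()
    image.save(buffer, format="JPEG", quality=70)
    return buffer.getvalue()
```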
[00112] Encryption System

[00113] The Double Yellow system uses a combination of AES-256 symmetric encryption with TLS key transference. Each Client has a key assigned, which is stored in a secure online service (Azure Key Vault in the reference implementation).
[00114] The symmetric encryption key is retrieved by the Source Device once it determines its current operating Context (and therefore its current Client), and by the Client User when a Download is triggered in the Webapp. The use of symmetric encryption allows us to encrypt and decrypt the data with sufficient speed to run within a reasonable timeframe even on a mobile device; each image is encrypted in a separate thread on the device, and the original data is deleted as soon as the encryption process is completed to minimise the possibility of modification.
[00115] It should be noted that access to the encryption key is only available to either a logged in Source Device or Client User, and both can only access the encryption key for their current Context. Other than these, the encryption keys can only be accessed by a member of Company Staff via a Web API process that is used solely for legal recovery purposes.
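A sketch of this per-image encryption step, assuming AES-GCM as the AES-256 mode, a nonce-prefixed ciphertext layout, and a hypothetical `fetch_context_key` helper standing in for the TLS key retrieval (Azure Key Vault in the reference implementation):

```python
import os
from concurrent.futures import ThreadPoolExecutor

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def fetch_context_key(context_id: str) -> bytes:
    """Hypothetical: retrieve the 32-byte key for this Context over TLS
    (Azure Key Vault in the reference implementation)."""
    raise NotImplementedError

def encrypt_image(key: bytes, image: bytes) -> bytes:
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, image, None)  # nonce-prefixed layout

def encrypt_captures(context_id: str, images: list) -> list:
    key = fetch_context_key(context_id)
    # Each image is encrypted on a separate thread, as in the description above.
    with ThreadPoolExecutor() as pool:
        encrypted = list(pool.map(lambda img: encrypt_image(key, img), images))
    # The real system zero-writes the originals; dropping references sketches that.
    images.clear()
    return encrypted
```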
[00116] Security Considerations

[00117] The features we need to consider from a security standpoint are provenance and privacy.
[00118] The first is of increasing importance in the modern world, where falsifying digital media becomes easier with each passing year (in-browser editing of messages, broad access to image editing software, and machine learning methods such as "Deepfakes"). Our system provides a means of demonstrating that the image/video data received by the Client is the same image/video data recorded by the Source Device, since the data is encrypted as soon as the Source Device allows. The Encryption System section explains the technical details of how this process works. Because the data can only be retrieved using the Client Key, and our system does not access the data directly, it can be demonstrated to be unmodified at all points from beginning to end.
[00119] Privacy is also of great importance to any system designed to store footage which may include images of members of the public or other protected information. However, due to the nature of the thumbnail feed, our system relies on industry-standard TLS security to keep the thumbnails private. In addition, there is no access at any part of our system to these thumbnails for anyone other than the Client, our Staff, and selected external APIs used for filtering; the End User cannot see any image data, even for cases they have created, and as with the primary encrypted feed the data is not stored or retained on the Source Device.
[00120] The Filtering Process

[00121] One important element of the system is its ability to perform filtering on the data in order to curate the data being sent to Clients. Because decrypting the data would reduce its provenance integrity, this filtering must be performed without interfering with the integrity of the primary encrypted data flow. This is achieved by creating a series of thumbnails; small images that can be considered representative of the primary flow.
[00122] While the thumbnail feed is not kept encrypted as the primary data flow is, its only use is to perform filtering (and assist with indexing). The thumbnail feed is intended to allow us to extract relevant information about the primary data flow by performing Machine Learning and cross-referencing tasks such as number plate recognition, facial recognition, public database lookups, and any other relevant information. This data is then stored in the main database alongside the primary data.
[00123] We provide filtering as a plugin system, which can be added to the Double Yellow system in any combination needed. As is noted in the Method of Operation section, the Filtering system operates at two points in the overall operational flow. The first section is the Strong Filter Point, which determines which packets are immediately rejected from the system; this occurs as soon as possible after the packet is received by the Backend system. The second is the Metadata Filter Point, which comes into play once a packet is accepted or rejected by the customer; the Metadata Filter Point then searches for any applicable metadata to add to the stream.
[00124] Each Strong Filter plugin gives a success or failure result; if a given video packet succeeds in all of its filter plugins, then it is presented to the Client via the Webapp; if not, it is rejected and deleted from the system. Metadata Filter plugins similarly return a success or failure result, but failure does not cause the data to be deleted from the system. Both types of Filter plugin can potentially add Metadata to the packet.
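One possible shape for such a plugin interface is sketched below; the class and function names are illustrative, not taken from the actual product:

```python
from dataclasses import dataclass, field

@dataclass
class Packet:
    thumbnails: list
    metadata: dict = field(default_factory=dict)

class Filter:
    """Base class shared by Strong Filter and Metadata Filter plugins."""
    def run(self, packet: Packet) -> bool:
        """Return True on success, False on failure; may add metadata."""
        raise NotImplementedError

def run_strong_filters(packet: Packet, filters: list) -> bool:
    # A single failure rejects the packet; the Backend then deletes it.
    return all(f.run(packet) for f in filters)

def run_metadata_filters(packet: Packet, filters: list) -> None:
    for f in filters:
        f.run(packet)  # failures are recorded in metadata but never delete data
```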
[00125] Example Strong Filter - Appropriate Content Filtering

[00126] The Double Yellow system is commonly used to take images from devices used by members of the public. It is likely that some images taken by these devices will include some pornographic or illegal content, whether intentionally or accidentally; in order to prevent our clients from being exposed to inappropriate material, we filter the content accordingly.
[00127] This filter works by feeding the thumbnails into Microsoft's Content Management API. If a configured threshold is not met, the plugin returns a failure state (causing the packet to be rejected and deleted). Optionally, if a secondary (lower) threshold is met, the plugin marks the packet metadata with a tag indicating the content needs to be manually reviewed.
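Continuing the plugin sketch above, a Strong Filter of this kind might look like the following; `score_adult_content` is a hypothetical stand-in for the external moderation call, and both thresholds are configuration values:

```python
def score_adult_content(thumbnail: bytes) -> float:
    """Hypothetical wrapper around the external moderation API; returns 0..1."""
    raise NotImplementedError

class ContentFilter(Filter):
    def __init__(self, reject_threshold: float, review_threshold: float):
        self.reject_threshold = reject_threshold  # failing score: reject and delete
        self.review_threshold = review_threshold  # lower bar: flag for manual review

    def run(self, packet: Packet) -> bool:
        score = max(score_adult_content(t) for t in packet.thumbnails)
        if score >= self.reject_threshold:
            return False  # packet is rejected and deleted by the Strong Filter stage
        if score >= self.review_threshold:
            packet.metadata["needs_manual_review"] = True
        return True
```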
[00128] Example Metadata Filter - Number Plate Recognition

[00129] One use for the Double Yellow system is to identify parked cars. This filter uses Microsoft's Text Recognition API to detect number plates in the image. The filter records each located number plate in the packet metadata, along with its position in the image (denoted relative to the centre of the image).
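In the same style, a sketch of this Metadata Filter; `ocr_text_regions` is a hypothetical stand-in for the external text-recognition call, and the plate pattern is a simplification of the current UK format:

```python
import re

# Simplified current-format UK plate: two letters, two digits, three letters.
UK_PLATE = re.compile(r"[A-Z]{2}[0-9]{2}[A-Z]{3}")

def ocr_text_regions(thumbnail: bytes) -> list:
    """Hypothetical stand-in for the external text-recognition call; yields
    (text, x, y) tuples with positions relative to the image centre."""
    raise NotImplementedError

class NumberPlateFilter(Filter):
    def run(self, packet: Packet) -> bool:
        plates = []
        for thumb in packet.thumbnails:
            for text, x, y in ocr_text_regions(thumb):
                if UK_PLATE.fullmatch(text.replace(" ", "").upper()):
                    plates.append({"plate": text, "position": (x, y)})
        packet.metadata["number_plates"] = plates
        return bool(plates)  # a Metadata Filter failure never deletes the packet
```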
[00130] Source Device Specification
[00131] Figure 3 shows a flowchart illustrating a method of operating a source device.
[00132] (Note that in some implementations, there may be no user interface for the Source Device, and therefore dialogs may not be required. The relevant parts have been marked with dashed borders in the figures.)

[00133] Many elements of the Source Device's operation will be application-specific, so we begin our method when the camera is triggered. We first attempt to obtain our context as appropriate to the application (in our reference implementation, this is the current GPS reading from the phone OS). We then call the /key endpoint described below to obtain the encryption key for the current context.
[00134] If the /key endpoint returns an update message, we optionally display a message appropriate to that update.
[00135] Optional element: If we are able to display guidance to the user and it is appropriate, we display guidance on proper operation of the system at this point.
[00136] We then initialise the camera for the capture process and begin capturing visual data.
[00137] When a capture is completed, we trigger processing threads in parallel which generate thumbnails from the data and at the same time encrypt the main data as described in Security Considerations.
[00138] If our configuration requires any more captures, we repeat the process until we have taken all the captures.
[00139] As each of our image processing threads completes, we store the processed thumbnail and encrypted data and erase the original data by zero-writing the memory range.
[00140] Once all of the image processing threads have completed for all of our captures, we transmit the data to the backend.
[00141] Figure 4 shows a flowchart illustrating a method of transmitting collected data on a source device to a server.
[00142] We collate the visual data along with the GPS data, time of capture and other application-relevant data into a Video Packet.
[00143] We check if we are able to connect to the server. If we cannot, we show a dialog indicating that we cannot do so, and we return to the application's base state.
[00144] We send the packet data to the server /packet endpoint.
[00145] If the /packet endpoint returns an update message, we optionally display a message appropriate to that update.
[00146] If the sending process is completed and confirmed, we erase the Video Packet from the device by zero-writing the memory range.
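A sketch of the collation and send steps just described, assuming a hypothetical multipart /packet upload; the field names and Bearer-token scheme are illustrative:

```python
import requests

def send_video_packet(server: str, token: str, encrypted: list, thumbnails: list,
                      gps: tuple, captured_at: str) -> bool:
    files = [("video", (f"capture_{i}.bin", blob)) for i, blob in enumerate(encrypted)]
    files += [("thumbnail", (f"thumb_{i}.jpg", t)) for i, t in enumerate(thumbnails)]
    response = requests.post(
        f"https://{server}/packet",
        headers={"Authorization": f"Bearer {token}"},
        data={"lat": gps[0], "lon": gps[1], "captured_at": captured_at},
        files=files,
        timeout=30,
    )
    return response.ok  # only erase the local Video Packet on confirmed success
```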
[00147] Once the Video Packet is erased, we return to the application's base state.

[00148] Backend Specification

[00149] The combination of the API, Filtering and Webapp stages is referred to in this document as the "Backend" for convenience. Much of this part of the system is implementation-specific, so this document focuses on those elements unique and specific to the Double Yellow system itself (rather than, for example, authorization, user or client management interfaces).
[00150] The API stage of the system acts to receive and store packet information from the Source Device and returns status information back to that device. It also serves to manage the Context which is assigned to specific Source Devices at a given time.
[00151] Figure 5 shows a flowchart illustrating a method of running a server to receive data from Source Devices, store it and return results where needed.
[00152] The two starting connectors in the figure represent "endpoints", accessible elements of our API. These can only be accessed by a user that has already undergone an appropriate authorization process. Since these are standard Web API endpoints, they can be accessed by any Source Device that is connected to the Internet. Each of these endpoints returns a success value, as well as any update response held for the accessing Source Device.
[00153] Both endpoints begin by checking to see if an update for the current authorized user is available on the system, and if so, a response is prepared.
[00154] The /packet endpoint then adds the packet's video data (both encrypted and thumbnail) to the data storage system, and adds information from the packet to the system database. It then stores an update as being ready for the current client. Finally, it triggers the strong filter process and returns a result to the user.
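A minimal server-side sketch of the /packet endpoint using Flask; the storage, database and filter-queue helpers are hypothetical stand-ins for implementation-specific components:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical helpers; storage, the database and the filter queue are
# implementation-specific.
def store_packet(videos, thumbnails, form) -> int:
    raise NotImplementedError

def trigger_strong_filters(packet_id: int) -> None:
    raise NotImplementedError

def pending_update():
    return None  # any update response held for the accessing Source Device

@app.route("/packet", methods=["POST"])
def receive_packet():
    update = pending_update()  # both endpoints check for a pending update first
    videos = [f.read() for f in request.files.getlist("video")]
    thumbnails = [f.read() for f in request.files.getlist("thumbnail")]
    packet_id = store_packet(videos, thumbnails, request.form)
    trigger_strong_filters(packet_id)  # Strong Filter runs as soon as possible
    return jsonify({"success": True, "update": update})
```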
[00155] The /key endpoint simply retrieves the corresponding Client context for the current user and returns it to the user. (In the reference implementation, this is the geographical location of the user; each Client has a unique region in which they are the relevant Context.)

[00156] Figure 6 shows a flowchart illustrating a method of filtering the thumbnail data received from users in two distinct ways.
[00157] Both methods operate on a number of software products stored on the server, referred to as "filters". A filter can be an API request to another server, or a script, or a simple operation, or any other form of data transformation.
[00158] In both cases, the filtering system runs as a loop, running all applicable filters on the packet's thumbnail data and updating the metadata entries for the packet in the database accordingly.
[00159] The Metadata Filter Process has no additional elements. The Strong Filter Process has an additional stage after running the filtering code, which is used to determine if the packet is rejected and deleted.
[00160] The Strong Filter is intended to be run on the data before the Client reviews it, and is used to curate and manage the data. The Metadata filter is intended to be run after the Client reviews it, and is used to add supplementary information to the packet.
[00161] Figure 7 shows a flowchart illustrating the use case of a client accessing the system via a Web application.
[00162] Specifics of login, user management, layout, visual design and so on are application-specific and are not included in this specification.
[00163] The Video Packet Gallery is a portion of the Web application that is used to review Video Packets stored by the server. This Gallery summarises the thumbnails for the Video Packets available for review and allows the client to select a Video Packet from the set.
[00164] When the client selects a Video Packet, the Web application displays a Thumbnail Gallery which displays all of the Thumbnails for the Video Packet and allows the client to either Accept or Reject the Packet, and to Download the visual data.
[00165] Choosing to Download the data causes the encrypted data to be downloaded from the server, then decrypted within the client's own browser, then saved to the client's computer system.
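In the reference implementation this decryption runs inside the client's browser; purely for consistency with the earlier sketches, the same step is shown here in Python, reversing the nonce-prefixed AES-GCM layout assumed above:

```python
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def decrypt_download(context_key: bytes, blob: bytes) -> bytes:
    """Reverse of the assumed nonce-prefixed AES-GCM layout: the first 12 bytes
    are the nonce, the remainder is ciphertext plus authentication tag."""
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(context_key).decrypt(nonce, ciphertext, None)
```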
[00166] If the packet is Accepted, it is added to the database as a Case Packet entry, then erased from the Video Packet database. The encrypted and thumbnail data is not changed. The Packet source user is registered as having an update ready.
[00167] If the video is Rejected, the packet is erased from the Video Packet database and can only be recovered via the legal recovery process.
[00168] The Case Packet Gallery similarly allows the client to review packets that have been Accepted and select one from the set.
[00169] When the client selects a Case Packet, the Web Application displays a Thumbnail Gallery similar to that described hereinbefore, but with additional relevant Case Details displayed. The client can use this Thumbnail Gallery to modify the case details. If it is modified, the case packet info in the database is updated and the Packet source user is registered as having an update ready.
[00170] Throughout the description and claims of this specification, the words "comprise" and "contain" and variations of them mean "including but not limited to", and they are not intended to (and do not) exclude other components, integers or steps. Throughout the description and claims of this specification, the singular encompasses the plural unless the context otherwise requires. In particular, where the indefinite article is used, the specification is to be understood as contemplating plurality as well as singularity, unless the context requires otherwise.
[00171] Features, integers, characteristics or groups described in conjunction with a particular aspect, embodiment or example of the invention are to be understood to be applicable to any other aspect, embodiment or example described herein unless incompatible therewith. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and/or all of the steps of any method or process so disclosed, may be combined in any combination, except combinations where at least some of such features and/or steps are mutually exclusive. The invention is not restricted to the details of any foregoing embodiments. The invention extends to any novel one, or any novel combination, of the features disclosed in this specification (including any accompanying claims, abstract and drawings), or to any novel one, or any novel combination, of the steps of any method or process so disclosed. The invention is defined in accordance with the accompanying claims.

Claims (25)

1. A method of collecting imagery in a distributed system, the distributed system comprising a source device and a server, the method comprising:
   capturing imagery using a camera of the source device, operating in an imagery capture mode;
   encrypting the captured imagery on the source device;
   transmitting the encrypted imagery to the server for storage;
   generating further data indicative of the captured imagery; and
   deleting the captured imagery and the encrypted imagery from the source device, whilst retaining the encrypted imagery on the server.

2. The method of claim 1, wherein the further data is generated using the captured imagery.

3. The method of claim 1 or claim 2, wherein the further data contains less information than the encrypted imagery.

4. The method of claim 3, wherein generating the further data comprises generating at least one thumbnail of the captured imagery.

5. The method of any preceding claim, wherein the further data is generated on the source device and is transmitted to the server with the encrypted imagery.

6. The method of claim 5, further comprising storing the further data unencrypted on the server, or decrypting the further data on the server.

7. The method of any preceding claim, further comprising:
   receiving a request to operate the source device in the imagery capture mode;
   authenticating between the source device and the server in response to the received request to operate the source device in the imagery capture mode; and
   operating the source device in the imagery capture mode in response to a successful authentication between the source device and the server.

8. The method of any preceding claim, wherein the distributed system further comprises a review device, the method further comprising:
   receiving an imagery access request from the review device;
   transmitting a copy of the encrypted imagery from the server to the review device, whilst retaining the encrypted imagery on the server; and
   decrypting the copy of the encrypted imagery on the review device.

9. The method of claim 8, further comprising displaying the decrypted imagery on a display of the review device.

10. The method of any preceding claim, further comprising processing the further data to determine one or more characteristics of the captured imagery.

11. The method of claim 10, wherein the further data is processed on the server to determine the one or more characteristics of the captured imagery.

12. The method of claim 10 or claim 11, further comprising deleting the encrypted imagery from the server in dependence on the determined one or more characteristics of the captured imagery.

13. The method of any of claims 10 to 12, wherein the one or more characteristics includes at least one characteristic indicative of the imagery portraying explicit content.

14. The method of any of claims 10 to 13, wherein the one or more characteristics includes at least one characteristic indicative of the imagery portraying protected information.

15. The method of any of claims 10 to 14, wherein the one or more characteristics includes at least one characteristic indicative of the imagery including a number plate of a vehicle.

16. The method of any of claims 10 to 15, wherein the distributed system further comprises a review device, the method further comprising:
   receiving an imagery access request from the review device;
   transmitting the one or more characteristics to the review device in response to receiving the imagery access request; and
   deleting the encrypted imagery on the server in dependence on a user input received on the review device in relation to at least one of the one or more characteristics.

17. The method of any preceding claim, further comprising transmitting a notification message from the server to the source device, to inform a user of the source device whether the encrypted imagery has been retained on the server.

18. The method of any preceding claim, further comprising outputting one or more imagery capture guidance messages when the source device is operating in the imagery capture mode.

19. The method of any preceding claim, wherein any decryption of the encrypted imagery stored on the server occurs off the server.

20. The method of any preceding claim, wherein the encrypted imagery is stored on the server with metadata indicative of one or more characteristics of the captured imagery.

21. The method of any preceding claim, wherein the captured imagery comprises at least one image.

22. The method of claim 21, wherein the at least one image is a plurality of still images.

23. The method of any preceding claim, wherein the captured imagery comprises one or more videos.

24. The method of any preceding claim, wherein the distributed system forms a closed circuit television, CCTV, system.

25. Apparatus for collecting imagery, the apparatus comprising:
   a source device comprising one or more first processors, a first memory and a camera for capturing imagery; and
   a server in data communication with the source device and comprising one or more second processors and a second memory,
   wherein the first memory comprises instructions which, when executed by the one or more first processors, cause the source device to:
      operate the source device in an imagery capture mode to capture imagery using the camera of the source device;
      encrypt the captured imagery;
      transmit the encrypted imagery to the server for storage; and
      delete the captured imagery and the encrypted imagery from the source device,
   wherein the second memory comprises instructions which, when executed by the one or more second processors, cause the server to:
      receive the encrypted imagery from the source device; and
      store the encrypted imagery in the second memory,
   and wherein the apparatus is further configured to generate further data indicative of the captured imagery.
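The sketches that follow illustrate how the claimed steps might be realised; they are purely illustrative and form no part of the claims. They assume Python with the `cryptography`, `requests` and `Pillow` libraries, AES-GCM as the cipher, and hypothetical endpoint, key and storage names; the claims prescribe none of these. Claim 1's capture-encrypt-transmit-delete flow on the source device might, under those assumptions, look like this:

```python
import os

import requests
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

SERVER_URL = "https://example.invalid/imagery"  # hypothetical endpoint, not from the patent


def capture_encrypt_upload_delete(image_path: str, key: bytes) -> None:
    """Encrypt the captured imagery on the source device, transmit it, delete it locally."""
    with open(image_path, "rb") as f:
        plaintext = f.read()

    nonce = os.urandom(12)  # 96-bit nonce; must be unique per key
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)

    # Transmit the encrypted imagery to the server for storage.
    requests.post(SERVER_URL, data=nonce + ciphertext, timeout=30)

    # Delete the captured and encrypted imagery from the source device;
    # only the server now retains the (encrypted) copy.
    os.remove(image_path)


key = AESGCM.generate_key(bit_length=256)  # stands in for whatever key management is used
```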
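Claims 2 to 6 characterise the further data as something derived from, but carrying less information than, the imagery itself, a thumbnail being the example claim 4 names. A minimal sketch with Pillow (an assumed library choice):

```python
from PIL import Image


def make_thumbnail(image_path: str, thumb_path: str, size=(128, 128)) -> None:
    """Generate a reduced-information thumbnail of the captured imagery."""
    with Image.open(image_path) as img:
        img.thumbnail(size)  # in-place downscale, preserving aspect ratio
        # Re-encode at modest quality so the thumbnail carries far less
        # information than the full encrypted imagery (claim 3).
        img.convert("RGB").save(thumb_path, "JPEG", quality=70)
```

Per claim 5, such a thumbnail would be produced on the source device before the local imagery is deleted, and sent alongside the ciphertext; per claim 6 it may be held unencrypted on the server so it can be processed there.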
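Claim 7 requires authentication between the source device and the server before the imagery capture mode is entered, without prescribing a mechanism. One conventional possibility, shown purely as an assumption, is a symmetric challenge-response over a pre-shared key:

```python
import hashlib
import hmac
import os

PSK = os.urandom(32)  # stands in for a provisioned pre-shared key


def prove(key: bytes, challenge: bytes) -> bytes:
    """Return an HMAC proof of key possession for the given challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()


# Server side: issue a challenge; device side: answer it.
server_challenge = os.urandom(16)
device_response = prove(PSK, server_challenge)
# Server verifies the device's proof in constant time.
assert hmac.compare_digest(device_response, prove(PSK, server_challenge))

# The device challenges the server in turn, making the authentication mutual.
device_challenge = os.urandom(16)
server_response = prove(PSK, device_challenge)
assert hmac.compare_digest(server_response, prove(PSK, device_challenge))
```

Only after both checks succeed would the source device be operated in the imagery capture mode.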
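Claims 8, 9 and 19 together imply that the ciphertext leaves the server only as a copy and that all decryption happens off the server, on the review device. A sketch under the same AES-GCM assumption, with a hypothetical download URL:

```python
import requests
from cryptography.hazmat.primitives.ciphers.aead import AESGCM


def fetch_and_decrypt(url: str, key: bytes) -> bytes:
    """Download a copy of the ciphertext and decrypt it on the review device."""
    blob = requests.get(url, timeout=30).content  # the server retains its own copy
    nonce, ciphertext = blob[:12], blob[12:]
    # Decryption occurs off the server (claim 19), on the review device,
    # which may then display the imagery (claim 9).
    return AESGCM(key).decrypt(nonce, ciphertext, None)
```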
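Claims 10 to 16 have the further data analysed for characteristics such as explicit content, protected information or vehicle number plates, with retention of the encrypted imagery decided accordingly. The stub below shows only the shape of such a policy: `classify` and the `store` handle are placeholders for detectors and storage not disclosed here.

```python
from typing import Dict


def classify(thumbnail_bytes: bytes) -> Dict[str, bool]:
    """Placeholder detector; a real system would run trained models here."""
    return {
        "explicit_content": False,      # cf. claim 13
        "protected_information": False, # cf. claim 14
        "number_plate": False,          # cf. claim 15
    }


def apply_retention_policy(record_id: str, thumbnail_bytes: bytes, store) -> None:
    """Record the determined characteristics and act on them."""
    flags = classify(thumbnail_bytes)
    # Store the flags as metadata alongside the ciphertext (cf. claim 20).
    store.attach_metadata(record_id, flags)
    # Delete the encrypted imagery in dependence on the characteristics (cf. claim 12);
    # claim 16 instead defers this decision to a user input on the review device.
    if flags["explicit_content"]:
        store.delete_encrypted(record_id)
```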
GB1912254.8A 2019-08-27 2019-08-27 Imagery acquisition method and apparatus Pending GB2588083A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB1912254.8A GB2588083A (en) 2019-08-27 2019-08-27 Imagery acquisition method and apparatus
PCT/GB2020/052045 WO2021058936A2 (en) 2019-08-27 2020-08-26 Imagery acquisition method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1912254.8A GB2588083A (en) 2019-08-27 2019-08-27 Imagery acquisition method and apparatus

Publications (2)

Publication Number Publication Date
GB201912254D0 GB201912254D0 (en) 2019-10-09
GB2588083A true GB2588083A (en) 2021-04-21

Family

ID=68108905

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1912254.8A Pending GB2588083A (en) 2019-08-27 2019-08-27 Imagery acquisition method and apparatus

Country Status (2)

Country Link
GB (1) GB2588083A (en)
WO (1) WO2021058936A2 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113177214A (en) * 2021-04-29 2021-07-27 Baidu Online Network Technology (Beijing) Co., Ltd. Image publishing and auditing method, related device and computer program product

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010111554A2 (en) * 2009-03-25 2010-09-30 Syclipse Technologies, Inc. Apparatus for remote surveillance and applications therefor
US20130013348A1 (en) * 1996-01-29 2013-01-10 Progressive Casualty Insurance Company Vehicle Monitoring System
US20150002674A1 (en) * 2013-06-26 2015-01-01 Ford Global Technologies, Llc Integrated vehicle traffic camera
GB2554136A (en) * 2016-06-30 2018-03-28 Stanford Colin A wearable device, associated system and method of use

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8433895B1 (en) * 2008-05-30 2013-04-30 Symantec Corporation Methods and systems for securely managing multimedia data captured by mobile computing devices
KR101320350B1 (en) * 2009-12-14 2013-10-23 한국전자통신연구원 Secure management server and video data managing method of secure management server
US9137222B2 (en) * 2012-10-31 2015-09-15 Vmware, Inc. Crypto proxy for cloud storage services
KR102444044B1 (en) * 2015-09-25 2022-09-19 삼성전자주식회사 Device and method for processing image
KR101760092B1 (en) * 2016-05-09 2017-07-21 주식회사에스에이티 Apparatus for security enhancement in closed circuit television using hardware security module and the method by using the same


Also Published As

Publication number Publication date
GB201912254D0 (en) 2019-10-09
WO2021058936A3 (en) 2021-05-20
WO2021058936A2 (en) 2021-04-01

Similar Documents

Publication Publication Date Title
EP3803668B1 (en) Obfuscating information related to personally identifiable information (pii)
US10713391B2 (en) Tamper protection and video source identification for video processing pipeline
KR101583206B1 (en) A system and method to protect user privacy in multimedia uploaded to internet sites
US8199160B2 (en) Method and apparatus for monitoring a user's activities
KR101522311B1 (en) A carrying-out system for images of the closed-circuit television with preview function
CN104680078B (en) Method for shooting picture, method, system and terminal for viewing picture
US9928352B2 (en) System and method for creating, processing, and distributing images that serve as portals enabling communication with persons who have interacted with the images
EP3537319A1 (en) Tamper protection and video source identification for video processing pipeline
US20130262864A1 (en) Method and system for supporting secure documents
US20160063278A1 (en) Privacy Compliance Event Analysis System
KR20200018159A (en) Method of prevention of forgery and theft of photo
KR101897987B1 (en) Method, apparatus and system for managing electronic fingerprint of electronic file
WO2021058936A2 (en) Imagery acquisition method and apparatus
JP2017046193A (en) Camera system enabling privacy protection
CN108566397B (en) Special remote data transmission system and transmission method for data recovery service
JP2017219997A (en) Information processing system, information processing device and program
CN113452724B (en) Separated storage electronic signature encryption protection system and method based on Internet
US11853451B2 (en) Controlled data access
CN112149177B (en) Bidirectional protection method and system for network information security
KR101731012B1 (en) System for managing transfer of personal image information
US20210006634A1 (en) Secure and private web browsing system and method
US9633228B1 (en) Verifiable media system and method
WO2016169241A1 (en) Method and device for searching private resource in computer apparatus
Alhassan et al. Forensic Acquisition of Data from a Crypt 12 Encrypted Database of WhatsApp
US11244415B2 (en) Personal IP protection system and method