WO2021120794A1 - Face image transmission method, value transfer method, device, and electronic equipment - Google Patents
Face image transmission method, value transfer method, device, and electronic equipment
- Publication number
- WO2021120794A1 (PCT/CN2020/120316)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- face image
- identification information
- value
- face
- information
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/08—Network architectures or network communication protocols for network security for authentication of entities
- H04L63/0861—Network architectures or network communication protocols for network security for authentication of entities using biometrical features, e.g. fingerprint, retina-scan
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/30—Authentication, i.e. establishing the identity or authorisation of security principals
- G06F21/31—User authentication
- G06F21/32—User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/60—Protecting data
- G06F21/606—Protecting data by securing the transmission between two devices or processes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q20/00—Payment architectures, schemes or protocols
- G06Q20/38—Payment protocols; Details thereof
- G06Q20/40—Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check credit lines or negative lists
- G06Q20/401—Transaction verification
- G06Q20/4014—Identity check for transactions
- G06Q20/40145—Biometric identity checks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q40/00—Finance; Insurance; Tax strategies; Processing of corporate or income taxes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/0021—Image watermarking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/0021—Image watermarking
- G06T1/0028—Adaptive watermarking, e.g. Human Visual System [HVS]-based watermarking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/60—Memory management
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/42—Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
- G06V10/431—Frequency domain transformation; Autocorrelation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/94—Hardware or software architectures specially adapted for image or video understanding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/90—Identifying an image sensor based on its output data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/40—Spoof detection, e.g. liveness detection
- G06V40/45—Detection of the body part being alive
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/32—Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
- H04N1/32101—Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
- H04N1/32144—Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title embedded in the image data, i.e. enclosed or integrated in the image, e.g. watermark, super-imposed logo or stamp
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H04N23/611—Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/62—Control of parameters via user interfaces
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/66—Remote control of cameras or camera parts, e.g. by remote control devices
- H04N23/661—Transmitting camera control signals through networks, e.g. control via the Internet
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2201/00—General purpose image data processing
- G06T2201/005—Image watermarking
- G06T2201/0052—Embedding of the watermark in the frequency domain
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2201/00—General purpose image data processing
- G06T2201/005—Image watermarking
- G06T2201/0083—Image watermarking whereby only watermarked image required at decoder, e.g. source-based, blind, oblivious
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N2201/00—Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof
- H04N2201/32—Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
- H04N2201/3201—Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
- H04N2201/3204—Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title of data relating to a user, sender, addressee, machine or electronic recording medium
- H04N2201/3205—Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title of data relating to a user, sender, addressee, machine or electronic recording medium of identification information, e.g. name or ID code
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/188—Capturing isolated or intermittent images triggered by the occurrence of a predetermined event, e.g. an object reaching a predetermined position
Definitions
- the present disclosure relates to the field of network technology, and in particular to a face image transmission method, value transfer method, device, electronic equipment, and storage medium.
- users can trigger value transfer operations on the terminal. For example, the terminal first verifies, based on face recognition technology, whether the user is who they claim to be, and then performs the value transfer operation after the verification passes.
- after the camera of the terminal collects the user's face image, it directly sends the face image (also called raw data) to the terminal's processor, and the terminal's processor uploads the face image to the server.
- the server performs face recognition on the face image to generate a recognition result and sends the recognition result to the terminal, so that when the recognition result indicates that the user is the genuine account holder, the subsequent value transfer operation is triggered.
- the embodiments of the present disclosure provide a face image transmission method, value transfer method, device, electronic equipment, and storage medium.
- the technical scheme is as follows:
- a face image transmission method which is applied to a terminal, and the method includes:
- when the camera assembly collects any face image, reading the face image information from the buffer area, where the face image information is used to indicate the number of all face images the camera assembly has collected historically;
- a value transfer method which is applied to a terminal, and the method includes:
- when the camera assembly collects any face image, reading face image information from the buffer area, where the face image information is used to indicate the number of all face images the camera assembly has collected historically;
- a value transfer method which is applied to a server, and the method includes:
- the value transfer request includes at least a camera ID, a face image carrying identification information, a value to be transferred, a first user ID, and a second user ID, and the identification information is used to indicate that the face image is an image acquired in real time;
- the value to be transferred is transferred from the value stored corresponding to the first user identifier to the value stored corresponding to the second user identifier.
- a face image transmission device which is applied to a terminal, and the device includes:
- the reading module is used to read the face image information from the buffer area when any face image is collected by the camera assembly, where the face image information is used to indicate the number of all face images the camera assembly has collected historically;
- An embedding module configured to embed identification information in the face image to obtain a face image carrying the identification information, and the identification information is used to represent the face image information;
- the sending module is used to send the face image carrying the identification information.
- the reading module is used to:
- the maximum value in the target list in the buffer area is determined as the face image information, and each value stored in the target list corresponds to a number of face images.
- the device is also used for:
- the value obtained by adding the identification information and the first target value is written into the target list in the buffer area.
- the reading module is used to:
- the value stored in the target address of the buffer area is determined as the face image information.
- the device is also used for:
- the value stored in the target address is set to the value obtained by adding the identification information and the second target value.
- the identification information is a value obtained by adding the face image information and a third target value.
- the embedded module is used to:
- the identification information is embedded in any area except the face area in the face image.
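As a hedged illustration of the embedding step just described (the patent does not specify a concrete watermarking algorithm), the sketch below writes a counter value into the least-significant bits of pixels outside a given face bounding box, so the face area used for recognition is left untouched. The grayscale image model, the function names, and the 32-bit width are all assumptions:

```python
# Hypothetical sketch: embed an identification value into the
# least-significant bits of pixels lying outside the face bounding box.
# The image is modeled as a flat list of 8-bit grayscale pixels; the
# face region is given as (x0, y0, x1, y1). All names are illustrative.

def embed_identification(pixels, width, face_box, ident, bits=32):
    x0, y0, x1, y1 = face_box
    out = list(pixels)
    written = 0
    for i, p in enumerate(pixels):
        x, y = i % width, i // width
        if x0 <= x < x1 and y0 <= y < y1:
            continue  # skip the face area so recognition is unaffected
        out[i] = (p & 0xFE) | ((ident >> written) & 1)
        written += 1
        if written == bits:
            break
    return out

def extract_identification(pixels, width, face_box, bits=32):
    x0, y0, x1, y1 = face_box
    ident, read = 0, 0
    for i, p in enumerate(pixels):
        x, y = i % width, i // width
        if x0 <= x < x1 and y0 <= y < y1:
            continue
        ident |= (p & 1) << read
        read += 1
        if read == bits:
            break
    return ident
```

Because only the least-significant bit of a handful of non-face pixels changes, the marked image remains visually indistinguishable from the original while the receiver can still recover the counter.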
- the embedded module is used to:
- a value transfer device which is applied to a terminal, and the device includes:
- the acquisition module is used to call the camera component to collect the face image when a trigger operation on the value transfer option is detected;
- the reading module is used to read face image information from the buffer area when any face image is collected by the camera assembly, where the face image information is used to indicate the number of all face images the camera assembly has collected historically;
- An embedding module configured to embed identification information in the face image to obtain a face image carrying the identification information, and the identification information is used to represent the face image information;
- the numerical value transfer module is used to perform numerical value transfer based on the camera identification of the camera assembly and the face image carrying the identification information.
- the value transfer module is used to:
- generating a value transfer request including at least a camera identifier of the camera component, a face image carrying the identification information, a value to be transferred, a first user identifier, and a second user identifier;
- sending the value transfer request to the server, so that the server performs identity verification based on the camera identifier of the camera component and the face image carrying the identification information and, when the verification passes, transfers the value to be transferred from the value stored corresponding to the first user identifier to the value stored corresponding to the second user identifier.
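A minimal sketch of the value transfer request assembled by this module might look as follows; the field names are illustrative assumptions, not taken from the patent:

```python
# Hypothetical sketch of the value transfer request payload described
# above; every field name is an assumption for illustration only.

def build_value_transfer_request(camera_id, marked_image, amount,
                                 payer_id, payee_id):
    return {
        "camera_id": camera_id,      # serial number of the camera assembly
        "face_image": marked_image,  # face image carrying identification info
        "value": amount,             # value to be transferred
        "first_user_id": payer_id,   # transfer-out account
        "second_user_id": payee_id,  # transfer-in account
    }
```

The host would then compress, encrypt, and encapsulate this payload into the data transmission message sent to the server.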
- a value transfer device which is applied to a server, and the device includes:
- the receiving module is configured to receive a value transfer request, the value transfer request including at least a camera identification, a face image carrying identification information, a value to be transferred, a first user identification, and a second user identification, where the identification information is used to indicate that the face image is an image acquired in real time;
- a recognition module configured to perform face recognition on the face area of the face image to obtain a recognition result when the identification information is greater than each historical identification information stored corresponding to the camera identification;
- the value transfer module is configured to transfer the value to be transferred from the value stored corresponding to the first user identifier to the value stored corresponding to the second user identifier when the recognition result is passed.
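The server-side flow described by these modules — receiving the request, rejecting identification values no larger than any historical value seen for the same camera, then transferring the value — could be sketched as below. Storage, face recognition, and all names are stand-in assumptions:

```python
# Hypothetical server-side sketch: replays are rejected by requiring the
# identification value to exceed every value previously seen for the
# same camera; recognition and storage are stubbed for illustration.

history = {}   # camera_id -> largest identification value seen so far
balances = {}  # user_id -> stored value

def handle_value_transfer(camera_id, ident, amount, payer, payee,
                          face_matches=True):
    # Anti-replay check: a re-sent historical image carries an old counter.
    if ident <= history.get(camera_id, -1):
        return "rejected: stale identification (possible replay)"
    history[camera_id] = ident
    if not face_matches:  # stand-in for actual face recognition
        return "rejected: face recognition failed"
    if balances.get(payer, 0) < amount:
        return "rejected: insufficient value"
    balances[payer] -= amount
    balances[payee] = balances.get(payee, 0) + amount
    return "ok"
```

Note that the counter check runs before face recognition, so a stolen historical image is rejected even though its face region would otherwise pass.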
- an electronic device includes one or more processors and one or more memories, and at least one piece of program code is stored in the one or more memories; the at least one piece of program code is loaded and executed by the one or more processors to implement the following operations:
- when the camera assembly collects any face image, reading the face image information from the buffer area, where the face image information is used to indicate the number of all face images the camera assembly has collected historically;
- an electronic device includes one or more processors and one or more memories, and at least one piece of program code is stored in the one or more memories; the at least one piece of program code is loaded and executed by the one or more processors to implement the following operations:
- when the camera assembly collects any face image, reading face image information from the buffer area, where the face image information is used to indicate the number of all face images the camera assembly has collected historically;
- an electronic device includes one or more processors and one or more memories, and at least one piece of program code is stored in the one or more memories; the at least one piece of program code is loaded and executed by the one or more processors to implement the following operations:
- the value transfer request includes at least a camera ID, a face image carrying identification information, a value to be transferred, a first user ID, and a second user ID, and the identification information is used to indicate that the face image is an image acquired in real time;
- the value to be transferred is transferred from the value stored corresponding to the first user identifier to the value stored corresponding to the second user identifier.
- a storage medium stores at least one piece of program code.
- the at least one piece of program code is loaded and executed by a processor to implement the operations performed by the face image transmission method or the value transfer method in any of the above possible implementations.
- when the camera component collects any face image, it can read the face image information from the buffer area.
- the face image information is used to indicate the number of all face images that the camera component has collected in history.
- the identification information is embedded in the image to obtain a face image carrying the identification information, where the identification information is used to represent the face image information, and the face image carrying the identification information is sent, so that the identification information is embedded directly in the camera assembly for every face image it collects
- embedding the identification information directly in the camera component increases the security of the face images it collects: even if a face image is leaked, when an attacker steals the historical face image to request related services, the request still cannot pass verification because the identification information is inconsistent, thereby effectively guaranteeing the security of the face image transmission process.
- FIG. 1 is a schematic diagram of an implementation environment of a face image transmission method provided by an embodiment of the present disclosure
- FIG. 2 is a schematic diagram of the appearance of a terminal 120 provided by an embodiment of the present disclosure
- FIG. 3 is a flowchart of a method for transmitting a face image provided by an embodiment of the present disclosure
- FIG. 4 is an interactive flowchart of a value transfer method provided by an embodiment of the present disclosure
- FIG. 5 is a schematic diagram of a value transfer method provided by an embodiment of the present disclosure.
- FIG. 6 is a schematic structural diagram of a face image transmission device provided by an embodiment of the present disclosure.
- FIG. 7 is a schematic structural diagram of a value transfer device provided by an embodiment of the present disclosure.
- FIG. 8 is a schematic structural diagram of a value transfer device provided by an embodiment of the present disclosure.
- FIG. 9 is a structural block diagram of a terminal provided by an embodiment of the present disclosure.
- FIG. 10 is a schematic structural diagram of a server provided by an embodiment of the present disclosure.
- face images may be leaked during data transmission, allowing "replay attacks": after the terminal collects the attacker's face image, the attacker launches a network attack while the image is being transmitted from the terminal to the server, replacing the attacker's face image with a previously stolen valid face image, with the result that the funds of the user corresponding to the valid face image are stolen.
- the security of the face image transmission process is poor.
- Fig. 1 is a schematic diagram of an implementation environment of a face image transmission method provided by an embodiment of the present disclosure.
- the implementation environment can include a terminal 120 and a server 140, both of which can be referred to as electronic devices.
- the terminal 120 is used for face image transmission.
- the terminal 120 can include a camera component 122 and a host 124.
- the camera component 122 is used to collect a face image and embed identification information into the collected face image; the face image carrying the identification information is then sent to the host 124, and the host 124 can compress, encrypt, and encapsulate the face image carrying the identification information to obtain a data transmission message, which the host 124 sends to the server 140.
- the camera component 122 is a 3D (three-dimensional) camera component.
- the 3D camera component can have functions such as face recognition, gesture recognition, human skeleton recognition, three-dimensional measurement, environment perception, or three-dimensional map reconstruction.
- the camera component can detect the distance information between each pixel in the collected image and the camera, so that it can determine whether the user corresponding to the currently collected face image is alive, preventing attackers from using other people's photos for identity verification and stealing others' funds through value transfer.
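As a hedged illustration of this liveness idea (the patent does not give a concrete test): a real face yields varied per-pixel distances, while a flat photo held up to the camera yields a nearly planar depth map, so even a simple spread check on depth values can separate the two. The threshold and sample values below are assumptions:

```python
# Hypothetical liveness sketch using per-pixel depth values (in mm):
# a real face has relief (nose closer than cheeks and ears), while a
# photo sits at roughly one distance. The threshold is an assumption.

def is_live_face(depth_values, min_spread_mm=15.0):
    """Return True if the depth map shows enough relief to be a real face."""
    mean = sum(depth_values) / len(depth_values)
    variance = sum((d - mean) ** 2 for d in depth_values) / len(depth_values)
    return variance ** 0.5 > min_spread_mm
```

A production system would of course use a far richer model, but the sketch shows why depth sensing defeats photo-based spoofing at all.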
- the camera assembly 122 includes a sensor 1222, a processor 1224, a memory 1226, and a battery 1228.
- the camera assembly 122 can also have a camera identification, which is used to uniquely identify the camera assembly 122.
- the camera identification is a serial number (SN) assigned to the camera assembly 122 when it leaves the factory; this number uniquely identifies the camera assembly 122.
- the sensor 1222 is used to collect face images
- the sensor 1222 can be arranged inside the camera assembly 122, and the sensor 1222 can be at least one of a color image sensor, a depth image sensor, or an infrared image sensor; the embodiments of the present disclosure do not limit the sensor type.
- the face image collected by the sensor 1222 can also be at least one of a color map, a depth map, or an infrared image.
- the embodiment of the present disclosure does not limit the type of the face image.
- the processor 1224 can be used to embed identification information for the face image collected by the sensor 1222.
- the processor 1224 is a DSP (Digital Signal Processor); a DSP is a special-purpose microprocessor, a device that can process large amounts of information in the form of digital signals.
- the processor 1224 can also take the form of hardware such as an FPGA (Field Programmable Gate Array) or a PLA (Programmable Logic Array); the embodiments of the present disclosure do not limit the hardware form of the processor 1224.
- the memory 1226 is used to store face image information, which is used to indicate the number of all face images the camera component 122 has collected historically; for example, the memory 1226 is a FLASH (flash memory) device, a magnetic disk storage device, a cache memory, etc.
- the battery 1228 is used to supply power to the various components of the camera assembly 122. In this case, even if the host 124 of the terminal 120 is powered off, the battery 1228 inside the camera assembly 122 can still supply power to the memory 1226, preventing the face image information in the memory 1226 from being lost due to power failure.
- the terminal 120 and the server 140 can be connected through a wired or wireless network.
- the server 140 can include at least one server, multiple servers, a cloud computing platform, or a virtualization center.
- the server 140 is configured to provide a background service for an application program running on the terminal 120, and the application program can provide a value transfer service to the user, so that the user can perform a value transfer operation based on the terminal 120.
- the server 140 is responsible for the main calculation work and the terminal 120 for the secondary calculation work; or, the server 140 is responsible for the secondary calculation work and the terminal 120 for the main calculation work; or, the server 140 and the terminal 120 adopt a distributed computing architecture for collaborative computing.
- the server 140 is an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, Content Delivery Network (CDN), and big data and artificial intelligence platforms.
- the face data transmission process occurs in the process of value transfer based on face recognition.
- the terminal 120 is commonly referred to as the "face-swiping payment terminal".
- a face-swiping payment terminal refers to a terminal with an integrated camera that can collect face data.
- the first user can perform a trigger operation on the value transfer option on the terminal 120, which triggers the terminal 120 to call the camera component 122 to collect the first user's face data stream in real time and embed identification information in any image frame (that is, any face image) in the face data stream.
- a face image carrying identification information can be obtained, so that the terminal 120 can encapsulate the face image carrying identification information, camera identification, and value transfer information into a value transfer request
- the value transfer request is sent to the server 140, and the server 140 authenticates the first user based on the face image carrying the identification information and the camera identification.
- the identification information can verify whether the face image is collected by the camera component 122.
- face recognition can verify whether the face area in the face image is the first user himself, so as to achieve double identity verification.
- the server 140 can perform value transfer based on the value transfer information in the value transfer request, the value transfer information including the first user identification, the second user identification, and the value to be transferred.
- the first user and the second user are merely different names for users with different roles during a given value transfer process; in some value transfer processes, a user may be both the first user and the second user, that is, the user transfers value from one of their own accounts to another of their own accounts.
- the terminal 120 can have a display screen.
- the user performs interactive operations based on the display screen to complete the numerical value transfer operation based on face recognition.
- the device types of the terminal 120 include at least one of a smart phone, tablet computer, e-book reader, MP3 (Moving Picture Experts Group Audio Layer III) player, MP4 (Moving Picture Experts Group Audio Layer IV) player, laptop computer, or desktop computer.
- the number of the aforementioned terminals 120 can be larger or smaller. For example, there may be only one terminal 120, or there may be dozens, hundreds, or more. The embodiments of the present disclosure do not limit the number and device types of the terminals 120.
- Fig. 3 is a flowchart of a method for transmitting a face image provided by an embodiment of the present disclosure. Referring to FIG. 3, this embodiment can be applied to the terminal 120 in the foregoing implementation environment, which will be described in detail below:
- the terminal determines the maximum value in the target list in the buffer area as the face image information, and each value stored in the target list corresponds to a number of face images.
- the terminal is used for facial image transmission.
- the terminal can include a camera component and a host.
- the camera component is used to collect facial images and embed identification information into them; the camera component sends the facial image carrying the identification information to the host, and the host transmits it to the server.
- the camera component is a 3D camera component, so that the distance information between each pixel in the collected image and the camera can be detected to determine whether the user corresponding to the currently collected face image is alive, preventing attackers from using other people's photos for identity verification and stealing other people's funds through value transfer.
- the camera assembly includes a sensor, a processor, a memory, and a battery.
- a target list can be stored in the memory of the camera assembly.
- the target list includes multiple values, and each value corresponds to a number of face images; whenever the number of face images collected by the camera component increases, the processor of the camera component can write a new value to the target list and delete the existing value with the earliest timestamp, achieving a live update of the target list.
- each value in the target list can be equal to the number of face images.
- For example, the target list is [500, 501, 502, ...].
- Each value in the target list can also be N times the number of face images (N>1).
- the target list is [500N,501N,502N,...].
- Each value in the target list can also be obtained from a number of face images through an exponential transformation, a logarithmic transformation, or the like. The embodiments of the present disclosure do not limit the transformation method between the number of face images and the individual values stored in the target list.
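As an illustrative sketch only (not part of the disclosed embodiments), the target-list bookkeeping described above — each stored value corresponding to a face-image count, with the earliest-written value evicted on each new write — might be modeled as follows; the capacity and count values are hypothetical:

```python
from collections import deque

class TargetList:
    """Models the target list kept in the camera component's memory: each
    stored value corresponds to a number of collected face images, and the
    value with the earliest timestamp is evicted when a new one is written."""

    def __init__(self, capacity=3):           # capacity is a hypothetical choice
        self.values = deque(maxlen=capacity)  # deque evicts the oldest entry

    def write(self, value):
        self.values.append(value)

    def max_value(self):
        # The "face image information" is the current maximum in the list.
        return max(self.values)

lst = TargetList()
for count in (500, 501, 502, 503):
    lst.write(count)

print(list(lst.values))   # -> [501, 502, 503]  (500 was evicted)
print(lst.max_value())    # -> 503
```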
- the terminal operating system can issue a collection instruction to the camera component.
- the camera component responds to the collection instruction and collects facial images through the sensor of the camera component.
- the sensor can collect the user's facial data stream in real time.
- the sensor sends the collected face image to the processor of the camera component.
- The processor determines the target list from the buffer area, queries the maximum value in the target list, and determines that maximum value as the face image information.
- different types of sensors can collect different types of face images.
- the face image collected by the infrared image sensor is an infrared image
- the face image collected by the depth image sensor is a depth image.
- The face image collected by a color image sensor is a color image; the embodiments of the present disclosure do not limit the types of the sensor and the face image.
- Through its internal processor and memory, the camera component maintains a target list whose maximum value is updated as the number of collected face images increases: the more face images the camera component has collected, the larger the maximum value in the target list, and hence the larger the value of the face image information. In this way, the camera component can count all the face images it has collected through the face image information.
- the terminal reads the face image information from the buffer area.
- the face image information is used to indicate the number of all face images that the camera component has collected in history.
- The terminal may store the face image information not in the form of a target list but in data formats such as stacks, arrays, or tuples. The embodiments of the present disclosure do not limit the data format of the face image information in the buffer area.
- the terminal performs Fourier transform on the face image to obtain a frequency spectrum image of the face image.
- the terminal can perform DCT (Discrete Cosine Transform) processing on the face image to obtain the spectrum image of the face image.
- DCT processing is a discrete Fourier-related transform mainly used to compress data or images; it converts a signal from the spatial domain to the frequency domain and has good decorrelation performance.
- the terminal embeds identification information in any area except the face area in the spectrum image.
- the identification information is used to represent face image information.
- the identification information can be a value obtained by adding the face image information and a third target value, where the third target value can be any value greater than or equal to zero.
- The identification information can also be a value obtained by multiplying the face image information by a fourth target value, where the fourth target value can be any value greater than or equal to 1; the fourth target value can be the same as or different from the third target value.
- the identification information can also be a value obtained after one-way encryption of the face image information, and the embodiment of the present disclosure does not limit the conversion method between the face image information and the identification information.
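The three conversion variants above (adding a third target value, multiplying by a fourth target value, and one-way encryption) can be sketched as follows; the concrete values, and the choice of SHA-256 as the one-way function, are illustrative assumptions, since the disclosure does not fix the conversion method:

```python
import hashlib

face_image_info = 1000  # hypothetical count of face images collected so far

# Variant 1: add a third target value (any value >= 0).
third_target = 1
id_add = face_image_info + third_target       # -> 1001

# Variant 2: multiply by a fourth target value (any value >= 1).
fourth_target = 2
id_mul = face_image_info * fourth_target      # -> 2000

# Variant 3: apply a one-way function (SHA-256 here, as an illustration).
id_hash = hashlib.sha256(str(face_image_info).encode()).hexdigest()

print(id_add, id_mul, id_hash[:8])
```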
- Any area other than the face area can be at least one of the upper left corner, the lower left corner, the upper right corner, or the lower right corner of the face image; for example, the area is determined to be the upper left corner of the face image. The embodiments of the present disclosure do not limit the location of the embedding area of the identification information.
- the identification information is the same as the face image information, and the terminal can embed the face image information in any area except the face area in the spectrum image.
- the terminal can add the face image information determined in step 301 to the third target value to obtain the identification information, and then in any area of the spectrum image except the face area Embed the identification information.
- By embedding the identification information in an area of the face image other than the face area, the pixel information of the face area is not destroyed, and the problem that face recognition cannot be performed because the face area is occluded is avoided, which optimizes the processing logic of the face image transmission process.
- the terminal performs inverse Fourier transform on the spectrum image carrying the identification information to obtain a face image carrying the identification information.
- After the terminal embeds the identification information in the spectrum image, it can perform inverse DCT on the spectrum image carrying the identification information, converting it from the frequency domain back to the spatial domain, so that the user cannot perceive the identification information. Identification information that the user cannot perceive in this way is commonly called a "blind watermark" or "digital watermark".
- The user can neither see nor hear the blind watermark, but when the terminal sends the face image carrying the blind watermark to the server, the server can parse out the blind watermark carried in the face image.
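A minimal 1-D sketch of the embed-and-recover cycle described above (forward DCT, write a watermark coefficient, inverse DCT back to the spatial domain, re-DCT to extract) is shown below. The embodiments operate on 2-D spectrum images and avoid the face area; the coefficient index and scale here are illustrative assumptions:

```python
import math

def dct2(x):
    """Unnormalized DCT-II of a 1-D signal."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                for n in range(N)) for k in range(N)]

def idct2(X):
    """Inverse of dct2 (a DCT-III with 2/N scaling)."""
    N = len(X)
    return [(X[0] / 2 + sum(X[k] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                            for k in range(1, N))) * 2 / N for n in range(N)]

def embed(pixels, mark, coeff=-1, scale=0.01):
    """Overwrite one high-frequency coefficient with the watermark value."""
    spectrum = dct2(pixels)
    spectrum[coeff] = mark * scale   # embed away from low-frequency content
    return idct2(spectrum)           # back to the spatial domain

def extract(pixels, coeff=-1, scale=0.01):
    spectrum = dct2(pixels)
    return round(spectrum[coeff] / scale)   # recover the embedded count

row = [52, 55, 61, 66, 70, 61, 64, 73]     # one image row (example values)
marked = embed(row, 1001)
print(extract(marked))                      # -> 1001
```

The watermark survives the spatial-domain round trip because the inverse transform is exact up to floating-point error, while the pixel changes it causes are spread thinly across the whole row.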
- the terminal can embed identification information in any area of the face image except the face area.
- the terminal can also embed identification information in the face area of the face image, and the embodiment of the present disclosure does not limit whether the embedding area of the identification information is a face area.
- the terminal embeds identification information in the face image to obtain a face image carrying the identification information.
- The identification information is used to indicate the face image information; by embedding a blind watermark, security protection is applied to the face data source collected by the camera component (that is, to every face image). Even if a face image is stolen by an attacker while being transmitted between the camera component and the host or between the host and the server, the server can recognize in time that the identification information of the historical face image has expired and that the image is not the latest face image collected in real time, and accordingly determines that verification of the face image fails. This greatly improves the security of the various services that perform verification based on face images.
- the terminal writes the value obtained by adding the identification information and the first target value to the target list in the buffer area.
- the first target value is any value greater than or equal to 0, for example, the first target value is 1.
- After the terminal embeds the identification information in the face image through the processor of the camera component, it can also write the value obtained by adding the identification information and the first target value into the target list. Since the face image information is the original maximum value in the target list, and the identification information is greater than or equal to the face image information, the value written this time is greater than or equal to the identification information, and therefore greater than or equal to the original maximum value in the target list. In this way the maximum value in the target list is updated, and the maximum value stored in the memory of the camera component keeps increasing as the number of collected face images increases.
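The read-maximum / embed / write-back cycle described above might be sketched as follows, taking the third target value as 1 and the first target value as 0 (matching the worked example that follows); the watermark-embedding step is reduced here to attaching a field to the frame:

```python
class CameraCounter:
    """Sketch of the processor's cycle: read the list maximum (the face image
    information), derive the identification information, embed it, and write
    identification + first target value back into the target list."""

    def __init__(self, target_list, third_target=1, first_target=0):
        self.target_list = target_list      # values stored in the memory
        self.third_target = third_target
        self.first_target = first_target

    def stamp(self, face_image):
        face_image_info = max(self.target_list)
        identification = face_image_info + self.third_target
        # Embedding is reduced to attaching a field to the frame (sketch only).
        marked = dict(face_image, watermark=identification)
        self.target_list.append(identification + self.first_target)
        return marked

cam = CameraCounter([998, 999, 1000])
frame = cam.stamp({"pixels": "..."})
print(frame["watermark"])        # -> 1001
print(max(cam.target_list))      # -> 1001  (list maximum was updated)
```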
- For example, the sensor of the camera component is denoted SENSOR, the processor of the camera component is denoted DSP, and the memory of the camera component is denoted FLASH.
- each value in the target list stored in FLASH happens to be the number of each face image.
- FLASH counts the number of collected face images in real time.
- DSP obtains the face data stream from SENSOR.
- the DSP determines the value obtained by adding one to the maximum number of face images as identification information.
- the identification information is embedded in the aforementioned frame of face image, and assuming that the first target value is 0, the DSP directly writes the identification information into the target list.
- the current latest count in the FLASH target list is 1000, indicating that the number of all face images collected by the camera component in history is 1000.
- The DSP reads the current latest count 1000 from the FLASH target list, determines the identification information to be 1001, embeds the identification information 1001 into the above-mentioned 1001st face image as a blind watermark, and then writes the identification information 1001 back into the target list, so that the next time the DSP reads the current latest count it reads 1001. This ensures that the DSP reads an incremented count from FLASH every time.
- each value in the target list stored in FLASH is exactly the value obtained by adding one to the number of each face image
- the current latest count read by DSP from the target list stored in FLASH is the value obtained by adding one to the maximum number of face images.
- Assuming the third target value is 0, the DSP determines the value obtained by adding one to the maximum number of face images (that is, the count it read) as the identification information, and embeds the identification information in the aforementioned frame of face image.
- the DSP writes the value obtained by adding one to the identification information into the target list.
- the current latest count in the FLASH target list is 1000, indicating that the number of all face images collected by the camera component in history is 999.
- The DSP reads the current latest count 1000 from the FLASH target list and determines the identification information to be 1000.
- The identification information 1000 is embedded into the above-mentioned 1000th face image as a blind watermark, and the value 1001, obtained by adding one to the identification information 1000, is written back into the target list, so that the next time the DSP reads the current latest count it reads 1001. This ensures that the DSP reads an incremented count from FLASH every time.
- The terminal can also store the face image information in the memory of the camera component not in the form of a target list, but directly store the face image information at a target address, thereby saving storage space in the memory of the camera component.
- Step 301 can be replaced by the following step: the terminal determines the value stored at the target address of the buffer area as the face image information; that is, through the processor of the camera component, the terminal reads the target address in the memory of the camera component and determines the value stored at the target address as the face image information.
- The above step 305 can be replaced by the following step: the terminal sets the value stored at the target address to the value obtained by adding the identification information and a second target value, where the second target value is any value greater than or equal to 0; for example, the second target value is 1. The second target value can be the same as or different from the first target value, and the embodiments of the present disclosure do not limit the values of the second target value or the first target value.
- The terminal does not need to maintain a costly target list in the memory of the camera component, and instead stores only the number of all face images that have been collected at the target address, which reduces the resource overhead on the memory. In this way the face image information stored at the target address can not only be updated in real time, but the processing efficiency of the camera component is also improved.
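The target-address variant — replacing steps 301 and 305 with a read and a set of a single stored value — can be sketched as below; the initial count and the target-value choices are hypothetical:

```python
class TargetAddress:
    """Single-value variant: the memory stores one counter at a target address
    instead of a whole target list."""

    def __init__(self, value=0):
        self.value = value                    # value stored at the target address

    def next_identification(self, third_target=1, second_target=0):
        face_image_info = self.value          # replaced step 301: read the address
        identification = face_image_info + third_target
        self.value = identification + second_target  # replaced step 305: set it
        return identification

addr = TargetAddress(1000)
ident = addr.next_identification()
print(ident)        # -> 1001
print(addr.value)   # -> 1001
```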
- the terminal sends a face image carrying the identification information.
- The terminal can send the face image carrying the identification information on its own, or it can encapsulate the face image carrying the identification information together with other information into a service request and send the service request; the service request can correspond to service types such as an identity verification service or a value transfer service.
- the embodiment of the present disclosure does not limit the service type of the service request.
- the terminal can encapsulate the camera identification of the camera component and the face image carrying the identification information into an identity verification request, thereby sending the identity verification request to the server.
- The terminal can also encapsulate the camera identification of the camera component, the face image carrying the identification information, and the value transfer information into a value transfer request, and send the value transfer request to the server, where the value transfer information includes at least the first user identification, the second user identification, and the value to be transferred.
- In the method provided by the embodiments of the present disclosure, when any face image is collected by the camera component, the face image information is read from the buffer area; the face image information is used to indicate the number of all face images the camera component has collected in history. Identification information, which is used to represent the face image information, is embedded in the face image to obtain a face image carrying the identification information, and the face image carrying the identification information can then be sent.
- Because the identification information is embedded directly in the face image collected by the camera component, the security of the face images collected by the camera component is increased: even if a face image is leaked and an attacker uses the historical face image to request related services, verification cannot pass because the identification information does not match, which effectively guarantees the security of the face image transmission process.
- The face image transmission method provided by the foregoing embodiments can ensure the security of the face data source collected by the camera component and uniquely identifies the time at which each face image is collected through the identification information, which effectively ensures that each collected face image can be used only once. Users do not need to upgrade the hardware or system of the terminal's host, and there is no mandatory requirement on the configuration of the host itself.
- Each user only needs to access the camera component provided by the embodiments of the present disclosure to ensure the security of the face data source, which greatly lowers the threshold for maintaining the security of the face data source and offers high portability and usability.
- the face image transmission method can be applied to various business scenarios that rely on face images.
- Here, the process of performing identity verification based on the face image to complete a value transfer service is described as an example; this process is referred to as the face-swiping payment scenario, or face payment scenario for short, and is described in detail below.
- FIG. 4 is an interaction flowchart of a value transfer method provided by an embodiment of the present disclosure. Referring to FIG. 4, this embodiment is applied to the interaction process between the terminal 120 and the server 140 in the foregoing implementation environment. This embodiment includes the following steps :
- the terminal calls the camera component to collect a face image.
- the terminal can be the personal terminal of the first user, or a "face-swiping payment terminal" installed in the store where the second user is located.
- A face-swiping payment terminal refers to a terminal with an integrated camera that can collect images of the user's face.
- the embodiment of the present disclosure does not limit the device type of the terminal.
- The first user and the second user are merely different names for users with different identities during a given value transfer process; in some value transfer processes, a certain user may be both the first user and the second user, that is, the user transfers value from one of his own accounts to another of his own accounts.
- the terminal is triggered to display a payment interface on the display screen.
- the payment interface can include value transfer information and value transfer options.
- After checking the value transfer information, the first user can perform a trigger operation on the value transfer option.
- the terminal operating system issues a collection instruction to the camera component, and calls the camera component to collect the face image of the first user.
- the above-mentioned value transfer information can include at least the first user ID, the second user ID, and the value to be transferred.
- the value transfer information can also include transaction item information, discount information, transaction timestamp, and the like.
- the terminal reads the face image information from the buffer area, and the face image information is used to indicate the number of all face images that the camera assembly has collected in history.
- the above step 402 is similar to the above step 301, and will not be repeated here.
- the terminal embeds identification information in the face image to obtain a face image carrying the identification information, and the identification information is used to represent the face image information.
- step 403 is similar to the above-mentioned steps 302-304, and will not be repeated here.
- the terminal generates a value transfer request, where the value transfer request includes at least the camera identifier of the camera component, the face image carrying the identification information, the value to be transferred, the first user identifier, and the second user identifier.
- The terminal can use a compression algorithm to compress the camera identification of the camera component, the face image carrying the identification information, the value to be transferred, the first user identification, and the second user identification to obtain compressed information; use an encryption algorithm to encrypt the compressed information to obtain ciphertext information; and encapsulate the ciphertext information using a transmission protocol to obtain the value transfer request.
- the compression algorithm can include at least one of a br compression algorithm, a gzip compression algorithm, or a Huffman compression algorithm
- the encryption algorithm can include at least one of a symmetric encryption algorithm or an asymmetric encryption algorithm, such as a message digest.
- The transmission protocol can include at least one of IP (Internet Protocol), TCP (Transmission Control Protocol), or UDP (User Datagram Protocol); the embodiments of the present disclosure do not limit the types of the compression algorithm, the encryption algorithm, and the transmission protocol.
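A sketch of the compress-encrypt-encapsulate pipeline for the value transfer request might look like the following. The field names are hypothetical, zlib stands in for the unspecified compression algorithm, and the XOR stream is only a placeholder for a real encryption algorithm, which the disclosure does not fix:

```python
import json
import zlib

def xor_stream(data: bytes, key: bytes) -> bytes:
    # Placeholder cipher for illustration only: XOR is NOT secure; a real
    # system would use a vetted symmetric or asymmetric algorithm.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def build_value_transfer_request(fields: dict, key: bytes) -> bytes:
    compressed = zlib.compress(json.dumps(fields).encode())  # compression step
    return xor_stream(compressed, key)                       # "encryption" step

def parse_value_transfer_request(payload: bytes, key: bytes) -> dict:
    # Server side: decrypt, then decompress, then unpack the fields.
    return json.loads(zlib.decompress(xor_stream(payload, key)))

fields = {                          # hypothetical field values
    "camera_id": "CAM-42",
    "face_image": "<watermarked image bytes>",
    "amount": 100,
    "payer": "user-A",
    "payee": "user-B",
}
key = b"demo-key"
payload = build_value_transfer_request(fields, key)
print(parse_value_transfer_request(payload, key) == fields)  # -> True
```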
- the terminal sends the value transfer request to the server.
- The server can perform identity verification based on the camera identification of the camera component and the face image carrying the identification information; after the identity verification passes, the value to be transferred is transferred from the value stored corresponding to the first user identification to the value stored corresponding to the second user identification. The implementation process is described in detail in the following steps 406-408.
- the server receives the value transfer request.
- the value transfer request includes at least a camera identification, a face image carrying identification information, a value to be transferred, a first user identification, and a second user identification, and the identification information is used to indicate that the face image is an image acquired in real time.
- After the server receives the value transfer request, it can parse the value transfer request to obtain the ciphertext information, decrypt the ciphertext information with a decryption algorithm to obtain the compressed information, and decompress the compressed information to obtain the aforementioned camera identification, the face image carrying the identification information, the value to be transferred, the first user identification, and the second user identification.
- the decryption algorithm and the decompression algorithm used here correspond to the encryption algorithm and the compression algorithm in step 404, and will not be repeated here.
- the server performs face recognition on the face area of the face image to obtain a recognition result.
- the server stores the identification information of each face image corresponding to each camera identification.
- The server can maintain an identification information sequence corresponding to each camera identification in the background, storing each piece of historical identification information of face images corresponding to the same camera identification in the same sequence. When any value transfer request carrying a camera identification is received, the face image carrying the identification information in the value transfer request is obtained, the identification information is extracted from the face image, and the server queries the maximum historical value in the identification information sequence corresponding to that camera identification. If the extracted identification information is greater than the maximum historical value, the face image is determined to be legal, that is, the face image is not a stolen historical face image.
- In that case, the server performs face recognition on the face area of the face image and obtains the recognition result. Otherwise, if the extracted identification information is less than or equal to the maximum historical value, the face image is determined to be illegal, that is, the face image is a stolen historical face image, and the server can send verification failure information to the terminal.
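The server-side replay check described above — accept a face image only if its extracted identification information exceeds every historical value stored for that camera identification — might be sketched as follows; the camera identifier and counts are hypothetical:

```python
class WatermarkVerifier:
    """Per-camera replay check: a face image is legal only if its extracted
    identification information exceeds every stored historical value."""

    def __init__(self):
        self.history = {}   # camera_id -> historical identification values

    def verify(self, camera_id, identification):
        seq = self.history.setdefault(camera_id, [])
        if seq and identification <= max(seq):
            return False            # stolen/replayed historical face image
        seq.append(identification)  # accept and record for future checks
        return True

v = WatermarkVerifier()
print(v.verify("CAM-42", 1001))   # -> True  (fresh image)
print(v.verify("CAM-42", 1001))   # -> False (replayed watermark)
print(v.verify("CAM-42", 1002))   # -> True  (next fresh image)
```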
- During face recognition, the face image can be input into a face similarity model, and the similarity between the face image and the first user's pre-stored image is predicted through the face similarity model. If the similarity is higher than or equal to a target threshold, the recognition result of the face image is determined to be passed, and the following step 408 is performed; otherwise, if the similarity is lower than the target threshold, the recognition result of the face image is determined to be failed, and the server can send verification failure information to the terminal.
- the server transfers the value to be transferred from the value stored corresponding to the first user ID to the value stored corresponding to the second user ID.
- The server performs the above-mentioned operation of transferring the value to be transferred from the value stored corresponding to the first user identification to the value stored corresponding to the second user identification, so that the first user completes the transfer of the value to be transferred to the second user.
- The server performs the value transfer based on the camera identification of the camera component and the face image carrying the identification information. In some embodiments, when the value transfer is completed, the server can also send a transfer success message to the terminal to notify the terminal that the value transfer operation has been performed successfully.
- In the method provided by the embodiments of the present disclosure, the terminal calls the camera component to collect the face image; when the camera component collects any face image, the face image information is read from the buffer area. The face image information is used to indicate the number of all face images the camera component has collected in history. Identification information, which is used to represent the face image information, is embedded in the face image to obtain a face image carrying the identification information, and value transfer is performed based on the camera identification of the camera component and the face image carrying the identification information.
- The identification information can be embedded directly in the face image collected by the camera component, increasing the security of the face images collected by the camera component. Even if a face image is leaked, an attacker who steals historical face images to request value transfer services will still fail verification because the identification information does not match, thereby effectively guaranteeing the security of the value transfer process.
- The server receives a value transfer request, where the value transfer request includes at least a camera identification, a face image carrying identification information, a value to be transferred, a first user identification, and a second user identification, and the identification information is used to indicate the face image information. Face recognition is performed on the face area of the face image to obtain a recognition result; if the recognition result is passed, the value to be transferred is transferred from the value stored corresponding to the first user identification to the value stored corresponding to the second user identification.
- The server can determine, by comparing the identification information with each piece of stored historical identification information, whether the face image carrying the identification information is a stolen historical face image. This adds a way of verifying the face image in the time dimension and can cope with more complex network attacks: even if the face image is leaked and an attacker steals the historical face image to request the value transfer service, verification still fails because the identification information does not match, thereby effectively ensuring the security of the value transfer process.
- FIG. 5 is a schematic diagram of a value transfer method provided by an embodiment of the present disclosure. Please refer to FIG. 5.
- In the value transfer method provided by the embodiments of the present disclosure, by internally improving the camera component of the face-swiping payment terminal, security precautions can be added to the camera component that collects the face data source without requiring hardware upgrades on the host side. This strictly guarantees the security of the face data source and can effectively resist "replay attacks" on face data.
- the camera component of the terminal stores the face image information in FLASH.
- After the DSP obtains the collected face data stream from the SENSOR, it can read the face image count (that is, the face image information) from FLASH and embed a blind watermark (that is, the identification information) in each frame of face image in the face data stream. The blind watermark is used to indicate the count of the face image. The DSP sends the face image carrying the blind watermark to the host, and the host sends the face image carrying the blind watermark to the server.
- The server stores, in the background, an incremental sequence corresponding to the camera identification of the camera component; each historical blind watermark in the incremental sequence (that is, each historical face image count) increases over time.
- If the blind watermark carried in the face image transmitted this time is greater than each historical blind watermark in the incremental sequence, the face image transmitted this time is considered legal; otherwise, the face image transmitted this time is considered illegal. The validity of the sequence is thus verified through the above process, which ensures the security of the face image transmission process and avoids "replay attacks" on face images. Further, face recognition is performed on the face image only when the currently transmitted face image is confirmed to be legal, and the value transfer operation is performed only when the recognition result of the face recognition is passed, which also guarantees the security of the value transfer process.
- FIG. 6 is a schematic structural diagram of a face image transmission device provided by an embodiment of the present disclosure. Please refer to FIG. 6.
- the device includes:
- the reading module 601 is used to read the face image information from the buffer area when any face image is collected by the camera assembly.
- the face image information is used to indicate the number of all face images collected by the camera assembly in history;
- the embedding module 602 is configured to embed identification information in the face image to obtain a face image carrying the identification information, and the identification information is used to represent the face image information;
- the sending module 603 is configured to send the face image carrying the identification information.
- The device provided by the embodiments of the present disclosure can read the face image information from the buffer area when any face image is collected by the camera component; the face image information is used to indicate the number of all face images the camera component has collected in history. Identification information, which is used to represent the face image information, is embedded in the face image to obtain a face image carrying the identification information, and the face image carrying the identification information can then be sent.
- Because the identification information is embedded directly in the face image collected by the camera component, the security of the face images collected by the camera component is increased: even if a face image is leaked and an attacker uses the historical face image to request related services, verification cannot pass because the identification information does not match, which effectively guarantees the security of the face image transmission process.
- the reading module 601 is used to:
- the maximum value in the target list in the buffer area is determined as the face image information, and each value stored in the target list corresponds to a number of face images.
- the device is also used to:
- the value obtained by adding the identification information and the first target value is written into the target list in the buffer area.
- the reading module 601 is used to:
- the value stored in the target address of the buffer area is determined as the face image information.
- the device is also used to:
- the value stored in the target address is set as the value obtained by adding the identification information and the second target value.
- the identification information is a value obtained by adding the face image information and the third target value.
- the embedded module 602 is used to:
- the identification information is embedded in any area of the face image except the face area.
- the embedded module 602 is used to: perform a Fourier transform on the face image to obtain a spectrum image of the face image; embed the identification information in any area of the spectrum image other than the face area; and perform an inverse Fourier transform on the spectrum image carrying the identification information to obtain the face image carrying the identification information.
- when the face image transmission device provided in the above embodiment transmits a face image, the division into the above-mentioned functional modules is used only as an example for illustration. In practical applications, the above-mentioned functions can be allocated to different functional modules as needed; that is, the internal structure of the electronic device (such as a terminal) is divided into different functional modules to complete all or part of the functions described above.
- the face image transmission device provided in the foregoing embodiment belongs to the same concept as the face image transmission method embodiment, and its implementation process is detailed in the face image transmission method embodiment, which will not be repeated here.
- FIG. 7 is a schematic structural diagram of a value transfer device provided by an embodiment of the present disclosure. Please refer to FIG. 7.
- the device includes:
- the collection module 701 is used to call the camera component to collect a face image when a trigger operation on the value transfer option is detected;
- the reading module 702 is used to read face image information from the buffer area when any face image is collected by the camera assembly, the face image information being used to indicate the number of all face images historically collected by the camera assembly;
- the embedding module 703 is configured to embed identification information in the face image to obtain a face image carrying the identification information, and the identification information is used to represent the face image information;
- the value transfer module 704 is configured to perform value transfer based on the camera identification of the camera assembly and the face image carrying the identification information.
- in the embodiments of the present disclosure, the terminal calls the camera component to collect a face image; when the camera component collects any face image, it reads face image information from the buffer area, the face image information being used to indicate the number of all face images historically collected by the camera component; embeds identification information in the face image to obtain a face image carrying the identification information, the identification information being used to represent the face image information; and performs value transfer based on the camera identification of the camera component and the face image carrying the identification information.
- because the identification information is embedded directly in the face image collected by the camera component, the security of the collected face image is increased. Even if the face image is leaked, an attacker who steals a historical face image to request a value transfer service will still fail verification because the identification information does not match, which effectively guarantees the security of the value transfer process.
- the value transfer module 704 is used to:
- generating a value transfer request, the value transfer request including at least the camera identification of the camera component, the face image carrying the identification information, the value to be transferred, the first user identification, and the second user identification;
- sending the value transfer request to the server, whereby the server performs identity verification based on the camera identification of the camera component and the face image carrying the identification information, and, when the verification passes, transfers the value to be transferred from the value stored corresponding to the first user identification to the value stored corresponding to the second user identification.
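The request described above can be sketched as a simple payload. The following minimal Python illustration uses hypothetical field names standing in for whatever wire format an actual implementation would use; only the set of required fields comes from the disclosure.

```python
import json

def build_value_transfer_request(camera_id, face_image_b64, amount,
                                 first_user_id, second_user_id):
    """Assemble the fields the value transfer request must at least carry:
    the camera identification, the face image carrying the identification
    information, the value to be transferred, and the two user identifications."""
    return json.dumps({
        "camera_id": camera_id,
        "face_image": face_image_b64,  # image already carries the embedded id
        "amount": amount,
        "from_user": first_user_id,
        "to_user": second_user_id,
    })

request = build_value_transfer_request("cam-01", "<base64 image>", 25,
                                       "user-a", "user-b")
```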
- when the value transfer device provided in the above embodiment performs a value transfer, the division into the above-mentioned functional modules is used only as an example for illustration. In practical applications, the above-mentioned functions can be allocated to different functional modules as needed; that is, the internal structure of the electronic device (such as a terminal) is divided into different functional modules to complete all or part of the functions described above.
- the numerical value transfer device provided in the foregoing embodiment and the numerical value transfer method embodiment belong to the same concept, and the implementation process is detailed in the numerical value transfer method embodiment, which will not be repeated here.
- FIG. 8 is a schematic structural diagram of a value transfer device provided by an embodiment of the present disclosure. Please refer to FIG. 8.
- the device includes:
- the receiving module 801 is configured to receive a value transfer request.
- the value transfer request includes at least a camera identification, a face image carrying identification information, a value to be transferred, a first user identification, and a second user identification, and the identification information is used to indicate that the face image is an image collected in real time;
- the recognition module 802 is configured to perform face recognition on the face area of the face image when the identification information is greater than each historical identification information stored corresponding to the camera identification to obtain a recognition result;
- the value transfer module 803 is configured to transfer the value to be transferred from the value stored corresponding to the first user identification to the value stored corresponding to the second user identification when the recognition result is a pass.
- the device provided by the embodiments of the present disclosure receives a value transfer request, which includes at least a camera identification, a face image carrying identification information, a value to be transferred, a first user identification, and a second user identification, the identification information being used to indicate that the face image is an image collected in real time.
- when the identification information is greater than each historical identification information stored corresponding to the camera identification, face recognition is performed on the face area of the face image to obtain a recognition result.
- when the recognition result is a pass, the value to be transferred is transferred from the value stored corresponding to the first user identification to the value stored corresponding to the second user identification.
- by comparing the identification information with each stored historical identification information, the server can determine whether the face image carrying the identification information is a stolen historical face image. This adds a way to verify the face image in the time dimension and can cope with more complex network attacks: even if the face image is leaked, an attacker who steals a historical face image to request a value transfer service will still fail verification because the identification information does not match, which effectively guarantees the security of the value transfer process.
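The server-side check can be sketched as: compare the incoming identification against every historical identification stored for that camera, and only then run face recognition. A minimal Python illustration under those assumptions follows; the recognition step is stubbed out, and all names are hypothetical.

```python
# Hypothetical sketch of the server-side freshness check: an identification
# embedded in a replayed (historical) face image will not exceed the stored
# historical identifications, so the request is rejected before recognition.

history = {"cam-01": [1, 2, 3]}  # historical identification info per camera

def face_recognition_passes(face_image):
    """Stub standing in for the actual face recognition step."""
    return True

def handle_value_transfer(camera_id, identification, face_image,
                          amount, balances, from_user, to_user):
    past = history.setdefault(camera_id, [])
    # freshness check: must be strictly greater than every stored value
    if past and identification <= max(past):
        return False  # replayed historical face image: verification fails
    if not face_recognition_passes(face_image):
        return False
    past.append(identification)
    balances[from_user] -= amount  # transfer the value between the two users
    balances[to_user] += amount
    return True

balances = {"user-a": 100, "user-b": 0}
assert not handle_value_transfer("cam-01", 2, None, 25, balances,
                                 "user-a", "user-b")  # stale id rejected
assert handle_value_transfer("cam-01", 4, None, 25, balances,
                             "user-a", "user-b")      # fresh id accepted
```

Note that the stale request is rejected before face recognition is ever run, so a replayed image cannot trigger a transfer even if the face itself would match.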
- when the value transfer device provided in the above embodiment performs a value transfer, the division into the above-mentioned functional modules is used only as an example for illustration. In practical applications, the above-mentioned functions can be allocated to different functional modules as needed; that is, the internal structure of the electronic device (such as a server) is divided into different functional modules to complete all or part of the functions described above.
- the numerical value transfer device provided in the foregoing embodiment and the numerical value transfer method embodiment belong to the same concept, and the implementation process is detailed in the numerical value transfer method embodiment, which will not be repeated here.
- FIG. 9 is a structural block diagram of a terminal 900 provided by an embodiment of the present disclosure.
- the terminal 900 is also an electronic device.
- the terminal 900 may be: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop computer, or a desktop computer.
- the terminal 900 may also be called user equipment, portable terminal, laptop terminal, desktop terminal, and other names.
- the terminal 900 includes a processor 901 and a memory 902.
- the processor 901 can include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on.
- the processor 901 can be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), or PLA (Programmable Logic Array).
- the processor 901 can also include a main processor and a coprocessor.
- the main processor is a processor used to process data in the awake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor used to process data in the standby state.
- the processor 901 is integrated with a GPU (Graphics Processing Unit), and the GPU is used for rendering and drawing content that needs to be displayed on the display screen.
- the processor 901 includes an AI (Artificial Intelligence) processor, and the AI processor is used to process computing operations related to machine learning.
- the memory 902 can include one or more computer-readable storage media, which can be non-transitory.
- the memory 902 can also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices and flash memory storage devices.
- the non-transitory computer-readable storage medium in the memory 902 is used to store at least one instruction, and the at least one instruction is used to be executed by the processor 901 to implement the following operations:
- when the camera assembly collects any face image, reading face image information from the buffer area, the face image information being used to indicate the number of all face images historically collected by the camera assembly;
- embedding identification information in the face image to obtain a face image carrying the identification information, the identification information being used to represent the face image information;
- sending the face image carrying the identification information.
- the at least one instruction is used to be executed by the processor 901 to implement the following operations:
- the maximum value in the target list in the buffer area is determined as the face image information, and each value stored in the target list corresponds to a number of face images.
- the at least one instruction is used to be executed by the processor 901 to implement the following operations:
- the value obtained by adding the identification information and the first target value is written into the target list in the buffer area.
- the at least one instruction is used to be executed by the processor 901 to implement the following operations:
- the value stored in the target address of the buffer area is determined as the face image information.
- the at least one instruction is used to be executed by the processor 901 to implement the following operations:
- the value stored in the target address is set as the value obtained by adding the identification information and the second target value.
- the identification information is a value obtained by adding the face image information and the third target value.
- the at least one instruction is used to be executed by the processor 901 to implement the following operations:
- the identification information is embedded in any area of the face image except the face area.
- the at least one instruction is used to be executed by the processor 901 to implement the following operations: performing a Fourier transform on the face image to obtain a spectrum image of the face image; embedding the identification information in any area of the spectrum image other than the face area; and performing an inverse Fourier transform on the spectrum image carrying the identification information to obtain the face image carrying the identification information.
- the at least one instruction is used to be executed by the processor 901 to implement the following operations:
- when a trigger operation on the value transfer option is detected, calling the camera assembly to collect a face image;
- when the camera assembly collects any face image, reading face image information from the buffer area, the face image information being used to indicate the number of all face images historically collected by the camera assembly;
- embedding identification information in the face image to obtain a face image carrying the identification information, the identification information being used to represent the face image information;
- performing value transfer based on the camera identification of the camera assembly and the face image carrying the identification information.
- the at least one instruction is used to be executed by the processor 901 to implement the following operations:
- generating a value transfer request, the value transfer request including at least the camera identification of the camera component, the face image carrying the identification information, the value to be transferred, the first user identification, and the second user identification;
- sending the value transfer request to the server, whereby the server performs identity verification based on the camera identification of the camera component and the face image carrying the identification information, and, when the verification passes, transfers the value to be transferred from the value stored corresponding to the first user identification to the value stored corresponding to the second user identification.
- the terminal 900 can further include: a peripheral device interface 903 and at least one peripheral device.
- the processor 901, the memory 902, and the peripheral device interface 903 can be connected by a bus or a signal line.
- Each peripheral device can be connected to the peripheral device interface 903 through a bus, a signal line, or a circuit board.
- the peripheral device includes at least one of a radio frequency circuit 904, a touch display screen 905, a camera assembly 906, an audio circuit 907, a positioning assembly 908, and a power supply 909.
- the peripheral device interface 903 can be used to connect at least one peripheral device related to I/O (Input/Output) to the processor 901 and the memory 902.
- in some embodiments, the processor 901, the memory 902, and the peripheral device interface 903 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 901, the memory 902, and the peripheral device interface 903 can be implemented on a separate chip or circuit board, which is not limited in this embodiment.
- the radio frequency circuit 904 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals.
- the radio frequency circuit 904 communicates with a communication network and other communication devices through electromagnetic signals.
- the radio frequency circuit 904 converts electrical signals into electromagnetic signals for transmission, or converts received electromagnetic signals into electrical signals.
- the radio frequency circuit 904 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on.
- the radio frequency circuit 904 can communicate with other terminals through at least one wireless communication protocol.
- the wireless communication protocol includes, but is not limited to: metropolitan area networks, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks.
- the radio frequency circuit 904 can also include a circuit related to NFC (Near Field Communication), which is not limited in the present disclosure.
- the display screen 905 is used to display a UI (User Interface).
- the UI can include graphics, text, icons, videos, and any combination thereof.
- the display screen 905 also has the ability to collect touch signals on or above the surface of the display screen 905.
- the touch signal can be input to the processor 901 as a control signal for processing.
- the display screen 905 can also be used to provide virtual buttons and/or virtual keyboards, also called soft buttons and/or soft keyboards.
- in some embodiments, one display screen 905 is provided on the front panel of the terminal 900; in other embodiments, there are at least two display screens 905, which are respectively provided on different surfaces of the terminal 900 or adopt a folding design;
- the display screen 905 is a flexible display screen, and is arranged on the curved surface or the folding surface of the terminal 900.
- the display screen 905 can also be configured as a non-rectangular irregular pattern, that is, a special-shaped screen.
- the display screen 905 can be made of materials such as LCD (Liquid Crystal Display) and OLED (Organic Light-Emitting Diode).
- the camera assembly 906 is used to capture images or videos.
- the camera assembly 906 includes a front camera and a rear camera.
- the front camera is set on the front panel of the terminal, and the rear camera is set on the back of the terminal.
- the camera assembly 906 also includes a flash.
- the flash can be a single-color temperature flash or a dual-color temperature flash. Dual color temperature flash refers to a combination of warm light flash and cold light flash, which can be used for light compensation under different color temperatures.
- the audio circuit 907 can include a microphone and a speaker.
- the microphone is used to collect sound waves of the user and the environment, and convert the sound waves into electrical signals to be input to the processor 901 for processing, or input to the radio frequency circuit 904 to implement voice communication.
- the microphone can also be an array microphone or an omnidirectional collection microphone.
- the speaker is used to convert the electrical signal from the processor 901 or the radio frequency circuit 904 into sound waves.
- the speaker can be a traditional thin-film speaker or a piezoelectric ceramic speaker.
- when the speaker is a piezoelectric ceramic speaker, it can not only convert electrical signals into sound waves audible to humans, but also convert electrical signals into sound waves inaudible to humans for purposes such as distance measurement.
- the audio circuit 907 can also include a headphone jack.
- the positioning component 908 is used to locate the current geographic location of the terminal 900 to implement navigation or LBS (Location Based Service).
- the positioning component 908 can be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
- the power supply 909 is used to supply power to various components in the terminal 900.
- the power source 909 can be an alternating current, a direct current, a disposable battery, or a rechargeable battery.
- the rechargeable battery can support wired charging or wireless charging.
- the rechargeable battery can also be used to support fast charging technology.
- the terminal 900 further includes one or more sensors 910.
- the one or more sensors 910 include, but are not limited to: an acceleration sensor 911, a gyroscope sensor 912, a pressure sensor 913, a fingerprint sensor 914, an optical sensor 915, and a proximity sensor 916.
- the acceleration sensor 911 can detect the magnitude of acceleration on the three coordinate axes of the coordinate system established by the terminal 900.
- the acceleration sensor 911 can be used to detect the components of gravitational acceleration on three coordinate axes.
- the processor 901 can control the touch screen 905 to display the user interface in a horizontal view or a vertical view according to the gravity acceleration signal collected by the acceleration sensor 911.
- the acceleration sensor 911 can also be used for game or user motion data collection.
- the gyroscope sensor 912 can detect the body direction and the rotation angle of the terminal 900, and the gyroscope sensor 912 can cooperate with the acceleration sensor 911 to collect the user's 3D actions on the terminal 900.
- the processor 901 can implement the following functions according to the data collected by the gyroscope sensor 912: motion sensing (such as changing the UI according to the user's tilt operation), image stabilization during shooting, game control, and inertial navigation.
- the pressure sensor 913 can be disposed on the side frame of the terminal 900 and/or the lower layer of the touch screen 905.
- the pressure sensor 913 can detect the user's holding signal of the terminal 900, and the processor 901 performs left and right hand recognition or quick operation according to the holding signal collected by the pressure sensor 913.
- the processor 901 implements control of the operability controls on the UI interface according to the user's pressure operation on the touch display screen 905.
- the operability control includes at least one of a button control, a scroll bar control, an icon control, and a menu control.
- the fingerprint sensor 914 is used to collect the user's fingerprint, and the processor 901 identifies the user's identity according to the fingerprint collected by the fingerprint sensor 914, or the fingerprint sensor 914 itself identifies the user's identity from the collected fingerprint. When the user's identity is recognized as a trusted identity, the processor 901 authorizes the user to perform related sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, and changing settings.
- the fingerprint sensor 914 can be provided on the front, back, or side of the terminal 900. When a physical button or a manufacturer logo is provided on the terminal 900, the fingerprint sensor 914 can be integrated with the physical button or the manufacturer logo.
- the optical sensor 915 is used to collect the ambient light intensity.
- the processor 901 can control the display brightness of the touch screen 905 according to the ambient light intensity collected by the optical sensor 915. In some embodiments, when the ambient light intensity is high, the display brightness of the touch screen 905 is increased; when the ambient light intensity is low, the display brightness of the touch screen 905 is decreased. In another embodiment, the processor 901 can also dynamically adjust the shooting parameters of the camera assembly 906 according to the ambient light intensity collected by the optical sensor 915.
- the proximity sensor 916 also called a distance sensor, is usually arranged on the front panel of the terminal 900.
- the proximity sensor 916 is used to collect the distance between the user and the front of the terminal 900.
- when the proximity sensor 916 detects that the distance between the user and the front of the terminal 900 gradually decreases, the processor 901 controls the touch display screen 905 to switch from the bright-screen state to the rest-screen state; when the proximity sensor 916 detects that the distance between the user and the front of the terminal 900 gradually increases, the processor 901 controls the touch display screen 905 to switch from the rest-screen state to the bright-screen state.
- the structure shown in FIG. 9 does not constitute a limitation on the terminal 900, which can include more or fewer components than shown in the figure, combine certain components, or adopt a different component arrangement.
- FIG. 10 is a schematic structural diagram of a server provided by an embodiment of the present disclosure.
- the server 1000 is also an electronic device.
- the server 1000 may have relatively large differences due to differences in configuration or performance, and may include one or more processors (Central Processing Units, CPU) 1001 and one or more memories 1002, where at least one piece of program code is stored in the memory 1002, and the at least one piece of program code is loaded and executed by the processor 1001 to implement the following operations:
- receiving a value transfer request, the value transfer request including at least a camera identification, a face image carrying identification information, a value to be transferred, a first user identification, and a second user identification, the identification information being used to indicate that the face image is an image collected in real time;
- when the identification information is greater than each historical identification information stored corresponding to the camera identification, performing face recognition on the face area of the face image to obtain a recognition result;
- when the recognition result is a pass, transferring the value to be transferred from the value stored corresponding to the first user identification to the value stored corresponding to the second user identification.
- the server 1000 can also have components such as a wired or wireless network interface, a keyboard, and an input/output interface, and the server 1000 can also include other components for implementing device functions, which will not be repeated here.
- a computer-readable storage medium such as a memory including at least one piece of program code, and the above-mentioned at least one piece of program code can be executed by a processor in a terminal to implement the following operations:
- when the camera assembly collects any face image, reading face image information from the buffer area, the face image information being used to indicate the number of all face images historically collected by the camera assembly;
- embedding identification information in the face image to obtain a face image carrying the identification information, the identification information being used to represent the face image information;
- sending the face image carrying the identification information.
- the foregoing at least one piece of program code may be executed by a processor in the terminal to implement the following operations:
- the maximum value in the target list in the buffer area is determined as the face image information, and each value stored in the target list corresponds to a number of face images.
- the foregoing at least one piece of program code may be executed by a processor in the terminal to implement the following operations:
- the value obtained by adding the identification information and the first target value is written into the target list in the buffer area.
- the foregoing at least one piece of program code may be executed by a processor in the terminal to implement the following operations:
- the value stored in the target address of the buffer area is determined as the face image information.
- the foregoing at least one piece of program code may be executed by a processor in the terminal to implement the following operations:
- the value stored in the target address is set as the value obtained by adding the identification information and the second target value.
- the identification information is a value obtained by adding the face image information and the third target value.
- the foregoing at least one piece of program code may be executed by a processor in the terminal to implement the following operations:
- the identification information is embedded in any area of the face image except the face area.
- the foregoing at least one piece of program code may be executed by a processor in the terminal to implement the following operations: performing a Fourier transform on the face image to obtain a spectrum image of the face image; embedding the identification information in any area of the spectrum image other than the face area; and performing an inverse Fourier transform on the spectrum image carrying the identification information to obtain the face image carrying the identification information.
- the computer-readable storage medium can be a ROM (Read-Only Memory), a RAM (Random-Access Memory), a CD-ROM (Compact Disc Read-Only Memory), a magnetic tape, a floppy disk, an optical data storage device, or the like.
- a computer program or computer program product including at least one piece of program code is also provided, which, when run on a computer device, causes the computer device to execute any possible implementation of the face image transmission method or the value transfer method provided by the foregoing embodiments; details will not be repeated here.
- the program can be stored in a computer-readable storage medium, as mentioned above.
- the storage medium can be read-only memory, magnetic disk or optical disk, etc.
Claims (36)
- A face image transmission method, applied to a terminal, the method comprising: when a camera assembly collects any face image, reading face image information from a buffer area, the face image information being used to indicate the number of all face images historically collected by the camera assembly; embedding identification information in the face image to obtain a face image carrying the identification information, the identification information being used to represent the face image information; and sending the face image carrying the identification information.
- The method according to claim 1, wherein reading the face image information from the buffer area comprises: determining the maximum value in a target list in the buffer area as the face image information, each value stored in the target list corresponding to a number of face images.
- The method according to claim 2, wherein after embedding the identification information in the face image to obtain the face image carrying the identification information, the method further comprises: writing into the target list in the buffer area the value obtained by adding the identification information and a first target value.
- The method according to claim 1, wherein reading the face image information from the buffer area comprises: determining the value stored at a target address in the buffer area as the face image information.
- The method according to claim 4, wherein after embedding the identification information in the face image to obtain the face image carrying the identification information, the method further comprises: setting the value stored at the target address to the value obtained by adding the identification information and a second target value.
- The method according to claim 1, wherein the identification information is a value obtained by adding the face image information and a third target value.
- The method according to any one of claims 1 to 6, wherein embedding the identification information in the face image comprises: embedding the identification information in any area of the face image other than the face area.
- The method according to claim 1, wherein embedding the identification information in the face image to obtain the face image carrying the identification information comprises: performing a Fourier transform on the face image to obtain a spectrum image of the face image; embedding the identification information in any area of the spectrum image other than the face area; and performing an inverse Fourier transform on the spectrum image carrying the identification information to obtain the face image carrying the identification information.
- A value transfer method, applied to a terminal, the method comprising: when a trigger operation on a value transfer option is detected, calling a camera assembly to collect a face image; when the camera assembly collects any face image, reading face image information from a buffer area, the face image information being used to indicate the number of all face images historically collected by the camera assembly; embedding identification information in the face image to obtain a face image carrying the identification information, the identification information being used to represent the face image information; and performing value transfer based on the camera identification of the camera assembly and the face image carrying the identification information.
- The method according to claim 9, wherein performing the value transfer based on the camera identification of the camera assembly and the face image carrying the identification information comprises: generating a value transfer request, the value transfer request including at least the camera identification of the camera assembly, the face image carrying the identification information, a value to be transferred, a first user identification, and a second user identification; and sending the value transfer request to a server, whereby the server performs identity verification based on the camera identification of the camera assembly and the face image carrying the identification information, and, when the verification passes, transfers the value to be transferred from the value stored corresponding to the first user identification to the value stored corresponding to the second user identification.
- A value transfer method, applied to a server, the method comprising: receiving a value transfer request, the value transfer request including at least a camera identification, a face image carrying identification information, a value to be transferred, a first user identification, and a second user identification, the identification information being used to indicate that the face image is an image collected in real time; when the identification information is greater than each historical identification information stored corresponding to the camera identification, performing face recognition on the face area of the face image to obtain a recognition result; and when the recognition result is a pass, transferring the value to be transferred from the value stored corresponding to the first user identification to the value stored corresponding to the second user identification.
- A face image transmission apparatus, applied to a terminal, the apparatus comprising: a reading module, configured to read face image information from a buffer area when a camera assembly collects any face image, the face image information being used to indicate the number of all face images historically collected by the camera assembly; an embedding module, configured to embed identification information in the face image to obtain a face image carrying the identification information, the identification information being used to represent the face image information; and a sending module, configured to send the face image carrying the identification information.
- The apparatus according to claim 12, wherein the reading module is configured to: determine the maximum value in a target list in the buffer area as the face image information, each value stored in the target list corresponding to a number of face images.
- The apparatus according to claim 13, wherein the apparatus is further configured to: write into the target list in the buffer area the value obtained by adding the identification information and a first target value.
- The apparatus according to claim 12, wherein the reading module is configured to: determine the value stored at a target address in the buffer area as the face image information.
- The apparatus according to claim 15, wherein the apparatus is further configured to: set the value stored at the target address to the value obtained by adding the identification information and a second target value.
- The apparatus according to claim 12, wherein the identification information is a value obtained by adding the face image information and a third target value.
- The apparatus according to any one of claims 12 to 17, wherein the embedding module is configured to: embed the identification information in any area of the face image other than the face area.
- The apparatus according to claim 12, wherein the embedding module is configured to: perform a Fourier transform on the face image to obtain a spectrum image of the face image; embed the identification information in any area of the spectrum image other than the face area; and perform an inverse Fourier transform on the spectrum image carrying the identification information to obtain the face image carrying the identification information.
- A value transfer apparatus, applied to a terminal, the apparatus comprising: a collection module, configured to call a camera assembly to collect a face image when a trigger operation on a value transfer option is detected; a reading module, configured to read face image information from a buffer area when the camera assembly collects any face image, the face image information being used to indicate the number of all face images historically collected by the camera assembly; an embedding module, configured to embed identification information in the face image to obtain a face image carrying the identification information, the identification information being used to represent the face image information; and a value transfer module, configured to perform value transfer based on the camera identification of the camera assembly and the face image carrying the identification information.
- The apparatus according to claim 20, wherein the value transfer module is configured to: generate a value transfer request, the value transfer request including at least the camera identification of the camera assembly, the face image carrying the identification information, a value to be transferred, a first user identification, and a second user identification; and send the value transfer request to a server, whereby the server performs identity verification based on the camera identification of the camera assembly and the face image carrying the identification information, and, when the verification passes, transfers the value to be transferred from the value stored corresponding to the first user identification to the value stored corresponding to the second user identification.
- A value transfer apparatus, applied to a server, the apparatus comprising: a receiving module, configured to receive a value transfer request, the value transfer request including at least a camera identification, a face image carrying identification information, a value to be transferred, a first user identification, and a second user identification, the identification information being used to indicate that the face image is an image collected in real time; a recognition module, configured to perform face recognition on the face area of the face image to obtain a recognition result when the identification information is greater than each historical identification information stored corresponding to the camera identification; and a value transfer module, configured to transfer the value to be transferred from the value stored corresponding to the first user identification to the value stored corresponding to the second user identification when the recognition result is a pass.
- An electronic device, comprising one or more processors and one or more memories, the one or more memories storing at least one piece of program code, the at least one piece of program code being loaded and executed by the one or more processors to implement the following operations: when a camera assembly collects any face image, reading face image information from a buffer area, the face image information being used to indicate the number of all face images historically collected by the camera assembly; embedding identification information in the face image to obtain a face image carrying the identification information, the identification information being used to represent the face image information; and sending the face image carrying the identification information.
- The electronic device according to claim 23, wherein the at least one piece of program code is loaded and executed by the one or more processors to implement the following operation: determining the maximum value in a target list in the buffer area as the face image information, each value stored in the target list corresponding to a number of face images.
- The electronic device according to claim 24, wherein the at least one piece of program code is loaded and executed by the one or more processors to implement the following operation: writing into the target list in the buffer area the value obtained by adding the identification information and a first target value.
- The electronic device according to claim 23, wherein the at least one piece of program code is loaded and executed by the one or more processors to implement the following operation: determining the value stored at a target address in the buffer area as the face image information.
- The electronic device according to claim 26, wherein the at least one piece of program code is loaded and executed by the one or more processors to implement the following operation: setting the value stored at the target address to the value obtained by adding the identification information and a second target value.
- The electronic device according to claim 23, wherein the identification information is a value obtained by adding the face image information and a third target value.
- The electronic device according to any one of claims 23 to 28, wherein the at least one piece of program code is loaded and executed by the one or more processors to implement the following operation: embedding the identification information in any area of the face image other than the face area.
- The electronic device according to claim 23, wherein the at least one piece of program code is loaded and executed by the one or more processors to implement the following operations: performing a Fourier transform on the face image to obtain a spectrum image of the face image; embedding the identification information in any area of the spectrum image other than the face area; and performing an inverse Fourier transform on the spectrum image carrying the identification information to obtain the face image carrying the identification information.
- An electronic device, comprising one or more processors and one or more memories, the one or more memories storing at least one piece of program code, the at least one piece of program code being loaded and executed by the one or more processors to implement the following operations: invoking a camera assembly to capture a face image when a trigger operation on a value transfer option is detected; reading face image information from a buffer when the camera assembly captures any face image, the face image information indicating the total number of face images historically captured by the camera assembly; embedding identification information into the face image to obtain a face image carrying the identification information, the identification information representing the face image information; and performing value transfer based on the camera identifier of the camera assembly and the face image carrying the identification information.
- The electronic device according to claim 31, wherein the at least one piece of program code is loaded and executed by the one or more processors to implement the following operations: generating a value transfer request, the value transfer request comprising at least the camera identifier of the camera assembly, the face image carrying the identification information, a to-be-transferred value, a first user identifier, and a second user identifier; and sending the value transfer request to a server, the server performing identity verification based on the camera identifier of the camera assembly and the face image carrying the identification information and, when the verification succeeds, transferring the to-be-transferred value from the value stored in correspondence with the first user identifier to the value stored in correspondence with the second user identifier.
- An electronic device, comprising one or more processors and one or more memories, the one or more memories storing at least one piece of program code, the at least one piece of program code being loaded and executed by the one or more processors to implement the following operations: receiving a value transfer request, the value transfer request comprising at least a camera identifier, a face image carrying identification information, a to-be-transferred value, a first user identifier, and a second user identifier, the identification information indicating that the face image is an image captured in real time; when the identification information is greater than every piece of historical identification information stored in correspondence with the camera identifier, performing face recognition on the face region of the face image to obtain a recognition result; and when the recognition result is a pass, transferring the to-be-transferred value from the value stored in correspondence with the first user identifier to the value stored in correspondence with the second user identifier.
- A storage medium, storing at least one piece of program code, the at least one piece of program code being loaded and executed by a processor to implement the following operations: reading face image information from a buffer when a camera assembly captures any face image, the face image information indicating the total number of face images historically captured by the camera assembly; embedding identification information into the face image to obtain a face image carrying the identification information, the identification information representing the face image information; and sending the face image carrying the identification information.
- A storage medium, storing at least one piece of program code, the at least one piece of program code being loaded and executed by a processor to implement the following operations: invoking a camera assembly to capture a face image when a trigger operation on a value transfer option is detected; reading face image information from a buffer when the camera assembly captures any face image, the face image information indicating the total number of face images historically captured by the camera assembly; embedding identification information into the face image to obtain a face image carrying the identification information, the identification information representing the face image information; and performing value transfer based on the camera identifier of the camera assembly and the face image carrying the identification information.
- A storage medium, storing at least one piece of program code, the at least one piece of program code being loaded and executed by a processor to implement the following operations: receiving a value transfer request, the value transfer request comprising at least a camera identifier, a face image carrying identification information, a to-be-transferred value, a first user identifier, and a second user identifier, the identification information indicating that the face image is an image captured in real time; when the identification information is greater than every piece of historical identification information stored in correspondence with the camera identifier, performing face recognition on the face region of the face image to obtain a recognition result; and when the recognition result is a pass, transferring the to-be-transferred value from the value stored in correspondence with the first user identifier to the value stored in correspondence with the second user identifier.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP20903189.7A EP3989113A4 (en) | 2019-12-16 | 2020-10-12 | FACE IMAGE TRANSMISSION METHOD, METHOD AND DEVICE FOR TRANSMISSION OF NUMERICAL VALUES AND ELECTRONIC DEVICE |
JP2022515017A JP7389236B2 (ja) | 2019-12-16 | 2020-10-12 | Face image transmission method, value transfer method, apparatus, and electronic device |
US17/528,079 US20220075998A1 (en) | 2019-12-16 | 2021-11-16 | Secure face image transmission method, apparatuses, and electronic device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911300268.6 | 2019-12-16 | ||
CN201911300268.6A CN111062323B (zh) | 2019-12-16 | 2019-12-16 | Face image transmission method, value transfer method, apparatus, and electronic device |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/528,079 Continuation US20220075998A1 (en) | 2019-12-16 | 2021-11-16 | Secure face image transmission method, apparatuses, and electronic device |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021120794A1 true WO2021120794A1 (zh) | 2021-06-24 |
Family
ID=70301845
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/120316 WO2021120794A1 (zh) | 2019-12-16 | 2020-10-12 | Face image transmission method, value transfer method, apparatus, and electronic device |
Country Status (5)
Country | Link |
---|---|
US (1) | US20220075998A1 (zh) |
EP (1) | EP3989113A4 (zh) |
JP (1) | JP7389236B2 (zh) |
CN (1) | CN111062323B (zh) |
WO (1) | WO2021120794A1 (zh) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111046365B (zh) * | 2019-12-16 | 2023-05-05 | Tencent Technology (Shenzhen) Co., Ltd. | Face image transmission method, value transfer method, apparatus, and electronic device |
CN111062323B (zh) * | 2019-12-16 | 2023-06-02 | Tencent Technology (Shenzhen) Co., Ltd. | Face image transmission method, value transfer method, apparatus, and electronic device |
CN113450121B (zh) * | 2021-06-30 | 2022-08-05 | Hunan Xiaozhifu Network Technology Co., Ltd. | Face recognition method for campus payment |
CN115205952B (zh) * | 2022-09-16 | 2022-11-25 | Shenzhen Penguin Network Technology Co., Ltd. | Deep-learning-based online learning image acquisition method and system |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106803289A (zh) * | 2016-12-22 | 2017-06-06 | Wuyi University | Intelligent mobile anti-counterfeiting check-in method and system |
CN107679861A (zh) * | 2017-08-30 | 2018-02-09 | Alibaba Group Holding Limited | Resource transfer method, fund payment method, apparatus, and electronic device |
CN108306886A (zh) * | 2018-02-01 | 2018-07-20 | Shenzhen Tencent Computer Systems Co., Ltd. | Identity verification method, apparatus, and storage medium |
US20180240212A1 (en) * | 2015-03-06 | 2018-08-23 | Digimarc Corporation | Digital watermarking applications |
CN111062323A (zh) * | 2019-12-16 | 2020-04-24 | Tencent Technology (Shenzhen) Co., Ltd. | Face image transmission method, value transfer method, apparatus, and electronic device |
Family Cites Families (32)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6212285B1 (en) * | 1998-04-15 | 2001-04-03 | Massachusetts Institute Of Technology | Method and apparatus for multi-bit zoned data hiding in printed images |
US7756892B2 (en) * | 2000-05-02 | 2010-07-13 | Digimarc Corporation | Using embedded data with file sharing |
US6608911B2 (en) * | 2000-12-21 | 2003-08-19 | Digimarc Corporation | Digitally watermaking holograms for use with smart cards |
JP2001052143A (ja) | 1999-08-09 | 2001-02-23 | Mega Chips Corp | Recording medium for authentication and authentication system |
JP2001094755A (ja) | 1999-09-20 | 2001-04-06 | Toshiba Corp | Image processing method |
US7663670B1 (en) * | 2001-02-09 | 2010-02-16 | Digital Imaging Systems Gmbh | Methods and systems for embedding camera information in images |
JP4541632B2 (ja) * | 2002-05-13 | 2010-09-08 | Panasonic Corporation | Digital watermark embedding apparatus and method, and recording medium |
US6782116B1 (en) * | 2002-11-04 | 2004-08-24 | Mediasec Technologies, Gmbh | Apparatus and methods for improving detection of watermarks in content that has undergone a lossy transformation |
US8509472B2 (en) * | 2004-06-24 | 2013-08-13 | Digimarc Corporation | Digital watermarking methods, programs and apparatus |
JP4799854B2 (ja) * | 2004-12-09 | 2011-10-26 | Sony Corporation | Information processing apparatus and method, and program |
US7370190B2 (en) * | 2005-03-03 | 2008-05-06 | Digimarc Corporation | Data processing systems and methods with enhanced bios functionality |
JP2007116506A (ja) | 2005-10-21 | 2007-05-10 | Fujifilm Corp | Face image medium for electronic application, and apparatus for creating, invalidating, and reissuing the medium |
US7861056B2 (en) * | 2007-01-03 | 2010-12-28 | Tekelec | Methods, systems, and computer program products for providing memory management with constant defragmentation time |
US8233677B2 (en) * | 2007-07-04 | 2012-07-31 | Sanyo Electric Co., Ltd. | Image sensing apparatus and image file data structure |
US8289562B2 (en) * | 2007-09-21 | 2012-10-16 | Fujifilm Corporation | Image processing apparatus, method and recording medium |
JP5071471B2 (ja) * | 2009-12-18 | 2012-11-14 | Konica Minolta Business Technologies, Inc. | Image forming apparatus, image forming method, and control program for controlling the image forming apparatus |
CN101777212A (zh) * | 2010-02-05 | 2010-07-14 | GRG Banking Equipment Co., Ltd. | Security card, card authentication system, financial device with the system, and authentication method |
US9626273B2 (en) * | 2011-11-09 | 2017-04-18 | Nec Corporation | Analysis system including analysis engines executing predetermined analysis and analysis executing part controlling operation of analysis engines and causing analysis engines to execute analysis |
CN104429075B (zh) * | 2012-07-09 | 2017-10-31 | Sun Patent Trust | Image encoding method, image decoding method, image encoding apparatus, and image decoding apparatus |
KR102106539B1 (ko) * | 2013-07-01 | 2020-05-28 | Samsung Electronics Co., Ltd. | Method and device for authenticating video content during a video call |
US20190238954A1 (en) * | 2016-06-22 | 2019-08-01 | Southern Star Corporation Pty Ltd | Systems and methods for delivery of audio and video content |
CN106485201B (zh) * | 2016-09-09 | 2019-06-28 | Capital Normal University | Color face recognition method in the hypercomplex encryption domain |
CN106920111B (zh) * | 2017-02-23 | 2018-06-26 | Peng Yuyan | Method and system for processing product production process information |
US10262191B2 (en) * | 2017-03-08 | 2019-04-16 | Morphotrust Usa, Llc | System and method for manufacturing and inspecting identification documents |
CN107066983B (zh) * | 2017-04-20 | 2022-08-09 | Tencent Technology (Shanghai) Co., Ltd. | Identity verification method and apparatus |
CN110869963A (zh) | 2017-08-02 | 2020-03-06 | Maxell, Ltd. | Biometric authentication payment system, payment system, and cash register system |
JP6946175B2 (ja) * | 2017-12-27 | 2021-10-06 | Fujifilm Corporation | Imaging control system, imaging control method, program, and recording medium |
US11176373B1 (en) * | 2018-01-12 | 2021-11-16 | Amazon Technologies, Inc. | System and method for visitor detection algorithm |
US10991064B1 (en) * | 2018-03-07 | 2021-04-27 | Adventure Soup Inc. | System and method of applying watermark in a digital image |
CN114780934A (zh) * | 2018-08-13 | 2022-07-22 | Advanced New Technologies Co., Ltd. | Identity verification method and apparatus |
CN110086954B (zh) * | 2019-03-26 | 2020-07-28 | Tongji University | Digital-watermark-based flight route encryption method and execution method |
CN110287841B (zh) * | 2019-06-17 | 2021-09-17 | Miaozhen Information Technology Co., Ltd. | Image transmission method and apparatus, image transmission system, and storage medium |
2019
- 2019-12-16 CN CN201911300268.6A patent/CN111062323B/zh active Active

2020
- 2020-10-12 EP EP20903189.7A patent/EP3989113A4/en active Pending
- 2020-10-12 WO PCT/CN2020/120316 patent/WO2021120794A1/zh active Application Filing
- 2020-10-12 JP JP2022515017A patent/JP7389236B2/ja active Active

2021
- 2021-11-16 US US17/528,079 patent/US20220075998A1/en active Pending
Non-Patent Citations (1)
Title |
---|
See also references of EP3989113A4 * |
Also Published As
Publication number | Publication date |
---|---|
EP3989113A1 (en) | 2022-04-27 |
JP7389236B2 (ja) | 2023-11-29 |
EP3989113A4 (en) | 2023-01-18 |
US20220075998A1 (en) | 2022-03-10 |
CN111062323A (zh) | 2020-04-24 |
CN111062323B (zh) | 2023-06-02 |
JP2022547923A (ja) | 2022-11-16 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20903189 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 20903189.7 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 2020903189 Country of ref document: EP Effective date: 20220124 |
|
ENP | Entry into the national phase |
Ref document number: 2022515017 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |