WO2021120794A1 - Face image transmission method, value transfer method, device, and electronic device - Google Patents

Face image transmission method, value transfer method, device, and electronic device

Info

Publication number
WO2021120794A1
WO2021120794A1 (PCT/CN2020/120316)
Authority
WO
WIPO (PCT)
Prior art keywords
face image
identification information
value
face
information
Prior art date
Application number
PCT/CN2020/120316
Other languages
English (en)
French (fr)
Inventor
王少鸣
耿志军
周俊
郭润增
Original Assignee
腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Co., Ltd.
Priority to EP20903189.7A priority Critical patent/EP3989113A4/en
Priority to JP2022515017A priority patent/JP7389236B2/ja
Publication of WO2021120794A1 publication Critical patent/WO2021120794A1/zh
Priority to US17/528,079 priority patent/US20220075998A1/en

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00Network architectures or network communication protocols for network security
    • H04L63/08Network architectures or network communication protocols for network security for authentication of entities
    • H04L63/0861Network architectures or network communication protocols for network security for authentication of entities using biometrical features, e.g. fingerprint, retina-scan
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • G06F21/32User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/606Protecting data by securing the transmission between two devices or processes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/00Payment architectures, schemes or protocols
    • G06Q20/38Payment protocols; Details thereof
    • G06Q20/40Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check credit lines or negative lists
    • G06Q20/401Transaction verification
    • G06Q20/4014Identity check for transactions
    • G06Q20/40145Biometric identity checks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/0021Image watermarking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/0021Image watermarking
    • G06T1/0028Adaptive watermarking, e.g. Human Visual System [HVS]-based watermarking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/60Memory management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/42Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • G06V10/431Frequency domain transformation; Autocorrelation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/94Hardware or software architectures specially adapted for image or video understanding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/90Identifying an image sensor based on its output data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40Spoof detection, e.g. liveness detection
    • G06V40/45Detection of the body part being alive
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/32Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N1/32101Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
    • H04N1/32144Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title embedded in the image data, i.e. enclosed or integrated in the image, e.g. watermark, super-imposed logo or stamp
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • H04N23/611Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/62Control of parameters via user interfaces
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/66Remote control of cameras or camera parts, e.g. by remote control devices
    • H04N23/661Transmitting camera control signals through networks, e.g. control via the Internet
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2201/00General purpose image data processing
    • G06T2201/005Image watermarking
    • G06T2201/0052Embedding of the watermark in the frequency domain
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2201/00General purpose image data processing
    • G06T2201/005Image watermarking
    • G06T2201/0083Image watermarking whereby only watermarked image required at decoder, e.g. source-based, blind, oblivious
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N2201/00Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof
    • H04N2201/32Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N2201/3201Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
    • H04N2201/3204Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title of data relating to a user, sender, addressee, machine or electronic recording medium
    • H04N2201/3205Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title of data relating to a user, sender, addressee, machine or electronic recording medium of identification information, e.g. name or ID code
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/188Capturing isolated or intermittent images triggered by the occurrence of a predetermined event, e.g. an object reaching a predetermined position

Definitions

  • the present disclosure relates to the field of network technology, and in particular to a face image transmission method, value transfer method, device, electronic equipment, and storage medium.
  • users can trigger value transfer operations on the terminal. For example, the terminal first verifies, based on face recognition technology, whether the user is the genuine account holder, and performs the value transfer operation after the verification passes.
  • after the camera of the terminal collects the user's face image, it directly sends the face image (also called raw data) to the processor of the terminal, and the processor of the terminal uploads the face image to the server.
  • the server performs face recognition on the face image to generate a recognition result and sends the recognition result to the terminal, so that when the recognition result indicates the user is the genuine person, the subsequent value transfer operation is triggered.
  • the embodiments of the present disclosure provide a face image transmission method, value transfer method, device, electronic equipment, and storage medium.
  • the technical scheme is as follows:
  • a face image transmission method which is applied to a terminal, and the method includes:
  • when the camera assembly collects any face image, read the face image information from the buffer area, where the face image information is used to indicate the number of all face images collected by the camera assembly in history;
  • a value transfer method which is applied to a terminal, and the method includes:
  • when the camera assembly collects any face image, read face image information from the buffer area, where the face image information is used to indicate the number of all face images that have been collected by the camera assembly in history;
  • a value transfer method which is applied to a server, and the method includes:
  • the value transfer request includes at least a camera identification, a face image carrying identification information, a value to be transferred, a first user identification, and a second user identification, and the identification information is used to indicate that the face image is an image acquired in real time;
  • the value to be transferred is transferred from the value stored corresponding to the first user identifier to the value stored corresponding to the second user identifier.
  • a face image transmission device which is applied to a terminal, and the device includes:
  • the reading module is used to read the face image information from the buffer area when any face image is collected by the camera assembly, where the face image information is used to indicate the number of all face images that have been collected by the camera assembly in history;
  • An embedding module configured to embed identification information in the face image to obtain a face image carrying the identification information, and the identification information is used to represent the face image information;
  • the sending module is used to send the face image carrying the identification information.
  • the reading module is used to:
  • the maximum value in the target list in the buffer area is determined as the face image information, and each value stored in the target list corresponds to a number of face images.
  • the device is also used for:
  • the value obtained by adding the identification information and the first target value is written into the target list in the buffer area.
  • the reading module is used to:
  • the value stored in the target address of the buffer area is determined as the face image information.
  • the device is also used for:
  • the value stored in the target address is set to the value obtained by adding the identification information and the second target value.
  • the identification information is a value obtained by adding the face image information and a third target value.
  • the embedded module is used to:
  • the identification information is embedded in any area except the face area in the face image.
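The disclosure does not tie the embedding step above to a specific algorithm. A minimal illustrative sketch, assuming 8-bit grayscale pixels and least-significant-bit embedding in the first pixels of the image (the function names and the LSB scheme are hypothetical, not the patented method):

```python
def embed_identification(pixels: bytearray, identification: int, bits: int = 32) -> bytearray:
    """Embed an identification counter into the least-significant bits of
    the first `bits` pixels, assumed to lie outside the face region."""
    marked = bytearray(pixels)  # leave the caller's image untouched
    for i in range(bits):
        marked[i] = (marked[i] & 0xFE) | ((identification >> i) & 1)
    return marked

def extract_identification(pixels: bytearray, bits: int = 32) -> int:
    """Recover the embedded counter by reading the same LSBs back."""
    value = 0
    for i in range(bits):
        value |= (pixels[i] & 1) << i
    return value
```

Because only low-order bits in a non-face region change, the face area used for recognition is untouched; the classification entries on this page (G06T 2201/0052) suggest frequency-domain watermarking is also contemplated, which would be more robust to compression.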
  • the embedded module is used to:
  • a value transfer device which is applied to a terminal, and the device includes:
  • the acquisition module is used to call the camera component to collect the face image when a trigger operation on the value transfer option is detected;
  • the reading module is used to read face image information from the buffer area when any face image is collected by the camera assembly, where the face image information is used to represent all faces that have been collected by the camera assembly in history The number of images;
  • An embedding module configured to embed identification information in the face image to obtain a face image carrying the identification information, and the identification information is used to represent the face image information;
  • the numerical value transfer module is used to perform numerical value transfer based on the camera identification of the camera assembly and the face image carrying the identification information.
  • the value transfer module is used to:
  • the value transfer request including at least a camera identifier of the camera component, a face image carrying the identification information, a value to be transferred, a first user identifier, and a second user identifier;
  • the value transfer request is sent to the server, and the server performs identity verification based on the camera identification of the camera component and the face image carrying the identification information; when the verification passes, the server transfers the value to be transferred from the value stored corresponding to the first user identification to the value stored corresponding to the second user identification.
  • a value transfer device which is applied to a server, and the device includes:
  • the receiving module is configured to receive a value transfer request, the value transfer request including at least a camera identification, a face image carrying identification information, a value to be transferred, a first user identification, and a second user identification, and the identification information is used to indicate that the face image is an image acquired in real time;
  • a recognition module configured to perform face recognition on the face area of the face image to obtain a recognition result when the identification information is greater than each historical identification information stored corresponding to the camera identification;
  • the value transfer module is configured to transfer the value to be transferred from the value stored corresponding to the first user identifier to the value stored corresponding to the second user identifier when the recognition result is passed.
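The server-side check described by the receiving and recognition modules above — accept a request only when the identification information exceeds every historical value stored for the camera identification — can be sketched as follows. This is an in-memory toy with hypothetical names; a real server would keep the per-camera history in persistent storage.

```python
# Per-camera record of the largest identification seen so far; a real
# deployment would persist every historical value, not just the maximum.
seen_identifications: dict[str, int] = {}

def verify_identification(camera_id: str, identification: int) -> bool:
    """Accept a value transfer request only if the embedded identification
    is strictly greater than every historical identification stored for
    this camera, rejecting replayed (older) face images."""
    last = seen_identifications.get(camera_id, -1)
    if identification <= last:
        return False  # replayed or duplicated image: fail verification
    seen_identifications[camera_id] = identification
    return True
```

Since the camera's counter only ever grows, a stolen historical face image necessarily carries an identification no greater than one already recorded, so the request fails before face recognition is even attempted.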
  • an electronic device includes one or more processors and one or more memories, the one or more memories storing at least one piece of program code, and the at least one piece of program code is loaded and executed by the one or more processors to implement the following operations:
  • when the camera assembly collects any face image, read the face image information from the buffer area, where the face image information is used to indicate the number of all face images collected by the camera assembly in history;
  • an electronic device includes one or more processors and one or more memories, the one or more memories storing at least one piece of program code, and the at least one piece of program code is loaded and executed by the one or more processors to implement the following operations:
  • when the camera assembly collects any face image, read face image information from the buffer area, where the face image information is used to indicate the number of all face images that have been collected by the camera assembly in history;
  • an electronic device includes one or more processors and one or more memories, the one or more memories storing at least one piece of program code, and the at least one piece of program code is loaded and executed by the one or more processors to implement the following operations:
  • the value transfer request includes at least a camera identification, a face image carrying identification information, a value to be transferred, a first user identification, and a second user identification, and the identification information is used to indicate that the face image is an image acquired in real time;
  • the value to be transferred is transferred from the value stored corresponding to the first user identifier to the value stored corresponding to the second user identifier.
  • a storage medium stores at least one piece of program code.
  • the at least one piece of program code is loaded and executed by a processor to implement the operations performed by the face image transmission method or the value transfer method in any of the above-mentioned possible implementations.
  • when the camera component collects any face image, it can read the face image information from the buffer area.
  • the face image information is used to indicate the number of all face images that the camera component has collected in history.
  • the identification information is embedded in the image to obtain a face image carrying the identification information; the identification information is used to represent the face image information, and the face image carrying the identification information is sent.
  • because the identification information is embedded directly in the camera component, the security of the face image collected by the camera component is increased: even if the face image is leaked, when an attacker steals a historical face image to request related services, the request still cannot pass verification because the identification information is inconsistent, thereby effectively guaranteeing the security of the face image transmission process.
  • FIG. 1 is a schematic diagram of an implementation environment of a face image transmission method provided by an embodiment of the present disclosure
  • FIG. 2 is a schematic diagram of the appearance of a terminal 120 provided by an embodiment of the present disclosure
  • FIG. 3 is a flowchart of a method for transmitting a face image provided by an embodiment of the present disclosure
  • FIG. 4 is an interactive flowchart of a value transfer method provided by an embodiment of the present disclosure
  • FIG. 5 is a schematic diagram of a value transfer method provided by an embodiment of the present disclosure.
  • FIG. 6 is a schematic structural diagram of a face image transmission device provided by an embodiment of the present disclosure.
  • FIG. 7 is a schematic structural diagram of a value transfer device provided by an embodiment of the present disclosure.
  • FIG. 8 is a schematic structural diagram of a value transfer device provided by an embodiment of the present disclosure.
  • FIG. 9 is a structural block diagram of a terminal provided by an embodiment of the present disclosure.
  • FIG. 10 is a schematic structural diagram of a server provided by an embodiment of the present disclosure.
  • face images may be leaked during data transmission, and "replay attacks" may occur: after the terminal collects the attacker's face image, the attacker initiates a network attack while images are transmitted from the terminal to the server, replacing the attacker's face image with a stolen valid face image, so that the funds of the user corresponding to the valid face image are stolen.
  • the security of the face image transmission process is poor.
  • Fig. 1 is a schematic diagram of an implementation environment of a face image transmission method provided by an embodiment of the present disclosure.
  • the implementation environment can include a terminal 120 and a server 140, and both the terminal 120 and the server 140 can be referred to as electronic devices.
  • the terminal 120 is used for face image transmission.
  • the terminal 120 can include a camera component 122 and a host 124.
  • the camera component 122 is used to collect a face image and embed identification information in the collected face image; the face image carrying the identification information is then sent to the host 124. The host 124 can compress, encrypt, and encapsulate the face image carrying the identification information to obtain a data transmission message, and the host 124 sends the data transmission message to the server 140.
  • the camera component 122 is a 3D (three-dimensional) camera component.
  • the 3D camera component can have functions such as face recognition, gesture recognition, human skeleton recognition, three-dimensional measurement, environment perception, or three-dimensional map reconstruction.
  • the camera component can detect the distance information between each pixel in the collected image and the camera, so that it can determine whether the user corresponding to the currently collected face image is live, preventing attackers from using other people's photos to pass identity verification and steal other people's funds through value transfer.
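As a toy illustration of why per-pixel depth helps liveness detection: a printed photo held before the camera is nearly planar, so its depths vary far less than a real face's relief. The function name and threshold below are hypothetical and are not the disclosure's actual detection algorithm — just a minimal sketch of the idea.

```python
from statistics import pvariance

def is_live_face(depth_values: list[float], min_variance: float = 4.0) -> bool:
    """A flat photo yields nearly constant depth per pixel, while a real
    face has pronounced relief; the variance threshold is illustrative only."""
    return pvariance(depth_values) >= min_variance
```

A production system would use far richer cues (depth gradients, infrared response, motion), but the variance test captures the basic photo-versus-face distinction the 3D component enables.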
  • the camera assembly 122 includes a sensor 1222, a processor 1224, a memory 1226, and a battery 1228.
  • the camera assembly 122 can also have a camera identification, which is used to uniquely identify the camera assembly 122.
  • the camera identification is a serial number (SN) assigned to the camera assembly 122 when it leaves the factory; this number uniquely identifies the camera assembly 122.
  • the sensor 1222 is used to collect face images
  • the sensor 1222 can be arranged inside the camera assembly 122, and the sensor 1222 can be at least one of a color image sensor, a depth image sensor, or an infrared image sensor; the embodiment of the present disclosure does not limit the sensor type.
  • the face image collected by the sensor 1222 can also be at least one of a color map, a depth map, or an infrared image.
  • the embodiment of the present disclosure does not limit the type of the face image.
  • the processor 1224 can be used to embed identification information for the face image collected by the sensor 1222.
  • the processor 1224 is a DSP (Digital Signal Processor), a microprocessor suited to processing large amounts of information in the form of digital signals.
  • the processor 1224 can also be in a hardware form such as an FPGA (Field Programmable Gate Array) or a PLA (Programmable Logic Array); the embodiment of the present disclosure does not limit the hardware form of the processor 1224.
  • the memory 1226 is used to store face image information, which is used to indicate the number of all face images collected by the camera component 122 in history; for example, the memory 1226 is a FLASH (flash) memory, a magnetic disk storage device, a cache memory, etc.
  • the battery 1228 is used to supply power to the various components of the camera assembly 122. In this case, even if the host 124 of the terminal 120 is powered off, the battery 1228 inside the camera assembly 122 can still supply power to the memory 1226, so that a power failure does not cause the face image information in the memory 1226 to be lost.
  • the terminal 120 and the server 140 can be connected through a wired or wireless network.
  • the server 140 can include at least one server, multiple servers, a cloud computing platform, or a virtualization center.
  • the server 140 is configured to provide a background service for an application program running on the terminal 120, and the application program can provide a value transfer service to the user, so that the user can perform a value transfer operation based on the terminal 120.
  • the server 140 is responsible for the main computing work and the terminal 120 for the secondary computing work; or the server 140 is responsible for the secondary computing work and the terminal 120 for the main computing work; or the server 140 and the terminal 120 use a distributed computing architecture for collaborative computing.
  • the server 140 is an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, Content Delivery Network (CDN), and big data and artificial intelligence platforms.
  • the face data transmission process occurs in the process of value transfer based on face recognition.
  • the terminal 120 is commonly referred to as the "face-swiping payment terminal".
  • the face-swiping payment terminal refers to a terminal with an integrated camera that can collect face data.
  • the first user can perform a trigger operation on the value transfer option on the terminal 120, which triggers the terminal 120 to call the camera component 122 to collect the face data stream of the first user in real time; for any image frame (that is, face image) in the face data stream, a face image carrying identification information can be obtained, so that the terminal 120 can encapsulate the face image carrying the identification information, the camera identification, and value transfer information into a value transfer request;
  • the value transfer request is sent to the server 140, and the server 140 authenticates the first user based on the face image carrying the identification information and the camera identification.
  • the identification information can verify whether the face image is collected by the camera component 122.
  • face recognition can verify whether the face area in the face image is the first user himself, so as to achieve double identity verification.
  • the server 140 can perform value transfer based on the value transfer information in the value transfer request, the value transfer information including the first user identification, the second user identification, and the value to be transferred.
  • the first user and the second user are only different names for users with different identities during a certain value transfer process. In some value transfer processes, it is possible that a certain user is both the first user and the second user; that is, the user transfers value from one of his own accounts to another of his own accounts.
  • the terminal 120 can have a display screen.
  • the user performs interactive operations based on the display screen to complete the numerical value transfer operation based on face recognition.
  • the device types of the terminal 120 include at least one of smart phones, tablet computers, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop computers, or desktop computers.
  • the number of the aforementioned terminals 120 can be greater or fewer. For example, there may be only one terminal 120, or dozens or hundreds of terminals 120, or an even larger number. The embodiments of the present disclosure do not limit the number and device types of the terminals 120.
  • Fig. 3 is a flowchart of a method for transmitting a face image provided by an embodiment of the present disclosure. Referring to FIG. 3, this embodiment can be applied to the terminal 120 in the foregoing implementation environment, which will be described in detail below:
  • the terminal determines the maximum value in the target list in the buffer area as the face image information, and each value stored in the target list corresponds to a number of face images.
  • the terminal is used for facial image transmission.
  • the terminal can include a camera component and a host.
• the camera component is used to collect facial images and embed identification information in the collected facial images; the camera component sends the facial image carrying the identification information to the host, and the host transmits the face image carrying the identification information to the server.
• the camera component is a 3D camera component, so that the distance between each pixel in the collected image and the camera can be detected to determine whether the user corresponding to the currently collected face image is a live person, preventing attackers from using other people's photos for identity verification and stealing other people's funds for value transfer.
  • the camera assembly includes a sensor, a processor, a memory, and a battery.
  • a target list can be stored in the memory of the camera assembly.
• the target list includes multiple values, each corresponding to a count of face images. Whenever the number of face images collected by the camera component increases, the processor of the camera component can write a new value to the target list and delete the existing value with the earliest timestamp, keeping the target list up to date.
• each value in the target list can be equal to the number of face images; for example, the target list is [500, 501, 502, ...].
• Alternatively, each value in the target list can be N times the number of face images (N > 1); for example, the target list is [500N, 501N, 502N, ...].
• each value in the target list can also be obtained from the number of face images through an exponential transformation, a logarithmic transformation, or the like. The embodiment of the present disclosure does not limit the transformation method between the number of face images and the individual values stored in the target list.
  • the terminal operating system can issue a collection instruction to the camera component.
  • the camera component responds to the collection instruction and collects facial images through the sensor of the camera component.
  • the sensor can collect the user's facial data stream in real time.
  • the sensor sends the collected face image to the processor of the camera component.
• the processor determines the target list from the buffer area, queries the maximum value in the target list, and determines that maximum value as the face image information.
  • different types of sensors can collect different types of face images.
  • the face image collected by the infrared image sensor is an infrared image
  • the face image collected by the depth image sensor is a depth image.
  • the face image collected by the sensor is a color image, and the embodiment of the present disclosure does not limit the types of the sensor and the face image.
• the camera component maintains, through its internal processor and memory, a target list whose maximum value is updated as the number of collected face images increases. The more face images the camera component has collected, the larger the maximum value in the target list, and hence the larger the value of the face image information, so the camera component can count all collected face images through the face image information.
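• The target-list bookkeeping described above can be sketched as follows (a minimal Python illustration, assuming the list stores plain per-image counts; the class name `TargetList` and its capacity are hypothetical):

```python
class TargetList:
    """Sketch of the camera component's target list kept in its memory.

    Each stored value corresponds to a count of collected face images;
    the maximum value is read out as the "face image information".
    """

    def __init__(self, capacity=8):
        self.capacity = capacity
        self.values = []  # oldest entries first (earliest timestamp at index 0)

    def write(self, value):
        # When a new face image is counted, append the new value and drop
        # the entry with the earliest timestamp once capacity is exceeded.
        self.values.append(value)
        if len(self.values) > self.capacity:
            self.values.pop(0)

    def face_image_info(self):
        # The face image information is the maximum value in the list.
        return max(self.values) if self.values else 0


tl = TargetList()
for count in [500, 501, 502]:
    tl.write(count)
print(tl.face_image_info())  # -> 502
```

• Because only new values are appended and only the oldest is evicted, the maximum can only grow, which is what lets the reader treat it as a monotonic count.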
  • the terminal reads the face image information from the buffer area.
  • the face image information is used to indicate the number of all face images that the camera component has collected in history.
• the terminal may also store the face image information not in the form of a target list but in data formats such as stacks, arrays, or tuples. The embodiment of the present disclosure does not limit the data format of the face image information in the buffer area.
  • the terminal performs Fourier transform on the face image to obtain a frequency spectrum image of the face image.
  • the terminal can perform DCT (Discrete Cosine Transform) processing on the face image to obtain the spectrum image of the face image.
• DCT processing is a transform method related to the discrete Fourier transform, mainly used to compress data or images; it converts a signal from the spatial domain to the frequency domain and has good decorrelation performance.
  • the terminal embeds identification information in any area except the face area in the spectrum image.
  • the identification information is used to represent face image information.
  • the identification information can be a value obtained by adding the face image information and a third target value, where the third target value can be any value greater than or equal to zero.
• the identification information can also be a value obtained by multiplying the face image information by a fourth target value, where the fourth target value can be any value greater than or equal to 1; the fourth target value can be the same as the third target value, or different.
  • the identification information can also be a value obtained after one-way encryption of the face image information, and the embodiment of the present disclosure does not limit the conversion method between the face image information and the identification information.
• any area other than the face area can be at least one of the upper left corner, the lower left corner, the upper right corner, or the lower right corner of the face image; for example, the area is determined to be the upper left corner of the face image. The embodiment of the present disclosure does not limit the location of the embedding area of the identification information.
  • the identification information is the same as the face image information, and the terminal can embed the face image information in any area except the face area in the spectrum image.
  • the terminal can add the face image information determined in step 301 to the third target value to obtain the identification information, and then in any area of the spectrum image except the face area Embed the identification information.
• By embedding the identification information in an area other than the face area of the face image, the pixel information of the face area is not destroyed, the problem of being unable to perform face recognition due to occlusion of the face area is avoided, and the processing logic of the face image transmission process is optimized.
  • the terminal performs inverse Fourier transform on the spectrum image carrying the identification information to obtain a face image carrying the identification information.
• after the terminal embeds the identification information in the spectrum image, it can perform an inverse DCT on the spectrum image carrying the identification information, converting it from the frequency domain back to the spatial domain, so that the user cannot perceive the embedded identification information.
• identification information that the user cannot perceive is commonly called a "blind watermark" or "digital watermark".
• the user can neither see nor hear the blind watermark, but when the terminal sends the face image carrying the blind watermark to the server, the server can parse out the blind watermark carried in the face image.
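• The embed-and-recover principle behind the blind watermark can be illustrated with a toy one-dimensional DCT (the embodiment operates on a two-dimensional image; this Python sketch only demonstrates that a value added to a transform coefficient survives the inverse transform and can be parsed back out on the server side; the scaling factor and coefficient index are hypothetical):

```python
import math

def dct(x):
    # DCT-II of a 1-D signal (orthonormal scaling).
    N = len(x)
    out = []
    for k in range(N):
        s = sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
        scale = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
        out.append(scale * s)
    return out

def idct(X):
    # DCT-III, the inverse of the orthonormal DCT-II above.
    N = len(X)
    out = []
    for n in range(N):
        s = X[0] / math.sqrt(N)
        s += sum(math.sqrt(2.0 / N) * X[k] * math.cos(math.pi * (n + 0.5) * k / N)
                 for k in range(1, N))
        out.append(s)
    return out

# Toy "image row"; coefficient index 6 stands in for an area outside the face.
pixels = [52.0, 55, 61, 66, 70, 61, 64, 73]
coeffs = dct(pixels)
coeffs[6] += 0.5 * 1001           # embed identification info 1001 (scaled down)
watermarked = idct(coeffs)        # back to the spatial domain, imperceptible shift
recovered = round((dct(watermarked)[6] - dct(pixels)[6]) / 0.5)
print(recovered)  # -> 1001
```

• The server-side "parse" step is simply re-applying the forward transform and reading the marked coefficient, which is why the watermark need not be visible in the spatial domain.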
  • the terminal can embed identification information in any area of the face image except the face area.
  • the terminal can also embed identification information in the face area of the face image, and the embodiment of the present disclosure does not limit whether the embedding area of the identification information is a face area.
  • the terminal embeds identification information in the face image to obtain a face image carrying the identification information.
• the identification information is used to indicate the face image information. The security of the face data source collected by the camera component (that is, any face image) is guarded by embedding a blind watermark, so that even if an attacker steals the face image during transmission between the camera component and the host, or between the host and the server,
• the server can recognize in time that the identification information of the historical face image has expired and that the image is not the most recently collected face image, and thus determine that
• the face image verification failed, which greatly improves the security of various businesses verified based on face images.
  • the terminal writes the value obtained by adding the identification information and the first target value to the target list in the buffer area.
  • the first target value is any value greater than or equal to 0, for example, the first target value is 1.
• after the terminal embeds the identification information in the face image through the processor of the camera component, it can also write the value obtained by adding the identification information and the first target value to the target list. Since the face image information is the original maximum value in the list and the identification information is greater than or equal to the face image information, the value written this time is greater than or equal to the identification information, that is, greater than or equal to the original maximum value in the target list.
• The maximum value in the target list is thus updated, and the maximum value in the target list stored in the memory of the camera assembly keeps increasing as the number of collected face images increases.
• For example, the sensor of the camera component is a SENSOR, the processor of the camera component is a DSP, and the memory of the camera component is a FLASH.
• each value in the target list stored in the FLASH happens to be the count of a face image, so the FLASH counts the collected face images in real time.
• the DSP obtains the face data stream from the SENSOR.
• the DSP determines the value obtained by adding one to the maximum face image count as the identification information, embeds the identification information in the aforementioned frame of face image, and, assuming that the first target value is 0, directly writes the identification information into the target list.
  • the current latest count in the FLASH target list is 1000, indicating that the number of all face images collected by the camera component in history is 1000.
• the DSP reads the current latest count 1000 from the FLASH target list, determines the identification information to be 1001, embeds the identification information 1001 in the above-mentioned 1001st face image as a blind watermark, and then writes the identification information 1001 back to the target list, so that the next time the DSP reads the current latest count it reads 1001, ensuring that the DSP reads an incremented count from the FLASH every time.
• In this case, each value in the target list stored in the FLASH is exactly the value obtained by adding one to a face image count, and the current latest count the DSP reads from the target list stored in the FLASH is the value obtained by adding one to the maximum face image count.
• Assuming that the third target value is 0, the DSP determines the read value (the maximum face image count plus one) as the identification information and embeds the identification information in the aforementioned frame of face image; the DSP then writes the value obtained by adding one to the identification information into the target list.
• Assume the current latest count in the FLASH target list is 1000; since each stored value is a face image count plus one, the number of all face images collected by the camera component in history is 999.
• the DSP reads the current latest count 1000 from the FLASH target list and determines the identification information to be 1000. The identification information 1000 is embedded in the above 1000th face image as a blind watermark, and the value 1001 obtained by adding one to the identification information 1000 is written back to the target list, so that the next time the DSP reads, it reads 1001, ensuring that the DSP reads an incremented count from the FLASH every time.
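• The two counting conventions in the worked examples above can be sketched as one read-derive-write-back cycle (a hedged Python illustration; `process_frame` is a hypothetical name, and the actual watermark embedding step is omitted):

```python
def process_frame(flash_list, third_target, first_target):
    """One DSP cycle: read the latest count, derive the identification
    information (the value embedded as the blind watermark), and write
    the updated count back to the target list."""
    latest = max(flash_list)                     # current latest count
    identification = latest + third_target       # identification information
    flash_list.append(identification + first_target)  # write-back keeps the max growing
    return identification

# Convention A: stored values equal the image counts
# (third target value 1, first target value 0).
flash_a = [1000]
print(process_frame(flash_a, third_target=1, first_target=0))  # -> 1001

# Convention B: stored values are image count + 1
# (third target value 0, first target value 1); 999 images collected so far.
flash_b = [1000]
print(process_frame(flash_b, third_target=0, first_target=1))  # -> 1000
```

• Under either convention the list's maximum ends at 1001, so the next read is guaranteed to see an incremented count, matching the FLASH behavior described above.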
• the terminal can also store the face image information in the memory of the camera component not in the form of a target list, but directly at a target address, thereby saving storage space in the memory of the camera component.
• step 301 can be replaced by the following step: the terminal determines the value stored at the target address of the buffer area as the face image information; that is, the terminal, through the processor of the camera assembly, reads the target address in the memory of the camera assembly and determines the value stored there as the face image information.
• the above step 305 can be replaced by the following step: the terminal sets the value stored at the target address to the value obtained by adding the identification information and a second target value, where the second target value is any value greater than or equal to 0 (for example, 1) and can be the same as or different from the first target value.
  • the embodiment of the present disclosure does not limit the value of the second target value or the first target value.
• the terminal does not need to maintain a costly target list in the memory of the camera assembly, but only stores the number of all face images collected so far at the target address, which reduces the resource overhead of the memory; the face image information stored at the target address can still be updated in real time, and the processing efficiency of the camera assembly is improved.
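• The target-address variant can be sketched in a few lines (a Python illustration with hypothetical names; a dictionary entry stands in for the fixed address in the camera component's memory):

```python
# The single value stored at the target address (stands in for a FLASH cell).
counter = {"target_address": 1000}

def collect_frame(store, third_target=1, second_target=0):
    """Target-address variant of one collection cycle."""
    face_image_info = store["target_address"]                 # replaces step 301
    identification = face_image_info + third_target           # blind-watermark value
    store["target_address"] = identification + second_target  # replaces step 305
    return identification

print(collect_frame(counter))        # -> 1001
print(counter["target_address"])     # -> 1001
```

• Only one cell is read and rewritten per frame, which is the memory saving claimed over the list-based variant.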
  • the terminal sends a face image carrying the identification information.
• the terminal can send the face image carrying the identification information on its own, or encapsulate the face image carrying the identification information together with other information into a service request and send the service request. The service request can correspond to service types such as an identity verification service or a value transfer service; the embodiment of the present disclosure does not limit the service type of the service request.
  • the terminal can encapsulate the camera identification of the camera component and the face image carrying the identification information into an identity verification request, thereby sending the identity verification request to the server.
• the terminal can also encapsulate the camera identification of the camera component, the face image carrying the identification information, and the value transfer information into a value transfer request, thereby sending the value transfer request to the server, where the value transfer information includes at least the first user identification, the second user identification, and the value to be transferred.
• The method provided by the embodiment of the present disclosure reads the face image information from the buffer area when any face image is collected by the camera assembly, where the face image information represents the number of all face images that the camera assembly has collected in history; embeds identification information representing the face image information in the face image to obtain a face image carrying the identification information; and sends the face image carrying the identification information.
• Since the identification information is embedded directly in the face image collected by the camera component, the security of the collected face image is increased: even if the face image is leaked and an attacker uses a historical face image to request related services, the verification cannot pass because the identification information does not match, which effectively guarantees the security of the face image transmission process.
• The face image transmission method provided by the foregoing embodiment ensures the security of the face data source collected by the camera assembly and uniquely identifies the time when each face image is collected through the identification information, which effectively ensures that each collected face image can only be used once.
• Each user does not need to upgrade the hardware or system of the terminal's host, and there is no mandatory requirement for the configuration of the host itself; each user only needs to access the camera component provided by the embodiment of the present disclosure to ensure the security of the face data source, which greatly lowers the threshold for maintaining that security and offers high portability and usability.
  • the face image transmission method can be applied to various business scenarios that rely on face images.
• the process of performing identity verification based on the face image to complete a value transfer service is described below as an example; this process is referred to as the face-scanning payment scenario, or face payment for short.
  • FIG. 4 is an interaction flowchart of a value transfer method provided by an embodiment of the present disclosure. Referring to FIG. 4, this embodiment is applied to the interaction process between the terminal 120 and the server 140 in the foregoing implementation environment. This embodiment includes the following steps :
  • the terminal calls the camera component to collect a face image.
  • the terminal can be the personal terminal of the first user, or a "face-swiping payment terminal" installed in the store where the second user is located.
• a face-swiping payment terminal refers to a terminal with an integrated camera that can collect images of the user's face.
  • the embodiment of the present disclosure does not limit the device type of the terminal.
• The first user and the second user are merely different names for users playing different roles in a given value transfer process.
• In some value transfer processes, a certain user may be both the first user and the second user; that is, the user transfers the value from one of his own accounts to another of his own accounts.
  • the terminal is triggered to display a payment interface on the display screen.
  • the payment interface can include value transfer information and value transfer options.
• after checking the value transfer information, the first user can perform a trigger operation on the value transfer option.
  • the terminal operating system issues a collection instruction to the camera component, and calls the camera component to collect the face image of the first user.
  • the above-mentioned value transfer information can include at least the first user ID, the second user ID, and the value to be transferred.
  • the value transfer information can also include transaction item information, discount information, transaction timestamp, and the like.
  • the terminal reads the face image information from the buffer area, and the face image information is used to indicate the number of all face images that the camera assembly has collected in history.
  • the above step 402 is similar to the above step 301, and will not be repeated here.
  • the terminal embeds identification information in the face image to obtain a face image carrying the identification information, and the identification information is used to represent the face image information.
  • step 403 is similar to the above-mentioned steps 302-304, and will not be repeated here.
  • the terminal generates a value transfer request, where the value transfer request includes at least the camera identifier of the camera component, the face image carrying the identification information, the value to be transferred, the first user identifier, and the second user identifier.
• the terminal can use a compression algorithm to compress the camera ID of the camera component, the face image carrying the identification information, the value to be transferred, the first user ID, and the second user ID to obtain compressed information; use an encryption algorithm to encrypt the compressed information to obtain ciphertext information; and encapsulate the ciphertext information via a transmission protocol to obtain the value transfer request.
  • the compression algorithm can include at least one of a br compression algorithm, a gzip compression algorithm, or a Huffman compression algorithm
• the encryption algorithm can include at least one of a symmetric encryption algorithm or an asymmetric encryption algorithm, such as a message digest algorithm.
• the transmission protocol can include at least one of IP (Internet Protocol), TCP (Transmission Control Protocol), or UDP (User Datagram Protocol); the embodiments of the present disclosure do not limit the types of compression algorithms, encryption algorithms, and transmission protocols.
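• The compress-encrypt-encapsulate pipeline of step 404 might look like the following sketch (Python; `zlib` stands in for the compression algorithm, a SHA-256-derived XOR keystream is only a placeholder for a real symmetric cipher, a length prefix stands in for the transport framing, and all names are hypothetical):

```python
import hashlib
import json
import zlib

def build_value_transfer_request(camera_id, face_image, amount,
                                 payer_id, payee_id, key=b"demo-key"):
    """Sketch of step 404: compress, then encrypt, then encapsulate."""
    payload = json.dumps({
        "camera_id": camera_id,
        "face_image": face_image,      # base64-encoded image bytes in practice
        "value": amount,
        "first_user": payer_id,
        "second_user": payee_id,
    }).encode()
    compressed = zlib.compress(payload)                      # compression step
    keystream = hashlib.sha256(key).digest()
    ciphertext = bytes(b ^ keystream[i % len(keystream)]     # placeholder cipher
                       for i, b in enumerate(compressed))
    return len(ciphertext).to_bytes(4, "big") + ciphertext   # transport framing

req = build_value_transfer_request("cam-01", "<watermarked image>", 99,
                                   "alice", "bob")
print(len(req) > 4)  # -> True
```

• The server side reverses the same three layers in order (deframe, decrypt, decompress), which is what step 406 below describes.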
  • the terminal sends the value transfer request to the server.
• After receiving the value transfer request, the server can perform identity verification based on the camera identification of the camera component and the face image carrying the identification information; after the identity verification passes, the value to be transferred is transferred from the value stored corresponding to the first user identification to the value stored corresponding to the second user identification. The implementation process is described in detail in the following steps 406-408.
  • the server receives the value transfer request.
  • the value transfer request includes at least a camera identification, a face image carrying identification information, a value to be transferred, a first user identification, and a second user identification, and the identification information is used to indicate that the face image is an image acquired in real time.
• after the server receives the value transfer request, it can parse the value transfer request to obtain the ciphertext information, decrypt the ciphertext information with a decryption algorithm to obtain the compressed information, and decompress the compressed information to obtain the aforementioned camera identification, the face image carrying the identification information, the value to be transferred, the first user identification, and the second user identification.
  • the decryption algorithm and the decompression algorithm used here correspond to the encryption algorithm and the compression algorithm in step 404, and will not be repeated here.
  • the server performs face recognition on the face area of the face image to obtain a recognition result.
  • the server stores the identification information of each face image corresponding to each camera identification.
• the server can maintain an identification information sequence corresponding to each camera identification in the background, storing each historical identification information of the face images corresponding to the same camera identification in the same sequence. When any value transfer request carrying the camera identification is received, the face image carrying the identification information is obtained from the value transfer request, and the identification information is extracted from the face image.
• The server then queries the maximum historical value in the identification information sequence corresponding to the camera identification. If the extracted identification information is greater than the above-mentioned maximum historical value, the face image is determined to be legal, that is, not a stolen historical face image.
• In that case, the server performs face recognition on the face area of the face image and obtains the recognition result. Otherwise, if the extracted identification information is less than or equal to the above-mentioned maximum historical value, the face image is determined to be illegal, that is, a stolen historical face image, and the server can send verification failure information to the terminal.
• In the process of face recognition, the face image can be input into a face similarity model, which predicts the similarity between the face image and the first user's pre-stored image. If the similarity is higher than or equal to a target threshold, the recognition result of the face image is determined to be passed, and the following step 408 is performed; otherwise, if the similarity is lower than the target threshold, the recognition result is determined to be failed, and the server can send verification failure information to the terminal.
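• The server-side checks of steps 406-407 can be sketched as follows (Python; the similarity score stands in for the output of the face similarity model, and all names and the threshold are hypothetical):

```python
# camera_id -> list of historical identification values (the per-camera sequence)
sequences = {}

def verify_and_recognize(camera_id, extracted_id, similarity, threshold=0.9):
    """Replay check via the per-camera identification sequence,
    followed by a similarity-threshold recognition decision."""
    history = sequences.setdefault(camera_id, [])
    if history and extracted_id <= max(history):
        # Stale watermark: the face image is a stolen historical image.
        return "verification failed: stale identification (possible replay)"
    history.append(extracted_id)        # extend the incremental sequence
    if similarity >= threshold:
        return "passed"
    return "verification failed: face mismatch"

print(verify_and_recognize("cam-01", 1001, 0.97))  # -> passed
# Replaying the same watermarked image is rejected on the second attempt:
print(verify_and_recognize("cam-01", 1001, 0.97))
```

• Note the two failure modes are independent: the replay check works purely in the time dimension, before any face recognition is attempted.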
  • the server transfers the value to be transferred from the value stored corresponding to the first user ID to the value stored corresponding to the second user ID.
• the server performs the above-mentioned operation of transferring the value to be transferred from the value stored corresponding to the first user identification to the value stored corresponding to the second user identification, so that the first user completes the transfer of the value to be transferred to the second user.
• In this way, the server performs the value transfer based on the camera identification of the camera component and the face image carrying the identification information. In some embodiments, when the value transfer is completed, the server can also send a transfer success message to the terminal to notify the terminal that the value transfer operation has been successfully performed.
• In the method provided by the embodiment of the present disclosure, the terminal calls the camera component to collect the face image; when the camera component collects any face image, the terminal reads the face image information from the buffer area, where the face image information indicates the number of all face images collected by the camera component in history; embeds identification information representing the face image information in the face image to obtain a face image carrying the identification information; and the value transfer is performed based on the camera identification of the camera component and the face image carrying the identification information.
• Since the identification information is embedded directly in the face image collected by the camera component, the security of the collected face images is increased: even if a face image is leaked, an attacker who steals historical face images to request value transfer services will still fail verification because the identification information does not match, effectively guaranteeing the security of the value transfer process.
• The server receives a value transfer request, which includes at least a camera identification, a face image carrying identification information, a value to be transferred, a first user identification, and a second user identification; the identification information is used to indicate that the face image was acquired in real time.
• Face recognition is performed on the face area of the face image to obtain the recognition result, and when the recognition result is passed, the value to be transferred is transferred from the value stored corresponding to the first user identification to the value stored corresponding to the second user identification.
• By comparing the identification information with each stored historical identification information, the server can determine whether the face image carrying the identification information is a stolen historical face image, which adds a way to verify the face image in the time dimension and can deal with more complex network attacks: even if the face image is leaked and an attacker steals a historical face image to request the value transfer service, the request still fails verification because the identification information does not match, effectively ensuring the security of the value transfer process.
  • FIG. 5 is a schematic diagram of a value transfer method provided by an embodiment of the present disclosure. Please refer to FIG. 5.
• By internally improving the camera component of the face-swiping payment terminal, the value transfer method provided by the embodiments of the present disclosure adds security precautions to the camera component that collects the face data source without requiring hardware upgrades on the host side, which strictly guarantees the security of the face data source and effectively resists "replay attacks" on face data.
  • the camera component of the terminal stores the face image information in FLASH.
• After the DSP obtains the collected face data stream from the SENSOR, it reads the face image count (that is, the face image information) from the FLASH and embeds a blind watermark (that is, the identification information) in each frame of the face image in the face data stream. The blind watermark is used to indicate the count of the face image.
• The DSP sends the face image carrying the blind watermark to the host, and the host sends the face image with the blind watermark to the server.
• the server stores an incremental sequence corresponding to the camera ID of the camera component in the background; each historical blind watermark in the incremental sequence (that is, each historical face image count) increases over time.
• If the blind watermark extracted from the received face image is greater than the maximum historical blind watermark in the incremental sequence, the face image transmitted this time is considered legal; otherwise, it is considered invalid. The validity of the sequence is verified through the above process, which ensures the security of the face image transmission process and defends against "replay attacks" on face images. Further, face recognition is performed only when the currently transmitted face image is confirmed to be legal, and the value transfer operation is performed only when the recognition result of the face recognition is passed, which also guarantees the security of the value transfer process.
  • FIG. 6 is a schematic structural diagram of a face image transmission device provided by an embodiment of the present disclosure. Please refer to FIG. 6.
  • the device includes:
• the reading module 601 is used to read the face image information from the buffer area when any face image is collected by the camera assembly; the face image information is used to indicate the number of all face images collected by the camera assembly in history;
  • the embedding module 602 is configured to embed identification information in the face image to obtain a face image carrying the identification information, and the identification information is used to represent the face image information;
  • the sending module 603 is configured to send the face image carrying the identification information.
• The device provided by the embodiment of the present disclosure reads the face image information from the buffer area when any face image is collected by the camera assembly, where the face image information represents the number of all face images that the camera assembly has collected in history; embeds identification information representing the face image information in the face image to obtain a face image carrying the identification information; and sends the face image carrying the identification information.
• Since the identification information is embedded directly in the face image collected by the camera component, the security of the collected face image is increased: even if the face image is leaked and an attacker uses a historical face image to request related services, the verification cannot pass because the identification information does not match, which effectively guarantees the security of the face image transmission process.
  • the reading module 601 is used to:
  • the maximum value in the target list in the buffer area is determined as the face image information, and each value stored in the target list corresponds to a number of face images.
  • the device is also used to:
  • the value obtained by adding the identification information and the first target value is written into the target list in the buffer area.
  • the reading module 601 is used to:
  • the value stored in the target address of the buffer area is determined as the face image information.
  • the device is also used to:
  • the value stored in the target address is set as the value obtained by adding the identification information and the second target value.
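The two bookkeeping variants above (a target list whose maximum is read, and a single target address that is overwritten) can be sketched as follows. The class names and the concrete target values are illustrative assumptions; what matters is that the stored count grows monotonically.

```python
class ListBuffer:
    """Variant 1: the face image information is the maximum of a target list."""
    def __init__(self):
        self.target_list = [0]

    def read_info(self) -> int:
        # determine the maximum value in the target list as the face image info
        return max(self.target_list)

    def write_back(self, identification: int, first_target: int = 1):
        # write identification + first target value into the target list
        self.target_list.append(identification + first_target)


class AddressBuffer:
    """Variant 2: the face image information is the value at a target address."""
    def __init__(self):
        self.value = 0

    def read_info(self) -> int:
        return self.value

    def write_back(self, identification: int, second_target: int = 1):
        # set the stored value to identification + second target value
        self.value = identification + second_target
```

With an assumed third target value of 1, a capture cycle reads the info, derives identification = info + 1, embeds it, and writes the incremented value back, so each subsequent read yields a strictly larger count.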
  • the identification information is a value obtained by adding the face image information and the third target value.
  • the embedded module 602 is used to:
  • the identification information is embedded in any area of the face image except the face area.
  • the embedding module 602 is used to: perform Fourier transform on the face image to obtain a spectrum image of the face image; embed the identification information in any area of the spectrum image other than the face area; and perform inverse Fourier transform on the spectrum image carrying the identification information to obtain the face image carrying the identification information.
  • It should be noted that, when the facial image transmission device provided in the above embodiment transmits a facial image, the division into the above-mentioned functional modules is only used as an example for illustration. In practical applications, the above-mentioned functions can be allocated to different functional modules as needed; that is, the internal structure of an electronic device (such as a terminal) is divided into different functional modules to complete all or part of the functions described above.
  • the face image transmission device provided in the foregoing embodiment belongs to the same concept as the face image transmission method embodiment, and its implementation process is detailed in the face image transmission method embodiment, which will not be repeated here.
  • FIG. 7 is a schematic structural diagram of a value transfer device provided by an embodiment of the present disclosure. Please refer to FIG. 7.
  • the device includes:
  • the collection module 701 is used to call the camera component to collect a face image when a trigger operation on the value transfer option is detected;
  • the reading module 702 is used to read face image information from the buffer area when any face image is collected by the camera assembly, the face image information being used to indicate the number of all face images that the camera assembly has collected in history;
  • the embedding module 703 is configured to embed identification information in the face image to obtain a face image carrying the identification information, and the identification information is used to represent the face image information;
  • the value transfer module 704 is configured to perform value transfer based on the camera identification of the camera assembly and the face image carrying the identification information.
  • In the device provided by the embodiment of the present disclosure, the terminal calls the camera component to collect a face image; when the camera component collects any face image, it reads the face image information from the buffer area, the face image information being used to indicate the number of all face images that the camera component has collected in history; identification information is embedded in the face image to obtain a face image carrying the identification information, the identification information being used to represent the face image information; and value transfer is performed based on the camera identification of the camera component and the face image carrying the identification information. Because the identification information is directly embedded in the face image collected by the camera component, the security of the face images collected by the camera component is increased. Even if a face image is leaked, when an attacker steals a historical face image to request a value transfer service, the verification will still fail because the identification information does not match, thus effectively guaranteeing the security of the value transfer process.
  • the value transfer module 704 is used to:
  • generate a value transfer request including at least the camera identification of the camera component, the face image carrying the identification information, the value to be transferred, the first user identification, and the second user identification;
  • send the value transfer request to the server, the server performing identity verification based on the camera identification of the camera component and the face image carrying the identification information and, when the verification passes, transferring the value to be transferred from the value stored corresponding to the first user identification to the value stored corresponding to the second user identification.
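A minimal sketch of assembling such a request on the terminal side; the JSON field names are illustrative assumptions rather than the patent's wire format.

```python
import base64
import json

def build_transfer_request(camera_id: str, watermarked_image: bytes,
                           amount: int, first_user_id: str,
                           second_user_id: str) -> str:
    """Bundle at least: the camera ID, the face image carrying the
    identification information, the value to be transferred, and the
    first and second user IDs."""
    return json.dumps({
        "camera_id": camera_id,
        "face_image": base64.b64encode(watermarked_image).decode("ascii"),
        "amount": amount,
        "first_user_id": first_user_id,
        "second_user_id": second_user_id,
    })
```

The server would verify identity from the camera ID and the embedded identification information before moving `amount` from the first user's stored value to the second user's.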
  • It should be noted that, when the value transfer device provided in the above embodiment transfers a value, the division into the above-mentioned functional modules is only used as an example for illustration. In practical applications, the above-mentioned functions can be allocated to different functional modules as needed; that is, the internal structure of an electronic device (such as a terminal) is divided into different functional modules to complete all or part of the functions described above.
  • the numerical value transfer device provided in the foregoing embodiment and the numerical value transfer method embodiment belong to the same concept, and the implementation process is detailed in the numerical value transfer method embodiment, which will not be repeated here.
  • FIG. 8 is a schematic structural diagram of a value transfer device provided by an embodiment of the present disclosure. Please refer to FIG. 8.
  • the device includes:
  • the receiving module 801 is configured to receive a value transfer request, the value transfer request including at least a camera identification, a face image carrying identification information, a value to be transferred, a first user identification, and a second user identification, the identification information being used to indicate that the face image is an image acquired in real time;
  • the recognition module 802 is configured to perform face recognition on the face area of the face image when the identification information is greater than each historical identification information stored corresponding to the camera identification to obtain a recognition result;
  • the value transfer module 803 is configured to transfer the value to be transferred from the value stored in the first user ID to the value stored in the second user ID when the recognition result is passed.
  • The device provided by the embodiment of the present disclosure receives a value transfer request, which includes at least a camera ID, a face image carrying identification information, a value to be transferred, a first user ID, and a second user ID, the identification information being used to indicate that the face image is an image acquired in real time. When the identification information is greater than each historical identification information stored corresponding to the camera ID, face recognition is performed on the face area of the face image to obtain a recognition result. When the recognition result is passed, the value to be transferred is transferred from the value stored corresponding to the first user ID to the value stored corresponding to the second user ID. By comparing the identification information with each stored historical identification information, it can be determined whether the face image carrying the identification information is a stolen historical face image, which adds a way to verify the face image in the time dimension and can deal with more complex network attacks. Even if a face image is leaked, when an attacker steals a historical face image to request a value transfer service, the verification still fails because the identification information does not match, thus effectively guaranteeing the security of the value transfer process.
  • It should be noted that, when the value transfer device provided in the above embodiment transfers a value, the division into the above-mentioned functional modules is only used as an example for illustration. In practical applications, the above-mentioned functions can be allocated to different functional modules as needed; that is, the internal structure of an electronic device (such as a server) is divided into different functional modules to complete all or part of the functions described above.
  • the numerical value transfer device provided in the foregoing embodiment and the numerical value transfer method embodiment belong to the same concept, and the implementation process is detailed in the numerical value transfer method embodiment, which will not be repeated here.
  • FIG. 9 is a structural block diagram of a terminal 900 provided by an embodiment of the present disclosure.
  • the terminal 900 is also an electronic device.
  • the terminal 900 is, for example: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop computer, or a desktop computer.
  • the terminal 900 may also be called user equipment, portable terminal, laptop terminal, desktop terminal, and other names.
  • the terminal 900 includes a processor 901 and a memory 902.
  • the processor 901 can include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on.
  • the processor 901 can be implemented in at least one hardware form among DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array).
  • the processor 901 can also include a main processor and a coprocessor. The main processor is a processor used to process data in the awake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor used to process data in the standby state.
  • the processor 901 is integrated with a GPU (Graphics Processing Unit), and the GPU is used for rendering and drawing content that needs to be displayed on the display screen.
  • the processor 901 includes an AI (Artificial Intelligence) processor, and the AI processor is used to process computing operations related to machine learning.
  • the memory 902 can include one or more computer-readable storage media, which can be non-transitory.
  • the memory 902 can also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices and flash memory storage devices.
  • the non-transitory computer-readable storage medium in the memory 902 is used to store at least one instruction, and the at least one instruction is used to be executed by the processor 901 to implement the following operations:
  • When the camera assembly collects any face image, read the face image information from the buffer area, the face image information being used to indicate the number of all face images that the camera assembly has collected in history;
  • Embed identification information in the face image to obtain a face image carrying the identification information, where the identification information is used to represent the face image information; and send the face image carrying the identification information.
  • the at least one instruction is used to be executed by the processor 901 to implement the following operations:
  • the maximum value in the target list in the buffer area is determined as the face image information, and each value stored in the target list corresponds to a number of face images.
  • the at least one instruction is used to be executed by the processor 901 to implement the following operations:
  • the value obtained by adding the identification information and the first target value is written into the target list in the buffer area.
  • the at least one instruction is used to be executed by the processor 901 to implement the following operations:
  • the value stored in the target address of the buffer area is determined as the face image information.
  • the at least one instruction is used to be executed by the processor 901 to implement the following operations:
  • the value stored in the target address is set as the value obtained by adding the identification information and the second target value.
  • the identification information is a value obtained by adding the face image information and the third target value.
  • the at least one instruction is used to be executed by the processor 901 to implement the following operations:
  • the identification information is embedded in any area of the face image except the face area.
  • In some embodiments, the at least one instruction is used to be executed by the processor 901 to implement the following operations: perform Fourier transform on the face image to obtain a spectrum image of the face image; embed the identification information in any area of the spectrum image other than the face area; and perform inverse Fourier transform on the spectrum image carrying the identification information to obtain the face image carrying the identification information.
  • the at least one instruction is used to be executed by the processor 901 to implement the following operations:
  • When the camera assembly collects any face image, read the face image information from the buffer area, the face image information being used to indicate the number of all face images that the camera assembly has collected in history;
  • Embed identification information in the face image to obtain a face image carrying the identification information, where the identification information is used to represent the face image information;
  • Perform value transfer based on the camera identification of the camera assembly and the face image carrying the identification information.
  • the at least one instruction is used to be executed by the processor 901 to implement the following operations:
  • generate a value transfer request including at least the camera identification of the camera component, the face image carrying the identification information, the value to be transferred, the first user identification, and the second user identification;
  • send the value transfer request to the server, the server performing identity verification based on the camera identification of the camera component and the face image carrying the identification information and, when the verification passes, transferring the value to be transferred from the value stored corresponding to the first user identification to the value stored corresponding to the second user identification.
  • the terminal 900 can further include: a peripheral device interface 903 and at least one peripheral device.
  • the processor 901, the memory 902, and the peripheral device interface 903 can be connected by a bus or a signal line.
  • Each peripheral device can be connected to the peripheral device interface 903 through a bus, a signal line, or a circuit board.
  • the peripheral device includes at least one of a radio frequency circuit 904, a touch display screen 905, a camera assembly 906, an audio circuit 907, a positioning assembly 908, and a power supply 909.
  • the peripheral device interface 903 can be used to connect at least one peripheral device related to I/O (Input/Output) to the processor 901 and the memory 902.
  • In some embodiments, the processor 901, the memory 902, and the peripheral device interface 903 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 901, the memory 902, and the peripheral device interface 903 can be implemented on a separate chip or circuit board, which is not limited in this embodiment.
  • the radio frequency circuit 904 is used for receiving and transmitting RF (Radio Frequency, radio frequency) signals, also called electromagnetic signals.
  • the radio frequency circuit 904 communicates with a communication network and other communication devices through electromagnetic signals.
  • the radio frequency circuit 904 converts electrical signals into electromagnetic signals for transmission, or converts received electromagnetic signals into electrical signals.
  • the radio frequency circuit 904 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on.
  • the radio frequency circuit 904 can communicate with other terminals through at least one wireless communication protocol.
  • the wireless communication protocol includes, but is not limited to: metropolitan area networks, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity, wireless fidelity) networks.
  • the radio frequency circuit 904 can also include a circuit related to NFC (Near Field Communication), which is not limited in the present disclosure.
  • the display screen 905 is used to display a UI (User Interface, user interface).
  • the UI can include graphics, text, icons, videos, and any combination thereof.
  • the display screen 905 also has the ability to collect touch signals on or above the surface of the display screen 905.
  • the touch signal can be input to the processor 901 as a control signal for processing.
  • the display screen 905 can also be used to provide virtual buttons and/or virtual keyboards, also called soft buttons and/or soft keyboards.
  • one display screen 905 is provided on the front panel of the terminal 900; in other embodiments, there are at least two display screens 905, which are respectively provided on different surfaces of the terminal 900 or in a folding design;
  • the display screen 905 is a flexible display screen, and is arranged on the curved surface or the folding surface of the terminal 900.
  • the display screen 905 can also be configured as a non-rectangular irregular pattern, that is, a special-shaped screen.
  • the display screen 905 can be made of materials such as LCD (Liquid Crystal Display) and OLED (Organic Light-Emitting Diode).
  • the camera assembly 906 is used to capture images or videos.
  • the camera assembly 906 includes a front camera and a rear camera.
  • the front camera is set on the front panel of the terminal, and the rear camera is set on the back of the terminal.
  • the camera assembly 906 also includes a flash.
  • the flash can be a single-color temperature flash or a dual-color temperature flash. Dual color temperature flash refers to a combination of warm light flash and cold light flash, which can be used for light compensation under different color temperatures.
  • the audio circuit 907 can include a microphone and a speaker.
  • the microphone is used to collect sound waves of the user and the environment, and convert the sound waves into electrical signals to be input to the processor 901 for processing, or input to the radio frequency circuit 904 to implement voice communication.
  • the microphone can also be an array microphone or an omnidirectional collection microphone.
  • the speaker is used to convert the electrical signal from the processor 901 or the radio frequency circuit 904 into sound waves.
  • the speaker can be a traditional thin-film speaker or a piezoelectric ceramic speaker.
  • When the speaker is a piezoelectric ceramic speaker, it can not only convert electrical signals into sound waves audible to humans, but also convert electrical signals into sound waves inaudible to humans for purposes such as distance measurement.
  • the audio circuit 907 can also include a headphone jack.
  • the positioning component 908 is used to locate the current geographic location of the terminal 900 to implement navigation or LBS (Location Based Service, location-based service).
  • the positioning component 908 can be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
  • the power supply 909 is used to supply power to various components in the terminal 900.
  • the power source 909 can be an alternating current, a direct current, a disposable battery, or a rechargeable battery.
  • the rechargeable battery can support wired charging or wireless charging.
  • the rechargeable battery can also be used to support fast charging technology.
  • the terminal 900 further includes one or more sensors 910.
  • the one or more sensors 910 include, but are not limited to: an acceleration sensor 911, a gyroscope sensor 912, a pressure sensor 913, a fingerprint sensor 914, an optical sensor 915, and a proximity sensor 916.
  • the acceleration sensor 911 can detect the magnitude of acceleration on the three coordinate axes of the coordinate system established by the terminal 900.
  • the acceleration sensor 911 can be used to detect the components of gravitational acceleration on three coordinate axes.
  • the processor 901 can control the touch screen 905 to display the user interface in a horizontal view or a vertical view according to the gravity acceleration signal collected by the acceleration sensor 911.
  • the acceleration sensor 911 can also be used for game or user motion data collection.
  • the gyroscope sensor 912 can detect the body direction and the rotation angle of the terminal 900, and the gyroscope sensor 912 can cooperate with the acceleration sensor 911 to collect the user's 3D actions on the terminal 900.
  • the processor 901 can implement the following functions according to the data collected by the gyroscope sensor 912: motion sensing (such as changing the UI according to the user's tilt operation), image stabilization during shooting, game control, and inertial navigation.
  • the pressure sensor 913 can be disposed on the side frame of the terminal 900 and/or the lower layer of the touch screen 905.
  • the pressure sensor 913 can detect the user's holding signal of the terminal 900, and the processor 901 performs left and right hand recognition or quick operation according to the holding signal collected by the pressure sensor 913.
  • the processor 901 implements control of the operability controls on the UI interface according to the user's pressure operation on the touch display screen 905.
  • the operability control includes at least one of a button control, a scroll bar control, an icon control, and a menu control.
  • the fingerprint sensor 914 is used to collect the user's fingerprint, and the processor 901 identifies the user's identity according to the fingerprint collected by the fingerprint sensor 914, or the fingerprint sensor 914 itself identifies the user's identity according to the collected fingerprint. When the user's identity is recognized as a trusted identity, the processor 901 authorizes the user to perform related sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, paying, and changing settings.
  • the fingerprint sensor 914 can be provided on the front, back, or side of the terminal 900. When a physical button or a manufacturer logo is provided on the terminal 900, the fingerprint sensor 914 can be integrated with the physical button or the manufacturer logo.
  • the optical sensor 915 is used to collect the ambient light intensity.
  • the processor 901 can control the display brightness of the touch screen 905 according to the ambient light intensity collected by the optical sensor 915. In some embodiments, when the ambient light intensity is high, the display brightness of the touch screen 905 is increased; when the ambient light intensity is low, the display brightness of the touch screen 905 is decreased. In another embodiment, the processor 901 can also dynamically adjust the shooting parameters of the camera assembly 906 according to the ambient light intensity collected by the optical sensor 915.
  • the proximity sensor 916, also called a distance sensor, is usually arranged on the front panel of the terminal 900.
  • the proximity sensor 916 is used to collect the distance between the user and the front of the terminal 900.
  • When the proximity sensor 916 detects that the distance between the user and the front of the terminal 900 gradually decreases, the processor 901 controls the touch screen 905 to switch from the bright-screen state to the off-screen state; when the proximity sensor 916 detects that the distance between the user and the front of the terminal 900 gradually increases, the processor 901 controls the touch screen 905 to switch from the off-screen state to the bright-screen state.
  • Those skilled in the art can understand that the structure shown in FIG. 9 does not constitute a limitation on the terminal 900, which can include more or fewer components than shown in the figure, combine certain components, or adopt a different component arrangement.
  • FIG. 10 is a schematic structural diagram of a server provided by an embodiment of the present disclosure.
  • the server 1000 is also an electronic device.
  • the server 1000 may have relatively large differences due to differences in configuration or performance, and may include one or more processors (Central Processing Unit, CPU) 1001 and one or more memories 1002, where at least one piece of program code is stored in the memory 1002, and the at least one piece of program code is loaded and executed by the processor 1001 to implement the following operations:
  • Receive a value transfer request, the value transfer request including at least a camera ID, a face image carrying identification information, a value to be transferred, a first user ID, and a second user ID, the identification information being used to indicate that the face image is an image collected in real time; when the identification information is greater than each historical identification information stored corresponding to the camera ID, perform face recognition on the face area of the face image to obtain a recognition result; when the recognition result is passed, transfer the value to be transferred from the value stored corresponding to the first user ID to the value stored corresponding to the second user ID.
  • Certainly, the server 1000 can also have components such as a wired or wireless network interface, a keyboard, and an input/output interface, and the server 1000 can also include other components for implementing device functions, which will not be repeated here.
  • a computer-readable storage medium such as a memory including at least one piece of program code, and the above-mentioned at least one piece of program code can be executed by a processor in a terminal to implement the following operations:
  • When the camera assembly collects any face image, read the face image information from the buffer area, the face image information being used to indicate the number of all face images that the camera assembly has collected in history;
  • Embed identification information in the face image to obtain a face image carrying the identification information, where the identification information is used to represent the face image information;
  • the foregoing at least one piece of program code may be executed by a processor in the terminal to implement the following operations:
  • the maximum value in the target list in the buffer area is determined as the face image information, and each value stored in the target list corresponds to a number of face images.
  • the foregoing at least one piece of program code may be executed by a processor in the terminal to implement the following operations:
  • the value obtained by adding the identification information and the first target value is written into the target list in the buffer area.
  • the foregoing at least one piece of program code may be executed by a processor in the terminal to implement the following operations:
  • the value stored in the target address of the buffer area is determined as the face image information.
  • the foregoing at least one piece of program code may be executed by a processor in the terminal to implement the following operations:
  • the value stored in the target address is set as the value obtained by adding the identification information and the second target value.
  • the identification information is a value obtained by adding the face image information and the third target value.
  • the foregoing at least one piece of program code may be executed by a processor in the terminal to implement the following operations:
  • the identification information is embedded in any area of the face image except the face area.
  • In some embodiments, the foregoing at least one piece of program code may be executed by a processor in the terminal to implement the following operations: perform Fourier transform on the face image to obtain a spectrum image of the face image; embed the identification information in any area of the spectrum image other than the face area; and perform inverse Fourier transform on the spectrum image carrying the identification information to obtain the face image carrying the identification information.
  • In some embodiments, the computer-readable storage medium is a ROM (Read-Only Memory), a RAM (Random-Access Memory), a CD-ROM (Compact Disc Read-Only Memory), a magnetic tape, a floppy disk, an optical data storage device, or the like.
  • In some embodiments, a computer program or computer program product including at least one piece of program code is also provided, which, when run on a computer device, causes the computer device to execute any possible implementation of the face image transmission method or the value transfer method provided by the foregoing embodiments, which will not be repeated here.
  • the program can be stored in a computer-readable storage medium, as mentioned above.
  • the storage medium can be a read-only memory, a magnetic disk, an optical disk, or the like.


Abstract

A face image transmission method, a value transfer method, an apparatus, an electronic device, and a storage medium, belonging to the field of network technology. In the method, when a camera assembly collects any face image, face image information can be read from a buffer area, the face image information being used to represent the number of all face images that the camera assembly has collected in history (301); identification information is embedded in the face image to obtain a face image carrying the identification information, the identification information being used to represent the face image information; and the face image carrying the identification information is sent (306). This increases the security of the face images collected by the camera assembly and effectively guarantees the security of the face image transmission process.

Description

Face image transmission method, value transfer method, apparatus, and electronic device
The present disclosure claims priority to Chinese Patent Application No. 201911300268.6, filed on December 16, 2019 and entitled "Face image transmission method, value transfer method, apparatus, and electronic device", the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the field of network technology, and in particular to a face image transmission method, a value transfer method, an apparatus, an electronic device, and a storage medium.
Background
With the development of network technology, a user can trigger a value transfer operation based on a terminal. For example, the terminal first verifies, based on face recognition technology, whether the user is the person concerned, and performs the value transfer operation after the verification passes.
At present, after the camera of the terminal collects the user's face image, it directly sends the face image (also called raw data) to the processor of the terminal, which forwards the face image to a server. The server performs face recognition on the face image, generates a recognition result, and sends the recognition result to the terminal, so that when the recognition result is "the person concerned", the subsequent value transfer operation is triggered.
Summary
The embodiments of the present disclosure provide a face image transmission method, a value transfer method, an apparatus, an electronic device, and a storage medium. The technical solutions are as follows:
In one aspect, a face image transmission method is provided, applied to a terminal, the method including:
when a camera assembly collects any face image, reading face image information from a buffer area, the face image information being used to represent the number of all face images that the camera assembly has collected in history;
embedding identification information in the face image to obtain a face image carrying the identification information, the identification information being used to represent the face image information;
sending the face image carrying the identification information.
In one aspect, a value transfer method is provided, applied to a terminal, the method including:
when a trigger operation on a value transfer option is detected, calling a camera assembly to collect a face image;
when the camera assembly collects any face image, reading face image information from a buffer area, the face image information being used to represent the number of all face images that the camera assembly has collected in history;
embedding identification information in the face image to obtain a face image carrying the identification information, the identification information being used to represent the face image information;
performing value transfer based on a camera identification of the camera assembly and the face image carrying the identification information.
In one aspect, a value transfer method is provided, applied to a server, the method including:
receiving a value transfer request, the value transfer request including at least a camera identification, a face image carrying identification information, a value to be transferred, a first user identification, and a second user identification, the identification information being used to indicate that the face image is an image collected in real time;
when the identification information is greater than each historical identification information stored corresponding to the camera identification, performing face recognition on a face area of the face image to obtain a recognition result;
when the recognition result is passed, transferring the value to be transferred from a value stored corresponding to the first user identification to a value stored corresponding to the second user identification.
一方面,提供了一种人脸图像传输装置,应用于终端,该装置包括:
读取模块,用于当摄像头组件采集到任一人脸图像时,从缓存区中读取人脸图像信息,所述人脸图像信息用于表示所述摄像头组件历史采集过的所有人脸图像个数;
嵌入模块,用于在所述人脸图像中嵌入标识信息,得到携带所述标识信息的人脸图像,所述标识信息用于表示所述人脸图像信息;
发送模块,用于发送所述携带所述标识信息的人脸图像。
在一些实施例中,所述读取模块用于:
将所述缓存区的目标列表中最大数值确定为所述人脸图像信息,所述目标列表中存储的各个数值分别对应于一个人脸图像个数。
在一些实施例中,所述装置还用于:
向所述缓存区的所述目标列表中写入所述标识信息与第一目标数值相加所得的数值。
在一些实施例中,所述读取模块用于:
将所述缓存区的目标地址内存储的数值确定为所述人脸图像信息。
在一些实施例中,所述装置还用于:
将所述目标地址内存储的数值置为所述标识信息与第二目标数值相加所得的数值。
在一些实施例中,所述标识信息为所述人脸图像信息与第三目标数值相加所得的数值。
在一些实施例中,所述嵌入模块用于:
在所述人脸图像中除了人脸区域之外的任一区域内嵌入所述标识信息。
在一些实施例中,所述嵌入模块用于:
对所述人脸图像进行傅里叶变换,得到所述人脸图像的频谱图像;
在所述频谱图像中除了人脸区域之外的任一区域内嵌入所述标识信息;
对携带所述标识信息的频谱图像进行逆傅里叶变换,得到携带所述标识信息的人脸图像。
一方面,提供了一种数值转移装置,应用于终端,该装置包括:
采集模块,用于当检测到对数值转移选项的触发操作时,调用摄像头组件采集人脸图像;
读取模块,用于当所述摄像头组件采集到任一人脸图像时,从缓存区中读取人脸图像信息,所述人脸图像信息用于表示所述摄像头组件历史采集过的所有人脸图像个数;
嵌入模块,用于在所述人脸图像中嵌入标识信息,得到携带所述标识信息的人脸图像,所述标识信息用于表示所述人脸图像信息;
数值转移模块,用于基于所述摄像头组件的摄像头标识以及携带所述标识信息的人脸图像,进行数值转移。
在一些实施例中,所述数值转移模块用于:
生成数值转移请求,所述数值转移请求至少包括所述摄像头组件的摄像头标识、携带所述标识信息的人脸图像、待转移数值、第一用户标识以及第二用户标识;
向服务器发送所述数值转移请求，由所述服务器基于所述摄像头组件的摄像头标识以及携带所述标识信息的人脸图像进行身份验证，当验证通过时从所述第一用户标识对应存储的数值中转移所述待转移数值至所述第二用户标识对应存储的数值。
一方面,提供了一种数值转移装置,应用于服务器,该装置包括:
接收模块,用于接收数值转移请求,所述数值转移请求至少包括摄像头标识、携带标识信息的人脸图像、待转移数值、第一用户标识以及第二用户标识,所述标识信息用于表示所述人脸图像为实时采集得到的图像;
识别模块,用于当所述标识信息大于与所述摄像头标识对应存储的各个历史标识信息时,对所述人脸图像的人脸区域进行人脸识别,得到识别结果;
数值转移模块,用于当所述识别结果为通过时,从所述第一用户标识对应存储的数值中转移所述待转移数值至所述第二用户标识对应存储的数值。
一方面,提供了一种电子设备,该电子设备包括一个或多个处理器和一个或多个存储器,该一个或多个存储器中存储有至少一条程序代码,该至少一条程序代码由该一个或多个处理器加载并执行以实现如下操作:
当摄像头组件采集到任一人脸图像时,从缓存区中读取人脸图像信息,所述人脸图像信息用于表示所述摄像头组件历史采集过的所有人脸图像个数;
在所述人脸图像中嵌入标识信息,得到携带所述标识信息的人脸图像,所述标识信息用于表示所述人脸图像信息;
发送所述携带所述标识信息的人脸图像。
一方面,提供了一种电子设备,该电子设备包括一个或多个处理器和一个或多个存储器,该一个或多个存储器中存储有至少一条程序代码,该至少一条程序代码由该一个或多个处理器加载并执行以实现如下操作:
当检测到对数值转移选项的触发操作时,调用摄像头组件采集人脸图像;
当所述摄像头组件采集到任一人脸图像时,从缓存区中读取人脸图像信息,所述人脸图像信息用于表示所述摄像头组件历史采集过的所有人脸图像个数;
在所述人脸图像中嵌入标识信息,得到携带所述标识信息的人脸图像,所述标识信息用于表示所述人脸图像信息;
基于所述摄像头组件的摄像头标识以及携带所述标识信息的人脸图像,进行数值转移。
一方面,提供了一种电子设备,该电子设备包括一个或多个处理器和一个或多个存储器,该一个或多个存储器中存储有至少一条程序代码,该至少一条程序代码由该一个或多个处理器加载并执行以实现如下操作:
接收数值转移请求,所述数值转移请求至少包括摄像头标识、携带标识信息的人脸图像、待转移数值、第一用户标识以及第二用户标识,所述标识信息用于表示所述人脸图像为实时采集得到的图像;
当所述标识信息大于与所述摄像头标识对应存储的各个历史标识信息时,对所述人脸图像的人脸区域进行人脸识别,得到识别结果;
当所述识别结果为通过时,从所述第一用户标识对应存储的数值中转移所述待转移数值至所述第二用户标识对应存储的数值。
一方面,提供了一种存储介质,该存储介质中存储有至少一条程序代码,该至少一条程序代码由处理器加载并执行以实现如上述任一种可能实现方式的人脸图像传输方法或数值转移方法所执行的操作。
本公开实施例提供的技术方案带来的有益效果至少包括:
通过当摄像头组件采集到任一人脸图像时,能够从缓存区中读取人脸图像信息,该人脸图像信息用于表示该摄像头组件历史采集过的所有人脸图像个数,在该人脸图像中嵌入标识信息,得到携带该标识信息的人脸图像,该标识信息用于表示该人脸图像信息,发送该携带该标识信息的人脸图像,从而能够在摄像头组件采集到的人脸图像中直接嵌入标识信息,增加了摄像头组件采集到的人脸图像的安全性,即使发生了人脸图像泄露,攻击者盗用历史人脸图像进行请求相关业务时,仍然会因为标识信息不符而无法通过验证,从而有效地保障了人脸图像传输过程的安全性。
附图说明
为了更清楚地说明本公开实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本公开的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还能够根据这些附图获得其他的附图。
图1是本公开实施例提供的一种人脸图像传输方法的实施环境示意图;
图2是本公开实施例提供的一种终端120的外观示意图;
图3是本公开实施例提供的一种人脸图像传输方法的流程图;
图4是本公开实施例提供的一种数值转移方法的交互流程图;
图5是本公开实施例提供的一种数值转移方法的原理性示意图;
图6是本公开实施例提供的一种人脸图像传输装置的结构示意图;
图7是本公开实施例提供的一种数值转移装置的结构示意图;
图8是本公开实施例提供的一种数值转移装置的结构示意图;
图9是本公开实施例提供的一种终端的结构框图;
图10是本公开实施例提供的一种服务器的结构示意图。
具体实施方式
为使本公开的目的、技术方案和优点更加清楚,下面将结合附图对本公开实施方式作进一步地详细描述。
在相关技术中,由于当前网络攻击方式逐渐多样化,人脸图像有可能会在数据传输过程中发生泄漏,有可能会产生“重放攻击”:终端采集攻击者的人脸图像之后,攻击者在终端向服务器传输图像的过程中发起网络攻击,将攻击者的人脸图像替换为曾经窃取过的有效人脸图像,从而导致该有效人脸图像所对应用户的资金被盗刷,因此,人脸图像传输过程的安全性较差。
图1是本公开实施例提供的一种人脸图像传输方法的实施环境示意图。参见图1,在该实施环境中能够包括终端120和服务器140,上述终端120和服务器140均能够称为一种电子设备。
终端120用于进行人脸图像传输，终端120能够包括摄像头组件122和主机124，摄像头组件122用于采集人脸图像，对采集到的人脸图像嵌入标识信息，再将携带标识信息的人脸图像发送至主机124，主机124能够对携带标识信息的人脸图像进行压缩、加密以及封装，得到数据传输报文，主机124将数据传输报文发送至服务器140。
在一些实施例中,该摄像头组件122是3D(3Dimensions,三维)摄像头组件,3D摄像头组件能够具有人脸识别、手势识别、人体骨架识别、三维测量、环境感知或者三维地图重建等功能,利用3D摄像头组件能够检测出采集到的图像中每个像素点与摄像头之间的距离信息,从而能够判断出当前采集的人脸图像所对应的用户是否为活体,避免攻击者使用他人的相片进行身份验证并盗刷他人的资金进行数值转移。
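上述利用3D摄像头的深度信息判断活体的思路，可以用如下最小化示意代码说明。该代码仅为假设性草图，并非本公开的实际实现：真实的活体检测还需结合红外、纹理、三维结构等多种信号，其中函数名is_live_face、阈值DEPTH_RANGE_MM均为示例取值：

```python
import numpy as np

DEPTH_RANGE_MM = 20.0  # 活体判定所需的最小深度起伏（毫米，示例阈值）

def is_live_face(depth_face: np.ndarray, min_range: float = DEPTH_RANGE_MM) -> bool:
    """利用深度图做最简单的活体判定示意：
    真实人脸的深度起伏明显，而翻拍的相片近似一个平面。"""
    valid = depth_face[depth_face > 0]          # 剔除无效深度点（深度为0表示未测到）
    if valid.size == 0:
        return False
    return float(valid.max() - valid.min()) >= min_range
```

翻拍相片对应的人脸区域深度近似常数，起伏低于阈值，会被直接拒绝。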
在一些实施例中,该摄像头组件122中包括传感器(sensor)1222、处理器1224、存储器1226以及电池1228。在一些实施例中,摄像头组件122还能够具有摄像头标识,该摄像头标识用于唯一标识摄像头组件122,例如,该摄像头标识是摄像头组件122出厂时分配的序列号(Serial Number,SN),该序列号是摄像头组件122的唯一编号。
其中,传感器1222用于采集人脸图像,传感器1222能够设置在摄像头组件122的内部,传感器1222能够为彩色图传感器、深度图传感器或者红外图传感器中至少一项,本公开实施例不对传感器1222的类型进行限定,相应地,传感器1222采集到的人脸图像也能够为彩色图、深度图或者红外图中至少一项,本公开实施例不对人脸图像的类型进行限定。
其中,处理器1224能够用于为传感器1222采集到的人脸图像嵌入标识信息,例如,处理器1224是DSP(Digital Signal Processor,数字信号处理器),DSP是一种独特的微处理器,是一种能够以数字信号来处理大量信息的器件,当然,处理器1224还能够是FPGA(Field Programmable Gate Array,现场可编程门阵列)、PLA(Programmable Logic Array,可编程逻辑阵列)等硬件形式,本公开实施例不对处理器1224的硬件形式进行限定。
其中,存储器1226用于存储人脸图像信息,该人脸图像信息用于表示摄像头组件122历史采集过的所有人脸图像个数,例如,存储器1226是FLASH(flash memory,快闪存储器),或者是磁盘存储设备、高速随机存取(CACHE)存储器等,本公开实施例不对存储器1226的类型进行限定。
其中,电池1228用于为摄像头组件122的各个组成部分进行供电,在这样的情况下,即使终端120的主机124断电,摄像头组件122内部的电池1228仍然能够为存储器1226供电,避免由于断电原因导致存储器1226内的人脸图像信息丢失。
终端120和服务器140之间能够通过有线或无线网络进行连接。
服务器140能够包括一台服务器、多台服务器、云计算平台或者虚拟化中心中的至少一种。服务器140用于为终端120上运行的应用程序提供后台服务,该应用程序能够向用户提供数值转移业务,使得用户能够基于终端120进行数值转移操作。在一些实施例中,服务器140承担主要计算工作,终端120承担次要计算工作;或者,服务器140承担次要计算工作,终端120承担主要计算工作;或者,服务器140和终端120两者之间采用分布式计算架构进行协同计算。
在一些实施例中,服务器140是独立的物理服务器,或者是多个物理服务器构成的服务器集群或者分布式系统,或者是提供云服务、云数据库、云计算、云函数、云存储、网络服务、云通信、中间件服务、域名服务、安全服务、内容分发网络(Content Delivery Network,CDN)以及大数据和人工智能平台等基础云计算服务的云服务器。
在一个示例性场景中,人脸数据传输过程发生在基于人脸识别进行数值转移的过程中,此时终端120俗称为“刷脸支付终端”,刷脸支付终端是指集成了摄像头、能够采集用户人脸 图像之后进行支付的电子设备。第一用户能够在终端120上对数值转移选项执行触发操作,触发终端120调用摄像头组件122实时采集第一用户的人脸数据流,对人脸数据流中任一个图像帧(也即是人脸图像),基于本公开实施例提供的人脸数据传输方法,能够得到携带标识信息的人脸图像,从而终端120能够将携带标识信息的人脸图像、摄像头标识以及数值转移信息封装到数值转移请求中,向服务器140发送该数值转移请求,服务器140基于携带标识信息的人脸图像与摄像头标识对第一用户进行身份验证,一方面通过标识信息能够验证该人脸图像是否为摄像头组件122采集到的最新图像,避免攻击者盗用第一用户的历史人脸图像进行非法支付,一方面通过人脸识别能够验证该人脸图像中人脸区域是否为第一用户本人,从而能够达到双重身份验证,当双重身份验证均通过时,服务器140能够基于数值转移请求中的数值转移信息进行数值转移,该数值转移信息包括第一用户标识、第二用户标识以及待转移数值。
需要说明的是,第一用户和第二用户仅仅是针对某次数值转移过程中不同身份用户的区别称呼,在一些数值转移过程中,可能某一个用户既是第一用户,也是第二用户,也即是该用户从自己的某个账户中将数值转移至另一个自己的账户。当然,对某一个用户而言,也可能在一次数值转移过程中作为第一用户,在另一次数值转移过程中作为第二用户。
图2是本公开实施例提供的一种终端120的外观示意图,参见图2,终端120能够具有显示屏,用户基于显示屏执行交互操作,从而完成基于人脸识别的数值转移操作,在一些实施例中,终端120的设备类型包括智能手机、平板电脑、电子书阅读器、MP3(Moving Picture Experts Group Audio Layer III,动态影像专家压缩标准音频层面3)播放器、MP4(Moving Picture Experts Group Audio Layer IV,动态影像专家压缩标准音频层面4)播放器、膝上型便携计算机或者台式计算机中的至少一种。
在一些实施例中,上述终端120的数量能够更多或更少。比如上述终端120能够仅为一个,或者上述终端120为几十个或几百个,或者更多数量。本公开实施例对终端120的数量和设备类型不加以限定。
图3是本公开实施例提供的一种人脸图像传输方法的流程图。参见图3,该实施例能够应用于上述实施环境中的终端120,下面进行详述:
301、当摄像头组件采集到任一人脸图像时,终端将缓存区的目标列表中最大数值确定为人脸图像信息,该目标列表中存储的各个数值分别对应于一个人脸图像个数。
其中,终端用于进行人脸图像传输,终端能够包括摄像头组件和主机,摄像头组件用于采集人脸图像,对采集到的人脸图像嵌入标识信息,摄像头组件将携带标识信息的人脸图像发送至主机,由主机再将携带标识信息的人脸图像传输至服务器。在一些实施例中,该摄像头组件是3D摄像头组件,从而能够检测出采集到的图像中每个像素点与摄像头之间的距离信息,以判断当前采集的人脸图像所对应的用户是否为活体,避免攻击者使用他人的相片进行身份验证并盗刷他人的资金进行数值转移。
在一些实施例中，该摄像头组件包括传感器、处理器、存储器以及电池，摄像头组件的存储器中能够存储有目标列表，在目标列表中包括多个数值，各个数值分别对应于一个人脸图像个数，每当摄像头组件采集到的人脸图像个数增加时，摄像头组件的处理器能够向目标列表中写入新的数值，删除掉目标列表中时间戳最早的已有数值，实现对目标列表的实时更新。
在一些实施例中,当人脸图像个数为[500,501,502,…]时,目标列表中各个数值能够与人脸图像个数相等,此时目标列表即为[500,501,502,…],或者,目标列表中各个数值还能够是人脸图像个数的N倍(N>1),此时目标列表即为[500N,501N,502N,…],当然,目标列表中各个数值还能够是人脸图像个数经过指数变换、对数变换等方式得到的数值,本公开实施例不对人脸图像个数与目标列表中存储的各个数值之间的变换方式进行限定。
在上述情况中,终端操作系统能够向摄像头组件下发采集指令,摄像头组件响应于该采集指令,通过摄像头组件的传感器采集人脸图像,该传感器能够实时采集到用户的人脸数据流,对于人脸数据流中任一人脸图像(也即是任一图像帧),传感器将采集到的人脸图像发送至摄像头组件的处理器,该处理器从缓存区中确定目标列表,查询目标列表中的最大数值,将该最大数值确定为人脸图像信息。
在一些实施例中,不同类型的传感器能够采集到不同类型的人脸图像,比如,红外图传感器采集到的人脸图像为红外图,深度图传感器采集到的人脸图像为深度图,彩色图传感器采集到的人脸图像为彩色图,本公开实施例不对传感器以及人脸图像的类型进行限定。
在上述过程中,由于摄像头组件通过内部的处理器和存储器,维护了一个随着采集过的人脸图像个数的增加而更新最大数值的目标列表,当摄像头组件采集过的人脸图像个数越多时,目标列表中的最大数值就越大,导致人脸图像信息的取值就越大,使得摄像头组件通过人脸图像信息实现了对采集过的所有人脸图像个数的计数。
在上述步骤301中,终端从缓存区中读取人脸图像信息,该人脸图像信息用于表示该摄像头组件历史采集过的所有人脸图像个数,在一些实施例中,终端也能够不以目标列表的形式存储人脸图像信息,而是以堆栈、数组、元组等数据格式来存储人脸图像信息,本公开实施例不对人脸图像信息在缓存区的数据格式进行限定。
302、终端对该人脸图像进行傅里叶变换,得到该人脸图像的频谱图像。
在上述过程中,终端能够对该人脸图像进行DCT(Discrete Cosine Transform,离散余弦变换)处理,得到人脸图像的频谱图像,DCT处理是一种离散傅里叶变换方式,主要用于将数据或图像进行压缩,从而能够将空域的信号转换到频域上,具有良好的去相关性的性能。
303、终端在该频谱图像中除了人脸区域之外的任一区域内嵌入标识信息。
其中,该标识信息用于表示人脸图像信息。在一些实施例中,该标识信息能够为该人脸图像信息与第三目标数值相加所得的数值,其中,该第三目标数值能够为任一大于或等于0的数值。在一些实施例中,该标识信息还能够为人脸图像信息与第四目标数值相乘所得的数值,其中,第四目标数值能够为任一大于或等于1的数值,第四目标数值能够与第三目标数值相同,也能够不同。在一些实施例中,该标识信息还能够为人脸图像信息经过单向加密之后所得的数值,本公开实施例不对人脸图像信息与标识信息之间的变换方式进行限定。
由于人脸区域通常位于人脸图像的中央,因此,除了人脸区域之外的任一区域能够是人脸图像的左上角、左下角、右上角或者右下角中至少一项,比如,将该任一区域确定为人脸图像的左上角,本公开实施例不对标识信息的嵌入区域所在位置进行限定。
当第三目标数值为零时，该标识信息与该人脸图像信息相同，终端能够在频谱图像中除了人脸区域之外的任一区域内嵌入该人脸图像信息。当第三目标数值大于0时，终端能够将上述步骤301中确定的人脸图像信息与第三目标数值相加，得到该标识信息，再在频谱图像中除了人脸区域之外的任一区域内嵌入该标识信息。
在上述过程中,通过在人脸图像中人脸区域之外的任一区域嵌入标识信息,能够防止破坏掉人脸图像中人脸区域的像素信息,也就避免了因遮挡人脸区域而造成无法进行人脸识别的问题,能够优化人脸图像传输过程的处理逻辑。
304、终端对携带该标识信息的频谱图像进行逆傅里叶变换,得到携带该标识信息的人脸图像。
在上述过程中,终端在频谱图像中嵌入标识信息之后,能够对携带该标识信息的频谱图像进行DCT逆变换,将携带标识信息的频谱图像再从频域转换回空域,这样能够保证用户无法感知到人脸图像中嵌入的标识信息,这种用户感知不到的标识信息能够俗称为“盲水印”或“数字水印”,用户既看不到盲水印也听不见盲水印,但当终端将携带盲水印的人脸图像发送至服务器时,服务器能够解析出人脸图像中携带的盲水印。
在上述步骤302-304中,终端能够在人脸图像中除了人脸区域之外的任一区域内嵌入标识信息。当然,终端也能够在人脸图像的人脸区域中嵌入标识信息,本公开实施例不对标识信息的嵌入区域是否为人脸区域进行限定。
在上述过程中,终端在该人脸图像中嵌入标识信息,得到携带该标识信息的人脸图像,该标识信息用于表示该人脸图像信息,能够针对摄像头组件采集到的人脸数据源(也即是任一张人脸图像),通过嵌入盲水印的方式进行安全防范,这样即使在摄像头组件与主机之间或者主机与服务器之间传输人脸图像时遭受网络攻击,攻击者将窃取的本次采集的人脸图像替换成欲盗取用户的历史人脸图像时,服务器能够及时识别出历史人脸图像的标识信息已过期,并非是最新时刻实时采集到的人脸图像,从而确定对该人脸图像验证失败,大大提升了基于人脸图像进行验证的各项业务的安全性。
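上述步骤302至304的“DCT变换、频谱嵌入、逆变换”流程可概括为如下示意性代码。该代码仅为最小化草图，并非本公开的实际实现：以正交DCT-II配合量化索引调制（QIM）在频谱图像左上角低频区（避开直流系数）嵌入标识信息，其中STEP、n_bits以及系数位置均为假设取值，真实系统还需考虑图像量化、压缩等对水印鲁棒性的影响：

```python
import numpy as np

STEP = 16.0  # 量化步长（示例取值）

def _dct_matrix(n: int) -> np.ndarray:
    """正交DCT-II变换矩阵。"""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    mat = np.cos(np.pi * (2 * x + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    mat[0, :] = np.sqrt(1.0 / n)
    return mat

def embed_id(img: np.ndarray, ident: int, n_bits: int = 32) -> np.ndarray:
    """对人脸图像做DCT，在频谱图像左上角低频区以QIM方式逐比特嵌入标识信息，
    再做逆DCT得到携带盲水印的人脸图像。"""
    n, m = img.shape
    cr, cc = _dct_matrix(n), _dct_matrix(m)
    coeffs = cr @ img.astype(np.float64) @ cc.T
    for i in range(n_bits):
        b = (ident >> i) & 1
        r, c = 1 + i // 8, 1 + i % 8            # 避开(0,0)处的直流系数
        q = int(np.floor(coeffs[r, c] / STEP))
        if q % 2 != b:                          # 调整到与比特同奇偶的量化格点
            q += 1
        coeffs[r, c] = q * STEP + STEP / 2      # 落在量化区间中央，留出容错余量
    return cr.T @ coeffs @ cc                   # 逆DCT，回到空域

def extract_id(img: np.ndarray, n_bits: int = 32) -> int:
    """提取端重复同样的DCT，按系数所在量化区间的奇偶还原各比特。"""
    n, m = img.shape
    cr, cc = _dct_matrix(n), _dct_matrix(m)
    coeffs = cr @ img.astype(np.float64) @ cc.T
    ident = 0
    for i in range(n_bits):
        r, c = 1 + i // 8, 1 + i % 8
        if int(np.floor(coeffs[r, c] / STEP)) % 2 == 1:
            ident |= 1 << i
    return ident
```

提取过程无需原始图像，仅靠携带水印的图像即可还原标识信息，这正是“盲”水印之意。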
305、终端向该缓存区的该目标列表中写入该标识信息与第一目标数值相加所得的数值。
其中,该第一目标数值为任一大于或等于0的数值,例如,第一目标数值为1。
在上述过程中,终端通过摄像头组件的处理器在人脸图像中嵌入标识信息之后,还能够向目标列表中写入标识信息与第一目标数值相加所得的数值,由于人脸图像信息为目标列表中原有的最大数值,而标识信息大于或等于人脸图像信息,本次写入的数值又大于或等于标识信息,也即是说,本次写入的数值一定大于或等于目标列表中原有的最大数值,从而能够对目标列表中的最大数值进行更新,保持摄像头组件的存储器中存储的目标列表中最大数值随着采集过的人脸图像个数的增加而递增。
在一个示例性场景中，摄像头组件的传感器为SENSOR，摄像头组件的处理器为DSP，摄像头组件的存储器为FLASH，FLASH中存储的目标列表中各个数值恰好为各个人脸图像个数，此时认为FLASH对已采集的人脸图像个数进行实时更新的计数，DSP从SENSOR获取人脸数据流，对每一帧人脸图像（也即是人脸数据流的任一图像帧），DSP从FLASH中存储的目标列表中读取最大人脸图像个数（也即是当前最新计数），假设第三目标数值为1，DSP将最大人脸图像个数加一所得的数值确定为标识信息，将该标识信息嵌入到上述一帧人脸图像中，假设第一目标数值为0，那么DSP直接将该标识信息写入到目标列表中。
例如，FLASH的目标列表中当前最新计数为1000，说明摄像头组件历史采集过的所有人脸图像个数为1000个，当SENSOR采集到第1001个人脸图像时，DSP从FLASH的目标列表中读取当前最新计数1000，确定标识信息为1001，在上述第1001个人脸图像中以盲水印的方式嵌入标识信息1001，再将标识信息1001写回到目标列表中，使得DSP在下一次进行当前最新计数的读取时，能够读取到1001，保证了DSP每次从FLASH中均能够读取到递增的计数。
在一个示例性场景中,假设FLASH中存储的目标列表中各个数值恰好为各个人脸图像个数加一所得的数值,那么DSP从SENSOR获取人脸数据流之后,对每一帧人脸图像,DSP从FLASH中存储的目标列表中读取到的当前最新计数则为最大人脸图像个数加一所得的数值,假设第三目标数值为0,那么DSP将该最大人脸图像个数加一所得的数值确定为标识信息,将该标识信息嵌入到上述一帧人脸图像中,假设第一目标数值为1,那么DSP将标识信息加一所得的数值写入目标列表中。
例如,FLASH的目标列表中当前最新计数为1000,说明摄像头组件历史采集过的所有人脸图像个数为999个,当SENSOR采集到第1000个人脸图像时,DSP从FLASH的目标列表中读取当前最新计数1000,确定标识信息为1000,在上述第1000个人脸图像中以盲水印的方式嵌入标识信息1000,再将标识信息1000加一所得的数值1001写回到目标列表中,使得DSP在下一次进行当前最新计数的读取时,能够读取到1001,保证了DSP每次从FLASH中均能够读取到递增的计数。
在一些实施例中,终端也能够在摄像头组件的存储器中不以目标列表的形式来存储人脸图像信息,而是直接在目标地址中存储人脸图像信息,从而能够节约摄像头组件的存储器的存储空间。
在这种情况中,上述步骤301能够采用下述步骤进行替换:终端将缓存区的目标地址内存储的数值确定为人脸图像信息,也即是说,终端通过摄像头组件的处理器读取摄像头组件的存储器中的目标地址,将目标地址内存储的数值确定为人脸图像信息。
在上述基础上,上述步骤305能够采用下述步骤进行替换:终端将目标地址内存储的数值置为标识信息与第二目标数值相加所得的数值,其中,第二目标数值为任一大于或等于0的数值,例如,第二目标数值为1,第二目标数值与第一目标数值相同,或者不同,本公开实施例不对第二目标数值或第一目标数值的取值进行限定。
在上述过程中,终端无需在摄像头组件的存储器中维护一个开销较大的目标列表,而是仅在目标地址中存储当前已采集过的所有人脸图像个数即可,能够减少存储器所占用的资源开销,从而不仅能够对目标地址中存储的人脸图像信息进行实时更新,而且能够提升摄像头组件的处理效率。
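上述“读取计数、生成标识信息、写回递增数值”的维护逻辑，可用如下示意性代码概括。该代码仅为假设性草图，并非摄像头组件固件的真实实现，其中类名CameraCounter、列表容量等均为示例：

```python
from collections import deque

class CameraCounter:
    """模拟摄像头组件存储器中的目标列表：各数值分别对应一个人脸图像个数，
    定长滚动，最早写入的数值先被淘汰。"""

    def __init__(self, capacity: int = 16, initial: int = 0):
        self.target_list = deque([initial], maxlen=capacity)

    def next_identifier(self, first_target: int = 0, third_target: int = 1) -> int:
        """每采集一帧人脸图像调用一次，返回要嵌入的标识信息。"""
        face_info = max(self.target_list)          # 步骤301：目标列表中最大数值为人脸图像信息
        identifier = face_info + third_target      # 标识信息 = 人脸图像信息 + 第三目标数值
        self.target_list.append(identifier + first_target)  # 步骤305：写回递增数值
        return identifier
```

无论旧计数是否被淘汰，目标列表中的最大数值只增不减，因此每帧人脸图像得到的标识信息严格递增；将deque换成单个整型变量，即对应上文“目标地址”形式的实现。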
306、终端发送携带该标识信息的人脸图像。
在上述过程中,终端能够单独发送携带该标识信息的人脸图像,也能够将携带该标识信息的人脸图像与其他信息一起封装为一个业务请求,从而发送该业务请求,该业务请求能够对应于不同的业务类型,例如身份验证业务、数值转移业务等,本公开实施例不对业务请求的业务类型进行限定。
在一个示例性场景中,终端能够将摄像头组件的摄像头标识和携带该标识信息的人脸图像封装为身份验证请求,从而向服务器发送该身份验证请求。
在一个示例性场景中,终端还能够将摄像头组件的摄像头标识、携带该标识信息的人脸图像以及数值转移信息封装为数值转移请求,从而向服务器发送该数值转移请求,其中,数值转移信息至少包括第一用户标识、第二用户标识以及待转移数值。
上述所有可选技术方案,能够采用任意结合形成本公开的可选实施例,在此不再一一赘述。
本公开实施例提供的方法,通过当摄像头组件采集到任一人脸图像时,能够从缓存区中读取人脸图像信息,该人脸图像信息用于表示该摄像头组件历史采集过的所有人脸图像个数,在该人脸图像中嵌入标识信息,得到携带该标识信息的人脸图像,该标识信息用于表示该人脸图像信息,发送该携带该标识信息的人脸图像,从而能够在摄像头组件采集到的人脸图像中直接嵌入标识信息,增加了摄像头组件采集到的人脸图像的安全性,即使发生了人脸图像泄露,攻击者盗用历史人脸图像进行请求相关业务时,仍然会因为标识信息不符而无法通过验证,从而有效地保障了人脸图像传输过程的安全性。
上述实施例所提供的人脸图像传输方法，能够保证摄像头组件采集到的人脸数据源的安全性，通过标识信息对人脸图像的采集时机进行唯一标识，能够有效保证每次采集的人脸图像只能使用一次，各个用户无需对终端的主机进行硬件升级或者系统改造，对主机本身的配置也没有强制要求，各个用户仅需要接入本公开实施例提供的摄像头组件即可保证人脸数据源的安全性，大大降低了维护人脸数据源安全性的门槛，具有较高的可移植性以及可用性。该人脸图像传输方法能够应用于各类依赖于人脸图像的业务场景中，在本公开实施例中，以基于人脸图像进行身份验证从而完成数值转移业务的过程为例进行说明，上述过程简称为人脸支付场景或者刷脸支付场景，下面进行详述。
图4是本公开实施例提供的一种数值转移方法的交互流程图,参见图4,该实施例应用于上述实施环境中终端120以及服务器140之间的交互过程,该实施例包括下述步骤:
401、当检测到第一用户对数值转移选项的触发操作时,终端调用摄像头组件采集人脸图像。
在上述过程中,终端能够是第一用户的个人终端,或者是设置在第二用户所在店铺内的“刷脸支付终端”,刷脸支付终端是指集成了摄像头、能够采集用户人脸图像之后进行支付的电子设备,本公开实施例不对终端的设备类型进行限定。
需要说明的是,第一用户和第二用户仅仅是针对某次数值转移过程中不同身份用户的区别称呼,在一些数值转移过程中,可能某一个用户既是第一用户,也是第二用户,也即是该用户从自己的某个账户中将数值转移至另一个自己的账户。当然,对某一个用户而言,也可能在一次数值转移过程中作为第一用户,在另一次数值转移过程中作为第二用户。
在上述步骤401中,第一用户在需要进行数值转移时,触发终端在显示屏上显示支付界面,在该支付界面中能够包括数值转移信息和数值转移选项,第一用户能够在核对数值转移信息之后,对数值转移选项执行触发操作,当检测到第一用户对数值转移选项的触发操作时,终端操作系统向摄像头组件下发采集指令,调用摄像头组件采集第一用户的人脸图像。
在一些实施例中,上述该数值转移信息至少能够包括第一用户标识、第二用户标识以及待转移数值,当然,该数值转移信息还能够包括交易物品信息、折扣信息、交易时间戳等。
402、当该摄像头组件采集到任一人脸图像时,终端从缓存区中读取人脸图像信息,该人脸图像信息用于表示该摄像头组件历史采集过的所有人脸图像个数。
上述步骤402与上述步骤301类似,这里不做赘述。
403、终端在该人脸图像中嵌入标识信息，得到携带该标识信息的人脸图像，该标识信息用于表示该人脸图像信息。
上述步骤403与上述步骤302-304类似,这里不做赘述。
404、终端生成数值转移请求,该数值转移请求至少包括该摄像头组件的摄像头标识、携带该标识信息的人脸图像、待转移数值、第一用户标识以及第二用户标识。
在上述过程中,终端能够采用压缩算法对摄像头组件的摄像头标识、携带标识信息的人脸图像、待转移数值、第一用户标识以及第二用户标识进行压缩,得到压缩信息,采用加密算法对该压缩信息进行加密,得到密文信息,采用传输协议对该密文信息进行封装,得到数值转移请求。
在一些实施例中,该压缩算法能够包括br压缩算法、gzip压缩算法或者霍夫曼压缩算法中至少一项,该加密算法能够包括对称加密算法或者非对称加密算法中至少一项,例如消息摘要算法、DH(Diffie-Hellman)密钥交换算法等,该传输协议能够包括IP协议(Internet Protocol,网际互连协议)、TCP协议(Transmission Control Protocol,传输控制协议)或者UDP协议(User Datagram Protocol,用户数据报协议)中至少一项,本公开实施例不对压缩算法、加密算法以及传输协议的类型进行限定。
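上述“压缩、加密、封装”三步可示意如下。该代码仅为草图：压缩以zlib代表gzip一类算法，加密环节以可插拔的encrypt占位函数示意（实际系统应替换为上文所述对称或非对称加密算法），报文封装简化为带4字节长度前缀的字节流，均为假设性选择，并非本公开限定的实现：

```python
import json
import struct
import zlib
from typing import Callable

def build_transfer_request(payload: dict,
                           encrypt: Callable[[bytes], bytes] = lambda b: b) -> bytes:
    """压缩 -> 加密 -> 封装：返回数值转移请求报文。"""
    compressed = zlib.compress(json.dumps(payload).encode("utf-8"))
    ciphertext = encrypt(compressed)              # 占位：此处应接入真实加密算法
    return struct.pack(">I", len(ciphertext)) + ciphertext

def parse_transfer_request(packet: bytes,
                           decrypt: Callable[[bytes], bytes] = lambda b: b) -> dict:
    """服务器侧：解封装 -> 解密 -> 解压缩，还原请求字段。"""
    (length,) = struct.unpack(">I", packet[:4])
    compressed = decrypt(packet[4:4 + length])
    return json.loads(zlib.decompress(compressed).decode("utf-8"))
```

封装与解析互为逆过程，服务器按相反顺序处理即可还原摄像头标识、待转移数值等字段。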
405、终端向服务器发送该数值转移请求。
终端发送数值转移请求之后,能够由该服务器基于该摄像头组件的摄像头标识以及携带该标识信息的人脸图像进行身份验证,当验证通过时从该第一用户标识对应存储的数值中转移该待转移数值至该第二用户标识对应存储的数值,实现过程将在下述步骤406-408中进行详述。
406、服务器接收该数值转移请求。
其中,该数值转移请求至少包括摄像头标识、携带标识信息的人脸图像、待转移数值、第一用户标识以及第二用户标识,该标识信息用于表示该人脸图像为实时采集得到的图像。
在上述过程中服务器在接收数值转移请求之后,能够解析该数值转移请求,得到密文信息,采用解密算法对该密文信息进行解密,得到压缩信息,对该压缩信息进行解压缩,得到上述摄像头标识、携带标识信息的人脸图像、待转移数值、第一用户标识以及第二用户标识。这里采用的解密算法、解压缩算法分别与上述步骤404中的加密算法、压缩算法对应,这里不做赘述。
407、当该标识信息大于与该摄像头标识对应存储的各个历史标识信息时,服务器对该人脸图像的人脸区域进行人脸识别,得到识别结果。
在上述过程中，服务器将各个摄像头标识所对应的各个人脸图像的标识信息分别对应存储，在一些实施例中，服务器能够在后台维护各个摄像头标识所对应的标识信息序列，将对应于同一摄像头标识的人脸图像的各个历史标识信息存储到同一序列中，从而当接收到携带该摄像头标识的任一数值转移请求时，获取该数值转移请求中携带该标识信息的人脸图像，从该人脸图像中提取该标识信息，服务器能够查询与该摄像头标识对应的标识信息序列中的最大历史数值，若提取出的标识信息大于上述最大历史数值，那么确定该人脸图像合法，也即是该人脸图像并非为盗用的历史人脸图像，服务器对该人脸图像的人脸区域进行人脸识别，得到识别结果，否则，若提取出的标识信息小于或等于上述最大历史数值，那么确定该人脸图像非法，也即是该人脸图像为盗用的历史人脸图像，服务器能够向终端发送验证失败信息。
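服务器侧“标识信息必须大于已存储的各个历史标识信息”的校验，可示意为按摄像头标识维护一个最大历史数值。以下为假设性草图，用内存字典代替真实的后台存储，类名ReplayGuard为示例命名：

```python
class ReplayGuard:
    """按摄像头标识记录已见过的最大标识信息，拒绝重放的历史人脸图像。"""

    def __init__(self):
        self._max_seen: dict[str, int] = {}

    def check_and_record(self, camera_id: str, identifier: int) -> bool:
        """标识信息严格大于该摄像头的历史最大值时合法并更新记录，否则判定为重放。"""
        if identifier <= self._max_seen.get(camera_id, -1):
            return False
        self._max_seen[camera_id] = identifier
        return True
```

只有该校验通过后，服务器才继续对人脸区域进行人脸识别。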
在一些实施例中，在进行人脸识别的过程中，能够将该人脸图像输入人脸相似度模型，通过人脸相似度模型预测该人脸图像与第一用户的预存图像之间的相似度，若相似度高于或等于目标阈值，确定该人脸图像的识别结果为通过，执行下述步骤408，否则，若相似度低于目标阈值，确定该人脸图像的识别结果为不通过，服务器能够向终端发送验证失败信息。
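其中“相似度高于或等于目标阈值则识别通过”的判定可示意如下。假设人脸相似度模型输出特征向量，以余弦相似度比较，阈值0.8仅为示例取值，并非本公开限定：

```python
import numpy as np

TARGET_THRESHOLD = 0.8  # 目标阈值（示例取值）

def face_passes(feature: np.ndarray, enrolled: np.ndarray,
                threshold: float = TARGET_THRESHOLD) -> bool:
    """特征向量间余弦相似度 >= 阈值时，识别结果为“通过”。"""
    sim = float(np.dot(feature, enrolled) /
                (np.linalg.norm(feature) * np.linalg.norm(enrolled)))
    return sim >= threshold
```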
408、当该识别结果为通过时,服务器从该第一用户标识对应存储的数值中转移该待转移数值至该第二用户标识对应存储的数值。
在上述过程中,如果既对标识信息验证通过,又对人脸区域验证通过,那么服务器执行上述从第一用户标识对应存储的数值中转移待转移数值至第二用户对应存储的数值的操作,使得第一用户完成向第二用户转移该待转移数值。在上述步骤404-408中,服务器基于该摄像头组件的摄像头标识以及携带该标识信息的人脸图像,进行数值转移,在一些实施例中,当数值转移完毕,服务器还能够向终端发送转移成功信息,以通知终端数值转移操作已经成功执行。
上述所有可选技术方案,能够采用任意结合形成本公开的可选实施例,在此不再一一赘述。
本公开实施例提供的方法,当检测到对数值转移选项的触发操作时,终端调用摄像头组件采集人脸图像,当该摄像头组件采集到任一人脸图像时,从缓存区中读取人脸图像信息,该人脸图像信息用于表示该摄像头组件历史采集过的所有人脸图像个数,在该人脸图像中嵌入标识信息,得到携带该标识信息的人脸图像,该标识信息用于表示该人脸图像信息,基于该摄像头组件的摄像头标识以及携带该标识信息的人脸图像,进行数值转移,能够在数值转移过程中,直接向摄像头组件采集到的人脸图像中嵌入标识信息,增加了摄像头组件采集到的人脸图像的安全性,即使发生了人脸图像泄露,攻击者盗用历史人脸图像进行请求数值转移业务时,仍然会因为标识信息不符而无法通过验证,从而有效地保障了数值转移过程的安全性。
进一步地,服务器接收数值转移请求,该数值转移请求至少包括摄像头标识、携带标识信息的人脸图像、待转移数值、第一用户标识以及第二用户标识,该标识信息用于表示该人脸图像为实时采集得到的图像,当该标识信息大于与该摄像头标识对应存储的各个历史标识信息时,对该人脸图像的人脸区域进行人脸识别,得到识别结果,当该识别结果为通过时,从该第一用户标识对应存储的数值中转移该待转移数值至该第二用户标识对应存储的数值,服务器通过将标识信息与已存储的各个历史标识信息进行比大小,能够判断出携带标识信息的人脸图像是否为盗用的历史人脸图像,从而增加了一种时间维度上对人脸图像进行验证的方式,能够应对更加复杂的网络攻击,即使发生了人脸图像泄露,攻击者盗用历史人脸图像进行请求数值转移业务时,仍然会因为标识信息不符而无法通过验证,从而有效地保障了数值转移过程的安全性。
在一个示例性场景中，图5是本公开实施例提供的一种数值转移方法的原理性示意图，请参考图5，随着人脸支付的普及，越来越多的商户接入了人脸支付服务，随着用户量的陡然增加，人脸支付的安全性就越发重要。本公开实施例提供的数值转移方法，通过对刷脸支付终端的摄像头组件进行内部改进，能够在采集人脸数据源的摄像头组件上增加安全防范手段，无需在主机侧进行硬件升级，严格保障了人脸数据源的安全性，能够有效抵御人脸数据的“重放攻击”。在一些实施例中，终端的摄像头组件在FLASH中存储人脸图像信息，DSP在从SENSOR（传感器）中获取采集到的人脸数据流之后，能够从FLASH读取人脸图像计数（也即是人脸图像信息），在人脸数据流的每一帧人脸图像上均嵌入盲水印（也即是标识信息），该盲水印用于表示人脸图像计数，DSP将携带盲水印的人脸图像发送至主机，主机将携带盲水印的人脸图像发送至服务器，服务器在后台为该摄像头组件的摄像头标识对应存储了一个递增序列，该递增序列中各个历史盲水印（也即是历史人脸图像计数）随着时间顺序而递增，只有当本次传输的人脸图像所携带的盲水印大于递增序列中所有的历史盲水印时，才认为本次传输的人脸图像合法，否则认为本次传输的人脸图像无效，从而通过上述过程校验序列有效性，保证了人脸图像传输过程的安全性，避免了人脸图像的“重放攻击”，进一步地，当确认本次传输的人脸图像合法时，才对人脸图像进行人脸识别，当人脸识别的识别结果为通过时，才进行数值转移操作，从而也保障了数值转移过程的安全性。
图6是本公开实施例提供的一种人脸图像传输装置的结构示意图,请参考图6,该装置包括:
读取模块601,用于当摄像头组件采集到任一人脸图像时,从缓存区中读取人脸图像信息,该人脸图像信息用于表示该摄像头组件历史采集过的所有人脸图像个数;
嵌入模块602,用于在该人脸图像中嵌入标识信息,得到携带该标识信息的人脸图像,该标识信息用于表示该人脸图像信息;
发送模块603,用于发送该携带该标识信息的人脸图像。
本公开实施例提供的装置,通过当摄像头组件采集到任一人脸图像时,能够从缓存区中读取人脸图像信息,该人脸图像信息用于表示该摄像头组件历史采集过的所有人脸图像个数,在该人脸图像中嵌入标识信息,得到携带该标识信息的人脸图像,该标识信息用于表示该人脸图像信息,发送该携带该标识信息的人脸图像,从而能够在摄像头组件采集到的人脸图像中直接嵌入标识信息,增加了摄像头组件采集到的人脸图像的安全性,即使发生了人脸图像泄露,攻击者盗用历史人脸图像进行请求相关业务时,仍然会因为标识信息不符而无法通过验证,从而有效地保障了人脸图像传输过程的安全性。
在一些实施例中,该读取模块601用于:
将该缓存区的目标列表中最大数值确定为该人脸图像信息,该目标列表中存储的各个数值分别对应于一个人脸图像个数。
在一些实施例中,该装置还用于:
向该缓存区的该目标列表中写入该标识信息与第一目标数值相加所得的数值。
在一些实施例中,该读取模块601用于:
将该缓存区的目标地址内存储的数值确定为该人脸图像信息。
在一些实施例中,该装置还用于:
将该目标地址内存储的数值置为该标识信息与第二目标数值相加所得的数值。
在一些实施例中,该标识信息为该人脸图像信息与第三目标数值相加所得的数值。
在一些实施例中,该嵌入模块602用于:
在该人脸图像中除了人脸区域之外的任一区域内嵌入该标识信息。
在一些实施例中,该嵌入模块602用于:
对该人脸图像进行傅里叶变换,得到该人脸图像的频谱图像;
在该频谱图像中除了人脸区域之外的任一区域内嵌入该标识信息;
对携带该标识信息的频谱图像进行逆傅里叶变换,得到携带该标识信息的人脸图像。
上述所有可选技术方案,能够采用任意结合形成本公开的可选实施例,在此不再一一赘述。
需要说明的是:上述实施例提供的人脸图像传输装置在传输人脸图像时,仅以上述各功能模块的划分进行举例说明,实际应用中,能够根据需要而将上述功能分配由不同的功能模块完成,即将电子设备(例如终端)的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。另外,上述实施例提供的人脸图像传输装置与人脸图像传输方法实施例属于同一构思,其实现过程详见人脸图像传输方法实施例,这里不再赘述。
图7是本公开实施例提供的一种数值转移装置的结构示意图,请参考图7,该装置包括:
采集模块701,用于当检测到对数值转移选项的触发操作时,调用摄像头组件采集人脸图像;
读取模块702,用于当该摄像头组件采集到任一人脸图像时,从缓存区中读取人脸图像信息,该人脸图像信息用于表示该摄像头组件历史采集过的所有人脸图像个数;
嵌入模块703,用于在该人脸图像中嵌入标识信息,得到携带该标识信息的人脸图像,该标识信息用于表示该人脸图像信息;
数值转移模块704,用于基于该摄像头组件的摄像头标识以及携带该标识信息的人脸图像,进行数值转移。
本公开实施例提供的装置,当检测到对数值转移选项的触发操作时,终端调用摄像头组件采集人脸图像,当该摄像头组件采集到任一人脸图像时,从缓存区中读取人脸图像信息,该人脸图像信息用于表示该摄像头组件历史采集过的所有人脸图像个数,在该人脸图像中嵌入标识信息,得到携带该标识信息的人脸图像,该标识信息用于表示该人脸图像信息,基于该摄像头组件的摄像头标识以及携带该标识信息的人脸图像,进行数值转移,能够在数值转移过程中,直接向摄像头组件采集到的人脸图像中嵌入标识信息,增加了摄像头组件采集到的人脸图像的安全性,即使发生了人脸图像泄露,攻击者盗用历史人脸图像进行请求数值转移业务时,仍然会因为标识信息不符而无法通过验证,从而有效地保障了数值转移过程的安全性。
在一些实施例中,该数值转移模块704用于:
生成数值转移请求,该数值转移请求至少包括该摄像头组件的摄像头标识、携带该标识信息的人脸图像、待转移数值、第一用户标识以及第二用户标识;
向服务器发送该数值转移请求,由该服务器基于该摄像头组件的摄像头标识以及携带该标识信息的人脸图像进行身份验证,当验证通过时从该第一用户标识对应存储的数值中转移该待转移数值至该第二用户标识对应存储的数值。
上述所有可选技术方案,能够采用任意结合形成本公开的可选实施例,在此不再一一赘述。
需要说明的是：上述实施例提供的数值转移装置在转移数值时，仅以上述各功能模块的划分进行举例说明，实际应用中，能够根据需要而将上述功能分配由不同的功能模块完成，即将电子设备（例如终端）的内部结构划分成不同的功能模块，以完成以上描述的全部或者部分功能。另外，上述实施例提供的数值转移装置与数值转移方法实施例属于同一构思，其实现过程详见数值转移方法实施例，这里不再赘述。
图8是本公开实施例提供的一种数值转移装置的结构示意图,请参考图8,该装置包括:
接收模块801,用于接收数值转移请求,该数值转移请求至少包括摄像头标识、携带标识信息的人脸图像、待转移数值、第一用户标识以及第二用户标识,该标识信息用于表示该人脸图像为实时采集得到的图像;
识别模块802,用于当该标识信息大于与该摄像头标识对应存储的各个历史标识信息时,对该人脸图像的人脸区域进行人脸识别,得到识别结果;
数值转移模块803,用于当该识别结果为通过时,从该第一用户标识对应存储的数值中转移该待转移数值至该第二用户标识对应存储的数值。
本公开实施例提供的装置,通过接收数值转移请求,该数值转移请求至少包括摄像头标识、携带标识信息的人脸图像、待转移数值、第一用户标识以及第二用户标识,该标识信息用于表示该人脸图像为实时采集得到的图像,当该标识信息大于与该摄像头标识对应存储的各个历史标识信息时,对该人脸图像的人脸区域进行人脸识别,得到识别结果,当该识别结果为通过时,从该第一用户标识对应存储的数值中转移该待转移数值至该第二用户标识对应存储的数值,通过将标识信息与已存储的各个历史标识信息进行比大小,能够判断出携带标识信息的人脸图像是否为盗用的历史人脸图像,从而增加了一种时间维度上对人脸图像进行验证的方式,能够应对更加复杂的网络攻击,即使发生了人脸图像泄露,攻击者盗用历史人脸图像进行请求数值转移业务时,仍然会因为标识信息不符而无法通过验证,从而有效地保障了数值转移过程的安全性。
需要说明的是:上述实施例提供的数值转移装置在转移数值时,仅以上述各功能模块的划分进行举例说明,实际应用中,能够根据需要而将上述功能分配由不同的功能模块完成,即将电子设备(例如服务器)的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。另外,上述实施例提供的数值转移装置与数值转移方法实施例属于同一构思,其实现过程详见数值转移方法实施例,这里不再赘述。
图9是本公开实施例提供的终端900的结构框图。该终端900也即是一种电子设备,该终端900是:智能手机、平板电脑、MP3播放器(Moving Picture Experts Group Audio Layer III,动态影像专家压缩标准音频层面3)、MP4(Moving Picture Experts Group Audio Layer IV,动态影像专家压缩标准音频层面4)播放器、笔记本电脑或台式电脑。终端900还可能被称为用户设备、便携式终端、膝上型终端、台式终端等其他名称。
通常,终端900包括有:处理器901和存储器902。
处理器901能够包括一个或多个处理核心，比如4核心处理器、8核心处理器等。处理器901能够采用DSP（Digital Signal Processing，数字信号处理）、FPGA（Field Programmable Gate Array，现场可编程门阵列）、PLA（Programmable Logic Array，可编程逻辑阵列）中的至少一种硬件形式来实现。处理器901也能够包括主处理器和协处理器，主处理器是用于对在唤醒状态下的数据进行处理的处理器，也称CPU（Central Processing Unit，中央处理器）；协处理器是用于对在待机状态下的数据进行处理的低功耗处理器。在一些实施例中，处理器901集成有GPU（Graphics Processing Unit，图像处理器），GPU用于负责显示屏所需要显示的内容的渲染和绘制。在一些实施例中，处理器901包括AI（Artificial Intelligence，人工智能）处理器，该AI处理器用于处理有关机器学习的计算操作。
存储器902能够包括一个或多个计算机可读存储介质,该计算机可读存储介质能够是非暂态的。存储器902还能够包括高速随机存取存储器,以及非易失性存储器,比如一个或多个磁盘存储设备、闪存存储设备。在一些实施例中,存储器902中的非暂态的计算机可读存储介质用于存储至少一个指令,该至少一个指令用于被处理器901所执行以实现下述操作:
当摄像头组件采集到任一人脸图像时,从缓存区中读取人脸图像信息,该人脸图像信息用于表示该摄像头组件历史采集过的所有人脸图像个数;
在该人脸图像中嵌入标识信息,得到携带该标识信息的人脸图像,该标识信息用于表示该人脸图像信息;
发送该携带该标识信息的人脸图像。
在一些实施例中,该至少一个指令用于被处理器901所执行以实现下述操作:
将该缓存区的目标列表中最大数值确定为该人脸图像信息,该目标列表中存储的各个数值分别对应于一个人脸图像个数。
在一些实施例中,该至少一个指令用于被处理器901所执行以实现下述操作:
向该缓存区的该目标列表中写入该标识信息与第一目标数值相加所得的数值。
在一些实施例中,该至少一个指令用于被处理器901所执行以实现下述操作:
将该缓存区的目标地址内存储的数值确定为该人脸图像信息。
在一些实施例中,该至少一个指令用于被处理器901所执行以实现下述操作:
将该目标地址内存储的数值置为该标识信息与第二目标数值相加所得的数值。
在一些实施例中,该标识信息为该人脸图像信息与第三目标数值相加所得的数值。
在一些实施例中,该至少一个指令用于被处理器901所执行以实现下述操作:
在该人脸图像中除了人脸区域之外的任一区域内嵌入该标识信息。
在一些实施例中,该至少一个指令用于被处理器901所执行以实现下述操作:
对该人脸图像进行傅里叶变换,得到该人脸图像的频谱图像;
在该频谱图像中除了人脸区域之外的任一区域内嵌入该标识信息;
对携带该标识信息的频谱图像进行逆傅里叶变换,得到携带该标识信息的人脸图像。
在一些实施例中,该至少一个指令用于被处理器901所执行以实现下述操作:
当检测到对数值转移选项的触发操作时,调用摄像头组件采集人脸图像;
当该摄像头组件采集到任一人脸图像时,从缓存区中读取人脸图像信息,该人脸图像信息用于表示该摄像头组件历史采集过的所有人脸图像个数;
在该人脸图像中嵌入标识信息,得到携带该标识信息的人脸图像,该标识信息用于表示该人脸图像信息;
基于该摄像头组件的摄像头标识以及携带该标识信息的人脸图像,进行数值转移。
在一些实施例中,该至少一个指令用于被处理器901所执行以实现下述操作:
生成数值转移请求,该数值转移请求至少包括该摄像头组件的摄像头标识、携带该标识信息的人脸图像、待转移数值、第一用户标识以及第二用户标识;
向服务器发送该数值转移请求，由该服务器基于该摄像头组件的摄像头标识以及携带该标识信息的人脸图像进行身份验证，当验证通过时从该第一用户标识对应存储的数值中转移该待转移数值至该第二用户标识对应存储的数值。
在一些实施例中,终端900还能够包括有:外围设备接口903和至少一个外围设备。处理器901、存储器902和外围设备接口903之间能够通过总线或信号线相连。各个外围设备能够通过总线、信号线或电路板与外围设备接口903相连。在一些实施例中,外围设备包括:射频电路904、触摸显示屏905、摄像头组件906、音频电路907、定位组件908和电源909中的至少一种。
外围设备接口903可被用于将I/O(Input/Output,输入/输出)相关的至少一个外围设备连接到处理器901和存储器902。在一些实施例中,处理器901、存储器902和外围设备接口903被集成在同一芯片或电路板上;在一些其他实施例中,处理器901、存储器902和外围设备接口903中的任意一个或两个能够在单独的芯片或电路板上实现,本实施例对此不加以限定。
射频电路904用于接收和发射RF(Radio Frequency,射频)信号,也称电磁信号。射频电路904通过电磁信号与通信网络以及其他通信设备进行通信。射频电路904将电信号转换为电磁信号进行发送,或者,将接收到的电磁信号转换为电信号。在一些实施例中,射频电路904包括:天线系统、RF收发器、一个或多个放大器、调谐器、振荡器、数字信号处理器、编解码芯片组、用户身份模块卡等等。射频电路904能够通过至少一种无线通信协议来与其它终端进行通信。该无线通信协议包括但不限于:城域网、各代移动通信网络(2G、3G、4G及5G)、无线局域网和/或WiFi(Wireless Fidelity,无线保真)网络。在一些实施例中,射频电路904还能够包括NFC(Near Field Communication,近距离无线通信)有关的电路,本公开对此不加以限定。
显示屏905用于显示UI(User Interface,用户界面)。该UI能够包括图形、文本、图标、视频及其它们的任意组合。当显示屏905是触摸显示屏时,显示屏905还具有采集在显示屏905的表面或表面上方的触摸信号的能力。该触摸信号能够作为控制信号输入至处理器901进行处理。此时,显示屏905还能够用于提供虚拟按钮和/或虚拟键盘,也称软按钮和/或软键盘。在一些实施例中,显示屏905为一个,设置终端900的前面板;在另一些实施例中,显示屏905为至少两个,分别设置在终端900的不同表面或呈折叠设计;在再一些实施例中,显示屏905是柔性显示屏,设置在终端900的弯曲表面上或折叠面上。甚至,显示屏905还能够设置成非矩形的不规则图形,也即异形屏。显示屏905能够采用LCD(Liquid Crystal Display,液晶显示屏)、OLED(Organic Light-Emitting Diode,有机发光二极管)等材质制备。
摄像头组件906用于采集图像或视频。在一些实施例中,摄像头组件906包括前置摄像头和后置摄像头。通常,前置摄像头设置在终端的前面板,后置摄像头设置在终端的背面。在一些实施例中,后置摄像头为至少两个,分别为主摄像头、景深摄像头、广角摄像头、长焦摄像头中的任意一种,以实现主摄像头和景深摄像头融合实现背景虚化功能、主摄像头和广角摄像头融合实现全景拍摄以及VR(Virtual Reality,虚拟现实)拍摄功能或者其它融合拍摄功能。在一些实施例中,摄像头组件906还包括闪光灯。闪光灯能够是单色温闪光灯,或者是双色温闪光灯。双色温闪光灯是指暖光闪光灯和冷光闪光灯的组合,能够用于不同色温下的光线补偿。
音频电路907能够包括麦克风和扬声器。麦克风用于采集用户及环境的声波,并将声波 转换为电信号输入至处理器901进行处理,或者输入至射频电路904以实现语音通信。出于立体声采集或降噪的目的,麦克风能够为多个,分别设置在终端900的不同部位。麦克风还能够是阵列麦克风或全向采集型麦克风。扬声器则用于将来自处理器901或射频电路904的电信号转换为声波。扬声器能够是传统的薄膜扬声器,也能够是压电陶瓷扬声器。当扬声器是压电陶瓷扬声器时,不仅能够将电信号转换为人类可听见的声波,也能够将电信号转换为人类听不见的声波以进行测距等用途。在一些实施例中,音频电路907还能够包括耳机插孔。
定位组件908用于定位终端900的当前地理位置,以实现导航或LBS(Location Based Service,基于位置的服务)。定位组件908能够是基于美国的GPS(Global Positioning System,全球定位系统)、中国的北斗系统、俄罗斯的格雷纳斯系统或欧盟的伽利略系统的定位组件。
电源909用于为终端900中的各个组件进行供电。电源909能够是交流电、直流电、一次性电池或可充电电池。当电源909包括可充电电池时,该可充电电池能够支持有线充电或无线充电。该可充电电池还能够用于支持快充技术。
在一些实施例中,终端900还包括有一个或多个传感器910。该一个或多个传感器910包括但不限于:加速度传感器911、陀螺仪传感器912、压力传感器913、指纹传感器914、光学传感器915以及接近传感器916。
加速度传感器911能够检测以终端900建立的坐标系的三个坐标轴上的加速度大小。比如,加速度传感器911能够用于检测重力加速度在三个坐标轴上的分量。处理器901能够根据加速度传感器911采集的重力加速度信号,控制触摸显示屏905以横向视图或纵向视图进行用户界面的显示。加速度传感器911还能够用于游戏或者用户的运动数据的采集。
陀螺仪传感器912能够检测终端900的机体方向及转动角度,陀螺仪传感器912能够与加速度传感器911协同采集用户对终端900的3D动作。处理器901根据陀螺仪传感器912采集的数据,能够实现如下功能:动作感应(比如根据用户的倾斜操作来改变UI)、拍摄时的图像稳定、游戏控制以及惯性导航。
压力传感器913能够设置在终端900的侧边框和/或触摸显示屏905的下层。当压力传感器913设置在终端900的侧边框时,能够检测用户对终端900的握持信号,由处理器901根据压力传感器913采集的握持信号进行左右手识别或快捷操作。当压力传感器913设置在触摸显示屏905的下层时,由处理器901根据用户对触摸显示屏905的压力操作,实现对UI界面上的可操作性控件进行控制。可操作性控件包括按钮控件、滚动条控件、图标控件、菜单控件中的至少一种。
指纹传感器914用于采集用户的指纹,由处理器901根据指纹传感器914采集到的指纹识别用户的身份,或者,由指纹传感器914根据采集到的指纹识别用户的身份。在识别出用户的身份为可信身份时,由处理器901授权该用户执行相关的敏感操作,该敏感操作包括解锁屏幕、查看加密信息、下载软件、支付及更改设置等。指纹传感器914能够被设置终端900的正面、背面或侧面。当终端900上设置有物理按键或厂商Logo时,指纹传感器914能够与物理按键或厂商Logo集成在一起。
光学传感器915用于采集环境光强度。在一个实施例中,处理器901能够根据光学传感器915采集的环境光强度,控制触摸显示屏905的显示亮度。在一些实施例中,当环境光强度较高时,调高触摸显示屏905的显示亮度;当环境光强度较低时,调低触摸显示屏905的显示亮度。在另一个实施例中,处理器901还能够根据光学传感器915采集的环境光强度, 动态调整摄像头组件906的拍摄参数。
接近传感器916,也称距离传感器,通常设置在终端900的前面板。接近传感器916用于采集用户与终端900的正面之间的距离。在一个实施例中,当接近传感器916检测到用户与终端900的正面之间的距离逐渐变小时,由处理器901控制触摸显示屏905从亮屏状态切换为息屏状态;当接近传感器916检测到用户与终端900的正面之间的距离逐渐变大时,由处理器901控制触摸显示屏905从息屏状态切换为亮屏状态。
本领域技术人员能够理解,图9中示出的结构并不构成对终端900的限定,能够包括比图示更多或更少的组件,或者组合某些组件,或者采用不同的组件布置。
图10是本公开实施例提供的一种服务器的结构示意图,服务器1000也即是一种电子设备,该服务器1000可因配置或性能不同而产生比较大的差异,能够包括一个或一个以上处理器(Central Processing Units,CPU)1001和一个或一个以上的存储器1002,其中,该存储器1002中存储有至少一条程序代码,该至少一条程序代码由该处理器1001加载并执行以实现如下操作:
接收数值转移请求,该数值转移请求至少包括摄像头标识、携带标识信息的人脸图像、待转移数值、第一用户标识以及第二用户标识,该标识信息用于表示该人脸图像为实时采集得到的图像;
当该标识信息大于与该摄像头标识对应存储的各个历史标识信息时,对该人脸图像的人脸区域进行人脸识别,得到识别结果;
当该识别结果为通过时,从该第一用户标识对应存储的数值中转移该待转移数值至该第二用户标识对应存储的数值。
也即实现上述各个实施例提供的数值转移方法。当然，该服务器1000还能够具有有线或无线网络接口、键盘以及输入输出接口等部件，以便进行输入输出，该服务器1000还能够包括其他用于实现设备功能的部件，在此不做赘述。
在示例性实施例中,还提供了一种计算机可读存储介质,例如包括至少一条程序代码的存储器,上述至少一条程序代码可由终端中的处理器执行以实现如下操作:
当摄像头组件采集到任一人脸图像时,从缓存区中读取人脸图像信息,该人脸图像信息用于表示该摄像头组件历史采集过的所有人脸图像个数;
在该人脸图像中嵌入标识信息,得到携带该标识信息的人脸图像,该标识信息用于表示该人脸图像信息;
发送该携带该标识信息的人脸图像。
在一些实施例中,上述至少一条程序代码可由终端中的处理器执行以实现如下操作:
将该缓存区的目标列表中最大数值确定为该人脸图像信息,该目标列表中存储的各个数值分别对应于一个人脸图像个数。
在一些实施例中,上述至少一条程序代码可由终端中的处理器执行以实现如下操作:
向该缓存区的该目标列表中写入该标识信息与第一目标数值相加所得的数值。
在一些实施例中,上述至少一条程序代码可由终端中的处理器执行以实现如下操作:
将该缓存区的目标地址内存储的数值确定为该人脸图像信息。
在一些实施例中,上述至少一条程序代码可由终端中的处理器执行以实现如下操作:
将该目标地址内存储的数值置为该标识信息与第二目标数值相加所得的数值。
在一些实施例中,该标识信息为该人脸图像信息与第三目标数值相加所得的数值。
在一些实施例中,上述至少一条程序代码可由终端中的处理器执行以实现如下操作:
在该人脸图像中除了人脸区域之外的任一区域内嵌入该标识信息。
在一些实施例中,上述至少一条程序代码可由终端中的处理器执行以实现如下操作:
对该人脸图像进行傅里叶变换,得到该人脸图像的频谱图像;
在该频谱图像中除了人脸区域之外的任一区域内嵌入该标识信息;
对携带该标识信息的频谱图像进行逆傅里叶变换,得到携带该标识信息的人脸图像。
例如,该计算机可读存储介质是ROM(Read-Only Memory,只读存储器)、RAM(Random-Access Memory,随机存取存储器)、CD-ROM(Compact Disc Read-Only Memory,只读光盘)、磁带、软盘和光数据存储设备等。
在一些实施例中,还提供一种包括至少一条程序代码的计算机程序或计算机程序产品,当其在计算机设备上运行时,使得计算机设备执行前述各个实施例所提供的人脸图像传输方法或数值转移方法中任一种可能实现方式,在此不作赘述。
本领域普通技术人员能够理解实现上述实施例的全部或部分步骤能够通过硬件来完成,也能够通过程序来指令相关的硬件完成,该程序能够存储于一种计算机可读存储介质中,上述提到的存储介质能够是只读存储器,磁盘或光盘等。
以上所述仅为本公开的可选实施例,并不用以限制本公开,凡在本公开的精神和原则之内,所作的任何修改、等同替换、改进等,均应包含在本公开的保护范围之内。

Claims (36)

  1. 一种人脸图像传输方法,应用于终端,所述方法包括:
    当摄像头组件采集到任一人脸图像时,从缓存区中读取人脸图像信息,所述人脸图像信息用于表示所述摄像头组件历史采集过的所有人脸图像个数;
    在所述人脸图像中嵌入标识信息,得到携带所述标识信息的人脸图像,所述标识信息用于表示所述人脸图像信息;
    发送所述携带所述标识信息的人脸图像。
  2. 根据权利要求1所述的方法,其中,所述从缓存区中读取人脸图像信息包括:
    将所述缓存区的目标列表中最大数值确定为所述人脸图像信息,所述目标列表中存储的各个数值分别对应于一个人脸图像个数。
  3. 根据权利要求2所述的方法,其中,所述在所述人脸图像中嵌入标识信息,得到携带所述标识信息的人脸图像之后,所述方法还包括:
    向所述缓存区的所述目标列表中写入所述标识信息与第一目标数值相加所得的数值。
  4. 根据权利要求1所述的方法,其中,所述从缓存区中读取人脸图像信息包括:
    将所述缓存区的目标地址内存储的数值确定为所述人脸图像信息。
  5. 根据权利要求4所述的方法,其中,所述在所述人脸图像中嵌入标识信息,得到携带所述标识信息的人脸图像之后,所述方法还包括:
    将所述目标地址内存储的数值置为所述标识信息与第二目标数值相加所得的数值。
  6. 根据权利要求1所述的方法,其中,所述标识信息为所述人脸图像信息与第三目标数值相加所得的数值。
  7. 根据权利要求1至6任一项所述的方法,其中,所述在所述人脸图像中嵌入标识信息包括:
    在所述人脸图像中除了人脸区域之外的任一区域内嵌入所述标识信息。
  8. 根据权利要求1所述的方法,其中,所述在所述人脸图像中嵌入标识信息,得到携带所述标识信息的人脸图像包括:
    对所述人脸图像进行傅里叶变换,得到所述人脸图像的频谱图像;
    在所述频谱图像中除了人脸区域之外的任一区域内嵌入所述标识信息;
    对携带所述标识信息的频谱图像进行逆傅里叶变换,得到携带所述标识信息的人脸图像。
  9. 一种数值转移方法,应用于终端,所述方法包括:
    当检测到对数值转移选项的触发操作时,调用摄像头组件采集人脸图像;
    当所述摄像头组件采集到任一人脸图像时,从缓存区中读取人脸图像信息,所述人脸图像信息用于表示所述摄像头组件历史采集过的所有人脸图像个数;
    在所述人脸图像中嵌入标识信息,得到携带所述标识信息的人脸图像,所述标识信息用于表示所述人脸图像信息;
    基于所述摄像头组件的摄像头标识以及携带所述标识信息的人脸图像,进行数值转移。
  10. 根据权利要求9所述的方法,其中,所述基于所述摄像头组件的摄像头标识以及携带所述标识信息的人脸图像,进行数值转移包括:
    生成数值转移请求,所述数值转移请求至少包括所述摄像头组件的摄像头标识、携带所述标识信息的人脸图像、待转移数值、第一用户标识以及第二用户标识;
    向服务器发送所述数值转移请求,由所述服务器基于所述摄像头组件的摄像头标识以及携带所述标识信息的人脸图像进行身份验证,当验证通过时从所述第一用户标识对应存储的数值中转移所述待转移数值至所述第二用户标识对应存储的数值。
  11. 一种数值转移方法,应用于服务器,所述方法包括:
    接收数值转移请求,所述数值转移请求至少包括摄像头标识、携带标识信息的人脸图像、待转移数值、第一用户标识以及第二用户标识,所述标识信息用于表示所述人脸图像为实时采集得到的图像;
    当所述标识信息大于与所述摄像头标识对应存储的各个历史标识信息时,对所述人脸图像的人脸区域进行人脸识别,得到识别结果;
    当所述识别结果为通过时,从所述第一用户标识对应存储的数值中转移所述待转移数值至所述第二用户标识对应存储的数值。
  12. 一种人脸图像传输装置,应用于终端,所述装置包括:
    读取模块,用于当摄像头组件采集到任一人脸图像时,从缓存区中读取人脸图像信息,所述人脸图像信息用于表示所述摄像头组件历史采集过的所有人脸图像个数;
    嵌入模块,用于在所述人脸图像中嵌入标识信息,得到携带所述标识信息的人脸图像,所述标识信息用于表示所述人脸图像信息;
    发送模块,用于发送所述携带所述标识信息的人脸图像。
  13. 根据权利要求12所述的装置,其中,所述读取模块用于:
    将所述缓存区的目标列表中最大数值确定为所述人脸图像信息,所述目标列表中存储的各个数值分别对应于一个人脸图像个数。
  14. 根据权利要求13所述的装置,其中,所述装置还用于:
    向所述缓存区的所述目标列表中写入所述标识信息与第一目标数值相加所得的数值。
  15. 根据权利要求12所述的装置,其中,所述读取模块用于:
    将所述缓存区的目标地址内存储的数值确定为所述人脸图像信息。
  16. 根据权利要求15所述的装置,其中,所述装置还用于:
    将所述目标地址内存储的数值置为所述标识信息与第二目标数值相加所得的数值。
  17. 根据权利要求12所述的装置,其中,所述标识信息为所述人脸图像信息与第三目标数值相加所得的数值。
  18. 根据权利要求12至17任一项所述的装置,其中,所述嵌入模块用于:
    在所述人脸图像中除了人脸区域之外的任一区域内嵌入所述标识信息。
  19. 根据权利要求12所述的装置,其中,所述嵌入模块用于:
    对所述人脸图像进行傅里叶变换,得到所述人脸图像的频谱图像;
    在所述频谱图像中除了人脸区域之外的任一区域内嵌入所述标识信息;
    对携带所述标识信息的频谱图像进行逆傅里叶变换,得到携带所述标识信息的人脸图像。
  20. 一种数值转移装置,应用于终端,所述装置包括:
    采集模块,用于当检测到对数值转移选项的触发操作时,调用摄像头组件采集人脸图像;
    读取模块,用于当所述摄像头组件采集到任一人脸图像时,从缓存区中读取人脸图像信息,所述人脸图像信息用于表示所述摄像头组件历史采集过的所有人脸图像个数;
    嵌入模块,用于在所述人脸图像中嵌入标识信息,得到携带所述标识信息的人脸图像,所述标识信息用于表示所述人脸图像信息;
    数值转移模块,用于基于所述摄像头组件的摄像头标识以及携带所述标识信息的人脸图像,进行数值转移。
  21. 根据权利要求20所述的装置,其中,所述数值转移模块用于:
    生成数值转移请求,所述数值转移请求至少包括所述摄像头组件的摄像头标识、携带所述标识信息的人脸图像、待转移数值、第一用户标识以及第二用户标识;
    向服务器发送所述数值转移请求,由所述服务器基于所述摄像头组件的摄像头标识以及携带所述标识信息的人脸图像进行身份验证,当验证通过时从所述第一用户标识对应存储的数值中转移所述待转移数值至所述第二用户标识对应存储的数值。
  22. 一种数值转移装置,应用于服务器,所述装置包括:
    接收模块,用于接收数值转移请求,所述数值转移请求至少包括摄像头标识、携带标识信息的人脸图像、待转移数值、第一用户标识以及第二用户标识,所述标识信息用于表示所述人脸图像为实时采集得到的图像;
    识别模块,用于当所述标识信息大于与所述摄像头标识对应存储的各个历史标识信息时,对所述人脸图像的人脸区域进行人脸识别,得到识别结果;
    数值转移模块,用于当所述识别结果为通过时,从所述第一用户标识对应存储的数值中转移所述待转移数值至所述第二用户标识对应存储的数值。
  23. 一种电子设备，所述电子设备包括一个或多个处理器和一个或多个存储器，所述一个或多个存储器中存储有至少一条程序代码，所述至少一条程序代码由所述一个或多个处理器加载并执行以实现如下操作：
    当摄像头组件采集到任一人脸图像时,从缓存区中读取人脸图像信息,所述人脸图像信息用于表示所述摄像头组件历史采集过的所有人脸图像个数;
    在所述人脸图像中嵌入标识信息,得到携带所述标识信息的人脸图像,所述标识信息用于表示所述人脸图像信息;
    发送所述携带所述标识信息的人脸图像。
  24. 根据权利要求23所述的电子设备,其中,所述至少一条程序代码由所述一个或多个处理器加载并执行以实现如下操作:
    将所述缓存区的目标列表中最大数值确定为所述人脸图像信息,所述目标列表中存储的各个数值分别对应于一个人脸图像个数。
  25. 根据权利要求24所述的电子设备,其中,所述至少一条程序代码由所述一个或多个处理器加载并执行以实现如下操作:
    向所述缓存区的所述目标列表中写入所述标识信息与第一目标数值相加所得的数值。
  26. 根据权利要求23所述的电子设备,其中,所述至少一条程序代码由所述一个或多个处理器加载并执行以实现如下操作:
    将所述缓存区的目标地址内存储的数值确定为所述人脸图像信息。
  27. 根据权利要求26所述的电子设备,其中,所述至少一条程序代码由所述一个或多个处理器加载并执行以实现如下操作:
    将所述目标地址内存储的数值置为所述标识信息与第二目标数值相加所得的数值。
  28. 根据权利要求23所述的电子设备,其中,所述标识信息为所述人脸图像信息与第三目标数值相加所得的数值。
  29. 根据权利要求23至28任一项所述的电子设备,其中,所述至少一条程序代码由所述一个或多个处理器加载并执行以实现如下操作:
    在所述人脸图像中除了人脸区域之外的任一区域内嵌入所述标识信息。
  30. 根据权利要求23所述的电子设备,其中,所述至少一条程序代码由所述一个或多个处理器加载并执行以实现如下操作:
    对所述人脸图像进行傅里叶变换,得到所述人脸图像的频谱图像;
    在所述频谱图像中除了人脸区域之外的任一区域内嵌入所述标识信息;
    对携带所述标识信息的频谱图像进行逆傅里叶变换,得到携带所述标识信息的人脸图像。
  33. 一种电子设备，所述电子设备包括一个或多个处理器和一个或多个存储器，所述一个或多个存储器中存储有至少一条程序代码，所述至少一条程序代码由所述一个或多个处理器加载并执行以实现如下操作：
    当检测到对数值转移选项的触发操作时,调用摄像头组件采集人脸图像;
    当所述摄像头组件采集到任一人脸图像时,从缓存区中读取人脸图像信息,所述人脸图像信息用于表示所述摄像头组件历史采集过的所有人脸图像个数;
    在所述人脸图像中嵌入标识信息,得到携带所述标识信息的人脸图像,所述标识信息用于表示所述人脸图像信息;
    基于所述摄像头组件的摄像头标识以及携带所述标识信息的人脸图像,进行数值转移。
  32. 根据权利要求31所述的电子设备,其中,所述至少一条程序代码由所述一个或多个处理器加载并执行以实现如下操作:
    生成数值转移请求,所述数值转移请求至少包括所述摄像头组件的摄像头标识、携带所述标识信息的人脸图像、待转移数值、第一用户标识以及第二用户标识;
    向服务器发送所述数值转移请求,由所述服务器基于所述摄像头组件的摄像头标识以及携带所述标识信息的人脸图像进行身份验证,当验证通过时从所述第一用户标识对应存储的数值中转移所述待转移数值至所述第二用户标识对应存储的数值。
  33. 一种电子设备,所述电子设备包括一个或多个处理器和一个或多个存储器,所述一个或多个存储器中存储有至少一条程序代码,所述至少一条程序代码由所述一个或多个处理器加载并执行以实现如下操作:
    接收数值转移请求,所述数值转移请求至少包括摄像头标识、携带标识信息的人脸图像、待转移数值、第一用户标识以及第二用户标识,所述标识信息用于表示所述人脸图像为实时采集得到的图像;
    当所述标识信息大于与所述摄像头标识对应存储的各个历史标识信息时,对所述人脸图像的人脸区域进行人脸识别,得到识别结果;
    当所述识别结果为通过时,从所述第一用户标识对应存储的数值中转移所述待转移数值至所述第二用户标识对应存储的数值。
  34. 一种存储介质,所述存储介质中存储有至少一条程序代码,所述至少一条程序代码由处理器加载并执行以实现如下操作:
    当摄像头组件采集到任一人脸图像时,从缓存区中读取人脸图像信息,所述人脸图像信息用于表示所述摄像头组件历史采集过的所有人脸图像个数;
    在所述人脸图像中嵌入标识信息,得到携带所述标识信息的人脸图像,所述标识信息用于表示所述人脸图像信息;
    发送所述携带所述标识信息的人脸图像。
  35. 一种存储介质,所述存储介质中存储有至少一条程序代码,所述至少一条程序代码由处理器加载并执行以实现如下操作:
    当检测到对数值转移选项的触发操作时,调用摄像头组件采集人脸图像;
    当所述摄像头组件采集到任一人脸图像时,从缓存区中读取人脸图像信息,所述人脸图像信息用于表示所述摄像头组件历史采集过的所有人脸图像个数;
    在所述人脸图像中嵌入标识信息,得到携带所述标识信息的人脸图像,所述标识信息用于表示所述人脸图像信息;
    基于所述摄像头组件的摄像头标识以及携带所述标识信息的人脸图像,进行数值转移。
  36. 一种存储介质,所述存储介质中存储有至少一条程序代码,所述至少一条程序代码由处理器加载并执行以实现如下操作:
    接收数值转移请求,所述数值转移请求至少包括摄像头标识、携带标识信息的人脸图像、待转移数值、第一用户标识以及第二用户标识,所述标识信息用于表示所述人脸图像为实时采集得到的图像;
    当所述标识信息大于与所述摄像头标识对应存储的各个历史标识信息时,对所述人脸图像的人脸区域进行人脸识别,得到识别结果;
    当所述识别结果为通过时,从所述第一用户标识对应存储的数值中转移所述待转移数值至所述第二用户标识对应存储的数值。
PCT/CN2020/120316 2019-12-16 2020-10-12 人脸图像传输方法、数值转移方法、装置及电子设备 WO2021120794A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP20903189.7A EP3989113A4 (en) 2019-12-16 2020-10-12 FACE IMAGE TRANSMISSION METHOD, METHOD AND DEVICE FOR TRANSMISSION OF NUMERICAL VALUES AND ELECTRONIC DEVICE
JP2022515017A JP7389236B2 (ja) 2019-12-16 2020-10-12 顔画像送信方法、価値転送方法、装置、電子デバイス
US17/528,079 US20220075998A1 (en) 2019-12-16 2021-11-16 Secure face image transmission method, apparatuses, and electronic device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911300268.6 2019-12-16
CN201911300268.6A CN111062323B (zh) 2019-12-16 2019-12-16 人脸图像传输方法、数值转移方法、装置及电子设备

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/528,079 Continuation US20220075998A1 (en) 2019-12-16 2021-11-16 Secure face image transmission method, apparatuses, and electronic device

Publications (1)

Publication Number Publication Date
WO2021120794A1 true WO2021120794A1 (zh) 2021-06-24

Family

ID=70301845

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/120316 WO2021120794A1 (zh) 2019-12-16 2020-10-12 Face image transmission method, value transfer method, apparatus, and electronic device

Country Status (5)

Country Link
US (1) US20220075998A1 (zh)
EP (1) EP3989113A4 (zh)
JP (1) JP7389236B2 (zh)
CN (1) CN111062323B (zh)
WO (1) WO2021120794A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111046365B (zh) * 2019-12-16 2023-05-05 Tencent Technology (Shenzhen) Co., Ltd. Face image transmission method, value transfer method, apparatus, and electronic device
CN111062323B (zh) * 2019-12-16 2023-06-02 Tencent Technology (Shenzhen) Co., Ltd. Face image transmission method, value transfer method, apparatus, and electronic device
CN113450121B (zh) * 2021-06-30 2022-08-05 Hunan Xiaozhifu Network Technology Co., Ltd. Face recognition method for campus payment
CN115205952B (zh) * 2022-09-16 2022-11-25 Shenzhen Penguin Network Technology Co., Ltd. Deep-learning-based online learning image acquisition method and system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106803289A (zh) * 2016-12-22 2017-06-06 Wuyi University Intelligent mobile anti-counterfeiting sign-in method and system
CN107679861A (zh) * 2017-08-30 2018-02-09 Alibaba Group Holding Limited Resource transfer method, fund payment method, apparatus, and electronic device
CN108306886A (zh) * 2018-02-01 2018-07-20 Shenzhen Tencent Computer Systems Co., Ltd. Identity verification method, apparatus, and storage medium
US20180240212A1 (en) * 2015-03-06 2018-08-23 Digimarc Corporation Digital watermarking applications
CN111062323A (zh) * 2019-12-16 2020-04-24 Tencent Technology (Shenzhen) Co., Ltd. Face image transmission method, value transfer method, apparatus, and electronic device

Family Cites Families (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6212285B1 (en) * 1998-04-15 2001-04-03 Massachusetts Institute Of Technology Method and apparatus for multi-bit zoned data hiding in printed images
US7756892B2 (en) * 2000-05-02 2010-07-13 Digimarc Corporation Using embedded data with file sharing
US6608911B2 (en) * 2000-12-21 2003-08-19 Digimarc Corporation Digitally watermaking holograms for use with smart cards
JP2001052143A (ja) 1999-08-09 2001-02-23 Mega Chips Corp Recording medium for authentication and authentication system
JP2001094755A (ja) 1999-09-20 2001-04-06 Toshiba Corp Image processing method
US7663670B1 (en) * 2001-02-09 2010-02-16 Digital Imaging Systems Gmbh Methods and systems for embedding camera information in images
JP4541632B2 (ja) * 2002-05-13 2010-09-08 Panasonic Corporation Digital watermark embedding apparatus, method therefor, and recording medium
US6782116B1 (en) * 2002-11-04 2004-08-24 Mediasec Technologies, Gmbh Apparatus and methods for improving detection of watermarks in content that has undergone a lossy transformation
US8509472B2 (en) * 2004-06-24 2013-08-13 Digimarc Corporation Digital watermarking methods, programs and apparatus
JP4799854B2 (ja) * 2004-12-09 2011-10-26 Sony Corporation Information processing apparatus and method, and program
US7370190B2 (en) * 2005-03-03 2008-05-06 Digimarc Corporation Data processing systems and methods with enhanced bios functionality
JP2007116506A (ja) 2005-10-21 2007-05-10 Fujifilm Corp Face image medium for electronic application, and apparatus for creating, invalidating, and reissuing the medium
US7861056B2 (en) * 2007-01-03 2010-12-28 Tekelec Methods, systems, and computer program products for providing memory management with constant defragmentation time
US8233677B2 (en) * 2007-07-04 2012-07-31 Sanyo Electric Co., Ltd. Image sensing apparatus and image file data structure
US8289562B2 (en) * 2007-09-21 2012-10-16 Fujifilm Corporation Image processing apparatus, method and recording medium
JP5071471B2 (ja) * 2009-12-18 2012-11-14 Konica Minolta Business Technologies, Inc. Image forming apparatus, image forming method, and control program for controlling the image forming apparatus
CN101777212A (zh) * 2010-02-05 2010-07-14 GRG Banking Equipment Co., Ltd. Security card, card authentication system, financial equipment having the system, and authentication method
US9626273B2 (en) * 2011-11-09 2017-04-18 Nec Corporation Analysis system including analysis engines executing predetermined analysis and analysis executing part controlling operation of analysis engines and causing analysis engines to execute analysis
CN104429075B (zh) * 2012-07-09 2017-10-31 Sun Patent Trust Image encoding method, image decoding method, image encoding apparatus, and image decoding apparatus
KR102106539B1 (ko) * 2013-07-01 2020-05-28 Samsung Electronics Co., Ltd. Method and device for authenticating video content during a video call
US20190238954A1 (en) * 2016-06-22 2019-08-01 Southern Star Corporation Pty Ltd Systems and methods for delivery of audio and video content
CN106485201B (zh) * 2016-09-09 2019-06-28 Capital Normal University Color face recognition method in the hypercomplex encryption domain
CN106920111B (zh) * 2017-02-23 2018-06-26 Peng Yuyan Method and system for processing product production process information
US10262191B2 (en) * 2017-03-08 2019-04-16 Morphotrust Usa, Llc System and method for manufacturing and inspecting identification documents
CN107066983B (zh) * 2017-04-20 2022-08-09 Tencent Technology (Shanghai) Co., Ltd. Identity verification method and apparatus
CN110869963A (zh) 2017-08-02 2020-03-06 Maxell, Ltd. Biometric authentication settlement system, settlement system, and cash register system
JP6946175B2 (ja) * 2017-12-27 2021-10-06 Fujifilm Corporation Imaging control system, imaging control method, program, and recording medium
US11176373B1 (en) * 2018-01-12 2021-11-16 Amazon Technologies, Inc. System and method for visitor detection algorithm
US10991064B1 (en) * 2018-03-07 2021-04-27 Adventure Soup Inc. System and method of applying watermark in a digital image
CN114780934A (zh) * 2018-08-13 2022-07-22 Advanced New Technologies Co., Ltd. Identity verification method and apparatus
CN110086954B (zh) * 2019-03-26 2020-07-28 Tongji University Digital-watermark-based flight route encryption method and execution method
CN110287841B (zh) * 2019-06-17 2021-09-17 Miaozhen Information Technology Co., Ltd. Image transmission method and apparatus, image transmission system, and storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180240212A1 (en) * 2015-03-06 2018-08-23 Digimarc Corporation Digital watermarking applications
CN106803289A (zh) * 2016-12-22 2017-06-06 Wuyi University Intelligent mobile anti-counterfeiting sign-in method and system
CN107679861A (zh) * 2017-08-30 2018-02-09 Alibaba Group Holding Limited Resource transfer method, fund payment method, apparatus, and electronic device
CN108306886A (zh) * 2018-02-01 2018-07-20 Shenzhen Tencent Computer Systems Co., Ltd. Identity verification method, apparatus, and storage medium
CN111062323A (zh) * 2019-12-16 2020-04-24 Tencent Technology (Shenzhen) Co., Ltd. Face image transmission method, value transfer method, apparatus, and electronic device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3989113A4 *

Also Published As

Publication number Publication date
EP3989113A1 (en) 2022-04-27
JP7389236B2 (ja) 2023-11-29
EP3989113A4 (en) 2023-01-18
US20220075998A1 (en) 2022-03-10
CN111062323A (zh) 2020-04-24
CN111062323B (zh) 2023-06-02
JP2022547923A (ja) 2022-11-16

Similar Documents

Publication Publication Date Title
JP7338044B2 Face image transmission method, value transfer method, apparatus, and electronic device
WO2021120794A1 Face image transmission method, value transfer method, apparatus, and electronic device
CN110290146B Method and apparatus for generating a sharing password, server, and storage medium
CN112235400B Communication method, communication system, apparatus, server, and storage medium
WO2019062606A1 Bullet-screen comment information display method, providing method, and device
CN111866140B Fusion management device, management system, service invocation method, and medium
CN110365501B Method and apparatus for group joining processing based on a graphic code
CN110769050B Data processing method, data processing system, computer device, and storage medium
CN110826103A Blockchain-based document permission processing method, apparatus, device, and storage medium
CN110752929B Application processing method and related product
CN111404991A Method and apparatus for acquiring a cloud service, electronic device, and medium
CN110677262B Blockchain-based information notarization method, apparatus, and system
CN111193702B Method and apparatus for encrypted data transmission
CN110597840B Blockchain-based companion relationship establishment method, apparatus, device, and storage medium
CN111694892B Resource transfer method, apparatus, terminal, server, and storage medium
CN111970298A Application access method, apparatus, storage medium, and computer device
CN110738491A Value transfer method, system, apparatus, terminal, and storage medium
CN112765571B Permission management method, system, apparatus, server, and storage medium
CN115495169A Data acquisition and page generation method, apparatus, device, and readable storage medium
CN112528311B Data management method, apparatus, and terminal
CN112699364A Verification information processing method, apparatus, device, and storage medium
CN114124405A Service processing method, system, computer device, and computer-readable storage medium
CN114793288B Permission information processing method, apparatus, server, and medium
CN116743351A Key management method, apparatus, device, and storage medium
JP2022551241A Method, apparatus, system, device, and storage medium for playing media data

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20903189

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 20903189.7

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2020903189

Country of ref document: EP

Effective date: 20220124

ENP Entry into the national phase

Ref document number: 2022515017

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE