CN111460419A - Internet of things artificial intelligence face verification method and Internet of things cloud server - Google Patents


Info

Publication number
CN111460419A
CN111460419A (application CN202010239943.5A)
Authority
CN
China
Prior art keywords
feature
spectral
living body
information
identification information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010239943.5A
Other languages
Chinese (zh)
Other versions
CN111460419B (en)
Inventor
周亚琴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Microgrid Union Technology Chengdu Co ltd
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN202011113906.6A (published as CN112269976A)
Priority to CN202010239943.5A (granted as CN111460419B)
Priority to CN202011113894.7A (published as CN112269975A)
Publication of CN111460419A
Application granted
Publication of CN111460419B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/30: Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F 21/31: User authentication
    • G06F 21/32: User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/56: Extraction of image or video features relating to colour
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168: Feature extraction; Face representation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/40: Spoof detection, e.g. liveness detection
    • G06V 40/45: Detection of the body part being alive
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/01: Protocols
    • H04L 67/12: Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

An embodiment of the invention provides an Internet of things artificial intelligence face verification method and an Internet of things cloud server. Each suspected living body area at each continuous time node is determined, together with the associated suspected living body areas related to the current suspected living body area. Spectral image feature recognition is then performed on the basis of the association between the suspected living body areas and their associated areas, and face verification is carried out through an artificial intelligence model. In this way, changes in the spectral conditions of face part areas within the preset time period can be distinguished more accurately, which improves the accuracy of living body detection.

Description

Internet of things artificial intelligence face verification method and Internet of things cloud server
Technical Field
The invention relates to the technical field of Internet of things and artificial intelligence, in particular to an Internet of things artificial intelligence face verification method and an Internet of things cloud server.
Background
During face verification, it is necessary to verify that the target user is a real living body, so as to prevent fraud and protect user accounts from property loss. With the rapid development of 5G technology, face living body detection is widely used in the control and verification processes of the Internet of things. In traditional face verification schemes, however, it is often difficult during living body detection to accurately distinguish the changes a face part area undergoes in its spectral conditions over a period of time, so the accuracy of living body detection is low.
Disclosure of Invention
To overcome at least the above disadvantages of the prior art, the present invention aims to provide an Internet of things artificial intelligence face verification method and an Internet of things cloud server. Each suspected living body area at each continuous time node is determined, together with the associated suspected living body areas related to the current suspected living body area, so that spectral image feature recognition is performed based on the association between these areas and face verification is then carried out through an artificial intelligence model. This improves the accuracy with which changes in the spectral conditions of face part areas within a preset time period can be distinguished, and thereby the accuracy of living body detection.
In a first aspect, the invention provides an internet of things artificial intelligence face verification method, which is applied to an internet of things cloud server, wherein the internet of things cloud server is in communication connection with a plurality of internet of things face verification terminals, and the method comprises the following steps:
acquiring a face image data stream of each continuous time node of a target acquisition region in a preset time period, wherein the face image data stream is acquired by the Internet of things face verification terminal when a face verification instruction is detected;
determining each suspected living body area corresponding to the target acquisition area according to the face image data stream of each continuous time node, and respectively determining associated suspected living body areas which are associated with the current suspected living body area from the face image data streams of the remaining time nodes for each suspected living body area, wherein the areas outside the suspected living body areas are non-living body areas;
performing spectral image feature identification on the current suspected living body area, and performing spectral image feature identification on the associated suspected living body area to respectively obtain first spectral image feature identification information of the current suspected living body area and second spectral image feature identification information of the associated suspected living body area, wherein the first spectral image feature identification information and the second spectral image feature identification information respectively comprise spectral position coordinate information of respective corresponding spectral conditions, and the spectral conditions are respectively a plurality of preset spectral forms associated with respective corresponding light reflection features;
generating living body feature identification information of each current suspected living body area and the corresponding associated suspected living body area according to the first spectral image feature identification information and the second spectral image feature identification information;
respectively performing living body feature unit identification on each current suspected living body area and the corresponding associated suspected living body area according to the living body feature identification information, splicing the identified living body feature units in time-sequence order to obtain a plurality of spliced spectral feature vector sequences, and identifying each spliced spectral feature vector sequence based on an artificial intelligence model to obtain a face verification result of the target acquisition area.
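Read together, the five steps above form one pipeline: locate suspected living body areas, link each to its associated areas, extract spectral features from both, derive living body feature identification information, then splice feature units in time order and classify. The Python sketch below is one possible orchestration of that pipeline; every function name, the callable-based decomposition and the "t" timestamp key are illustrative assumptions rather than the patent's implementation.

```python
from typing import Callable, List, Sequence

def verify_face(
    frames: Sequence,           # face image data stream over the preset time period
    find_regions: Callable,     # suspected living body areas per time node
    find_associated: Callable,  # associated areas from the remaining time nodes
    extract_spectral: Callable, # spectral image feature identification
    build_id_info: Callable,    # living body feature identification information
    identify_units: Callable,   # living body feature unit identification
    classify: Callable,         # the trained artificial intelligence model
) -> str:
    sequences: List[list] = []
    for region in find_regions(frames):
        associated = find_associated(region, frames)
        first_info = extract_spectral(region)
        second_info = [extract_spectral(a) for a in associated]
        id_info = build_id_info(first_info, second_info)
        units = identify_units(region, associated, id_info)
        # splice the identified living body feature units in time-sequence order
        sequences.append(sorted(units, key=lambda u: u["t"]))
    return classify(sequences)  # e.g. "pass" / "fail" for the acquisition area
```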
In a possible implementation manner of the first aspect, the step of determining each suspected living body area corresponding to the target acquisition area according to the facial image data stream of each continuous time node includes:
determining light reflection dynamic change information containing light reflection characteristic information of the target acquisition area according to the face image data stream of each continuous time node, and determining, in the light reflection dynamic change information, first dynamic change information having a first light reflection characteristic and second dynamic change information having a second light reflection characteristic, wherein the first light reflection characteristic represents a light reflection intensity greater than a first preset intensity, and the second light reflection characteristic represents a light reflection intensity less than a second preset intensity;
determining light reflection characteristics of key points of the face position in the light reflection characteristics of the light reflection dynamic change information corresponding to the face position of the target acquisition area;
acquiring the interval size of a first dynamic change pixel value interval on the first dynamic change information and the interval size of a second dynamic change pixel value interval on the second dynamic change information;
if the interval size of the first dynamically-changed pixel value interval and the interval size of the second dynamically-changed pixel value interval are both larger than or equal to a set length, comparing the interval size of the first dynamically-changed pixel value interval with the interval size of the second dynamically-changed pixel value interval, and if the interval size of the first dynamically-changed pixel value interval is larger than the interval size of the second dynamically-changed pixel value interval, taking the first dynamically-changed pixel value interval as a suspected living pixel value interval;
if the interval size of the second dynamically-changed pixel value interval is larger than the interval size of the first dynamically-changed pixel value interval, taking the second dynamically-changed pixel value interval as a suspected living pixel value interval;
if the interval size of the first dynamically changing pixel value interval is equal to the interval size of the second dynamically changing pixel value interval, taking the first dynamically changing pixel value interval or the second dynamically changing pixel value interval as a suspected living pixel value interval;
determining an area which is matched with each suspected living body pixel value interval and is matched with the light reflection characteristics of the key points of the human face position as a suspected living body area to be determined, segmenting the light reflection dynamic change information into a plurality of segmentation dynamic change information according to the determined suspected living body area to be determined, and determining the suspected living body area meeting the conditions as the suspected living body area corresponding to the target acquisition area according to the relation between the change range and the preset range of each segmentation dynamic change information.
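The interval comparison rules in the steps above are fully specified, so they can be expressed directly. The sketch below is a minimal rendering, assuming intervals are (low, high) pixel-value pairs; the function name and tuple representation are assumptions.

```python
def pick_suspected_interval(first, second, set_length):
    """Apply the comparison rules above to choose the suspected living
    pixel value interval; `first` and `second` are (low, high) pairs."""
    size1 = first[1] - first[0]
    size2 = second[1] - second[0]
    if size1 < set_length or size2 < set_length:
        return None              # the rules above only cover sizes >= set_length
    if size1 > size2:
        return first
    if size2 > size1:
        return second
    return first                 # equal sizes: either interval may be used

print(pick_suspected_interval((10, 80), (30, 60), set_length=20))  # -> (10, 80)
```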
In a possible implementation manner of the first aspect, the step of determining, for each suspected living area, associated suspected living areas that are associated with the current suspected living area from the face image data streams of the remaining time nodes includes:
for each suspected living area, acquiring at least one local feature group of the suspected living area, analyzing each local feature group in the at least one local feature group, and acquiring key feature points contained in each local feature group, wherein the local feature groups are used for representing each local feature point of the suspected living area and face part information corresponding to each local feature point;
acquiring a feature point change value, a feature point depth value and a feature point color value of each key feature point in a corresponding time period, wherein the feature point change value describes how each key feature point changes over that period, the feature point depth value describes the depth of each key feature point, and the feature point color value describes the color of each key feature point;
mapping and associating the feature point change value, the feature point depth value and the feature point color value of each key feature point in a corresponding time period, and then merging the feature point change value, the feature point depth value and the feature point color value to obtain a feature value mapping sequence corresponding to each key feature point, wherein the feature value mapping sequence is used for representing the corresponding relation among the feature point change value, the feature point depth value and the feature point color value of each key feature point in the corresponding time period;
and respectively determining associated suspected living areas which are associated with the current suspected living area from the face image data streams of the remaining time nodes according to the feature value mapping sequence corresponding to each key feature point.
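A feature value mapping sequence is essentially an alignment of the three per-keypoint series. The sketch below builds such a sequence with zip and uses an elementwise tolerance test as a stand-in for the patent's unspecified matching rule; all names and the tolerance are assumptions.

```python
def mapping_sequence(change, depth, color):
    """Merge the per-keypoint change, depth and color values observed over
    the time period into one aligned sequence of triples."""
    return list(zip(change, depth, color))

def associated_regions(current_seq, candidates, tol=0.1):
    """Return candidate regions whose mapping sequences match the current
    suspected living area's sequence within `tol` (assumed matching rule)."""
    def close(a, b):
        return all(abs(x - y) <= tol
                   for p, q in zip(a, b) for x, y in zip(p, q))
    return [name for name, seq in candidates if close(current_seq, seq)]

cur = mapping_sequence([0.10, 0.20], [5.00, 5.10], [0.60, 0.70])
cands = [
    ("region_a", mapping_sequence([0.12, 0.19], [5.05, 5.08], [0.61, 0.68])),
    ("region_b", mapping_sequence([0.90, 0.80], [2.00, 2.20], [0.10, 0.20])),
]
print(associated_regions(cur, cands))  # -> ['region_a']
```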
In a possible implementation manner of the first aspect, the step of generating living body feature identification information of each current suspected living body area and a corresponding associated suspected living body area according to the first spectral image feature identification information and the second spectral image feature identification information includes:
acquiring illumination intensity of common spectral position coordinate information and each spectral position coordinate set between spectral position coordinate information of spectral conditions corresponding to first spectral image feature identification information and second spectral image feature identification information respectively;
under the condition that it is determined, according to the illumination intensity, that the common spectral position coordinate information contains a target spectral coordinate region, determining, according to the spectral position coordinate sets of the common spectral position coordinate information within the target spectral coordinate region, the coordinate region differences between each spectral position coordinate set of the common spectral position coordinate information within the set target spectral coordinate region and each spectral position coordinate set within the target spectral coordinate region, and adjusting the spectral position coordinate sets that have the same coordinate region difference into the corresponding target spectral coordinate region, wherein the target spectral coordinate region represents a spectral coordinate region whose illumination intensity is within a preset abnormal illumination intensity range;
under the condition that a plurality of spectrum position coordinate sets are contained in a currently set target spectrum coordinate area of common spectrum position coordinate information, determining the difference of coordinate area differences among the spectrum position coordinate sets of the common spectrum position coordinate information in the currently set target spectrum coordinate area according to the spectrum position coordinate set of the common spectrum position coordinate information in the target spectrum coordinate area, and screening the spectrum position coordinate sets in the currently set target spectrum coordinate area according to the difference of the coordinate area differences among the spectrum position coordinate sets;
setting a label of a target spectrum coordinate area for each spectrum position coordinate set obtained by screening according to the spectrum position coordinate set of the common spectrum position coordinate information in the target spectrum coordinate area, and adjusting each spectrum position coordinate set to be in the target spectrum coordinate area;
determining a first spectral feature vector sequence and a second spectral feature vector sequence corresponding to the first spectral image feature identification information and the second spectral image feature identification information respectively according to a first spectral position coordinate set in the set target spectral coordinate region, a second spectral position coordinate set in the target spectral coordinate region, a first environmental influence factor parameter of the first spectral image feature identification information and a second environmental influence factor parameter of the second spectral image feature identification information; the first spectral feature vector sequence includes contrast feature points of the first spectral image feature identification information for the second spectral image feature identification information within the coordinate region difference of the common spectral position coordinate information, the second spectral feature vector sequence includes associated feature points of the second spectral image feature identification information for the contrast feature points corresponding to the first spectral image feature identification information within the coordinate region difference of the common spectral position coordinate information, and the first environmental influence factor parameter and the second environmental influence factor parameter are respectively used for representing environmental influence factor parameters corresponding to spectral condition vectors associated with the first spectral image feature identification information and the second spectral image feature identification information respectively;
determining a first candidate spectral feature vector of the first spectral image feature identification information and a second candidate spectral feature vector of the second spectral image feature identification information from the first spectral feature vector sequence and the second spectral feature vector sequence, respectively;
when the first candidate spectral feature vector and the second candidate spectral feature vector are determined, matching the first candidate spectral feature vector against the second candidate spectral feature vector to obtain matching information, and judging from the matching information whether the first candidate spectral feature vector and the second candidate spectral feature vector are candidate spectral feature vectors of a multi-combination spectral feature vector; if so, converting the first candidate spectral feature vector and the second candidate spectral feature vector, according to each combined spectral feature vector, into a plurality of first combined spectral feature vector sets and second combined spectral feature vector sets respectively, searching, from the first combined spectral feature vector sets and the second combined spectral feature vector sets respectively, the feature part regions whose combined spectral feature vectors are the same or similar, and combining the matching information and the spectral feature vector sets corresponding to those feature part regions into a corresponding mapping set;
and generating living characteristic identification information of each current suspected living area and the corresponding associated suspected living area according to the mapping set.
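Two of the ingredients above, the common spectral position coordinate information and the illumination-based target spectral coordinate region, reduce to simple set operations once coordinates are modelled as hashable tuples. The sketch below shows only that reduction; the data model, the intensity lookup and the abnormal range bounds are assumptions, and the candidate-vector matching and mapping-set assembly are left out.

```python
def common_coordinates(first_coords, second_coords):
    """Spectral position coordinates shared by the first and second
    spectral image feature identification information."""
    return set(first_coords) & set(second_coords)

def target_region(coords, intensity, lo, hi):
    """Coordinates whose illumination intensity falls within the preset
    abnormal illumination intensity range [lo, hi]."""
    return {c for c in coords if lo <= intensity[c] <= hi}

first = [(0, 0), (1, 1), (2, 2)]
second = [(1, 1), (2, 2), (3, 3)]
intensity = {(1, 1): 0.95, (2, 2): 0.40}
common = common_coordinates(first, second)
print(target_region(common, intensity, lo=0.90, hi=1.00))  # -> {(1, 1)}
```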
In a possible implementation manner of the first aspect, the step of generating living feature identification information of each current suspected living area and a corresponding associated suspected living area according to the mapping set includes:
determining first living body characteristic influence information of the first spectral image characteristic identification information and second living body characteristic influence information of the second spectral image characteristic identification information according to a characteristic part region in the mapping set, a spectral characteristic vector set corresponding to the characteristic part region, a first environmental influence factor parameter corresponding to spectral position coordinate information of spectral conditions of the first spectral image characteristic identification information, and a second environmental influence factor parameter corresponding to spectral position coordinate information of spectral conditions of the second spectral image characteristic identification information;
respectively performing equal-interval segmentation on the first living body feature influence information and the second living body feature influence information to obtain a first segmentation parameter set of the first living body feature influence information and a second segmentation parameter set of the second living body feature influence information; wherein the first segmentation parameter set includes feature influence information of a plurality of first spectral feature vectors of the first living body feature influence information, and the second segmentation parameter set includes feature influence information of a plurality of second spectral feature vectors of the second living body feature influence information;
respectively matching the feature influence information of each first spectral feature vector in the first segmentation parameter set corresponding to the first living body feature influence information, and the feature influence information of each second spectral feature vector in the second segmentation parameter set corresponding to the second living body feature influence information, against the spectral feature vector of each piece of preset spectral feature vector identification information in a preset spectral feature vector identification feature set, so as to obtain first matching information between the first living body feature influence information and the preset spectral feature vector identification feature set and second matching information between the second living body feature influence information and the preset spectral feature vector identification feature set, wherein the preset spectral feature vector identification feature set comprises correspondences between a plurality of verified pieces of spectral feature vector identification information and their corresponding spectral feature vectors;
sequentially taking the preset spectral feature vector identification information obtained from the first matching information and the second matching information as the matching object, until a piece of current spectral feature vector identification information appears in the preset spectral feature vector identification feature set such that the first coincidence identification information range, between the third matching information (between the first living body feature influence information and the current spectral feature vector identification information) and the first matching information, is larger than a target preset range, and the second coincidence identification information range, between the fourth matching information (between the second living body feature influence information and the current spectral feature vector identification information) and the second matching information, is larger than the target preset range;
determining a third living body feature corresponding to the current spectral feature vector identification information, and performing feature extraction on the first segmentation parameter set and the second segmentation parameter set according to the third living body feature to obtain a first living body feature and a second living body feature;
when the first living body feature and the second living body feature are not matched, identifying the first living body feature based on a first spectral image feature identification type corresponding to the first spectral image feature identification information to obtain first identification information, and identifying the second living body feature based on a second spectral image feature identification type corresponding to the second spectral image feature identification information to obtain second identification information;
and determining the target living body feature units of the target spectral coordinate areas corresponding to the first identification information and the second identification information respectively existing in the first spectral image feature identification information and the second spectral image feature identification information, thereby generating living body feature identification information of each current suspected living body area and the corresponding associated suspected living body area.
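The equal-interval segmentation and the matching against the preset spectral feature vector identification feature set can be sketched as follows. This is a loose reading: numpy's array_split provides the equal-interval segmentation, and cosine similarity stands in for the matching rule, which the text does not specify.

```python
import numpy as np

def segment(influence_info, n_parts):
    """Equal-interval segmentation of living body feature influence
    information (modelled here as a 1-D array) into a parameter set."""
    return np.array_split(np.asarray(influence_info, dtype=float), n_parts)

def best_match(segments, identification_set):
    """Match each segment against every verified spectral feature vector
    and return the best-scoring identification information."""
    def cos(a, b):
        a, b = np.ravel(a), np.ravel(b)
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    scores = {name: float(np.mean([cos(s, ref) for s in segments]))
              for name, ref in identification_set.items()}
    return max(scores, key=scores.get), scores

segs = segment([0.2, 0.4, 0.6, 0.8], n_parts=2)
refs = {"id_1": np.array([0.3, 0.5]), "id_2": np.array([0.9, 0.1])}
print(best_match(segs, refs))
```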
In a possible implementation manner of the first aspect, the step of identifying the first living body feature based on a first spectral image feature identification type corresponding to the first spectral image feature identification information to obtain first identification information, and identifying the second living body feature based on a second spectral image feature identification type corresponding to the second spectral image feature identification information to obtain second identification information includes:
and determining a first related feature of the first spectral image feature identification information relative to the second spectral image feature identification information and a second related feature of the second spectral image feature identification information relative to the first spectral image feature identification information according to a first face part region corresponding to a first living feature of the first spectral image feature identification information and a second face part region corresponding to a second living feature of the second spectral image feature identification information, so as to obtain the first identification information and the second identification information.
In a possible implementation manner of the first aspect, the step of respectively performing living body feature unit identification on each current suspected living body area and a corresponding associated suspected living body area according to the living body feature identification information includes:
determining a target position area aiming at each current suspected living area and the corresponding associated suspected living area according to the living characteristic identification information;
respectively carrying out living body feature unit identification on each current suspected living body area and the corresponding associated suspected living body area according to the face part label and the scanning time sequence of each target position in the target position area to obtain an identified living body feature unit;
splicing the identified living body characteristic units according to a time sequence arrangement mode to obtain a plurality of spliced spectrum characteristic vector sequences;
identifying each spliced spectrum characteristic vector sequence based on an artificial intelligence model to obtain a face verification result of the target acquisition region;
in the living body feature unit identification process, partitioning a plurality of target positions according to the scanning time sequence of each target position to obtain a plurality of position partitions, wherein each position partition corresponds to a face part label;
for each position partition, generating a face part area corresponding to each target position under the current position partition, for each position partition, dividing target positions with the same spectral reflection points in different face part areas into an object unit, and when the ratio of the position continuity quantity in the target positions of the object unit to the total number of positions under the current position partition exceeds a first threshold, merging spectral reflection paths of each target position in the target positions of the object unit in the face part area to which the target position belongs to obtain a first spectral reflection path;
dividing nodes which appear only once in the face part region and have the same face part label and spectral reflection path in different face part regions into an object unit, and, when the ratio of the number of positionally continuous nodes among the object unit's target positions to the total number of positions under the current position partition exceeds the first threshold, merging the spectral reflection paths of each node in the face part region to which it belongs to obtain the first spectral reflection path;
dividing target positions which appear only once in the face part region and have the same face part label and spectral reflection path in different face part regions into an object unit, and, when the ratio of the number of target positions in the object unit to the total number of positions under the current position partition exceeds the first threshold, merging the spectral reflection paths of each target position in the face part region to which it belongs to obtain a second spectral reflection path;
determining a first target position in a current position partition according to the first spectrum reflection path or the second spectrum reflection path, and determining other target positions in the current position partition as second target positions;
and respectively carrying out living characteristic unit identification on each current suspected living area and the corresponding associated suspected living area according to the spectral reflection sequence of the first target position and the second target position in the current position partition.
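The partition-and-merge procedure above can be illustrated compactly. In the sketch below, position records are plain dicts with hypothetical keys ("t" for scan time, "part", "reflection_point", "path"), and list concatenation stands in for the unspecified path-merging rule.

```python
from collections import defaultdict

def partition_by_scan_time(positions):
    """Partition target positions by scan time so that each partition
    corresponds to one face part label."""
    partitions = defaultdict(list)
    for p in sorted(positions, key=lambda p: p["t"]):
        partitions[p["part"]].append(p)
    return partitions

def merge_reflection_paths(partition, first_threshold=0.5):
    """Group positions sharing a spectral reflection point into object
    units; when a unit covers more than `first_threshold` of the
    partition, merge its spectral reflection paths."""
    units = defaultdict(list)
    for p in partition:
        units[p["reflection_point"]].append(p)
    merged = {}
    for point, unit in units.items():
        if len(unit) / len(partition) > first_threshold:
            merged[point] = [seg for p in unit for seg in p["path"]]
    return merged

positions = [
    {"t": 0, "part": "eye", "reflection_point": "r1", "path": ["a"]},
    {"t": 1, "part": "eye", "reflection_point": "r1", "path": ["b"]},
    {"t": 2, "part": "eye", "reflection_point": "r2", "path": ["c"]},
]
eye = partition_by_scan_time(positions)["eye"]
print(merge_reflection_paths(eye))  # -> {'r1': ['a', 'b']}
```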
In a possible implementation manner of the first aspect, the step of identifying each spliced spectral feature vector sequence based on an artificial intelligence model to obtain a face verification result of the target acquisition region includes:
extracting feature information of each spliced spectrum feature vector sequence based on an artificial intelligence model, inputting the feature information of the spliced spectrum feature vector sequence into a classification layer for classification, and outputting the confidence of the feature information of the spliced spectrum feature vector sequence in each classification label, wherein the classification labels comprise a verification passing label and a verification failing label;
and obtaining a face verification result of the target acquisition region according to the confidence degree of the feature information of the spliced spectrum feature vector sequence in each classification label.
In a possible implementation manner of the first aspect, the artificial intelligence model is obtained by training a pre-configured training sample set and a training classification label corresponding to each training sample in the training sample set based on a deep learning network, where the training sample is a spectral feature vector sequence.
In a second aspect, the embodiment of the invention also provides an internet of things artificial intelligence face verification system, which comprises an internet of things cloud server and a plurality of internet of things face verification terminals in communication connection with the internet of things cloud server;
the internet of things face verification terminal is used for acquiring the face image data stream of each continuous time node in a preset time period of a target acquisition region and sending the face image data stream of each continuous time node in the preset time period to the internet of things cloud server when a face verification instruction is detected;
the Internet of things cloud server is used for acquiring a face image data stream of each continuous time node of a target acquisition region in a preset time period, wherein the face image data stream is acquired by the Internet of things face verification terminal when a face verification instruction is detected;
the internet of things cloud server is used for determining each suspected living body area corresponding to the target acquisition area according to the face image data stream of each continuous time node, and for each suspected living body area, respectively determining associated suspected living body areas which are associated with the current suspected living body area from the face image data streams of the rest time nodes, wherein the areas outside the suspected living body areas are non-living body areas;
the internet of things cloud server is used for performing spectral image feature identification on the current suspected living body area and performing spectral image feature identification on the associated suspected living body area to respectively obtain first spectral image feature identification information of the current suspected living body area and second spectral image feature identification information of the associated suspected living body area, wherein the first spectral image feature identification information and the second spectral image feature identification information respectively comprise spectral position coordinate information of respective corresponding spectral conditions, and the spectral conditions are respectively a plurality of preset spectral forms associated with respective corresponding light reflection features;
the Internet of things cloud server is used for generating living body feature identification information of each current suspected living body area and the corresponding associated suspected living body area according to the first spectral image feature identification information and the second spectral image feature identification information;
the Internet of things cloud server is used for respectively carrying out living body feature unit identification on each current suspected living body area and the corresponding associated suspected living body area according to the living body feature identification information, splicing the identified living body feature units according to a time sequence arrangement mode to obtain a plurality of spliced spectrum feature vector sequences, and identifying each spliced spectrum feature vector sequence based on an artificial intelligence model to obtain a face verification result of the target acquisition area.
In a third aspect, an embodiment of the present invention further provides an internet of things artificial intelligence face verification device, which is applied to an internet of things cloud server, where the internet of things cloud server is in communication connection with a plurality of internet of things face verification terminals, and the device includes:
the acquisition module is used for acquiring a human face image data stream of each continuous time node of a target acquisition region in a preset time period, wherein the human face image data stream is acquired by the Internet of things human face verification terminal when a human face verification instruction is detected;
a determining module, configured to determine, according to the face image data stream of each continuous time node, each suspected living body area corresponding to the target acquisition area, and for each suspected living body area, respectively determine, from the face image data streams of the remaining time nodes, a related suspected living body area having a relevance to the current suspected living body area, where an area outside the suspected living body area is a non-living body area;
the first identification module is used for performing spectral image feature identification on the current suspected living body area and performing spectral image feature identification on the associated suspected living body area to respectively obtain first spectral image feature identification information of the current suspected living body area and second spectral image feature identification information of the associated suspected living body area, wherein the first spectral image feature identification information and the second spectral image feature identification information respectively comprise spectral position coordinate information of respective corresponding spectral conditions, and the spectral conditions are respectively a plurality of preset spectral forms associated with respective corresponding light reflection features;
a generating module, configured to generate living body feature identification information of each current suspected living body area and a corresponding associated suspected living body area according to the first spectral image feature identification information and the second spectral image feature identification information;
and the second identification module is used for respectively identifying living body feature units of each current suspected living body area and the corresponding associated suspected living body area according to the living body feature identification information, splicing the identified living body feature units in a time sequence arrangement mode to obtain a plurality of spliced spectral feature vector sequences, and identifying each spliced spectral feature vector sequence based on an artificial intelligence model to obtain a face verification result of the target acquisition area.
In a fourth aspect, an embodiment of the present invention further provides an internet of things cloud server, where the internet of things cloud server includes a processor, a machine-readable storage medium, and a network interface, where the machine-readable storage medium, the network interface, and the processor are connected through a bus system, the network interface is configured to be in communication connection with at least one internet of things face verification terminal, the machine-readable storage medium is configured to store a program, an instruction, or a code, and the processor is configured to execute the program, the instruction, or the code in the machine-readable storage medium, so as to execute the internet of things artificial intelligence face verification method in any one possible design of the first aspect or the first aspect.
In a fifth aspect, an embodiment of the present invention provides a computer-readable storage medium, where instructions are stored, and when executed, cause a computer to perform the method for verifying an artificial intelligence face of an internet of things in the first aspect or any one of the possible designs of the first aspect.
Based on any one of the above aspects, each suspected living body area of each continuous time node is determined, together with the associated suspected living body areas related to the current suspected living body area, so that after spectral image feature recognition based on the association between these areas, face verification is performed through an artificial intelligence model. This improves the accuracy with which changes in the spectral conditions of face part areas within the preset time period can be distinguished, and thereby the accuracy of living body detection.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and should therefore not be regarded as limiting the scope; for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a schematic view of an application scenario of an internet of things artificial intelligence face verification system provided by an embodiment of the invention;
fig. 2 is a schematic flow chart of an internet-of-things artificial intelligence face verification method provided by an embodiment of the invention;
fig. 3 is a schematic diagram of functional modules of an internet of things artificial intelligence face verification device provided by an embodiment of the invention;
fig. 4 is a block diagram schematically illustrating a structure of an internet of things cloud server for implementing the internet of things artificial intelligence face verification method provided by the embodiment of the invention.
Detailed Description
The present invention is described in detail below with reference to the drawings, and the specific operation methods in the method embodiments can also be applied to the apparatus embodiments or the system embodiments.
Fig. 1 is an interaction diagram of an internet-of-things artificial intelligence face verification system 10 according to an embodiment of the present invention. The internet of things artificial intelligence face verification system 10 can comprise an internet of things cloud server 100 and an internet of things face verification terminal 200 in communication connection with the internet of things cloud server 100. The internet of things artificial intelligence face verification system 10 shown in fig. 1 is only one possible example, and in other possible embodiments, the internet of things artificial intelligence face verification system 10 may also include only one of the components shown in fig. 1 or may also include other components.
In this embodiment, the internet of things face verification terminal 200 may include a mobile device, a tablet computer, a laptop computer, or any combination thereof. In some embodiments, the mobile device may include a smart home device, a wearable device, a smart mobile device, a virtual reality device, an augmented reality device, or the like, or any combination thereof. In some embodiments, the smart home devices may include control devices of smart electrical devices, smart monitoring devices, smart televisions, smart cameras, and the like, or any combination thereof. In some embodiments, the wearable device may include a smart bracelet, smart footwear, smart glasses, a smart helmet, a smart watch, a smart garment, a smart backpack, a smart accessory, or the like, or any combination thereof. In some embodiments, the smart mobile device may include a smartphone, a personal digital assistant, a gaming device, and the like, or any combination thereof. In some embodiments, the virtual reality device and/or the augmented reality device may include a virtual reality helmet, virtual reality glasses, a virtual reality patch, an augmented reality helmet, augmented reality glasses, an augmented reality patch, or the like, or any combination thereof. For example, the virtual reality device and/or augmented reality device may include various virtual reality products and the like.
In this embodiment, the internet of things cloud server 100 and the internet of things face verification terminal 200 in the internet of things artificial intelligence face verification system 10 may cooperate to execute the internet of things artificial intelligence face verification method described in the following method embodiment, and the execution steps of the internet of things cloud server 100 and the internet of things face verification terminal 200 are detailed in the following method embodiment.
In order to solve the technical problem in the foregoing background technology, fig. 2 is a schematic flow chart of an internet of things artificial intelligence face verification method provided in an embodiment of the present invention, and the internet of things artificial intelligence face verification method provided in this embodiment may be executed by the internet of things cloud server 100 shown in fig. 1, and the details of the internet of things artificial intelligence face verification method are described below.
Step S110, a face image data stream of each continuous time node of a target acquisition region in a preset time period, which is acquired when the face verification instruction is detected by the internet of things face verification terminal 200, is acquired.
Step S120, each suspected living body area corresponding to the target acquisition area is determined according to the face image data stream of each continuous time node, and for each suspected living body area, the related suspected living body areas which are related to the current suspected living body area are respectively determined from the face image data streams of the rest time nodes.
Step S130, performing spectral image feature identification on the current suspected living body area, and performing spectral image feature identification on the associated suspected living body area, to respectively obtain first spectral image feature identification information of the current suspected living body area and second spectral image feature identification information of the associated suspected living body area.
Step S140, generating living body feature identification information of each current suspected living body area and the corresponding associated suspected living body area according to the first spectral image feature identification information and the second spectral image feature identification information.
Step S150, respectively performing living body feature unit identification on each current suspected living body area and the corresponding associated suspected living body area according to the living body feature identification information, splicing the identified living body feature units in time-sequence order to obtain a plurality of spliced spectral feature vector sequences, and identifying each spliced spectral feature vector sequence based on an artificial intelligence model to obtain a face verification result of the target acquisition area.
In this embodiment, the internet of things face verification terminal 200 may collect the face image data stream of each continuous time node in the target collection area within a preset time period when a face verification instruction is detected after various internet of things services (for example, services such as smart home control, smart medical linkage, and smart city data retrieval linkage) are enabled. The target acquisition region may be a region that can be acquired by the internet of things face verification terminal 200, and the preset time period may be flexibly set according to different internet of things service requirements, for example, 5 seconds may be set as a preset time period. Each time node may refer to a specific time, or may refer to a sub-time period within the preset time period, which is not limited herein.
In this embodiment, the suspected living area may be understood as an area that needs to be subjected to living detection, and an area outside the suspected living area may be obviously determined as a non-living area in general.
In this embodiment, the first spectral image feature identification information and the second spectral image feature identification information respectively include spectral position coordinate information of respective corresponding spectral conditions, and the spectral conditions may be a plurality of preset spectral forms associated with respective corresponding light reflection features (for example, spectral reflectivities and the like), for example, spectral reflection forms of environments in different spectrums under the spectral conditions.
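One way to hold this identification information in code is a small container keyed by spectral condition, as sketched below; every field name is an assumption introduced for illustration.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class SpectralFeatureInfo:
    """Illustrative container for spectral image feature identification
    information of one suspected living body area."""
    region_id: str
    # spectral condition (preset spectral form) -> spectral position coordinates
    coords_by_condition: Dict[str, List[Tuple[int, int]]]
    # associated light reflection feature, e.g. spectral reflectivity
    reflectivity: Dict[str, float]

info = SpectralFeatureInfo(
    region_id="suspected_area_0",
    coords_by_condition={"form_a": [(12, 40), (13, 41)]},
    reflectivity={"form_a": 0.37},
)
print(info.reflectivity["form_a"])  # -> 0.37
```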
Based on the above design, this embodiment determines each suspected living body region of each continuous time node together with the associated suspected living body regions related to the current suspected living body region, so that after spectral image feature recognition based on the correlation between them, face verification is performed through an artificial intelligence model. This improves the accuracy with which changes in the spectral conditions of face regions within the preset time period can be distinguished, and thereby the accuracy of living body detection.
In a possible implementation manner, for step S120, in order to improve the accuracy with which each suspected living body area is determined and to reduce identification errors, this embodiment further considers dynamic changes that may occur during spectral reflection. For example, light reflection dynamic change information containing light reflection characteristic information of the target collection area may be determined according to the face image data stream of each continuous time node, and first dynamic change information having a first light reflection characteristic and second dynamic change information having a second light reflection characteristic may be determined from the light reflection dynamic change information.
The first light reflection characteristic may be used to represent a light reflection characteristic having a light reflection intensity greater than a first preset intensity, and the second light reflection characteristic may be used to represent a light reflection characteristic having a light reflection intensity less than a second preset intensity. It should be noted that the first preset intensity and the second preset intensity may be the same or different, and may be flexibly set, and when the first preset intensity and the second preset intensity are not the same, the second preset intensity is smaller than the first preset intensity.
Next, in the light reflection characteristics of the light reflection dynamic change information corresponding to the face position of the target collection area, the light reflection characteristics of key points of the face position are determined, and the interval size of a first dynamic change pixel value interval on the first dynamic change information and the interval size of a second dynamic change pixel value interval on the second dynamic change information are obtained.
And if the interval size of the first dynamic change pixel value interval and the interval size of the second dynamic change pixel value interval are both larger than or equal to the set length, comparing the interval size of the first dynamic change pixel value interval with the interval size of the second dynamic change pixel value interval, and if the interval size of the first dynamic change pixel value interval is larger than the interval size of the second dynamic change pixel value interval, taking the first dynamic change pixel value interval as a suspected living pixel value interval.
Or, if the interval size of the second dynamic change pixel value interval is larger than the interval size of the first dynamic change pixel value interval, the second dynamic change pixel value interval is used as the suspected living body pixel value interval.
Or, if the interval size of the first dynamically changing pixel value interval is equal to the interval size of the second dynamically changing pixel value interval, the first dynamically changing pixel value interval or the second dynamically changing pixel value interval is used as the suspected living pixel value interval.
Therefore, the area which is matched with each suspected living body pixel value interval and is matched with the light reflection characteristics of the key points of the human face position can be determined as the suspected living body area to be determined, the light reflection dynamic change information is segmented into a plurality of pieces of segmentation dynamic change information according to the determined suspected living body area to be determined, and the suspected living body area meeting the conditions is determined as the suspected living body area corresponding to the target acquisition area according to the relation between the change range and the preset range of each piece of segmentation dynamic change information.
For example, in one possible example, when the variation range of each piece of segmentation dynamic variation information is in a preset range, it may be determined that the suspected living body area to be determined satisfies the condition, otherwise, it is determined that the suspected living body area to be determined does not satisfy the condition.
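Under that reading, the qualifying test is a one-liner, sketched here with the variation range taken as max minus min of the segmented values (an assumption; the text does not define how the range is measured):

```python
def qualifies(segment_values, preset_lo, preset_hi):
    """True when the variation range of one piece of segmentation dynamic
    change information lies within the preset range."""
    variation = max(segment_values) - min(segment_values)
    return preset_lo <= variation <= preset_hi

print(qualifies([0.30, 0.42, 0.38], preset_lo=0.05, preset_hi=0.20))  # -> True
```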
In a possible implementation manner, still referring to step S120, in order to facilitate accurately obtaining a relevant suspected living area which is relevant to the current suspected living area, for each suspected living area, at least one local feature group of the suspected living area may be obtained, and each local feature group in the at least one local feature group is analyzed to obtain a key feature point included in each local feature group.
It should be noted that the local feature group may be used to represent each local feature point of the suspected living body area and face part information corresponding to each local feature point, such as an eye part, a nose part, a lip part, and the like.
On this basis, the feature point change value, the feature point depth value and the feature point color value of each key feature point in the corresponding time period are obtained.
It should be noted that the feature point change value may be used to describe how each key feature point changes, the feature point depth value may be used to describe the depth of each key feature point, and the feature point color value may be used to describe the color of each key feature point.
Therefore, the feature point change value, the feature point depth value and the feature point color value of each key feature point in the corresponding time period can be mapped and associated and then combined, so that a feature value mapping sequence corresponding to each key feature point is obtained. It is understood that the merged feature value mapping sequence may be used to represent the correspondence between the feature point change value, the feature point depth value, and the feature point color value of each key feature point in the corresponding time period.
Finally, the associated suspected living body areas that are associated with the current suspected living body area are respectively determined from the face image data streams of the remaining time nodes according to the feature value mapping sequence corresponding to each key feature point.
For example, an area having a matching relationship with the feature value mapping sequence corresponding to each key feature point may be searched for in the face image data streams of the remaining time nodes and taken as an associated suspected living body area that is associated with the current suspected living body area.
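The following sketch illustrates, under simplifying assumptions, how such feature value mapping sequences could be built and matched across time nodes; the triple representation, the distance measure, and the threshold are hypothetical choices, not the embodiment's prescribed ones.

```python
from typing import Dict, List, Tuple

FeatureTriple = Tuple[float, float, float]  # (change, depth, color) per frame

def build_mapping_sequence(changes: List[float],
                           depths: List[float],
                           colors: List[float]) -> List[FeatureTriple]:
    """Map and associate the three value streams of one key feature
    point, then merge them into one feature value mapping sequence."""
    return list(zip(changes, depths, colors))

def sequence_distance(a: List[FeatureTriple], b: List[FeatureTriple]) -> float:
    """Mean absolute difference between two mapping sequences."""
    return sum(sum(abs(x - y) for x, y in zip(ta, tb))
               for ta, tb in zip(a, b)) / max(len(a), 1)

def find_associated_regions(current_seqs: Dict[str, List[FeatureTriple]],
                            candidates: Dict[str, Dict[str, List[FeatureTriple]]],
                            threshold: float = 1.0) -> List[str]:
    """Search the remaining time nodes for regions whose key feature
    point sequences match those of the current suspected living body region."""
    associated = []
    for region_id, seqs in candidates.items():
        if all(kp in seqs and sequence_distance(seq, seqs[kp]) < threshold
               for kp, seq in current_seqs.items()):
            associated.append(region_id)
    return associated
```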
In a possible implementation manner, in step S130, the spectral image feature identification of the current suspected living body area and of the associated suspected living body area may both be performed based on a spectral feature extraction method using spectral rearrangement, thereby obtaining the first spectral image feature identification information and the second spectral image feature identification information. The spectral feature extraction method based on spectral rearrangement belongs to the related art and is not described herein again.
In a possible implementation manner, for step S140, this embodiment may specifically acquire the common spectral position coordinate information between the spectral position coordinate information of the spectral conditions corresponding to the first spectral image feature identification information and the second spectral image feature identification information, together with the illumination intensity of each spectral position coordinate set.
On this basis, when it is determined according to the illumination intensity that the common spectral position coordinate information contains a target spectral coordinate region, the differences of the coordinate region differences between each spectral position coordinate set of the common spectral position coordinate information in the set target spectral coordinate region and the spectral position coordinate set of the common spectral position coordinate information in the target spectral coordinate region are determined, and the spectral position coordinate sets in the set target spectral coordinate region that have the same coordinate region difference as the spectral position coordinate set in the target spectral coordinate region are adjusted into the corresponding target spectral coordinate region.
It should be noted that the target spectral coordinate region may represent a spectral coordinate region with an illumination intensity within a preset abnormal illumination intensity range.
Next, when the currently set target spectral coordinate region of the common spectral position coordinate information contains a plurality of spectral position coordinate sets, the differences of the coordinate region differences between these spectral position coordinate sets are determined according to the spectral position coordinate set of the common spectral position coordinate information in the target spectral coordinate region, and the spectral position coordinate sets in the currently set target spectral coordinate region are screened according to these differences.
Then, a label of the target spectral coordinate region can be set for each spectral position coordinate set obtained through the screening, according to the spectral position coordinate set of the common spectral position coordinate information in the target spectral coordinate region, and each such spectral position coordinate set is adjusted into the target spectral coordinate region.
Therefore, the first spectral feature vector sequence and the second spectral feature vector sequence corresponding to the first spectral image feature identification information and the second spectral image feature identification information can be determined according to the first spectral position coordinate set in the set target spectral coordinate region, the second spectral position coordinate set in the target spectral coordinate region, the first environmental influence factor parameter of the first spectral image feature identification information and the second environmental influence factor parameter of the second spectral image feature identification information.
It is worth noting that the first spectral feature vector sequence includes the comparison feature points of the first spectral image feature identification information, with respect to the second spectral image feature identification information, in the coordinate region differences of the common spectral position coordinate information; the second spectral feature vector sequence includes the associated feature points of the second spectral image feature identification information, with respect to the comparison feature points corresponding to the first spectral image feature identification information, in the coordinate region differences of the common spectral position coordinate information; and the first environmental influence factor parameter and the second environmental influence factor parameter are respectively used for representing the environmental influence factor parameter corresponding to the spectral condition vector associated with each of the first spectral image feature identification information and the second spectral image feature identification information.
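As a rough illustration of the illumination-based grouping in the preceding steps, the sketch below screens coordinate sets by an assumed abnormal illumination range and a simplified notion of "same coordinate region difference"; ABNORMAL_RANGE and the extent-based comparison are placeholders, not parameters defined by this embodiment.

```python
ABNORMAL_RANGE = (0.0, 0.2)  # hypothetical abnormal illumination intensity range

def region_extent(coord_set):
    """Bounding-box area of a spectral position coordinate set of (x, y) points."""
    xs = [p[0] for p in coord_set]
    ys = [p[1] for p in coord_set]
    return (max(xs) - min(xs)) * (max(ys) - min(ys))

def adjust_into_target_region(common_sets, intensities):
    """Screen the coordinate sets whose illumination intensity falls in the
    abnormal range and tag them with the target spectral region label."""
    lo, hi = ABNORMAL_RANGE
    target = [s for s, i in zip(common_sets, intensities) if lo <= i <= hi]
    if not target:
        return []
    # Keep the sets whose coordinate region difference matches the reference
    # set already in the target spectral coordinate region (simplified here
    # as equality of bounding-box extents).
    reference = region_extent(target[0])
    screened = [s for s in target if abs(region_extent(s) - reference) < 1e-6]
    return [{"label": "target_spectral_region", "coords": s} for s in screened]
```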
Then, a first candidate spectral feature vector of the first spectral image feature identification information and a second candidate spectral feature vector of the second spectral image feature identification information are determined from the first spectral feature vector sequence and the second spectral feature vector sequence, respectively. Once the first candidate spectral feature vector and the second candidate spectral feature vector are determined, they are matched against each other to obtain matching information, and whether they are candidate spectral feature vectors of a multi-combination spectral feature vector is judged according to the matching information. If so, the first candidate spectral feature vector and the second candidate spectral feature vector are respectively converted, according to each combination spectral feature vector, into a plurality of first combination spectral feature vector sets and second combination spectral feature vector sets. The feature part regions whose combination spectral feature vectors are the same as or similar to those of the first combination spectral feature vector sets and the second combination spectral feature vector sets are then searched for, and the matching information and the spectral feature vector sets corresponding to these feature part regions are combined into a corresponding mapping set.
Finally, living body feature identification information of each current suspected living body area and the corresponding associated suspected living body area can be generated according to the mapping set.
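The matching-and-mapping step above can be pictured as follows; cosine similarity stands in for the unspecified matching function, and the multi-combination test is a deliberate simplification, so the threshold and data shapes are assumptions.

```python
import math

def cosine(a, b):
    """Cosine similarity between two spectral feature vectors (float lists)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def build_mapping_set(first_vec, second_vec, feature_regions, threshold=0.9):
    """Match the two candidate vectors and, when they behave as
    multi-combination vectors, collect the feature part regions whose
    vectors are the same or similar into one mapping set."""
    match_score = cosine(first_vec, second_vec)
    mapping_set = {"matching_info": match_score, "regions": []}
    if match_score < threshold:  # not multi-combination: nothing to collect
        return mapping_set
    for region_id, region_vec in feature_regions.items():
        if (cosine(first_vec, region_vec) >= threshold
                or cosine(second_vec, region_vec) >= threshold):
            mapping_set["regions"].append((region_id, region_vec))
    return mapping_set
```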
For example, in one possible implementation manner, this embodiment may determine the first living body feature influence information of the first spectral image feature identification information and the second living body feature influence information of the second spectral image feature identification information according to the feature part regions in the mapping set and the spectral feature vector sets corresponding to the feature part regions, the first environmental influence factor parameter corresponding to the spectral position coordinate information of the spectral conditions of the first spectral image feature identification information, and the second environmental influence factor parameter corresponding to the spectral position coordinate information of the spectral conditions of the second spectral image feature identification information.
It should be noted that the first environmental influence factor parameter and the second environmental influence factor parameter may refer to parameters under the respective corresponding spectral environments, such as the spectral environment temperature.
Then, the first living body feature influence information and the second living body feature influence information are respectively segmented at equal intervals to obtain a first segmentation parameter set of the first living body feature influence information and a second segmentation parameter set of the second living body feature influence information.
The first segmentation parameter set may include the feature influence information of a plurality of first spectral feature vectors of the first living body feature influence information, and the second segmentation parameter set may include the feature influence information of a plurality of second spectral feature vectors of the second living body feature influence information, where the feature influence information may be used to characterize parameters in the corresponding spectral environment.
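A minimal sketch of the equal-interval segmentation, assuming the living body feature influence information can be treated as a flat value sequence; the segment count is illustrative.

```python
import math

def segment_equal_intervals(influence_info, num_segments=4):
    """Split influence information into equally sized segments, each of
    which carries the feature influence information of one spectral
    feature vector."""
    n = len(influence_info)
    step = max(math.ceil(n / num_segments), 1)
    return [influence_info[i:i + step] for i in range(0, n, step)]
```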
Then, the feature influence information of each first spectral feature vector in the first segmentation parameter set corresponding to the first living body feature influence information and the feature influence information of each second spectral feature vector in the second segmentation parameter set corresponding to the second living body feature influence information may be respectively matched with the spectral feature vector of each piece of preset spectral feature vector identification information in the preset spectral feature vector identification feature set, so as to obtain first matching information between the first living body feature influence information and the preset spectral feature vector identification feature set and second matching information between the second living body feature influence information and the preset spectral feature vector identification feature set.
It should be noted that the preset spectral feature vector identification feature set may include a corresponding relationship between a plurality of verified spectral feature vector identification information and corresponding spectral feature vectors.
Then, the preset spectral feature vector identification information obtained from the first matching information and the second matching information is taken as the matching object, and matching is performed sequentially until a current spectral feature vector identification information appears in the preset spectral feature vector identification feature set such that the first coincidence identification information range between the third matching information (between the first living body feature influence information and the current spectral feature vector identification information) and the first matching information (between the first living body feature influence information and the preset spectral feature vector identification feature set) is larger than the target preset range, and the second coincidence identification information range between the fourth matching information (between the second living body feature influence information and the current spectral feature vector identification information) and the second matching information (between the second living body feature influence information and the preset spectral feature vector identification feature set) is larger than the target preset range.
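The sequential matching loop can be sketched as follows, under the assumption that a "coincidence identification information range" can be approximated by the overlap ratio of two match-score sets; the target_range value and the per-entry precomputed matches are hypothetical.

```python
def coincidence_range(scores_a, scores_b):
    """Approximate coincidence identification information range as the
    Jaccard overlap of two match-score sets."""
    union = set(scores_a) | set(scores_b)
    return len(set(scores_a) & set(scores_b)) / max(len(union), 1)

def find_current_identification(preset_feature_set, first_matches,
                                second_matches, target_range=0.5):
    """Walk the preset spectral feature vector identification feature set
    until one entry's coincidence ranges against both pieces of matching
    information exceed the target preset range."""
    for ident_info, third_matches, fourth_matches in preset_feature_set:
        if (coincidence_range(third_matches, first_matches) > target_range
                and coincidence_range(fourth_matches, second_matches) > target_range):
            return ident_info  # the current spectral feature vector identification
    return None
```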
On this basis, a third living body feature corresponding to the current spectral feature vector identification information is determined, and feature extraction is performed on the first segmentation parameter set and the second segmentation parameter set according to the third living body feature to obtain a first living body feature and a second living body feature. When the first living body feature does not match the second living body feature, the first living body feature is identified based on the first spectral image feature identification type corresponding to the first spectral image feature identification information to obtain first identification information, and the second living body feature is identified based on the second spectral image feature identification type corresponding to the second spectral image feature identification information to obtain second identification information.
For example, in one possible implementation, the first identification information and the second identification information may be obtained by determining a first correlation feature of the first spectral image feature identification information with respect to the second spectral image feature identification information, and a second correlation feature of the second spectral image feature identification information with respect to the first spectral image feature identification information, from the first face part region corresponding to the first living body feature and the second face part region corresponding to the second living body feature.
In this way, the living body feature units of the target spectral coordinate regions corresponding to the first identification information and the second identification information, respectively existing in the first spectral image feature identification information and the second spectral image feature identification information, can be determined, thereby generating the living body feature identification information of each current suspected living body area and the corresponding associated suspected living body area.
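A simplified sketch of this final identification step is given below; the reduction of living body features to value lists, the tolerance-based notion of "matched", and the recognizer callables are all illustrative stand-ins for the type-specific identification.

```python
def identify_living_features(third_feature, first_segments, second_segments,
                             first_recognizer, second_recognizer, tol=1e-3):
    """Extract the first and second living body features guided by the
    third living body feature; when they do not match, identify each one
    with the recognition type of its own spectral image feature
    identification information."""
    def extract(segments):
        # Keep values close to the third living body feature (tolerance-based).
        return [v for seg in segments for v in seg
                if any(abs(v - t) < tol for t in third_feature)]

    first_feature = extract(first_segments)
    second_feature = extract(second_segments)
    if first_feature != second_feature:  # not matched: identify separately
        return first_recognizer(first_feature), second_recognizer(second_feature)
    return None, None
```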
In a possible implementation manner, for step S150, this embodiment may determine a target position area for each current suspected living body area and the corresponding associated suspected living body area according to the living body feature identification information. Living body feature unit identification is then performed on each current suspected living body area and the corresponding associated suspected living body area respectively, according to the face part tag and the scanning time sequence of each target position in the target position area, to obtain the identified living body feature units. After the identified living body feature units are spliced in a time sequence arrangement manner, a plurality of spliced spectral feature vector sequences are obtained, so that each spliced spectral feature vector sequence can be identified based on the artificial intelligence model to obtain the face verification result of the target acquisition area.
During living body feature unit identification, the plurality of target positions may be partitioned according to the scanning time sequence of each target position to obtain a plurality of position partitions, where each position partition corresponds to one face part tag.
Then, for each position partition, the face part region corresponding to each target position under the current position partition may be generated. For each position partition, the target positions having the same spectral reflection point in different face part regions are divided into one object unit, and when the ratio of the position continuity amount in the target positions of the object unit to the total number of positions under the current position partition exceeds a first threshold, the spectral reflection paths of each of these target positions in the face part region to which it belongs are merged to obtain a first spectral reflection path.
Furthermore, the nodes that appear only once in a face part region and have the same face part label and spectral reflection path in different face part regions may be divided into one object unit, and when the ratio of the position continuity amount in the target positions of the object unit to the total number of positions under the current position partition exceeds the first threshold, the spectral reflection paths of each node in the target positions of the object unit in the face part region may be merged to obtain the first spectral reflection path.
For another example, the target positions that appear only once in a face part region and have the same face part label and spectral reflection path in different face part regions may be divided into one object unit, and when the ratio of the number of target positions in the target positions of the object unit to the total number of positions under the current position partition exceeds the first threshold, the spectral reflection paths of each of the target positions of the object unit in the face part region may be merged to obtain a second spectral reflection path.
In this way, the first target position in the current position partition can be determined according to the first spectral reflection path or the second spectral reflection path, and the remaining target positions in the current position partition can be determined as second target positions, so that living body feature unit identification can be performed on each current suspected living body area and the corresponding associated suspected living body area respectively, according to the spectral reflection sequence of the first target position and the second target positions in the current position partition.
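The partition-and-merge flow above might look as follows in outline; the field names on target positions and the FIRST_THRESHOLD ratio are assumptions, and the unit size stands in for the position continuity amount.

```python
from collections import defaultdict

FIRST_THRESHOLD = 0.5  # hypothetical ratio of continuous positions

def partition_by_timing(positions):
    """Group target positions by face part tag following scan timing,
    one position partition per tag."""
    partitions = defaultdict(list)
    for pos in sorted(positions, key=lambda p: p["scan_time"]):
        partitions[pos["face_part_label"]].append(pos)
    return partitions

def merge_reflection_paths(partition):
    """Divide positions sharing a spectral reflection point into object
    units and merge their paths once the continuity ratio is exceeded."""
    units = defaultdict(list)
    for pos in partition:
        units[pos["reflection_point"]].append(pos)
    merged = []
    for unit in units.values():
        if len(unit) / len(partition) > FIRST_THRESHOLD:
            # Merged paths of one object unit: a first spectral reflection path.
            merged.append([p["path"] for p in unit])
    return merged
```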
As a possible example, in the process of identifying each spliced spectral feature vector sequence based on the artificial intelligence model to obtain the face verification result of the target acquisition area, this embodiment may extract the feature information of each spliced spectral feature vector sequence based on the artificial intelligence model, input the feature information of the spliced spectral feature vector sequence into the classification layer for classification, and output the confidence of the feature information of the spliced spectral feature vector sequence under each classification label. The classification labels include a verification passing label and a verification failing label.
A face verification result of the target acquisition area is then obtained according to the confidence of the feature information of the spliced spectral feature vector sequence under each classification label.
For example, if the confidence of the feature information of the spliced spectral feature vector sequence under the verification passing label is greater than the set confidence, the face verification result is a pass; if the confidence under the verification failing label is greater than the set confidence, the face verification result is a fail.
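A minimal sketch of this confidence readout, assuming the classification layer yields one confidence per label; the label keys and SET_CONFIDENCE are illustrative.

```python
SET_CONFIDENCE = 0.8  # hypothetical decision threshold (set confidence)

def face_verification_result(confidences):
    """Map per-label confidences to a pass/fail verification result.

    `confidences` maps each classification label ("pass" / "fail") to the
    confidence of the stitched spectral feature vector sequence features.
    """
    if confidences.get("pass", 0.0) > SET_CONFIDENCE:
        return "verification passed"
    if confidences.get("fail", 0.0) > SET_CONFIDENCE:
        return "verification failed"
    return "undetermined"
```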
Optionally, the artificial intelligence model may be obtained by training a pre-configured training sample set and a training classification label corresponding to each training sample in the training sample set based on a deep learning network, where the training sample is a spectral feature vector sequence, and the specific training mode may refer to a conventional training mode of a deep learning network in the prior art, and the training process is not a key point of the embodiment of the present invention and is not described herein again.
Fig. 3 is a schematic diagram of the functional modules of an internet of things artificial intelligence face verification device 300 according to an embodiment of the present invention. In this embodiment, the internet of things artificial intelligence face verification device 300 may be divided into functional modules according to the method embodiment executed by the internet of things cloud server 100, that is, the following functional modules of the internet of things artificial intelligence face verification device 300 may be used to execute the method embodiments executed by the internet of things cloud server 100. The internet of things artificial intelligence face verification device 300 may include an obtaining module 310, a determining module 320, a first identification module 330, a generating module 340, and a second identification module 350, and the functions of these functional modules are described in detail below.
The obtaining module 310 is configured to obtain the face image data stream of each continuous time node of the target acquisition area within a preset time period, where the face image data stream is acquired by the internet of things face verification terminal 200 when a face verification instruction is detected. The obtaining module 310 may be configured to perform step S110, and for a detailed implementation of the obtaining module 310, reference may be made to the detailed description of step S110.
The determining module 320 is configured to determine each suspected living body area corresponding to the target acquisition area according to the face image data stream of each continuous time node, and, for each suspected living body area, respectively determine from the face image data streams of the remaining time nodes the associated suspected living body areas that are associated with the current suspected living body area, where the areas outside the suspected living body areas are non-living body areas. The determining module 320 may be configured to perform step S120, and for a detailed implementation of the determining module 320, reference may be made to the detailed description of step S120.
The first identification module 330 is configured to perform spectral image feature identification on the current suspected living body area and on the associated suspected living body area, to obtain first spectral image feature identification information of the current suspected living body area and second spectral image feature identification information of the associated suspected living body area, where the first spectral image feature identification information and the second spectral image feature identification information respectively include the spectral position coordinate information of their corresponding spectral conditions, and the spectral conditions are a plurality of preset spectral forms associated with the corresponding light reflection features. The first identification module 330 may be configured to perform step S130, and for a detailed implementation of the first identification module 330, reference may be made to the detailed description of step S130.
The generating module 340 is configured to generate the living body feature identification information of each current suspected living body area and the corresponding associated suspected living body area according to the first spectral image feature identification information and the second spectral image feature identification information. The generating module 340 may be configured to perform step S140, and for a detailed implementation of the generating module 340, reference may be made to the detailed description of step S140.
The second identification module 350 is configured to perform living body feature unit identification on each current suspected living body area and the corresponding associated suspected living body area according to the living body feature identification information, splice the identified living body feature units in a time sequence arrangement manner to obtain a plurality of spliced spectral feature vector sequences, and identify each spliced spectral feature vector sequence based on the artificial intelligence model to obtain the face verification result of the target acquisition area. The second identification module 350 may be configured to perform step S150, and for a detailed implementation of the second identification module 350, reference may be made to the detailed description of step S150.
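Purely as a structural illustration, the five modules could be wired as below; the class and method names are hypothetical and do not reflect the embodiment's actual code.

```python
class IoTFaceVerificationDevice:
    """Structural sketch of device 300 wiring five module callables."""

    def __init__(self, obtaining, determining, first_identifying,
                 generating, second_identifying):
        self.obtaining = obtaining                      # module 310 -> step S110
        self.determining = determining                  # module 320 -> step S120
        self.first_identifying = first_identifying      # module 330 -> step S130
        self.generating = generating                    # module 340 -> step S140
        self.second_identifying = second_identifying    # module 350 -> step S150

    def verify(self, instruction):
        stream = self.obtaining(instruction)                        # S110
        current, associated = self.determining(stream)              # S120
        first_info, second_info = self.first_identifying(current,
                                                         associated)  # S130
        living_info = self.generating(first_info, second_info)     # S140
        return self.second_identifying(living_info)                # S150
```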
Further, fig. 4 is a schematic structural diagram of an internet of things cloud server 100 for executing the above-described internet of things artificial intelligence face verification method according to an embodiment of the present invention. As shown in fig. 4, the internet of things cloud server 100 may include a network interface 110, a machine-readable storage medium 120, a processor 130, and a bus 140. The processor 130 may be one or more, and one processor 130 is illustrated in fig. 4 as an example. The network interface 110, the machine-readable storage medium 120, and the processor 130 may be connected by a bus 140 or otherwise, as exemplified by the connection by the bus 140 in fig. 4.
The machine-readable storage medium 120 is a computer-readable storage medium and can be used to store software programs, computer-executable programs, and modules, such as the program instructions/modules corresponding to the internet of things artificial intelligence face verification method in the embodiment of the present invention (for example, the obtaining module 310, the determining module 320, the first identification module 330, the generating module 340, and the second identification module 350 of the internet of things artificial intelligence face verification device 300 shown in fig. 3). The processor 130 executes the software programs, instructions and modules stored in the machine-readable storage medium 120, so as to execute the various functional applications and data processing of the terminal device, that is, to implement the above internet of things artificial intelligence face verification method, which is not described herein again.
The machine-readable storage medium 120 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and the application programs required for at least one function, and the data storage area may store data created according to the use of the terminal, and the like. Furthermore, the machine-readable storage medium 120 may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memories. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), or a flash memory.
The processor 130 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method embodiments may be performed by integrated logic circuits of hardware or by instructions in the form of software in the processor 130. The processor 130 may be a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or execute the various methods, steps and logic blocks disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the methods disclosed in connection with the embodiments of the present invention may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor.
The internet of things cloud server 100 can perform information interaction with other devices (such as the internet of things face verification terminal 200) through the network interface 110. Network interface 110 may be a circuit, bus, transceiver, or any other device that may be used to exchange information. Processor 130 may send and receive information using network interface 110.
Finally, it should be noted that the above examples are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced, and such modifications and substitutions do not cause the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. An Internet of things artificial intelligence face verification method is applied to an Internet of things cloud server, the Internet of things cloud server is in communication connection with a plurality of Internet of things face verification terminals, and the method comprises the following steps:
acquiring a face image data stream of each continuous time node of a target acquisition region in a preset time period, wherein the face image data stream is acquired by the Internet of things face verification terminal when a face verification instruction is detected;
determining each suspected living body area corresponding to the target acquisition area according to the face image data stream of each continuous time node, and respectively determining associated suspected living body areas which are associated with the current suspected living body area from the face image data streams of the remaining time nodes for each suspected living body area, wherein the areas outside the suspected living body areas are non-living body areas;
performing spectral image feature identification on the current suspected living body area, and performing spectral image feature identification on the associated suspected living body area to respectively obtain first spectral image feature identification information of the current suspected living body area and second spectral image feature identification information of the associated suspected living body area, wherein the first spectral image feature identification information and the second spectral image feature identification information respectively comprise spectral position coordinate information of respective corresponding spectral conditions, and the spectral conditions are respectively a plurality of preset spectral forms associated with respective corresponding light reflection features;
generating living body feature identification information of each current suspected living body area and the corresponding associated suspected living body area according to the first spectral image feature identification information and the second spectral image feature identification information;
respectively carrying out living body feature unit identification on each current suspected living body area and the corresponding associated suspected living body area according to the living body feature identification information, splicing the identified living body feature units in a time sequence arrangement manner to obtain a plurality of spliced spectral feature vector sequences, and identifying each spliced spectral feature vector sequence based on an artificial intelligence model to obtain a face verification result of the target acquisition area.
2. The internet of things artificial intelligence face verification method according to claim 1, wherein the step of determining each suspected living body area corresponding to the target acquisition area according to the face image data stream of each continuous time node includes:
determining light reflection dynamic change information containing light reflection characteristic information of the target acquisition area according to the face image data stream of each continuous time node, and determining first dynamic change information with a first light reflection characteristic and second dynamic change information with a second light reflection characteristic in the light reflection dynamic change information, wherein the first light reflection characteristic is used for representing the light reflection characteristic corresponding to the light reflection intensity being greater than the first preset intensity, and the second light reflection characteristic is used for representing the light reflection characteristic corresponding to the light reflection intensity being less than the second preset intensity;
determining light reflection characteristics of key points of the face position in the light reflection characteristics of the light reflection dynamic change information corresponding to the face position of the target acquisition area;
acquiring the interval size of a first dynamic change pixel value interval on the first dynamic change information and the interval size of a second dynamic change pixel value interval on the second dynamic change information;
if the interval size of the first dynamically-changed pixel value interval and the interval size of the second dynamically-changed pixel value interval are both larger than or equal to a set length, comparing the interval size of the first dynamically-changed pixel value interval with the interval size of the second dynamically-changed pixel value interval, and if the interval size of the first dynamically-changed pixel value interval is larger than the interval size of the second dynamically-changed pixel value interval, taking the first dynamically-changed pixel value interval as a suspected living pixel value interval;
if the interval size of the second dynamically-changed pixel value interval is larger than the interval size of the first dynamically-changed pixel value interval, taking the second dynamically-changed pixel value interval as a suspected living pixel value interval;
if the interval size of the first dynamically changing pixel value interval is equal to the interval size of the second dynamically changing pixel value interval, taking the first dynamically changing pixel value interval or the second dynamically changing pixel value interval as a suspected living pixel value interval;
determining an area which is matched with each suspected living body pixel value interval and is matched with the light reflection characteristics of the key points of the human face position as a suspected living body area to be determined, segmenting the light reflection dynamic change information into a plurality of segmentation dynamic change information according to the determined suspected living body area to be determined, and determining the suspected living body area meeting the conditions as the suspected living body area corresponding to the target acquisition area according to the relation between the change range and the preset range of each segmentation dynamic change information.
3. The method for verifying the artificial intelligent face of the internet of things according to claim 1, wherein the step of respectively determining, for each suspected living body area, associated suspected living body areas which are associated with the current suspected living body area from the face image data streams of the remaining time nodes comprises the steps of:
for each suspected living area, acquiring at least one local feature group of the suspected living area, analyzing each local feature group in the at least one local feature group, and acquiring key feature points contained in each local feature group, wherein the local feature groups are used for representing each local feature point of the suspected living area and face part information corresponding to each local feature point;
acquiring a feature point change value, a feature point depth value and a feature point color value of each key feature point in a corresponding time period, wherein the feature point change value is used for describing the feature point change value of each key feature point, the feature point depth value is used for describing the feature point depth value of each key feature point, and the feature point color value is used for describing the feature point color value of each key feature point;
mapping and associating the feature point change value, the feature point depth value and the feature point color value of each key feature point in a corresponding time period, and then merging the feature point change value, the feature point depth value and the feature point color value to obtain a feature value mapping sequence corresponding to each key feature point, wherein the feature value mapping sequence is used for representing the corresponding relation among the feature point change value, the feature point depth value and the feature point color value of each key feature point in the corresponding time period;
and respectively determining associated suspected living areas which are associated with the current suspected living area from the face image data streams of the remaining time nodes according to the feature value mapping sequence corresponding to each key feature point.
4. The internet of things artificial intelligence face verification method according to any one of claims 1 to 3, wherein the step of generating living body feature identification information of each current suspected living body area and the corresponding associated suspected living body area according to the first spectral image feature identification information and the second spectral image feature identification information includes:
acquiring illumination intensity of common spectral position coordinate information and each spectral position coordinate set between spectral position coordinate information of spectral conditions corresponding to first spectral image feature identification information and second spectral image feature identification information respectively;
under the condition that it is determined according to the illumination intensity that the common spectral position coordinate information contains a target spectral coordinate region, determining, according to the spectral position coordinate set of the common spectral position coordinate information in the target spectral coordinate region, the differences of the coordinate region differences between each spectral position coordinate set of the common spectral position coordinate information in the set target spectral coordinate region and the spectral position coordinate set of the common spectral position coordinate information in the target spectral coordinate region, and adjusting the spectral position coordinate sets in the set target spectral coordinate region that have the same coordinate region difference as the spectral position coordinate set in the target spectral coordinate region into the corresponding target spectral coordinate region, wherein the target spectral coordinate region represents a spectral coordinate region whose illumination intensity is within a preset abnormal illumination intensity range;
under the condition that the currently set target spectral coordinate region of the common spectral position coordinate information contains a plurality of spectral position coordinate sets, determining the differences of the coordinate region differences among the spectral position coordinate sets of the common spectral position coordinate information in the currently set target spectral coordinate region according to the spectral position coordinate set of the common spectral position coordinate information in the target spectral coordinate region, and screening the spectral position coordinate sets in the currently set target spectral coordinate region according to the differences of the coordinate region differences among the spectral position coordinate sets;
setting a label of the target spectral coordinate region for each spectral position coordinate set obtained through the screening according to the spectral position coordinate set of the common spectral position coordinate information in the target spectral coordinate region, and adjusting each such spectral position coordinate set into the target spectral coordinate region;
determining a first spectral feature vector sequence and a second spectral feature vector sequence corresponding to the first spectral image feature identification information and the second spectral image feature identification information respectively according to a first spectral position coordinate set in the set target spectral coordinate region, a second spectral position coordinate set in the target spectral coordinate region, a first environmental influence factor parameter of the first spectral image feature identification information and a second environmental influence factor parameter of the second spectral image feature identification information;
determining a first candidate spectral feature vector of the first spectral image feature identification information and a second candidate spectral feature vector of the second spectral image feature identification information from the first spectral feature vector sequence and the second spectral feature vector sequence, respectively;
when the first candidate spectral feature vector and the second candidate spectral feature vector are determined, matching the first candidate spectral feature vector with the second candidate spectral feature vector to obtain matching information, and judging according to the matching information whether the first candidate spectral feature vector and the second candidate spectral feature vector are candidate spectral feature vectors of a multi-combination spectral feature vector; if so, converting the first candidate spectral feature vector and the second candidate spectral feature vector, according to each combination spectral feature vector, into a plurality of first combination spectral feature vector sets and second combination spectral feature vector sets respectively, then searching, according to the first combination spectral feature vector sets and the second combination spectral feature vector sets respectively, for the feature part regions whose combination spectral feature vectors are the same as or similar to those of the first combination spectral feature vector sets and the second combination spectral feature vector sets, and combining the matching information and the spectral feature vector sets corresponding to the feature part regions into a corresponding mapping set;
and generating the living body feature identification information of each current suspected living body area and the corresponding associated suspected living body area according to the mapping set.
5. The internet of things artificial intelligence face verification method according to claim 4, wherein the step of generating living body feature identification information of each current suspected living body area and the corresponding associated suspected living body area according to the mapping set includes:
determining first living body characteristic influence information of the first spectral image characteristic identification information and second living body characteristic influence information of the second spectral image characteristic identification information according to a characteristic part region in the mapping set, a spectral characteristic vector set corresponding to the characteristic part region, a first environmental influence factor parameter corresponding to spectral position coordinate information of spectral conditions of the first spectral image characteristic identification information, and a second environmental influence factor parameter corresponding to spectral position coordinate information of spectral conditions of the second spectral image characteristic identification information;
respectively performing equal-interval segmentation on the first living body feature influence information and the second living body feature influence information to obtain a first segmentation parameter set of the first living body feature influence information and a second segmentation parameter set of the second living body feature influence information; wherein the first segmentation parameter set includes feature influence information of a plurality of first spectral feature vectors of the first living body feature influence information, and the second segmentation parameter set includes feature influence information of a plurality of second spectral feature vectors of the second living body feature influence information;
respectively matching the feature influence information of each first spectral feature vector in the first segmentation parameter set corresponding to the first living body feature influence information and the feature influence information of each second spectral feature vector in the second segmentation parameter set corresponding to the second living body feature influence information with the spectral feature vector of each piece of preset spectral feature vector identification information in a preset spectral feature vector identification feature set, to obtain first matching information between the first living body feature influence information and the preset spectral feature vector identification feature set and second matching information between the second living body feature influence information and the preset spectral feature vector identification feature set, wherein the preset spectral feature vector identification feature set comprises the corresponding relations between a plurality of pieces of verified spectral feature vector identification information and the corresponding spectral feature vectors;
sequentially performing matching with the preset spectral feature vector identification information obtained from the first matching information and the second matching information as the matching object, until a current spectral feature vector identification information appears in the preset spectral feature vector identification feature set such that a first coincidence identification information range between third matching information, which is between the first living body feature influence information and the current spectral feature vector identification information, and the first matching information between the first living body feature influence information and the preset spectral feature vector identification feature set is larger than a target preset range, and a second coincidence identification information range between fourth matching information, which is between the second living body feature influence information and the current spectral feature vector identification information, and the second matching information between the second living body feature influence information and the preset spectral feature vector identification feature set is larger than the target preset range;
determining a third living body feature corresponding to the current spectral feature vector identification information, and performing feature extraction on the first segmentation parameter set and the second segmentation parameter set according to the third living body feature to obtain a first living body feature and a second living body feature;
when the first living body feature and the second living body feature are not matched, identifying the first living body feature based on a first spectral image feature identification type corresponding to the first spectral image feature identification information to obtain first identification information, and identifying the second living body feature based on a second spectral image feature identification type corresponding to the second spectral image feature identification information to obtain second identification information;
and determining the living body feature units of the target spectral coordinate regions corresponding to the first identification information and the second identification information respectively existing in the first spectral image feature identification information and the second spectral image feature identification information, thereby generating the living body feature identification information of each current suspected living body area and the corresponding associated suspected living body area.
6. The method for verifying the artificial intelligent human face through the internet of things according to claim 5, wherein the step of recognizing the first living body feature based on a first spectral image feature recognition type corresponding to the first spectral image feature recognition information to obtain first recognition information, and recognizing the second living body feature based on a second spectral image feature recognition type corresponding to the second spectral image feature recognition information to obtain second recognition information includes:
and determining a first related feature of the first spectral image feature identification information relative to the second spectral image feature identification information and a second related feature of the second spectral image feature identification information relative to the first spectral image feature identification information according to a first face part region corresponding to a first living feature of the first spectral image feature identification information and a second face part region corresponding to a second living feature of the second spectral image feature identification information, so as to obtain the first identification information and the second identification information.
7. The method for verifying the artificial intelligent face of the internet of things according to any one of claims 1 to 6, wherein the step of respectively performing living body feature unit identification on each current suspected living body area and the corresponding associated suspected living body area according to the living body feature identification information comprises the following steps:
determining a target position area aiming at each current suspected living area and the corresponding associated suspected living area according to the living characteristic identification information;
respectively carrying out living body feature unit identification on each current suspected living body area and the corresponding associated suspected living body area according to the face part label and the scanning time sequence of each target position in the target position area to obtain an identified living body feature unit;
splicing the identified living body feature units in a time sequence arrangement manner to obtain a plurality of spliced spectral feature vector sequences;
and identifying each spliced spectral feature vector sequence based on an artificial intelligence model to obtain a face verification result of the target acquisition area.
8. The internet-of-things artificial intelligence face verification method according to any one of claims 1 to 6, wherein the step of identifying each spliced spectral feature vector sequence based on an artificial intelligence model to obtain the face verification result of the target acquisition region includes:
extracting the feature information of each spliced spectral feature vector sequence based on an artificial intelligence model, inputting the feature information of the spliced spectral feature vector sequence into a classification layer for classification, and outputting the confidence of the feature information of the spliced spectral feature vector sequence under each classification label, wherein the classification labels comprise a verification passing label and a verification failing label;
and obtaining a face verification result of the target acquisition area according to the confidence of the feature information of the spliced spectral feature vector sequence under each classification label.
9. The method for verifying the human face through the artificial intelligence of the internet of things according to claim 8, wherein the artificial intelligence model is obtained through training a training sample set configured in advance and a training classification label corresponding to each training sample in the training sample set based on a deep learning network, and the training samples are spectral feature vector sequences.
10. An internet of things cloud server, characterized in that the internet of things cloud server comprises a processor, a machine-readable storage medium and a network interface, the machine-readable storage medium, the network interface and the processor are connected through a bus system, the network interface is used for being in communication connection with at least one internet of things face verification terminal, the machine-readable storage medium is used for storing programs, instructions or codes, and the processor is used for executing the programs, instructions or codes in the machine-readable storage medium so as to execute the internet of things artificial intelligence face verification method according to any one of claims 1 to 9.
CN202010239943.5A 2020-03-31 2020-03-31 Internet of things artificial intelligence face verification method and Internet of things cloud server Active CN111460419B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202011113906.6A CN112269976A (en) 2020-03-31 2020-03-31 Artificial intelligence face verification method and system of Internet of things
CN202010239943.5A CN111460419B (en) 2020-03-31 2020-03-31 Internet of things artificial intelligence face verification method and Internet of things cloud server
CN202011113894.7A CN112269975A (en) 2020-03-31 2020-03-31 Internet of things artificial intelligence face verification method and system and Internet of things cloud server

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010239943.5A CN111460419B (en) 2020-03-31 2020-03-31 Internet of things artificial intelligence face verification method and Internet of things cloud server

Related Child Applications (2)

Application Number Title Priority Date Filing Date
CN202011113894.7A Division CN112269975A (en) 2020-03-31 2020-03-31 Internet of things artificial intelligence face verification method and system and Internet of things cloud server
CN202011113906.6A Division CN112269976A (en) 2020-03-31 2020-03-31 Artificial intelligence face verification method and system of Internet of things

Publications (2)

Publication Number Publication Date
CN111460419A true CN111460419A (en) 2020-07-28
CN111460419B CN111460419B (en) 2020-11-27

Family

ID=71683409

Family Applications (3)

Application Number Title Priority Date Filing Date
CN202011113894.7A Withdrawn CN112269975A (en) 2020-03-31 2020-03-31 Internet of things artificial intelligence face verification method and system and Internet of things cloud server
CN202010239943.5A Active CN111460419B (en) 2020-03-31 2020-03-31 Internet of things artificial intelligence face verification method and Internet of things cloud server
CN202011113906.6A Withdrawn CN112269976A (en) 2020-03-31 2020-03-31 Artificial intelligence face verification method and system of Internet of things

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202011113894.7A Withdrawn CN112269975A (en) 2020-03-31 2020-03-31 Internet of things artificial intelligence face verification method and system and Internet of things cloud server

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202011113906.6A Withdrawn CN112269976A (en) 2020-03-31 2020-03-31 Artificial intelligence face verification method and system of Internet of things

Country Status (1)

Country Link
CN (3) CN112269975A (en)

Patent Citations (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100177968A1 (en) * 2009-01-12 2010-07-15 Fry Peter T Detection of animate or inanimate objects
CN102622588A (en) * 2012-03-08 2012-08-01 无锡数字奥森科技有限公司 Dual-certification face anti-counterfeit method and device
US20140079297A1 (en) * 2012-09-17 2014-03-20 Saied Tadayon Application of Z-Webs and Z-factors to Analytics, Search Engine, Learning, Recognition, Natural Language, and Other Utilities
CN103886301A (en) * 2014-03-28 2014-06-25 中国科学院自动化研究所 Human face living detection method
CN105447432A (en) * 2014-08-27 2016-03-30 北京千搜科技有限公司 Face anti-fake method based on local motion pattern
US20160071275A1 (en) * 2014-09-09 2016-03-10 EyeVerify, Inc. Systems and methods for liveness analysis
WO2016040487A2 (en) * 2014-09-09 2016-03-17 Eyeverify Systems and methods for liveness analysis
CN107077608A (en) * 2014-11-13 2017-08-18 英特尔公司 Facial In vivo detection in image biological feature recognition
CN104881632A (en) * 2015-04-28 2015-09-02 南京邮电大学 Hyperspectral face recognition method
CN105160289A (en) * 2015-07-03 2015-12-16 深圳市金立通信设备有限公司 Face identification method and terminal
CN105389553A (en) * 2015-11-06 2016-03-09 北京汉王智远科技有限公司 Living body detection method and apparatus
WO2018002275A1 (en) * 2016-06-30 2018-01-04 Koninklijke Philips N.V. Method and apparatus for face detection/recognition systems
CN106446772A (en) * 2016-08-11 2017-02-22 天津大学 Cheating-prevention method in face recognition system
CN106778518A (en) * 2016-11-24 2017-05-31 汉王科技股份有限公司 A kind of human face in-vivo detection method and device
CN107368810A (en) * 2017-07-20 2017-11-21 北京小米移动软件有限公司 Method for detecting human face and device
CN109492455A (en) * 2017-09-12 2019-03-19 中国移动通信有限公司研究院 Live subject detection and identity identifying method, medium, system and relevant apparatus
US20190095701A1 (en) * 2017-09-27 2019-03-28 Lenovo (Beijing) Co., Ltd. Living-body detection method, device and storage medium
CN107808115A (en) * 2017-09-27 2018-03-16 联想(北京)有限公司 A kind of biopsy method, device and storage medium
CN107992842A (en) * 2017-12-13 2018-05-04 深圳云天励飞技术有限公司 Biopsy method, computer installation and computer-readable recording medium
CN108038456A (en) * 2017-12-19 2018-05-15 中科视拓(北京)科技有限公司 A kind of anti-fraud method in face identification system
CN110659541A (en) * 2018-06-29 2020-01-07 深圳云天励飞技术有限公司 Image recognition method, device and storage medium
CN109034102A (en) * 2018-08-14 2018-12-18 腾讯科技(深圳)有限公司 Human face in-vivo detection method, device, equipment and storage medium
CN109697416A (en) * 2018-12-14 2019-04-30 腾讯科技(深圳)有限公司 A kind of video data handling procedure and relevant apparatus
CN110110597A (en) * 2019-04-02 2019-08-09 北京旷视科技有限公司 Biopsy method, device and In vivo detection terminal
CN110516616A (en) * 2019-08-29 2019-11-29 河南中原大数据研究院有限公司 A kind of double authentication face method for anti-counterfeit based on extensive RGB and near-infrared data set
CN111432410A (en) * 2020-03-31 2020-07-17 周亚琴 Network security protection method of mobile base station of Internet of things and cloud server of Internet of things

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
LI et al.: "A Compound Face Recognition System Design", Journal of National University of Defence Technology *
WEIWEN LIU: "Face liveness detection using analysis of Fourier spectra based on hair", 2014 International Conference on Wavelet Analysis and Pattern Recognition *
SHI XIAOQIAN: "Research on a Gabor PCA Hyperspectral Face Recognition Algorithm Based on Spatial-Spectral Information Fusion", Computer Applications and Software *
WANG YUEYANG: "Face Liveness Detection Based on Multispectral Imaging", China Master's Theses Full-text Database (Electronic Journals), Information Science and Technology Series *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112801156A (en) * 2021-01-20 2021-05-14 廖彩红 Business big data acquisition method and server for artificial intelligence machine learning
CN112801156B (en) * 2021-01-20 2021-09-10 曙光星云信息技术(北京)有限公司 Business big data acquisition method and server for artificial intelligence machine learning
CN114463792A (en) * 2022-02-10 2022-05-10 厦门熵基科技有限公司 Multispectral identification method, multispectral identification device, multispectral identification equipment and readable storage medium

Also Published As

Publication number Publication date
CN112269976A (en) 2021-01-26
CN111460419B (en) 2020-11-27
CN112269975A (en) 2021-01-26

Similar Documents

Publication Publication Date Title
US20220101644A1 (en) Pedestrian re-identification method, device, electronic device and computer-readable storage medium
CN111242097B (en) Face recognition method and device, computer readable medium and electronic equipment
CN111626123A (en) Video data processing method and device, computer equipment and storage medium
CN111641809B (en) Security monitoring method based on Internet of things and artificial intelligence and cloud communication server
CN111695495B (en) Face recognition method, electronic equipment and storage medium
CN110163096B (en) Person identification method, person identification device, electronic equipment and computer readable medium
CN111432410B (en) Network security protection method of mobile base station of Internet of things and cloud server of Internet of things
CN111460419B (en) Internet of things artificial intelligence face verification method and Internet of things cloud server
CN108960412B (en) Image recognition method, device and computer readable storage medium
CN111091080A (en) Face recognition method and system
CN111723226B (en) Information management method based on big data and Internet and artificial intelligence cloud server
CN112633297A (en) Target object identification method and device, storage medium and electronic device
CN110427962A (en) A kind of test method, electronic equipment and computer readable storage medium
CN111476191A (en) Artificial intelligent image processing method based on intelligent traffic and big data cloud server
CN109034048A (en) Face recognition algorithms models switching method and apparatus
CN113763348A (en) Image quality determination method and device, electronic equipment and storage medium
CN114119460A (en) Semiconductor image defect identification method, semiconductor image defect identification device, computer equipment and storage medium
CN114359787A (en) Target attribute identification method and device, computer equipment and storage medium
CN111783812A (en) Method and device for identifying forbidden images and computer readable storage medium
CN113780145A (en) Sperm morphology detection method, sperm morphology detection device, computer equipment and storage medium
CN111291749A (en) Gesture recognition method and device and robot
CN115620211A (en) Performance data processing method and system of flame-retardant low-smoke halogen-free sheath
CN111800790B (en) Information analysis method based on cloud computing and 5G interconnection and man-machine cooperation cloud platform
CN113705559A (en) Character recognition method and device based on artificial intelligence and electronic equipment
CN113837006A (en) Face recognition method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 401, Building B, Phase II, Yundian Science and Technology Park, 104 Yunda West Road, Kunming Economic Development Zone, Yunnan Province

Applicant after: Zhou Yaqin

Address before: 458030 No. 327 Haihe Road, Hebi economic and Technological Development Zone, Henan, Hebi

Applicant before: Zhou Yaqin

TA01 Transfer of patent application right

Effective date of registration: 20201105

Address after: B808, Fu'an Science and Technology Building, No. 013, Gaoxin South 1st Road, Yuehai Street, Nanshan District, Shenzhen City, Guangdong Province

Applicant after: SHENZHEN WEIWANG LIHE INFORMATION TECHNOLOGY Co.,Ltd.

Address before: Room 401, Building B, Phase II, Yundian Science and Technology Park, 104 Yunda West Road, Kunming Economic Development Zone, Yunnan Province

Applicant before: Zhou Yaqin

GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230303

Address after: No. 502, Building 5, No. 528, Yuefei Road, Shibantan Street, Xindu District, Chengdu, Sichuan, 610000

Patentee after: Microgrid union Technology (Chengdu) Co.,Ltd.

Address before: 518052, B808, Fu'an Technology Building, 013 Gaoxin South 1st Road, Yuehai Street, Nanshan District, Shenzhen City, Guangdong Province

Patentee before: SHENZHEN WEIWANG LIHE INFORMATION TECHNOLOGY CO.,LTD.