CN108932774B - Information detection method and device

Info

Publication number
CN108932774B (application number CN201810641278.5A)
Authority
CN
China
Prior art keywords
user
certificate
image
face image
target user
Prior art date
Legal status
Active
Application number
CN201810641278.5A
Other languages
Chinese (zh)
Other versions
CN108932774A (en)
Inventor
安耀祖 (An Yaozu)
龚顺强 (Gong Shunqiang)
郁昌存 (Yu Changcun)
Current Assignee
JD Digital Technology Holdings Co Ltd
Jingdong Technology Holding Co Ltd
Original Assignee
JD Digital Technology Holdings Co Ltd
Application filed by JD Digital Technology Holdings Co Ltd
Priority claimed from CN201810641278.5A
Publication of CN108932774A
Application granted
Publication of CN108932774B
Status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G07: CHECKING-DEVICES
    • G07C: TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C 9/00: Individual registration on entry or exit
    • G07C 9/20: Individual registration on entry or exit involving the use of a pass
    • G07C 9/22: Individual registration on entry or exit involving the use of a pass in combination with an identity check of the pass holder
    • G07C 9/25: Individual registration on entry or exit involving the use of a pass in combination with an identity check of the pass holder using biometric data, e.g. fingerprints, iris scans or voice recognition
    • G07C 9/257: Individual registration on entry or exit involving the use of a pass in combination with an identity check of the pass holder using biometric data, e.g. fingerprints, iris scans or voice recognition electronically
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161: Detection; Localisation; Normalisation
    • G06V 40/166: Detection; Localisation; Normalisation using acquisition arrangements
    • G06V 40/168: Feature extraction; Face representation
    • G06V 40/171: Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G06V 40/172: Classification, e.g. identification

Abstract

The embodiments of the present application disclose an information detection method and apparatus. One embodiment of the method comprises: detecting a captured image to determine whether it contains a face image of a target user that satisfies a preset condition; in response to determining that the captured image contains such a face image, identifying the permission credential placed by the target user and the certificate characterizing the user's identity, and determining whether the user information of the certificate is consistent with the user information associated with the permission credential; in response to determining that the two are consistent, determining whether the user shown in the face image matches the user shown in the certificate's user avatar; and in response to determining that they match, sending an instruction to control the gate to open. This embodiment improves the speed of detecting the target user's permission credential information.

Description

Information detection method and device
Technical Field
The embodiments of the present application relate to the field of computer technology, and in particular to an information detection method and apparatus.
Background
With the development of network technology, image recognition has come into wide use. Its application in the field of information detection has brought a great leap forward to traditional information detection techniques.
In existing information detection systems applied to permission credentials, the credential information and the identity information of the user holding the credential are usually detected and verified to determine whether the permission credential is valid.
Disclosure of Invention
The embodiment of the application provides an information detection method and device.
In a first aspect, an embodiment of the present application provides an information detection method, including: detecting a captured image and determining whether the captured image contains a face image of a target user that satisfies a preset condition; in response to determining that the captured image contains a face image of the target user satisfying the preset condition, identifying the permission credential placed by the target user and the certificate characterizing the user's identity, and determining whether the user information of the certificate is consistent with the user information associated with the permission credential, where the user information of the certificate includes a user avatar; in response to determining that the user information of the certificate and the user information associated with the permission credential are consistent, determining whether the user shown in the face image of the target user satisfying the preset condition matches the user shown in the user avatar; and in response to determining that they match, sending an instruction to control the gate to open.
In some embodiments, detecting the captured image and determining whether it contains a face image of the target user satisfying the preset condition includes: inputting the captured image into a pre-trained first neural network to obtain a first output result indicating whether the captured image contains a face image; and determining, based on the first output result, whether the captured image contains a face image of the target user satisfying the preset condition.
In some embodiments, determining whether the user shown in the face image of the target user satisfying the preset condition matches the user shown in the user avatar includes: extracting the determined face image of the target user from the captured image; inputting the extracted face image into a pre-trained second neural network to obtain a second output result indicating whether the face region shown in the extracted face image includes an occlusion; in response to determining that the second output result indicates that the face region includes an occlusion, processing the extracted face image to generate a face image with the occlusion removed; and determining whether the user shown in the generated occlusion-removed face image matches the user shown in the user avatar.
In some embodiments, processing the extracted face image to generate an occlusion-removed face image includes: building a skin color model from the extracted face image; obtaining the pixel region of the occlusion shown in the extracted face image; performing linear interpolation over that pixel region based on the skin color model; and median-filtering the interpolated face image to generate the occlusion-removed face image.
In some embodiments, identifying the permission credential placed by the target user and the certificate characterizing the user's identity, and determining whether the user information of the certificate and the user information associated with the permission credential are consistent, includes: acquiring images of the permission credential and the certificate; performing text recognition on the images; and determining, based on the text recognition result, whether the user information of the certificate and the user information associated with the permission credential are consistent.
In some embodiments, before determining whether the user information of the certificate and the user information associated with the permission credential are consistent, the method further includes: determining whether the user avatar is consistent with the user avatar in a stored certificate; and in response to determining that the two are consistent, sending an instruction prompting the target user to place the permission credential and the certificate characterizing the user's identity.
In a second aspect, an embodiment of the present application provides an information detection apparatus, including: a detection unit configured to detect a captured image and determine whether it contains a face image of a target user satisfying a preset condition; an identification unit configured to, in response to determining that the captured image contains such a face image, identify the permission credential placed by the target user and the certificate characterizing the user's identity, and determine whether the user information of the certificate is consistent with the user information associated with the permission credential, where the user information of the certificate includes a user avatar; a matching unit configured to, in response to determining that the user information of the certificate and the user information associated with the permission credential are consistent, determine whether the user shown in the face image matches the user shown in the user avatar; and a sending unit configured to, in response to determining that they match, send an instruction to control the gate to open.
In some embodiments, the detection unit is further configured to: input the captured image into a pre-trained first neural network to obtain a first output result indicating whether the captured image contains a face image; and determine, based on the first output result, whether the captured image contains a face image of the target user satisfying the preset condition.
In some embodiments, the matching unit includes: an extraction subunit configured to extract the determined face image of the target user from the captured image; a second-output subunit configured to input the extracted face image into a pre-trained second neural network to obtain a second output result indicating whether the face region shown in the extracted face image includes an occlusion; a processing subunit configured to, in response to determining that the second output result indicates that the face region includes an occlusion, process the extracted face image to generate an occlusion-removed face image; and a matching subunit configured to determine whether the user shown in the generated occlusion-removed face image matches the user shown in the user avatar.
In some embodiments, the processing subunit is further configured to: build a skin color model from the extracted face image; obtain the pixel region of the occlusion shown in the extracted face image; perform linear interpolation over that pixel region based on the skin color model; and median-filter the interpolated face image to generate the occlusion-removed face image.
In some embodiments, the identification unit is further configured to: acquire images of the permission credential and the certificate; perform text recognition on the images; and determine, based on the text recognition result, whether the user information of the certificate and the user information associated with the permission credential are consistent.
In some embodiments, the apparatus is further configured to: determine whether the user avatar is consistent with the user avatar in a stored certificate; and in response to determining that the two are consistent, send an instruction prompting the target user to place the permission credential and the certificate characterizing the user's identity.
In a third aspect, an embodiment of the present application provides an electronic device, including: one or more processors; and a storage device having one or more programs stored thereon which, when executed by the one or more processors, cause the one or more processors to implement the method of any embodiment of the information detection method.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the method of any embodiment of the information detection method.
According to the information detection method and apparatus of the present application, the captured image is first detected to determine whether it contains a face image of the target user satisfying a preset condition. When such a face image is detected, the permission credential placed by the target user and the certificate characterizing the user's identity are identified, and it is determined whether the user information of the certificate is consistent with the user information associated with the permission credential. When the two are consistent, it is then determined whether the user shown in the face image matches the user shown in the certificate's avatar. Finally, when they match, an instruction to control the gate to open is sent, thereby improving the detection speed of permission credential information.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which one embodiment of the present application may be applied;
FIG. 2 is a flow diagram of one embodiment of an information detection method according to the present application;
FIG. 3 is a schematic diagram of an application scenario of an information detection method according to the present application;
FIG. 4 is a flow chart of yet another embodiment of an information detection method according to the present application;
FIG. 5 is a schematic block diagram of one embodiment of an information detection apparatus according to the present application;
FIG. 6 is a schematic block diagram of a computer system suitable for use in implementing a server according to embodiments of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 illustrates an exemplary system architecture 100 to which embodiments of the information detection method or information detection apparatus of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include a control device 101, a gate 102, and a camera device, wherein the camera device includes cameras 103, 104, 105. The system architecture 100 also includes networks 106, 107. The network 106 is used to provide a medium for communication links between the cameras 103, 104, 105 and the control device 101. The network 107 is used to provide a medium for a communication link between the control device 101 and the gate 102. The networks 106, 107 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The cameras 103, 104, 105 may interact with the control device 101 via the network 106 to receive or transmit messages or the like. The cameras 103, 104, 105 may be various types of cameras including, but not limited to, visible light cameras, infrared cameras, near infrared cameras, and the like.
The gate 102 may interact with the control device 101 through the network 107 to receive instructions or the like sent by the control device 101. The gate 102 may be installed at an entrance, an exit, etc. to control the ingress and egress of personnel.
The control device 101 may include, but is not limited to, a server, and may be another type of control device. The control device 101 is configured to receive the image captured by the camera device, the permission credential information placed by the target user, and the certificate information characterizing the user's identity, perform various analyses on the received image and information, and, according to the analysis result, send a control instruction to the gate 102 to open its barrier or keep it closed.
It should be noted that the information detection method provided in the embodiments of the present application is generally executed by the control device 101. The relative positions of the camera device, the control device and the gate may be set as required and are not limited here.
The control device may be hardware or software. When it is hardware, it may be implemented as a distributed cluster of multiple servers or as a single server. When it is software, it may be implemented as multiple pieces of software or software modules (for example, to provide distributed services) or as a single piece of software or software module. No specific limitation is made here.
It should be understood that the numbers of camera devices, control devices, gates, networks, and cameras within a camera device in Fig. 1 are merely illustrative. Any number of each may be provided as required.
With continued reference to FIG. 2, a flow 200 of one embodiment of an information detection method according to the present application is shown. The information detection method comprises the following steps:
Step 201: detect the captured image, and determine whether the captured image contains a face image of the target user satisfying a preset condition.
In this embodiment, the execution subject of the information detection method (e.g., the control device shown in Fig. 1) may receive a captured image from a camera device (e.g., cameras 103, 104, 105 shown in Fig. 1) through a wired or wireless connection, and then detect the captured image. The execution subject may detect the captured image in real time or at a preset time interval, for example every 30 seconds or every minute. The captured image may be detected in various ways.
As an example, the captured image may be detected with a statistics-based method. Specifically, the execution subject may first convert the captured image into a histogram and smooth the histogram with a Gaussian function. It may then locate the eyes in the smoothed image based on eye feature values and determine whether the image contains the target user's eyes. Once the eyes and their positions in the image are determined, the whole face image may be extracted based on singular value features. When a face image of the target user is detected in the captured image, the execution subject may determine whether it satisfies the preset condition. The preset condition includes, but is not limited to, at least one of the following: the proportion of the face image in the captured image is greater than a preset threshold; the resolution of the face image is greater than a preset threshold; the number of facial feature points contained in the face image is greater than a preset threshold. Here, the target user may be the user corresponding to the currently detected face image, or the user corresponding to a face image detected at the preset time interval.
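A minimal sketch of this detection flow and threshold check in Python follows. It substitutes a stock OpenCV Haar cascade for the eye-feature and singular-value localization described above, and the two threshold constants are hypothetical placeholders rather than values from the patent:

```python
import cv2

# Hypothetical thresholds; the patent leaves the preset condition values open.
MIN_FACE_PROPORTION = 0.05   # face area / image area
MIN_FACE_RESOLUTION = 80     # face side length in pixels

def detect_target_face(captured_image):
    """Return the face ROI if the captured image contains a face that
    satisfies the preset conditions, else None."""
    gray = cv2.cvtColor(captured_image, cv2.COLOR_BGR2GRAY)
    # Histogram equalization followed by Gaussian smoothing, loosely
    # mirroring the histogram + Gaussian step described above.
    gray = cv2.equalizeHist(gray)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)

    # Stand-in for the eye-feature / singular-value localization:
    # a stock Haar cascade face detector.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    img_area = captured_image.shape[0] * captured_image.shape[1]
    for (x, y, w, h) in faces:
        proportion_ok = (w * h) / img_area > MIN_FACE_PROPORTION
        resolution_ok = min(w, h) > MIN_FACE_RESOLUTION
        if proportion_ok and resolution_ok:
            return captured_image[y:y + h, x:x + w]
    return None
```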
In some optional implementations of this embodiment, the execution subject may instead input the captured image into a pre-trained first neural network to obtain a first output result indicating whether the captured image contains a face image. The first neural network may be an existing convolutional neural network, fully convolutional network, recurrent neural network, or the like, and characterizes the correspondence between an image and the first output result. When the first neural network is a convolutional or recurrent network, the first output result may include a probability that the captured image contains a face image and a probability that it does not. When the first neural network is YOLOv2, the first output result may include the probability that the captured image contains a face image, and, when that probability is greater than a preset threshold, the position of the face image in the captured image, represented as a bounding box.
Specifically, in this optional implementation, the first neural network may be trained through the following steps:
First, a set of training samples is obtained. Each training sample may include a sample image and annotation information indicating whether the sample image contains a face image.
Then, the sample images of at least one training sample in the set are input into the initial first neural network, yielding, for each sample image, a probability that it contains a face image and a probability that it does not. The obtained probabilities are compared with the annotations to determine whether the initial first neural network has reached a preset optimization target, for example a recognition accuracy above a preset threshold. If the initial first neural network has not reached the target, its parameters are adjusted with the backpropagation and gradient descent algorithms until the adjusted network does.
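A compact sketch of such a training loop follows, assuming PyTorch; the two-layer convolutional architecture, 64x64 input size, learning rate and accuracy target are illustrative assumptions, since the patent fixes none of them:

```python
import torch
import torch.nn as nn

class FirstNet(nn.Module):
    """Illustrative first neural network: a small CNN that outputs logits
    for 'contains a face image' vs. 'does not contain one'."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)  # assumes 64x64 input

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def train(model, loader, target_accuracy=0.95):
    # Backpropagation + gradient descent until the preset optimization
    # target (recognition accuracy) is reached, as described above.
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()
    while True:
        correct = total = 0
        for images, labels in loader:  # labels: 1 = face, 0 = no face
            opt.zero_grad()
            logits = model(images)
            loss_fn(logits, labels).backward()
            opt.step()
            correct += (logits.argmax(1) == labels).sum().item()
            total += labels.numel()
        if correct / total >= target_accuracy:
            break
```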
Step 202: in response to determining that the captured image contains a face image of the target user satisfying the preset condition, identify the permission credential placed by the target user and the certificate characterizing the user's identity, and determine whether the user information of the certificate is consistent with the user information associated with the permission credential.
In this embodiment, when the execution subject determines in step 201 that the captured image contains a face image of the target user satisfying the preset condition, it may identify the permission credential placed by the target user and the certificate characterizing the user's identity, so as to determine whether the user information of the certificate and the user information associated with the permission credential are consistent. The permission credential includes, but is not limited to, at least one of: a license plate, an airline ticket, a ship ticket, an admission ticket, an access card, and the like. The certificate characterizing the user's identity includes, but is not limited to, at least one of: an identity card, a driver's license, a passport, a pass, and the like. The user information of the certificate may include, but is not limited to, the user's avatar, the user's certificate number, the expiration date of the certificate, and the like. The credential information includes, but is not limited to, at least one of: time information, venue information, train/flight/ticket number, the user's certificate number, the user's work badge number, and the like. The user information associated with the permission credential includes, but is not limited to, the user's certificate number, the user's license plate number, and the time and number information of the train/flight/ticket corresponding to that certificate number. The execution subject may check whether the certificate number associated with the permission credential is the same as the number of the certificate characterizing the user's identity, and may also check whether the permission credential information looked up by that certificate number is the same as the credential information actually read. If the information is the same, the user information of the certificate is determined to be consistent with the user information associated with the permission credential.
In this embodiment, identifying the permission credential placed by the target user and the certificate characterizing the user's identity may be implemented in a variety of ways.
As an example, the execution subject may store users' information in advance. The pre-stored user information may have been filled in by the user when obtaining the permission credential, or stored by the execution subject by retrieving the user's identity information held by a third party at that time. The execution subject may then read the placed permission credential information and the certificate information characterizing the user's identity with existing reading methods, thereby identifying both.
In other optional implementations of this embodiment, the execution subject may instead acquire images of the permission credential placed by the user and of the certificate characterizing the user's identity. Then, with a conventional technique for recognizing characters in images, for example OCR (Optical Character Recognition), it performs text recognition on the acquired images and reads out their text as the credential information and the certificate information. The execution subject can then determine from the text recognition result whether the user information of the certificate and the user information associated with the permission credential are consistent.
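A hedged sketch of this text recognition and comparison, assuming the pytesseract binding for the Tesseract OCR engine; the field-parsing regex is a hypothetical stand-in for whatever parsing a deployed system would use:

```python
import re
import pytesseract  # assumes the Tesseract OCR engine is installed

def extract_id_number(image):
    # OCR the image and pull out an 18-character ID-like number. The
    # regex is a hypothetical stand-in for real field parsing.
    text = pytesseract.image_to_string(image)
    match = re.search(r"\b\d{17}[\dXx]\b", text)
    return match.group(0) if match else None

def information_consistent(credential_image, certificate_image):
    # Consistent if the certificate number read from the identity
    # certificate matches the number printed on the permission credential.
    cert_no = extract_id_number(certificate_image)
    cred_no = extract_id_number(credential_image)
    return cert_no is not None and cert_no == cred_no
```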
Step 203: in response to determining that the user information of the certificate is consistent with the user information associated with the permission credential, determine whether the user shown in the face image of the target user satisfying the preset condition matches the user shown in the user avatar.
In this embodiment, when the user information of the certificate determined in step 202 is consistent with the user information associated with the permission credential, the execution subject may extract the facial feature points of the target user's face image and of the user avatar in the certificate characterizing the user's identity, and then compare the two sets of feature points, thereby determining whether the user shown in the face image satisfying the preset condition and the user shown in the avatar match, that is, are the same user.
In this embodiment, facial feature points may be extracted from the face image satisfying the preset condition and from the user avatar in the certificate with a pre-built active shape model. Active shape models are a conventional technique and are not described again here. The facial feature points may include extreme points, boundary points, and interpolation points. An extreme point usually has a single definition within a local region of the face, such as the pupil, the tip of the nose, or a nostril. Boundary points are extracted uniformly along local or overall facial contour edges, for example facial contour points, eyebrow contour points, and lip contour points. Interpolation points are obtained by interpolating between boundary points where the face has no obvious texture feature, for example the center of the mouth, the eyebrow center, and some occluded points.
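The sketch below illustrates the comparison step. It assumes the open-source face_recognition library, whose 128-dimensional face embeddings stand in for the active-shape-model feature points described above; the distance threshold is a hypothetical value:

```python
import numpy as np
import face_recognition  # embeddings stand in for ASM feature points

MATCH_THRESHOLD = 0.6  # hypothetical distance threshold

def users_match(face_image_rgb, avatar_image_rgb):
    # Encode both faces; empty lists mean no face was found.
    face_enc = face_recognition.face_encodings(face_image_rgb)
    avatar_enc = face_recognition.face_encodings(avatar_image_rgb)
    if not face_enc or not avatar_enc:
        return False
    # Same user if the embedding distance falls below the threshold.
    return np.linalg.norm(face_enc[0] - avatar_enc[0]) < MATCH_THRESHOLD
```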
Step 204: in response to determining that the user shown in the face image of the target user satisfying the preset condition matches the user shown in the user avatar, send an instruction to control the gate to open.
In this embodiment, when the user shown in the face image determined in step 203 matches the user shown in the certificate's avatar, an instruction to open the gate may be sent to the gate that controls the user's passage, so that the gate opens its barrier and lets the target user through.
With continued reference to Fig. 3, Fig. 3 is a schematic diagram of an application scenario of the information detection method according to this embodiment. In the scenario of Fig. 3, the control device 301 may detect the image captured by the camera 302 to determine whether it contains a face image of user A satisfying the condition "face resolution greater than a preset threshold". When the control device 301 detects such a face image, it may identify the identity card placed by user A on the identity information reading device 303 and the train ticket placed on the scanning device 304, to determine whether the user information of the identity card is consistent with the user information of the train ticket. When the two are consistent, it may further determine whether the avatar in the identity card and the face image in the captured image both show user A. If so, it sends an instruction to control the gate 305 to open.
The information detection method provided by the above embodiment of the present application first detects the captured image to determine whether it contains a face image of the target user satisfying a preset condition. When such a face image is detected, it identifies the permission credential placed by the target user and the certificate characterizing the user's identity, and determines whether the user information of the certificate is consistent with the user information associated with the permission credential. Finally, when the user shown in the face image is determined to match the user shown in the certificate, it sends an instruction to control the gate to open, thereby improving the detection speed of permission credential information.
With further reference to Fig. 4, a flow 400 of yet another embodiment of the information detection method according to the present application is shown. The flow 400 includes the following steps:
Step 401: detect the captured image, and determine whether the captured image contains a face image of the target user satisfying a preset condition.
In this embodiment, the execution subject of the information detection method (e.g., the control device shown in Fig. 1) may receive a captured image from a camera device (e.g., cameras 103, 104, 105 shown in Fig. 1) through a wired or wireless connection, and then detect it, either in real time or at a preset time interval such as 30 seconds or 1 minute. The captured image may be detected with a statistics-based method to determine whether it contains a face image satisfying the preset condition, or it may be input into the pre-trained first neural network so that the determination is made from the network's output.
Step 402: in response to determining that the captured image contains a face image of the target user satisfying the preset condition, determine whether the user avatar is consistent with the user avatar in a stored certificate.
In this embodiment, the execution subject may identify the certificate placed by a user, acquire its certificate information, and store that information for a preset time period, for example the 1 or 5 minutes preceding the current moment. Given that the captured image determined in step 401 contains a face image of the target user satisfying the preset condition, the execution subject may determine whether the user avatar in the currently placed certificate is the same as the user avatar in a certificate stored within the preset time period, that is, whether the two are consistent. An existing image feature point matching method may be used to determine whether they are the same.
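A rough sketch of such a feature point comparison, here using ORB keypoints and brute-force matching from OpenCV as a stand-in for the unspecified matching method; both thresholds are hypothetical values:

```python
import cv2

MIN_GOOD_MATCHES = 40   # hypothetical decision threshold
MAX_HAMMING_DIST = 40   # hypothetical per-match distance cutoff

def same_avatar(current_avatar_bgr, stored_avatar_bgr):
    orb = cv2.ORB_create()
    gray1 = cv2.cvtColor(current_avatar_bgr, cv2.COLOR_BGR2GRAY)
    gray2 = cv2.cvtColor(stored_avatar_bgr, cv2.COLOR_BGR2GRAY)
    _, desc1 = orb.detectAndCompute(gray1, None)
    _, desc2 = orb.detectAndCompute(gray2, None)
    if desc1 is None or desc2 is None:
        return False
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(desc1, desc2)
    good = [m for m in matches if m.distance < MAX_HAMMING_DIST]
    # Treat the two avatars as the same image if enough keypoints agree.
    return len(good) >= MIN_GOOD_MATCHES
```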
Step 403: in response to determining that the user avatar is consistent with the user avatar in a certificate stored within the preset time period, send an instruction prompting the target user to place the permission credential and the certificate characterizing the user's identity.
In this embodiment, if the user avatar in the certificate determined in step 402 is consistent with the avatar in a certificate stored within the preset time period, the certificate information currently read can be taken to be that of the previous user, from which it can be concluded that the target user has not yet placed the permission credential and the certificate characterizing the user's identity. The execution subject may therefore send an instruction prompting the target user to place them.
Step 404: identify the permission credential placed by the target user and the certificate characterizing the user's identity, and determine whether the user information of the certificate is consistent with the user information associated with the permission credential.
In this embodiment, the execution subject may identify the permission credential placed by the target user and the certificate characterizing the user's identity, so as to determine whether the user information of the certificate and the user information associated with the permission credential are consistent. The permission credential, the certificate, and the user information are as described in step 202 of the embodiment of Fig. 2: the execution subject may check whether the certificate number associated with the permission credential is the same as the number of the placed certificate, and whether the permission credential information looked up by that certificate number is the same as the credential information actually read; if so, the two are determined to be consistent.
Step 405: extract the determined face image of the target user satisfying the preset condition from the captured image.
In this embodiment, the execution subject may extract the face image of the target user satisfying the preset condition, as determined in step 401, from the captured image using an existing feature extraction technique.
Step 406: input the extracted face image into a pre-trained second neural network to obtain a second output result.
In this embodiment, the execution subject may input the face image extracted in step 405 into a pre-trained second neural network to obtain a second output result. The second neural network may be an existing convolutional neural network, fully convolutional network, recurrent neural network, or the like, and characterizes the correspondence between the extracted face image and the second output result, which indicates whether the face region shown in the extracted face image includes an occlusion. The second output result may include a probability that a preset face region includes no occlusion and, for each category in a preset occlusion category set, a probability that the region includes an occlusion of that category. The face region may include, but is not limited to, at least one of: the area around the eyes, the area around the mouth, and the like.
Specifically, the second neural network may be trained as follows:
First, a set of training samples is obtained. Each training sample may include a sample image and annotation information indicating whether the face region shown in the sample image contains an occlusion.
As an example, the sample images may include eye-region images with and without glasses and mouth-region images with and without a mask, with annotations indicating whether glasses, a mask, or the like appear in the sample image.
Then, the sample images of at least one training sample are input into the initial second neural network, yielding for each sample image a set of probability values: the probability that the face region includes no occlusion, and the probability that its occlusion belongs to each category in the preset occlusion category set. The category with the maximum probability is compared with the annotated category to determine whether the recognition accuracy of the initial second neural network reaches a preset threshold. When the predicted category matches the annotation, it may further be determined whether the maximum probability itself reaches a preset threshold; if it does, training of the second neural network is complete. If the maximum probability does not reach the threshold, the parameters of the initial second neural network are adjusted with the backpropagation and gradient descent algorithms until the adjusted network reaches the required accuracy.
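A minimal sketch of such a second network and its decision rule, again assuming PyTorch; the occlusion category set, architecture and threshold are illustrative assumptions the patent leaves open:

```python
import torch
import torch.nn as nn

# Illustrative occlusion category set; the patent does not fix one.
CATEGORIES = ["no_occlusion", "glasses", "mask"]

class SecondNet(nn.Module):
    """Maps a face image to one logit for 'no occlusion' plus one per
    occlusion category."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, len(CATEGORIES))

    def forward(self, x):
        return self.head(self.backbone(x).flatten(1))

def face_is_occluded(model, face_tensor, threshold=0.5):
    probs = torch.softmax(model(face_tensor.unsqueeze(0)), dim=1)[0]
    top = probs.argmax().item()
    # Occluded if the most probable class is an occlusion category and
    # its probability reaches the preset threshold, mirroring the
    # decision rule in the training description above.
    return CATEGORIES[top] != "no_occlusion" and probs[top] >= threshold
```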
Step 407: in response to determining that the second output result indicates that the face region includes an occlusion, process the extracted face image to generate a face image with the occlusion removed.
In this embodiment, when the second output result determined in step 406 indicates that the face region includes an occlusion, the execution subject may process the determined face image with various methods to generate an occlusion-removed face image.
As one alternative implementation, taking the obtained face image of the target user satisfying the preset condition as the original face image, the execution subject may reconstruct it with two-dimensional generalized principal component analysis and, by comparing the original with the reconstruction, extract the occluded region of the original face image. The non-occluded region of the reconstructed image is then compensated with the original image, and the compensated image is repeatedly reconstructed with two-dimensional generalized principal component analysis until the difference between two successive reconstructions falls below a preset threshold.
In some other alternative implementations, a skin color model may be built from the extracted face image. The pixel region of the occlusion shown in the extracted face image is then obtained, linear interpolation is performed over that region based on the skin color model, and finally the interpolated face image is median-filtered to generate the occlusion-removed face image.
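A sketch of this alternative, assuming the occlusion pixel region is already given as a binary mask and using a simple fixed YCrCb range as the skin color model (a common stand-in; the patent does not specify the model form):

```python
import cv2
import numpy as np

def remove_occlusion(face_bgr, occlusion_mask):
    # occlusion_mask: uint8 map, 255 on occluded pixels, 0 elsewhere.
    # Skin color model: a fixed YCrCb range over non-occluded pixels.
    ycrcb = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    skin_only = cv2.bitwise_and(skin, cv2.bitwise_not(occlusion_mask))
    mean_skin = cv2.mean(face_bgr, mask=skin_only)[:3]  # fallback color

    out = face_bgr.astype(np.float32)
    h, w = occlusion_mask.shape
    for y in range(h):
        xs = np.where(occlusion_mask[y] > 0)[0]
        if xs.size == 0:
            continue
        left, right = int(xs.min()) - 1, int(xs.max()) + 1
        for c in range(3):
            lv = out[y, left, c] if left >= 0 else mean_skin[c]
            rv = out[y, right, c] if right < w else mean_skin[c]
            # Linear interpolation across the occluded run of pixels.
            out[y, xs, c] = np.interp(xs, [left, right], [lv, rv])
    # Median filtering smooths the interpolated region.
    return cv2.medianBlur(out.astype(np.uint8), 5)
```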
Step 408: determine whether the user shown in the generated occlusion-removed face image matches the user shown in the user avatar.
In this embodiment, the execution subject may extract the facial feature points of the generated occlusion-removed face image and of the user avatar in the certificate characterizing the user's identity, and compare the two sets of feature points to determine whether the user shown in the occlusion-removed face image and the user shown in the avatar match, that is, are the same user.
Step 409: in response to determining that the user shown in the face image of the target user satisfying the preset condition matches the user shown in the user avatar, send an instruction to control the gate to open.
In this embodiment, when the user shown in the generated occlusion-removed face image determined in step 408 matches the user shown in the user avatar, the execution subject may send an instruction to open the gate that controls the user's passage, so that the gate opens its barrier and lets the target user through.
For the specific processing of steps 401, 404, 408 and 409 in this embodiment, reference may be made to the descriptions of steps 201, 202, 203 and 204 in the embodiment of Fig. 2, which are not repeated here.
As can be seen from Fig. 4, unlike the embodiment of Fig. 2, this embodiment highlights the steps of prompting the user to place the permission credential and the certificate characterizing the user's identity, determining whether the acquired user image satisfying the preset condition includes an occlusion, and processing a face image that does, thereby further improving the accuracy of target user identification.
With further reference to Fig. 5, as an implementation of the methods shown in the above figures, the present application provides an embodiment of an information detection apparatus. This apparatus embodiment corresponds to the method embodiment shown in Fig. 2 and can be applied to various electronic devices.
As shown in Fig. 5, the information detection apparatus 500 of this embodiment includes: a detection unit 501, an identification unit 502, a matching unit 503 and a sending unit 504. The detection unit 501 is configured to detect a captured image and determine whether it contains a face image of a target user satisfying a preset condition; the identification unit 502 is configured to, in response to determining that the captured image contains such a face image, identify the permission credential placed by the target user and the certificate characterizing the user's identity, and determine whether the user information of the certificate is consistent with the user information associated with the permission credential, where the user information of the certificate includes a user avatar; the matching unit 503 is configured to, in response to determining that the user information of the certificate and the user information associated with the permission credential are consistent, determine whether the user shown in the face image matches the user shown in the user avatar; and the sending unit 504 is configured to, in response to determining that they match, send an instruction to control the gate to open.
In this embodiment, in the information detection apparatus 500: for the specific processing of the detection unit 501, the identification unit 502, the matching unit 503 and the sending unit 504 and their beneficial effects, reference may be made to the descriptions of steps 201, 202, 203 and 204 in the embodiment of Fig. 2, which are not repeated here.
In some optional implementations of this embodiment, the detection unit 501 is further configured to: input the captured image into a pre-trained first neural network to obtain a first output result indicating whether the captured image contains a face image; and determine, based on the first output result, whether the captured image contains a face image of the target user satisfying the preset condition.
In some optional implementations of this embodiment, the matching unit 503 includes: an extraction subunit (not shown) configured to extract the determined face image of the target user from the captured image; a second-output subunit (not shown) configured to input the extracted face image into the pre-trained second neural network to obtain a second output result indicating whether the face region shown in the extracted face image includes an occlusion; a processing subunit (not shown) configured to, in response to determining that the second output result indicates that the face region includes an occlusion, process the extracted face image to generate an occlusion-removed face image; and a matching subunit (not shown) configured to determine whether the user shown in the generated occlusion-removed face image matches the user shown in the user avatar.
In some optional implementations of this embodiment, the processing subunit (not shown) is further configured to: build a skin color model from the extracted face image; obtain the pixel region of the occlusion shown in the extracted face image; perform linear interpolation over that pixel region based on the skin color model; and median-filter the interpolated face image to generate the occlusion-removed face image.
In some optional implementations of this embodiment, the identification unit 502 is further configured to: acquire images of the permission credential and the certificate; perform text recognition on the images; and determine, based on the text recognition result, whether the user information of the certificate and the user information associated with the permission credential are consistent.
In some optional implementations of this embodiment, the information detection apparatus 500 is further configured to: determine whether the user avatar is consistent with the user avatar in a certificate stored within the preset time period; and in response to determining that the two are consistent, send an instruction prompting the target user to place the permission credential and the certificate characterizing the user's identity.
Referring now to Fig. 6, a block diagram of a computer system 600 suitable for implementing an electronic device (e.g., the control device shown in Fig. 1) of an embodiment of the present application is shown. The electronic device shown in Fig. 6 is only an example and should not impose any limitation on the functions or scope of use of the embodiments of the present application.
As shown in Fig. 6, the computer system 600 includes a central processing unit (CPU) 601 that can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage section 608 into a random access memory (RAM) 603. The RAM 603 also stores the various programs and data needed for the operation of the system 600. The CPU 601, the ROM 602, and the RAM 603 are connected to one another via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input section 606 including a keyboard, a mouse and the like; an output section 607 including a cathode ray tube (CRT) or liquid crystal display (LCD), a speaker and the like; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card or a modem. The communication section 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk or a semiconductor memory, is mounted on the drive 610 as needed, so that a computer program read from it can be installed into the storage section 608 as needed.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611. The computer program, when executed by the Central Processing Unit (CPU) 601, performs the above-described functions defined in the method of the present application.

It should be noted that the computer readable medium described herein may be a computer readable signal medium, a computer readable storage medium, or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.

A computer readable signal medium, by contrast, may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electromagnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, and the like, or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, or C++, as well as conventional procedural programming languages, such as the "C" programming language or similar programming languages.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or hardware. The described units may also be provided in a processor, which may, for example, be described as: a processor including a detection unit, an identification unit, a matching unit, and a transmitting unit. The names of these units do not in some cases limit the units themselves; for example, the detection unit may also be described as "a unit that detects a captured image".
As another aspect, the present application also provides a computer-readable medium, which may be contained in the electronic device described in the above embodiments, or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: detect a captured image, and determine whether the captured image contains a face image of a target user satisfying a preset condition; in response to determining that the captured image contains a face image of a target user satisfying the preset condition, identify a permission credential placed by the target user and a certificate for characterizing a user identity, and determine whether user information of the certificate is consistent with user information associated with the permission credential, wherein the user information of the certificate includes a user avatar; in response to determining that the user information of the certificate is consistent with the user information associated with the permission credential, determine whether the user displayed in the face image of the target user satisfying the preset condition matches the user displayed in the user avatar; and in response to determining that the user displayed in the face image of the target user satisfying the preset condition matches the user displayed in the user avatar, send an instruction for controlling a gate to open.
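Tying these steps together, the following self-contained Python sketch mirrors the control flow that the program carried on the medium performs; every helper is a hypothetical stub standing in for the detection, recognition, matching, and gate-control units described above, not an API from this disclosure.

# Hypothetical stubs; a real system would back these with the first and
# second neural networks, OCR, and gate hardware described in this disclosure.
def detect_target_face(captured_image):
    return captured_image  # pretend the image is a qualifying face crop

def recognize_documents():
    certificate = {"name": "alice", "avatar": "avatar-bytes"}
    credential = {"name": "alice"}
    return certificate, credential

def faces_match(face_image, user_avatar):
    return True  # occlusion detection and removal would happen inside

def send_gate_open_instruction():
    print("instruction sent: open gate")

def run_information_detection(captured_image):
    face = detect_target_face(captured_image)
    if face is None:
        return                        # no qualifying target user in frame
    certificate, credential = recognize_documents()
    if certificate["name"] != credential["name"]:
        return                        # user information inconsistent
    if faces_match(face, certificate["avatar"]):
        send_gate_open_instruction()  # all checks passed

run_information_detection(captured_image=b"raw-image-bytes")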
The above description is only a preferred embodiment of the present application and an illustration of the technical principles employed. It will be appreciated by those skilled in the art that the scope of the invention disclosed herein is not limited to the particular combination of features described above, and also covers other arrangements formed by any combination of the above features or their equivalents without departing from the inventive concept. For example, the above features may be replaced with (but not limited to) technical features having similar functions disclosed in the present application.

Claims (10)

1. An information detection method, comprising:
detecting a captured image, and determining whether the captured image contains a face image of a target user satisfying a preset condition;
in response to determining that the captured image contains a face image of a target user satisfying the preset condition, identifying a permission credential placed by the target user and a certificate for characterizing a user identity, and determining whether user information of the certificate is consistent with user information associated with the permission credential, wherein the user information of the certificate comprises a user avatar;
in response to determining that the user information of the certificate is consistent with the user information associated with the permission credential, determining whether the user displayed in the face image of the target user satisfying the preset condition matches the user displayed in the user avatar;
in response to determining that the user displayed in the face image of the target user satisfying the preset condition matches the user displayed in the user avatar, sending an instruction for controlling a gate to open;
wherein the determining whether the user displayed in the face image of the target user satisfying the preset condition matches the user displayed in the user avatar comprises:
extracting the determined face image of the target user satisfying the preset condition from the captured image;
inputting the extracted face image into a pre-trained second neural network to obtain a second output result, wherein the second output result indicates whether a face region displayed in the extracted face image includes an obstruction, the second output result comprises a probability value that a preset face region includes no obstruction and, for each category in a preset obstruction category set, a probability value that the preset face region includes an obstruction of that category, and the face region includes a region around the mouth;
in response to determining that the obtained second output result indicates that the face region includes an obstruction, processing the extracted face image to generate a face image with the obstruction removed;
determining whether the user displayed in the generated obstruction-removed face image matches the user displayed in the user avatar;
wherein the processing the extracted face image to generate a face image with the obstruction removed comprises:
establishing a skin color model of the extracted face image based on the extracted face image;
acquiring a pixel point region of the obstruction displayed in the extracted face image;
performing linear interpolation on the pixel point region of the obstruction displayed in the extracted face image based on the skin color model; and
performing median filtering on the interpolated face image to generate the face image with the obstruction removed.
2. The method according to claim 1, wherein the detecting a captured image and determining whether the captured image contains a face image of a target user satisfying a preset condition comprises:
inputting the captured image into a pre-trained first neural network to obtain a first output result, wherein the first output result indicates whether the captured image contains a face image; and
determining, based on the obtained first output result, whether the captured image contains a face image of the target user satisfying the preset condition.
3. The method according to any one of claims 1-2, wherein the identifying a permission credential placed by the target user and a certificate for characterizing a user identity, and determining whether user information of the certificate is consistent with user information associated with the permission credential comprises:
acquiring images of the permission credential and the certificate;
performing text recognition on the images of the permission credential and the certificate; and
determining, based on a text recognition result, whether the user information of the certificate is consistent with the user information associated with the permission credential.
4. The method according to any one of claims 1-2, wherein, before the determining whether the user information of the certificate is consistent with the user information associated with the permission credential, the method further comprises:
determining whether the user avatar is consistent with a user avatar in a stored certificate; and
in response to determining that the user avatar is consistent with a user avatar in a stored certificate, sending an instruction prompting the target user to place a permission credential and a certificate for characterizing a user identity.
5. An information detection apparatus, comprising:
a detection unit configured to detect a captured image and determine whether the captured image contains a face image of a target user satisfying a preset condition;
an identification unit configured to, in response to determining that the captured image contains a face image of a target user satisfying the preset condition, identify a permission credential placed by the target user and a certificate for characterizing a user identity, and determine whether user information of the certificate is consistent with user information associated with the permission credential, wherein the user information of the certificate comprises a user avatar;
a matching unit configured to determine, in response to determining that the user information of the certificate is consistent with the user information associated with the permission credential, whether the user displayed in the face image of the target user satisfying the preset condition matches the user displayed in the user avatar; and
a transmitting unit configured to send an instruction for controlling a gate to open, in response to determining that the user displayed in the face image of the target user satisfying the preset condition matches the user displayed in the user avatar;
wherein the matching unit comprises:
an extraction unit configured to extract the determined face image of the target user satisfying the preset condition from the captured image;
a second output result obtaining subunit configured to input the extracted face image into a pre-trained second neural network to obtain a second output result, wherein the second output result indicates whether a face region displayed in the extracted face image includes an obstruction, the second output result comprises a probability value that a preset face region includes no obstruction and, for each category in a preset obstruction category set, a probability value that the preset face region includes an obstruction of that category, and the face region includes a region around the mouth;
a processing subunit configured to, in response to determining that the obtained second output result indicates that the face region includes an obstruction, process the extracted face image to generate a face image with the obstruction removed; and
a matching subunit configured to determine whether the user displayed in the generated obstruction-removed face image matches the user displayed in the user avatar;
wherein the processing subunit is further configured to:
establish a skin color model of the extracted face image based on the extracted face image;
acquire a pixel point region of the obstruction displayed in the extracted face image;
perform linear interpolation on the pixel point region of the obstruction displayed in the extracted face image based on the skin color model; and
perform median filtering on the interpolated face image to generate the face image with the obstruction removed.
6. The apparatus according to claim 5, wherein the detection unit is further configured to:
input the captured image into a pre-trained first neural network to obtain a first output result, wherein the first output result indicates whether the captured image contains a face image; and
determine, based on the obtained first output result, whether the captured image contains a face image of the target user satisfying the preset condition.
7. The apparatus according to any one of claims 5-6, wherein the identification unit is further configured to:
acquire images of the permission credential and the certificate;
perform text recognition on the images of the permission credential and the certificate; and
determine, based on a text recognition result, whether the user information of the certificate is consistent with the user information associated with the permission credential.
8. The apparatus according to any one of claims 5-6, wherein the apparatus is further configured to:
determine whether the user avatar is consistent with a user avatar in a stored certificate; and
in response to determining that the user avatar is consistent with a user avatar in a stored certificate, send an instruction prompting the target user to place a permission credential and a certificate for characterizing a user identity.
9. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-4.
10. A computer-readable storage medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1-4.
CN201810641278.5A 2018-06-21 2018-06-21 Information detection method and device Active CN108932774B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810641278.5A CN108932774B (en) 2018-06-21 2018-06-21 Information detection method and device


Publications (2)

Publication Number Publication Date
CN108932774A (en) 2018-12-04
CN108932774B (en) 2020-07-31

Family

ID=64446297

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810641278.5A Active CN108932774B (en) 2018-06-21 2018-06-21 Information detection method and device

Country Status (1)

Country Link
CN (1) CN108932774B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112115748B (en) * 2019-06-21 2023-08-25 腾讯科技(深圳)有限公司 Certificate image recognition method, device, terminal and storage medium
CN110490854A (en) * 2019-08-15 2019-11-22 中国工商银行股份有限公司 Obj State detection method, Obj State detection device and electronic equipment
CN110942033B (en) * 2019-11-28 2023-05-26 重庆中星微人工智能芯片技术有限公司 Method, device, electronic equipment and computer medium for pushing information
CN113614700A (en) * 2020-03-03 2021-11-05 华为技术有限公司 Image display monitoring method, device and equipment
CN114639191A (en) * 2022-04-22 2022-06-17 三峡新能源海上风电运维江苏有限公司 Control method of intelligent lock applied to offshore wind farm

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0805409A2 * 1994-03-21 1997-11-05 I.D. Tec, S.L. Biometric security process for authenticating identity and credit cards, visas, passports and facial recognition
CN101038686A (en) * 2007-01-10 2007-09-19 北京航空航天大学 Method for recognizing machine-readable travel certificate
EP2382605A1 (en) * 2009-01-07 2011-11-02 Magnetic Autocontrol GmbH Apparatus for a checkpoint
CN204010092U (en) * 2014-08-13 2014-12-10 林香山 The self-service boarding system in airport
CN104392528A (en) * 2014-12-02 2015-03-04 中国民航大学 Unattended airport boarding gate management system and control method thereof
CN104408433A (en) * 2014-12-02 2015-03-11 中国民航大学 Identity identification system for personnel by air at airport boarding gate and control method
CN106981114A (en) * 2017-03-29 2017-07-25 安徽大鸿智能科技有限公司 The gate control system that vehicle driver subject one is taken an examination
CN107393102A (en) * 2017-09-01 2017-11-24 江苏阅衡智能科技有限公司 A kind of automatic ticket checking system based on recognition of face
CN107729852A (en) * 2017-10-24 2018-02-23 贵阳宏益房地产开发有限公司 Facial information register method, apparatus and system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103020579B (en) * 2011-09-22 2015-11-25 上海银晨智能识别科技有限公司 The spectacle-frame minimizing technology of face identification method and system, facial image and device



Similar Documents

Publication Publication Date Title
CN108932774B (en) Information detection method and device
US10699103B2 (en) Living body detecting method and apparatus, device and storage medium
CN108509915B (en) Method and device for generating face recognition model
EP3382601B1 (en) Face verifying method and apparatus
CN108985134B (en) Face living body detection and face brushing transaction method and system based on binocular camera
CN112115866A (en) Face recognition method and device, electronic equipment and computer readable storage medium
KR20200006987A (en) Access control method, access control device, system and storage medium
CN109034069B (en) Method and apparatus for generating information
CN108491823B (en) Method and device for generating human eye recognition model
CN110163078A (en) The service system of biopsy method, device and application biopsy method
CN108229375B (en) Method and device for detecting face image
CN107832721B (en) Method and apparatus for outputting information
CN110163096B (en) Person identification method, person identification device, electronic equipment and computer readable medium
CN108549848B (en) Method and apparatus for outputting information
CN109766785A (en) A kind of biopsy method and device of face
CN108133197B (en) Method and apparatus for generating information
CN108108711B (en) Face control method, electronic device and storage medium
CN112434578A (en) Mask wearing normative detection method and device, computer equipment and storage medium
US20210248401A1 (en) System and method for face spoofing attack detection
CN111931628B (en) Training method and device of face recognition model and related equipment
KR20210155303A (en) Online Test System using face contour recognition AI to prevent the cheating behaviour and method thereof
CN109241934A (en) Method and apparatus for generating information
CN108399401B (en) Method and device for detecting face image
WO2023202400A1 (en) Training method and apparatus for segmentation model, and image recognition method and apparatus
CN111178233A (en) Identity authentication method and device based on living body authentication

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 221, 2nd floor, Block C, 18 Kechuang 11th Street, Daxing Economic and Technological Development Zone, Beijing, 100176

Applicant after: JINGDONG DIGITAL TECHNOLOGY HOLDINGS Co.,Ltd.

Address before: Room 221, 2nd floor, Block C, 18 Kechuang 11th Street, Daxing Economic and Technological Development Zone, Beijing, 100176

Applicant before: BEIJING JINGDONG FINANCIAL TECHNOLOGY HOLDING Co.,Ltd.

GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: Room 221, 2nd floor, Block C, 18 Kechuang 11th Street, Daxing Economic and Technological Development Zone, Beijing, 100176

Patentee after: Jingdong Technology Holding Co.,Ltd.

Address before: Room 221, 2nd floor, Block C, 18 Kechuang 11th Street, Daxing Economic and Technological Development Zone, Beijing, 100176

Patentee before: Jingdong Digital Technology Holding Co.,Ltd.

Address after: Room 221, 2nd floor, Block C, 18 Kechuang 11th Street, Daxing Economic and Technological Development Zone, Beijing, 100176

Patentee after: Jingdong Digital Technology Holding Co.,Ltd.

Address before: Room 221, 2nd floor, Block C, 18 Kechuang 11th Street, Daxing Economic and Technological Development Zone, Beijing, 100176

Patentee before: JINGDONG DIGITAL TECHNOLOGY HOLDINGS Co.,Ltd.