CN110516623A - Face recognition method and apparatus, and electronic device - Google Patents
Face recognition method and apparatus, and electronic device
- Publication number
- CN110516623A (application CN201910810373.8A)
- Authority
- CN
- China
- Prior art keywords
- image
- user
- face
- pedestrian
- score
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
Abstract
Embodiments of the present invention relate to the field of electronic information technology and disclose a face recognition method and apparatus and an electronic device. The method obtains an image of a user and judges whether the face of the user is present in the image; if present, it judges whether the face is occluded; if occluded, it obtains the occlusion ratio of the face and judges whether the ratio is greater than or equal to a preset ratio. When the ratio is greater than or equal to the preset ratio, a preset pedestrian re-identification algorithm is used to extract, from the historical images, the image of a pedestrian whose walking posture matches that of the user, so as to determine the identity information of the user. When the ratio is less than the preset ratio, a preset face restoration algorithm is used to restore the face image of the user, so as to determine the identity information of the user from the restored image. The method can identify the identity information of a user even when the face is occluded.
Description
Technical field
Embodiments of the present invention relate to the field of electronic information technology, and in particular to a face recognition method and apparatus and an electronic device.
Background technique
Identity recognition technology that confirms the identity information of a user from the biometric features of the human body offers high security and is a development direction of future security identification technology.
At present, identity recognition based on face recognition has received considerable attention because of its convenience. In practical applications, however, such as public places covered by residential-community monitoring systems, the face of a user or pedestrian may be occluded, or only a back view or side view may be captured and no face obtained at all. In such cases, the identity of the user cannot be recognized directly by detecting the user's face.
Summary of the invention
In view of the above drawbacks of the prior art, the purpose of the embodiments of the present invention is to provide a face recognition method and apparatus and an electronic device that can perform face detection and identify the identity information of a user even when the face is occluded or not captured.
The purpose of the embodiments of the present invention is achieved through the following technical solutions.
To solve the above technical problem, in a first aspect, an embodiment of the present invention provides a face recognition method, which includes:
obtaining an image of a user;
judging whether the face of the user is present in the image;
if present, judging whether the face of the user is occluded;
if occluded, obtaining the occlusion ratio of the face of the user;
judging whether the occlusion ratio is greater than or equal to a preset ratio;
if greater than or equal to the preset ratio, extracting, according to a preset pedestrian re-identification algorithm, the image of a pedestrian in the historical images whose walking posture matches that of the user;
determining the identity information of the user according to the extracted image;
if less than the preset ratio, restoring the face image of the user in the image according to a preset face restoration algorithm;
determining the identity information of the user according to the restored image.
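The branching logic of the first-aspect method can be sketched in a few lines of Python. The function name, return labels, and the idea of passing the occlusion ratio in directly are illustrative stand-ins, not part of the claimed method:

```python
def identify_user(face_present, occlusion_ratio, preset_ratio=0.5):
    """Decide which branch of the claimed method applies.

    Returns one of: "reacquire", "direct", "re-identification", "restoration".
    All names and the example preset ratio of 0.5 are illustrative.
    """
    if not face_present:
        return "reacquire"            # no face in image: get a new image
    if occlusion_ratio == 0.0:
        return "direct"               # unoccluded: match the face directly
    if occlusion_ratio >= preset_ratio:
        return "re-identification"    # heavy occlusion: pedestrian re-identification
    return "restoration"              # light occlusion: face restoration

print(identify_user(True, 0.7))   # re-identification
print(identify_user(True, 0.2))   # restoration
```

A 50% preset ratio matches the worked example given later in the description, where the restoration algorithm is assumed to recover faces occluded up to half.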
In some embodiments, extracting, according to the preset pedestrian re-identification algorithm, the image of the pedestrian in the historical images whose walking posture matches that of the user specifically includes:
identifying, by a preset posture recognition algorithm, the human joint points of several human-body regions of the user from the image, and the human joint points of several human-body regions of each pedestrian in each historical image;
matching each human-body region of the user against the corresponding human-body region of each pedestrian one by one to obtain a matching score for each human-body region;
computing a weighted sum of the matching scores of the human-body regions to obtain a matching value between the user and the pedestrian;
judging whether the matching value exceeds a preset threshold;
if it exceeds the threshold, confirming that the user and the pedestrian are the same person, and extracting the image of the confirmed pedestrian from the historical images.
In some embodiments, the human-body regions include a head region;
the step of matching each human-body region of the user against the corresponding human-body region of each pedestrian one by one to obtain the matching score of each human-body region further includes:
computing, by a preset FaceNet model, the Euclidean distance between the head image of the user and the head image of each pedestrian to obtain the matching score of the head region.
In some embodiments, the human-body regions further include body regions, the body regions including a torso region, a left-leg region, and a right-leg region;
the step of matching each human-body region of the user against the corresponding human-body region of each pedestrian one by one to obtain the matching score of each human-body region further includes:
converting the image color space of the body regions into HSV space;
obtaining, according to the HSV space, the value of each pixel in the body image of the user and in the body image of the pedestrian;
obtaining, according to the pixel values, the difference between the histogram of the body image of the user and the histogram of the body image of the pedestrian to obtain the matching score.
In some embodiments, the step of computing a weighted sum of the matching scores of the human-body regions to obtain the matching value between the user and the pedestrian specifically includes:
obtaining the weight coefficient of each human-body region;
obtaining the matching value according to the matching scores and the weight coefficients, the matching value being calculated by the following formula:
Similar_score = α*head_score + β*body_score + γ1*lleg_score + γ2*rleg_score
where [α, β, γ1, γ2] are the weight coefficients of the human-body regions, Similar_score is the matching value, head_score is the matching score of the head region, body_score is the matching score of the torso region, lleg_score is the matching score of the left-leg region, and rleg_score is the matching score of the right-leg region.
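The formula above can be transcribed directly. The concrete weight values below are illustrative, since the patent leaves the weight coefficients open:

```python
def similar_score(head_score, body_score, lleg_score, rleg_score,
                  weights=(0.4, 0.3, 0.15, 0.15)):
    """Matching value per the patent's formula:
    Similar_score = α*head_score + β*body_score + γ1*lleg_score + γ2*rleg_score.
    The default weights are made-up example values summing to 1."""
    alpha, beta, gamma1, gamma2 = weights
    return (alpha * head_score + beta * body_score
            + gamma1 * lleg_score + gamma2 * rleg_score)

# A pedestrian whose four regions all match perfectly scores 1.0:
print(similar_score(1.0, 1.0, 1.0, 1.0))  # 1.0
```

With weights summing to 1 and per-region scores in [0, 1], the matching value is itself a similarity in [0, 1], which makes the preset threshold of the following step easy to interpret.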
In some embodiments, the step of restoring, according to the preset face restoration algorithm, the face image of the user in the image further includes:
partially erasing the source images of faces in a public face data set to obtain erased images;
training a face restoration model through a deep residual network according to the correspondence between the erased images and the source images;
inputting the face image of the user into the face restoration model for calculation to obtain the restored image.
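The erasing step that produces the (erased image, source image) training pairs can be sketched as follows, representing an image as a list of pixel rows. The rectangular patch shape, its size limit, and its random placement are assumptions; the patent does not fix how the partial erasing is performed:

```python
import random

def erase_patch(image, max_frac=0.5, rng=None):
    """Return (erased, source): a copy of `image` with a random rectangle
    zeroed out, paired with the untouched source image. Such pairs would
    form the training data for the restoration model."""
    rng = rng or random.Random(0)
    h, w = len(image), len(image[0])
    eh = rng.randint(1, max(1, int(h * max_frac)))   # patch height
    ew = rng.randint(1, max(1, int(w * max_frac)))   # patch width
    top = rng.randint(0, h - eh)
    left = rng.randint(0, w - ew)
    erased = [row[:] for row in image]
    for r in range(top, top + eh):
        for c in range(left, left + ew):
            erased[r][c] = 0
    return erased, image

src = [[1] * 8 for _ in range(8)]
erased, source = erase_patch(src)
print(sum(v for row in erased for v in row) < 64)  # True: some pixels erased
```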
In some embodiments, before the step of determining the identity information of the user according to the restored image, the method further includes:
calculating a restoration index according to the source image and the restored image;
judging whether the restoration index is greater than or equal to a preset index;
if so, executing the step of determining the identity information of the user according to the restored image.
In some embodiments, in the step of calculating the restoration index according to the source image and the restored image, the restoration index is calculated by the following formulas:
MSE = (1/S) * Σ(I − J)²
PSNR = 10 * log(255² / MSE)
where I is the source image, J is the restored image, S is the area of the source image, MSE is the mean squared error between the source image and the restored image, and PSNR is the restoration index.
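A direct transcription of these formulas, assuming the logarithm is base 10 (the patent writes only "log") and 8-bit pixels:

```python
import math

def psnr(source, restored):
    """Restoration index per the patent's formulas:
    MSE = (1/S) * sum((I - J)**2);  PSNR = 10 * log10(255**2 / MSE).
    Images are equal-sized lists of pixel rows with values in [0, 255]."""
    s = len(source) * len(source[0])              # area S of the source image
    mse = sum((i - j) ** 2
              for ri, rj in zip(source, restored)
              for i, j in zip(ri, rj)) / s
    if mse == 0:
        return float("inf")                       # identical images
    return 10 * math.log10(255 ** 2 / mse)

src = [[100, 100], [100, 100]]
rec = [[100, 100], [100, 105]]                    # one pixel off by 5
print(round(psnr(src, rec), 1))                   # 40.2
```

A higher PSNR means a more faithful restoration, so comparing it against the preset index filters out restorations too poor to feed into recognition.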
To solve the above technical problem, in a second aspect, an embodiment of the present invention provides a face recognition apparatus, which includes:
a first obtaining module for obtaining an image of a user;
a first judgment module for judging whether the face of the user is present in the image;
a second judgment module for judging, when the face of the user is present in the image, whether the face of the user is occluded;
a second obtaining module for obtaining, when the face of the user is occluded, the occlusion ratio of the face of the user;
a third judgment module for judging whether the occlusion ratio is greater than or equal to a preset ratio;
an extraction module for extracting, when the occlusion ratio is greater than or equal to the preset ratio and according to a preset pedestrian re-identification algorithm, the image of a pedestrian in the historical images whose walking posture matches that of the user;
a first determining module for determining the identity information of the user according to the extracted image;
a restoration module for restoring, when the occlusion ratio is less than the preset ratio and according to a preset face restoration algorithm, the face image of the user in the image;
a second determining module for determining the identity information of the user according to the restored image.
To solve the above technical problem, in a third aspect, an embodiment of the present invention provides an electronic device, which includes:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can perform the method described in the first aspect above.
To solve the above technical problem, in a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium storing computer-executable instructions that cause a computer to perform the method described in the first aspect above.
To solve the above technical problem, in a fifth aspect, an embodiment of the present invention further provides a computer program product comprising a computer program stored on a computer-readable storage medium, the computer program comprising program instructions that, when executed by a computer, cause the computer to perform the method described in the first aspect above.
Compared with the prior art, the beneficial effects of the present invention are as follows. Embodiments of the present invention provide a face recognition method and apparatus and an electronic device. The method obtains an image of a user and judges whether the face of the user is present in the image; if present, it judges whether the face is occluded; if occluded, it obtains the occlusion ratio of the face and judges whether the ratio is greater than or equal to a preset ratio. When the ratio is greater than or equal to the preset ratio, a preset pedestrian re-identification algorithm is used to extract, from the historical images, the image of a pedestrian whose walking posture matches that of the user, so as to determine the identity information of the user; when the ratio is less than the preset ratio, a preset face restoration algorithm is used to restore the face image of the user in the image, so as to determine the identity information of the user from the restored image. The method provided by the embodiments of the present invention can thus identify the identity information of a user even when the face is occluded.
Description of the drawings
One or more embodiments are illustrated by the figures in the corresponding drawings. These exemplary illustrations do not limit the embodiments. Elements/modules and steps labeled with the same reference numbers in the drawings represent similar elements/modules and steps, and unless otherwise stated, the figures in the drawings impose no limitation of scale.
Fig. 1 is a schematic diagram of an exemplary system architecture to which an embodiment of the face recognition method of the present invention is applied;
Fig. 2 is a flowchart of a face recognition method provided by an embodiment of the present invention;
Fig. 3 is a sub-flowchart of step 160 in the method shown in Fig. 2;
Fig. 4 is another sub-flowchart of step 160 in the method shown in Fig. 2;
Fig. 5 is a sub-flowchart of step 180 in the method shown in Fig. 2;
Fig. 6 is another sub-flowchart of step 180 in the method shown in Fig. 2;
Fig. 7 is a schematic structural diagram of a face recognition apparatus provided by an embodiment of the present invention;
Fig. 8 is a schematic diagram of the hardware structure of an electronic device provided by an embodiment of the present invention for executing the above face recognition method.
Specific embodiments
The present invention is described in detail below in combination with specific embodiments. The following embodiments will help those skilled in the art further understand the present invention, but do not limit the invention in any way. It should be pointed out that, for those of ordinary skill in the art, various modifications and improvements can be made without departing from the inventive concept, and all of these fall within the protection scope of the present invention.
To make the objects, technical solutions, and advantages of the present application clearer, the present application is further elaborated below with reference to the drawings and embodiments. It should be understood that the specific embodiments described herein only explain the present application and are not intended to limit it.
It should be noted that, as long as they do not conflict, the features of the embodiments of the present invention can be combined with one another within the protection scope of the present application. In addition, although functional modules are divided in the schematic apparatus diagrams and logical sequences are shown in the flowcharts, in some cases the steps shown or described may be executed with a module division different from that in the apparatus, or in an order different from that in the flowchart. Moreover, the words "first", "second", and "third" used herein do not restrict the data or the execution order; they only distinguish items that are essentially identical or similar in function and effect.
Unless otherwise defined, all technical and scientific terms used in this specification have the same meanings as commonly understood by those skilled in the technical field to which the present invention belongs. The terms used in the description are only for the purpose of describing specific embodiments and are not intended to limit the present invention. The term "and/or" used in this specification includes any and all combinations of one or more of the related listed items.
In addition, the technical features involved in the various embodiments of the present invention described below can be combined with each other as long as they do not conflict.
Refer to Fig. 1, which is a schematic diagram of an exemplary system architecture to which embodiments of the face recognition method of the present invention are applied. As shown in Fig. 1, the system includes an electronic device 10 and a video camera 20 in communication connection, so that the electronic device 10 can obtain the images collected by the video camera 20. The communication connection may be a network connection and may include various connection types, such as wired links, wireless links, or fiber-optic cables.
The electronic device 10 can identify, even when the face is occluded, the identity information of the user in the images collected by the video camera 20. Specifically, the electronic device 10 can judge whether a face is present in an image, whether the face is occluded, and whether the occlusion exceeds a preset ratio, and, through the pedestrian re-identification algorithm and the face restoration algorithm stored in advance in the electronic device 10, process the face image and recognize the identity information of the user from the processed image.
The electronic device 10 is a device that can store a large amount of image data and perform calculation processing on it. For example, it may be a physical terminal server connected to the video camera 20 through a network by a certain communication protocol. Further, it may also be a cloud server, cloud host, cloud service platform, cloud computing platform, or the like, similarly connected to the video camera 20 through a network by a certain communication protocol. The network may be the Ethernet or a local area network; the communication protocol may be TCP/IP, NETBEUI, IPX/SPX, or another communication protocol; and the communication connection may be wireless or wired, configured according to actual needs.
In some other embodiments, the electronic device 10 may also be a face recognition robot with a camera function, which can not only collect images through its own photographic device but also communicate with the video camera 20 to obtain the images the video camera 20 collects.
It should be noted that the face recognition method provided by the embodiments of the present application is generally executed by the above electronic device 10, and correspondingly, the face recognition apparatus is generally disposed in the electronic device 10.
The video camera 20 is a device that can collect images; the images may be videos or pictures, and pictures are taken as an example in the embodiments of the present invention. The video camera 20 may be a camera, a camera phone, a video recorder, a camcorder, a night-vision device, or another image-collecting device, and can be communicatively connected to the electronic device 10 to send the collected image information to the electronic device 10 in real time. There may be one or more video cameras 20, and they may be identical image-collecting devices or different image-collecting devices, so as to meet different demands.
The embodiments of the present invention are further elaborated below with reference to the drawings.
An embodiment of the present invention provides a face recognition method that can be executed by the above electronic device 10. Refer to Fig. 2, which shows a flowchart of a face recognition method applied to the above system architecture. The method includes, but is not limited to, the following steps.
Step 110: obtain an image of a user.
In the embodiment of the present invention, the image of the user can be obtained by the video camera 20 described in Fig. 1 above, whose installation position can be arranged according to the application scenario of the face recognition method. For example, when the identity information of users entering and leaving a public place such as a residential community needs to be identified, one or more video cameras 20 can be mounted at each exit of the community to obtain images of the users entering and leaving. It should be noted that, among the images collected in real time by the video camera 20, only those containing a user/pedestrian are obtained for further determination and processing.
Step 120: judge whether the face of the user is present in the image. If present, go to step 130; if not present, return to step 110.
The embodiment of the present invention determines the identity information of the user by face recognition. Therefore, from the user images collected in real time, the images in which the face of the user is present need to be filtered out for further judgment. If the face of the user is not present, the image has captured the user but not the user's head, and the method returns to step 110 to reacquire an image of the user.
Step 130: judge whether the face of the user is occluded. If occluded, go to step 141; if not, go to step 142.
When the user passes through the field of view of the video camera 20, the face may be partially occluded by a cap, a mask, or another covering. Therefore, it is also necessary to judge whether the face of the user is occluded, so as to judge whether the face can be recognized directly.
Step 141: obtain the occlusion ratio of the face of the user, and go to step 150.
When the face of the user is occluded, the occlusion ratio of the face needs to be obtained, and a corresponding algorithm is chosen according to the occlusion ratio to obtain a complete face image of the user.
Step 142: determine the identity information of the user according to the face.
If the face of the user is not occluded, the face can be matched against the pre-stored face image records of pedestrians in the historical images and their corresponding identity information, so as to recognize the identity information of the user.
It should be noted that, in the embodiment of the present invention, after the identity information corresponding to the collected face image of the user is recognized, the collected image/face image of the user and its corresponding identity information are stored into the historical images, updating the historical image data set in the database.
Step 150: judge whether the occlusion ratio is greater than or equal to a preset ratio. If greater than or equal to, go to step 160; if less than, go to step 180.
After it is determined that the face is occluded and the occlusion ratio is obtained, whether the occlusion ratio exceeds the preset ratio is judged, so as to determine whether the unoccluded part of the face is sufficient as a basis for identifying the identity information of the user; the identity information of the user is then recognized by different algorithms accordingly.
The preset ratio can be set according to the restoration capability of the preset face restoration algorithm described below: the largest occlusion ratio at which the preset face restoration algorithm can still restore the face is taken as the preset ratio. For example, if the preset face restoration algorithm can at best restore a face that is half occluded, i.e., within a range of 50%, then 50% is taken as the preset ratio.
Step 160: extract, according to a preset pedestrian re-identification algorithm, the image of a pedestrian in the historical images whose walking posture matches that of the user, and go to step 170.
When the occlusion ratio is greater than or equal to the preset ratio, the face image to be recognized cannot be restored from the unoccluded part of the face. Therefore, in the embodiment of the present invention, a preset pedestrian re-identification algorithm can be used to identify, in the historical images, the pedestrian whose walking posture matches that of the user image; by obtaining the image of that pedestrian, an image of the user can be obtained.
Step 170: determine the identity information of the user according to the extracted image.
Further, the facial features of the user can be extracted from the extracted image, and the identity information of the user can be determined according to the facial features.
Step 180: restore, according to a preset face restoration algorithm, the face image of the user in the image, and go to step 190.
When the occlusion ratio is less than the preset ratio, the occluded range of the face is small and falls within the restoration range of the preset face restoration algorithm; at this point, the face in the image can be restored by the preset face restoration algorithm.
Step 190: determine the identity information of the user according to the restored image.
Further, the facial features of the user can be extracted from the restored image, and the identity information of the user can be determined according to the facial features. Specifically, face recognition can be carried out by any of a variety of existing face recognition algorithms, in combination with the face image records in the historical images, to determine the identity information of the user.
An embodiment of the present invention thus provides a face recognition method. The method obtains an image of a user and judges whether the face of the user is present in the image; if present, it judges whether the face is occluded; if occluded, it obtains the occlusion ratio of the face and judges whether the ratio is greater than or equal to a preset ratio. When the ratio is greater than or equal to the preset ratio, a preset pedestrian re-identification algorithm is used to extract, from the historical images, the image of a pedestrian whose walking posture matches that of the user, so as to determine the identity information of the user; when the ratio is less than the preset ratio, a preset face restoration algorithm is used to restore the face image of the user in the image, so as to determine the identity information of the user from the restored image. The method provided by the embodiment of the present invention can identify the identity information of a user even when the face is occluded.
In some embodiments, refer to Fig. 3, which is a sub-flowchart of step 160 in the method shown in Fig. 2. Step 160 specifically includes:
Step 161: identify, by a preset posture recognition algorithm, the human joint points of several human-body regions of the user from the image, and the human joint points of several human-body regions of each pedestrian in each historical image.
Step 162: match each human-body region of the user against the corresponding human-body region of each pedestrian one by one to obtain the matching score of each human-body region.
Step 163: compute a weighted sum of the matching scores of the human-body regions to obtain the matching value between the user and the pedestrian.
Step 164: judge whether the matching value exceeds a preset threshold. If it exceeds, go to step 165; if not, go to step 161.
Step 165: confirm that the user and the pedestrian are the same person, and extract the image of the confirmed pedestrian from the historical images.
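Steps 162 to 165 can be sketched as a loop over candidate pedestrians. The per-region scores here are precomputed stand-ins, whereas the embodiment derives them from FaceNet distances and HSV histogram differences; the weights and threshold are likewise illustrative:

```python
REGIONS = ("head", "torso", "lleg", "rleg")
WEIGHTS = {"head": 0.4, "torso": 0.3, "lleg": 0.15, "rleg": 0.15}  # illustrative

def match_pedestrian(scores_by_pedestrian, threshold=0.8):
    """Weight each candidate pedestrian's per-region matching scores and
    return the first pedestrian whose matching value exceeds the preset
    threshold (steps 163-165), or None if no candidate matches."""
    for pedestrian_id, scores in scores_by_pedestrian.items():
        matching_value = sum(WEIGHTS[r] * scores[r] for r in REGIONS)
        if matching_value > threshold:
            return pedestrian_id      # same person: extract their images
    return None

candidates = {
    "p1": {"head": 0.5, "torso": 0.6, "lleg": 0.4, "rleg": 0.4},
    "p2": {"head": 0.9, "torso": 0.9, "lleg": 0.8, "rleg": 0.8},
}
print(match_pedestrian(candidates))  # p2
```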
In the embodiment of the present invention, since there can be many historical images and one or more pedestrians in each image, each human-body region of the user needs to be matched one by one against the corresponding human-body region of every pedestrian in every historical image to obtain the matching score of each region. The matching used in the step of extracting, by the preset posture recognition algorithm, the image of the pedestrian whose walking posture matches that of the user is a matching method based on local features.
First, the posture of each pedestrian is recognized: the OpenPose human-posture recognition algorithm performs 2D pose estimation on the detected human body and divides the body into multiple human-body regions by the human joint points. After the regions are obtained, the feature information of each region is extracted, and the feature information of each human-body region in the user image is matched one-to-one against the feature information of the corresponding region in the image of each pedestrian, yielding the matching score of each region.
Then, the matching scores of the regions are weighted to obtain the matching value, which is compared with the preset threshold. When the matching value exceeds the preset threshold, the matching degree is high, that is, the walking postures of the pedestrian image and the user image are highly similar, so it can be determined that the pedestrian and the user are the same person. Further, by obtaining the facial features of that pedestrian, the identity information of the user can be recognized. The preset threshold can be determined according to the recognition capability and accuracy of the preset posture recognition algorithm for walking postures in images, and is not limited by the embodiment of the present invention.
Specifically, the embodiment of the present invention takes as an example dividing the human body into a head region and body regions at the joint point of the neck, then dividing the body regions into a torso region, a left-leg region, and a right-leg region at the joint points of the two thigh roots, and performing feature matching on these regions. After the matching scores of the head region, torso region, left-leg region, and right-leg region are obtained, the matching value is obtained by weighting the matching scores of these four regions, and whether the matching value exceeds the threshold is judged to decide whether the pedestrian and the user are the same person.
Further, in some embodiments, the human region includes head zone, refers to Fig. 4, is shown in Fig. 2
Another sub-process figure of step 160, the step 162 specifically include in method:
Step 1621: calculating the user's head image and the head image of each pedestrian by presetting FaceNet model
Euclidean distance obtains the matching score of the head zone.
Specifically, the mappings of the points of the user's head image and of each pedestrian's head image into Euclidean space are obtained, and the distance between the mapped features of the user's head image and those of a given pedestrian's head image is calculated; this distance is the Euclidean distance. Since it characterizes the similarity between the user's head image and each pedestrian's head image, the Euclidean distance can serve as the matching score of the head region.
For example, after the points of the user's head image and of each pedestrian's head image are mapped into Euclidean space, the distance between the mapping point of the nose feature in the user's head image and the mapping point of the nose feature in a pedestrian's head image is calculated; the distance between the two nose mapping points is the Euclidean distance, and the smaller this value, the more similar the two head images.
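The head-region score can be illustrated with a short sketch. It assumes a FaceNet-style model has already mapped each head image to an embedding vector (the mapping into Euclidean space described above); only the distance computation is shown.

```python
import numpy as np

def head_match_score(user_emb: np.ndarray, pedestrian_emb: np.ndarray) -> float:
    """Euclidean distance between two head-image embeddings; a smaller value
    means the two head images are more similar."""
    return float(np.linalg.norm(user_emb - pedestrian_emb))
```

Identical embeddings yield a distance of 0, and the distance grows as the embeddings diverge.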
In some embodiments, the human region further includes a body region, and the body region includes a torso region, a left leg region and a right leg region. Continuing with Fig. 4, step 162 further includes:
Step 1622: convert the image color space of the body region into HSV space.
Step 1623: according to the HSV space, obtain the value of each pixel in the user body image and in the pedestrian body image.
Step 1624: according to the pixel values, obtain the difference between the histogram of the user body image and the histogram of the pedestrian body image as the matching score.
In the embodiment of the present invention, the image of the user obtained by the camera 20 is parameterized in the RGB color space. To facilitate counting pixels into histograms, the image color space of the body region is converted into HSV space, so that the color information of each pixel is characterized by the three parameters hue H, saturation S and value (lightness) V. Then the value of each pixel in the user body image and in the pedestrian body image is calculated from its HSV parameters, and pixels with identical values are accumulated, converting the data into a histogram that is cheap to compare. Finally, the difference between the histogram of the user body image and the histogram of the pedestrian body image gives the matching score.
Specifically, the image color space of the torso, left leg and right leg is first converted from RGB into HSV space, each pixel is assigned the value Val = δ*H + ε*S + ζ*V, and a histogram of Val is computed. The difference between the Val histograms of the two people (user and pedestrian) over the torso, left leg and right leg is taken as their similarity/matching score for those regions. Here [δ, ε, ζ] is another set of weight coefficients, which is further divided into two groups corresponding to the torso and the legs (left and right), i.e., different [δ, ε, ζ] color weights are used for the torso and for the legs.
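A minimal sketch of the Val-histogram comparison. It assumes the regions are already in HSV space with all channels in the 0–255 range and that δ+ε+ζ ≈ 1 so Val stays in that range; normalizing the histogram so regions of different sizes can be compared is an implementation choice not specified in the text.

```python
import numpy as np

def val_histogram(hsv_region: np.ndarray, weights, bins: int = 32) -> np.ndarray:
    """Collapse each HSV pixel to Val = δ*H + ε*S + ζ*V, then histogram Val."""
    d, e, z = weights
    val = d * hsv_region[..., 0] + e * hsv_region[..., 1] + z * hsv_region[..., 2]
    hist, _ = np.histogram(val, bins=bins, range=(0.0, 255.0))
    return hist / max(hist.sum(), 1)  # normalize so region sizes may differ

def region_match_score(user_hsv, ped_hsv, weights) -> float:
    """L1 difference of the two Val histograms; smaller means more similar."""
    return float(np.abs(val_histogram(user_hsv, weights)
                        - val_histogram(ped_hsv, weights)).sum())
```

In practice this would be called three times, with one [δ, ε, ζ] set for the torso and another for the two legs.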
In some embodiments, continuing with Fig. 4, step 163 specifically includes:
Step 1631: obtain the weight coefficient of each human region.
Step 1632: obtain the matching value according to the matching scores and the weight coefficients.
After the matching scores of the head region, torso region, left leg region and right leg region are obtained, the matching value is obtained by weighting the matching scores of these four regions. Further, whether the pedestrian and the user are the same person can be obtained by judging whether the matching value exceeds the preset threshold.
The matching value is calculated by the following formula:
Similar_score = α*head_score + β*body_score + γ1*lleg_score + γ2*rleg_score
where [α, β, γ1, γ2] are the weight coefficients of the human regions, Similar_score is the matching value, head_score is the matching score of the head region, body_score is the matching score of the torso region, lleg_score is the matching score of the left leg region, and rleg_score is the matching score of the right leg region.
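The weighted combination can be written directly from the formula; the weight values used below are illustrative only.

```python
def matching_value(scores, weights):
    """Similar_score = α*head + β*body + γ1*lleg + γ2*rleg, with scores and
    weights given in the order (head, torso, left leg, right leg)."""
    return sum(w * s for w, s in zip(weights, scores))
```

For example, `matching_value((1.0, 2.0, 3.0, 4.0), (0.4, 0.3, 0.15, 0.15))` combines four region scores into a single value that is then compared against the preset threshold.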
In some embodiments, referring to Fig. 5, a sub-flowchart of step 180 in the method of Fig. 2, step 180 specifically includes:
Step 181: partially erase the faces in the source images of a public face dataset to obtain erased images.
Step 182: establish the correspondence between the erased images and the source images, and train a face restoration model through a deep residual network.
Step 183: input the face image of the user into the face restoration model and calculate the restored image.
In the embodiment of the present invention, the faces in public face datasets such as LFW (Labeled Faces in the Wild), FDDB (Face Detection Data Set and Benchmark) and MegaFace are first partially erased, and the correspondence between the erased images and the source images is established; the face restoration model is then trained with the deep residual network ResNet. In addition, to perform face restoration, the face images in the history images need to be recorded in a comparison dataset. The occluded face image is then input into the face restoration model to calculate the restored image.
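The partial-erase step used to build training pairs can be sketched as follows; erasing a fixed rectangle to zero is a simplifying assumption, since the text does not specify the shape or fill value of the erased area.

```python
import numpy as np

def erase_patch(source: np.ndarray, top: int, left: int, h: int, w: int) -> np.ndarray:
    """Blank out one rectangular patch of a face image and return the erased
    copy; (erased, source) pairs form the training data for the restoration
    network described above."""
    erased = source.copy()
    erased[top:top + h, left:left + w] = 0
    return erased
```

The restoration network is then trained to map each erased image back to its corresponding source image.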
It should be noted that, in the embodiment of the present invention, the public face dataset further includes a dataset of face images recorded from the history images; after each successful verification of a user's identity, this dataset is updated with the user's identity information and the corresponding user image as new elements of the set. The history images may also include images recording the user's daily face in advance.
On the basis of the model pre-trained on the public datasets, the history images obtained by recording the user's daily face are fed into the pre-trained model for transfer learning. The migrated face restoration model then restores the occluded face into a complete face; a face feature model computes the feature vector of the complete face, which is compared against the feature vectors of the faces recorded in the comparison dataset to obtain a set of face similarities. The recorded face image corresponding to the maximum similarity is found, thereby determining the identity information of the user.
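The final lookup against the comparison dataset can be sketched as below. The text's comparison of feature vectors is read here as cosine similarity, which is an assumption; the gallery is represented as a plain dict from identity to feature vector.

```python
import numpy as np

def identify(query: np.ndarray, gallery: dict) -> str:
    """Return the identity whose stored feature vector is most similar to the
    query vector of the restored face (cosine similarity, higher is better)."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(gallery, key=lambda name: cos(query, gallery[name]))
```

The identity returned is the one whose recorded face feature achieves the maximum similarity with the restored face.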
Further, in some embodiments, referring to Fig. 6, another sub-flowchart of step 180 in the method of Fig. 2, before step 190 the method further includes:
Step 184: calculate a restoration index according to the source image and the restored image.
In the embodiment of the present invention, the PSNR (peak signal-to-noise ratio) value is used as the evaluation index of face restoration. Let the source image be I, the erased image be I', the image obtained by restoring I' through the face restoration model be J, the area of the source image be S, and MSE be the mean square error between the source image I and the restored image J. The restoration index is then calculated by the following formulas:
MSE = (1/S) * ∑(I − J)²
PSNR = 10 * log(255² / MSE)
where I is the source image, J is the restored image, S is the area of the source image, MSE is the mean square error between the source image and the restored image, and PSNR is the restoration index. The source image is the standard picture included in the identity information registered by the user in advance, for example the user's certificate photo or ID card image. The larger the PSNR value, the smaller the gap between the source image I and the restored image J, and the better the restoration.
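The restoration index follows directly from the formulas above, taking the logarithm as base 10, as is conventional for PSNR in decibels:

```python
import numpy as np

def restoration_index(source: np.ndarray, restored: np.ndarray) -> float:
    """PSNR = 10*log10(255**2 / MSE), where MSE is the per-pixel mean square
    error between the source image I and the restored image J."""
    mse = float(np.mean((source.astype(np.float64) - restored) ** 2))
    return float(10.0 * np.log10(255.0 ** 2 / mse))
```

A larger return value indicates a smaller gap between the two images; the value is then compared with the preset index in step 185.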
Step 185: judge whether the restoration index is greater than or equal to a preset index. If so, go to step 190; if not, go back to step 180.
After the restoration index (the PSNR value) is obtained, it is further judged whether this value is greater than or equal to the preset index. When it is, the gap between the source image I and the restored image J is very small and the restoration is satisfactory, so the identity information of the user can be further determined according to the restored image.
The embodiment of the present invention also provides a face recognition device. Referring to Fig. 7, which illustrates the structure of a face recognition device provided by an embodiment of the present application, the face recognition device 200 includes: a first acquisition module 210, a first judgment module 220, a second judgment module 230, a second acquisition module 240, a third judgment module 250, an extraction module 260, a first determining module 270, a recovery module 280 and a second determining module 290.
The first acquisition module 210 is used to obtain an image of the user.
The first judgment module 220 is used to judge whether the face of the user is present in the image.
The second judgment module 230 is used to judge whether the face of the user is occluded when the face of the user is present in the image.
The second acquisition module 240 is used to obtain the masking ratio of the face of the user when the face of the user is occluded.
The third judgment module 250 is used to judge whether the masking ratio is greater than or equal to a preset ratio.
The extraction module 260 is used to extract, when the masking ratio is greater than or equal to the preset ratio, the image of the pedestrian whose walking posture matches that of the user from the history images according to a preset pedestrian re-identification algorithm.
The first determining module 270 is used to determine the identity information of the user according to the extracted image.
The recovery module 280 is used to restore the face image of the user in the image according to a preset face restoration algorithm when the masking ratio is less than the preset ratio.
The second determining module 290 is used to determine the identity information of the user according to the restored image.
The embodiment of the present invention provides a face recognition device 200. The device obtains an image of the user through the first acquisition module 210 and judges through the first judgment module 220 whether the face of the user is present in the image; when it is present, the second judgment module 230 judges whether the face of the user is occluded; when it is occluded, the second acquisition module 240 obtains the masking ratio of the face of the user and the third judgment module 250 judges whether the masking ratio is greater than or equal to the preset ratio. When it is greater than or equal to the preset ratio, the extraction module 260 extracts, according to the preset pedestrian re-identification algorithm, the image of the pedestrian whose walking posture matches that of the user from the history images, so that the first determining module 270 can determine the identity information of the user; when it is less than the preset ratio, the recovery module 280 restores the face image of the user in the image according to the preset face restoration algorithm, so that the second determining module 290 can determine the identity information of the user according to the restored image. The face recognition device provided by the embodiment of the present invention can thus identify the user's identity information even when the face is occluded.
In some embodiments, the extraction module 260 is further used to: identify, through a preset posture recognition algorithm, the human joint points on several human regions of the user from the image, and the human joint points on several human regions of each pedestrian from each history image; match each human region of the user against each human region of each pedestrian one by one to obtain the matching score of each human region; calculate the weighted sum of the matching scores of the human regions to obtain the matching value between the user and the pedestrian; judge whether the matching value exceeds a preset threshold; and, if it does, confirm that the user and the pedestrian are the same person and extract the image of the confirmed pedestrian from the history images.
In some embodiments, the human region includes a head region; the extraction module 260 is further used to calculate the Euclidean distance between the user's head image and each pedestrian's head image through a preset FaceNet model to obtain the matching score of the head region.
In some embodiments, the human region further includes a body region, and the body region includes a torso region, a left leg region and a right leg region; the extraction module 260 is further used to: convert the image color space of the body region into HSV space; obtain, according to the HSV space, the value of each pixel in the user body image and in the pedestrian body image; and obtain, according to the pixel values, the difference between the histogram of the user body image and the histogram of the pedestrian body image as the matching score.
In some embodiments, the extraction module 260 is further used to obtain the weight coefficient of each human region and obtain the matching value according to the matching scores and the weight coefficients, where the matching value is calculated by the following formula:
Similar_score = α*head_score + β*body_score + γ1*lleg_score + γ2*rleg_score
where [α, β, γ1, γ2] are the weight coefficients of the human regions, Similar_score is the matching value, head_score is the matching score of the head region, body_score is the matching score of the torso region, lleg_score is the matching score of the left leg region, and rleg_score is the matching score of the right leg region.
In some embodiments, the recovery module 280 is further used to: partially erase the faces in the source images of a public face dataset to obtain erased images; establish the correspondence between the erased images and the source images, and train the face restoration model through a deep residual network; and input the face image of the user into the face restoration model for calculation to obtain the restored image.
In some embodiments, the recovery module 280 is further used to calculate a restoration index according to the source image and the restored image, judge whether the restoration index is greater than or equal to a preset index, and, if so, execute the step of determining the identity information of the user according to the restored image.
In some embodiments, the restoration index is calculated by the following formulas:
MSE = (1/S) * ∑(I − J)²
PSNR = 10 * log(255² / MSE)
where I is the source image, J is the restored image, S is the area of the source image, MSE is the mean square error between the source image and the restored image, and PSNR is the restoration index.
The embodiment of the present invention also provides an electronic device. Referring to Fig. 8, which illustrates the hardware structure of a server capable of executing the face recognition method described in Figs. 2 to 6, the electronic device 10 may be the electronic device 10 shown in Fig. 1.
The electronic device 10 includes: at least one processor 11; and a memory 12 communicatively connected with the at least one processor 11, with one processor 11 taken as an example in Fig. 8. The memory 12 stores instructions executable by the at least one processor 11, and the instructions are executed by the at least one processor 11 so that the at least one processor 11 can execute the face recognition method described in Figs. 2 to 6. The processor 11 and the memory 12 may be connected through a bus or in other ways; connection through a bus is taken as an example in Fig. 8.
As a non-volatile computer-readable storage medium, the memory 12 can be used to store non-volatile software programs, non-volatile computer-executable programs and modules, such as the program instructions/modules corresponding to the face recognition method in the embodiment of the present application, for example, the modules shown in Fig. 7. By running the non-volatile software programs, instructions and modules stored in the memory 12, the processor 11 executes the various functional applications and data processing of the server, i.e., implements the face recognition method of the above method embodiments.
The memory 12 may include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required by at least one function, and the data storage area may store data created according to the use of the face recognition device, etc. In addition, the memory 12 may include a high-speed random access memory and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device or other non-volatile solid-state storage device. In some embodiments, the memory 12 optionally includes memories remotely located relative to the processor 11, and these remote memories may be connected to the face recognition device through a network. Examples of the above network include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network and combinations thereof.
The one or more modules are stored in the memory 12 and, when executed by the one or more processors 11, execute the face recognition method in any of the above method embodiments, for example, execute the method steps of Figs. 2 to 6 described above and implement the functions of the modules and units in Fig. 7.
The above product can execute the method provided by the embodiment of the present application, and has the corresponding functional modules and beneficial effects for executing the method. For technical details not described in detail in this embodiment, reference may be made to the method provided by the embodiment of the present application.
The embodiment of the present application also provides a non-volatile computer-readable storage medium storing computer-executable instructions that are executed by one or more processors, for example, to execute the method steps of Figs. 2 to 6 described above and implement the functions of the modules in Fig. 7.
The embodiment of the present application also provides a computer program product, including a computer program stored on a non-volatile computer-readable storage medium, the computer program including program instructions that, when executed by a computer, cause the computer to execute the face recognition method in any of the above method embodiments, for example, to execute the method steps of Figs. 2 to 6 described above and implement the functions of the modules in Fig. 7.
The embodiment of the present invention provides a face recognition method, device and electronic device. The method obtains an image of the user and judges whether the face of the user is present in the image; when it is present, judges whether the face of the user is occluded; and when it is occluded, obtains the masking ratio of the face of the user and judges whether the masking ratio is greater than or equal to a preset ratio. When it is greater than or equal to the preset ratio, the image of the pedestrian whose walking posture matches that of the user is extracted from the history images according to a preset pedestrian re-identification algorithm to determine the identity information of the user; when it is less than the preset ratio, the face image of the user in the image is restored according to a preset face restoration algorithm, and the identity information of the user is determined according to the restored image. The method provided by the embodiment of the present invention can thus identify the user's identity information even when the face is occluded.
It should be noted that the device embodiments described above are merely exemplary, where the units illustrated as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, i.e., they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Through the above description of the embodiments, those of ordinary skill in the art can clearly understand that each embodiment can be implemented by software plus a general hardware platform, and of course also by hardware. Those of ordinary skill in the art can also understand that all or part of the processes in the above method embodiments can be completed by a computer program instructing the relevant hardware; the program can be stored in a computer-readable storage medium and, when executed, may include the processes of the above method embodiments. The storage medium may be a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), or the like.
Finally, it should be noted that the above embodiments are merely intended to illustrate the technical solutions of the present invention rather than to limit them. Under the idea of the present invention, the technical features in the above embodiments or in different embodiments may also be combined, the steps may be implemented in any order, and there are many other variations of the different aspects of the present invention as described above which, for brevity, are not provided in detail. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or equivalently replace some of the technical features, and that these modifications or replacements do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.
Claims (10)
1. A face recognition method, characterized by comprising:
obtaining an image of a user;
judging whether the face of the user is present in the image;
if present, judging whether the face of the user is occluded;
if occluded, obtaining the masking ratio of the face of the user;
judging whether the masking ratio is greater than or equal to a preset ratio;
if greater than or equal to, extracting, according to a preset pedestrian re-identification algorithm, the image of the pedestrian whose walking posture matches that of the user from history images;
determining the identity information of the user according to the extracted image;
if less than, restoring the face image of the user in the image according to a preset face restoration algorithm;
determining the identity information of the user according to the restored image.
2. The method according to claim 1, characterized in that the step of extracting, according to the preset pedestrian re-identification algorithm, the image of the pedestrian whose walking posture matches that of the user from the history images specifically comprises:
identifying, through a preset posture recognition algorithm, the human joint points on several human regions of the user from the image, and the human joint points on several human regions of each pedestrian from each history image;
matching each human region of the user against each human region of each pedestrian one by one to obtain the matching score of each human region;
calculating the weighted sum of the matching scores of the human regions to obtain the matching value between the user and the pedestrian;
judging whether the matching value exceeds a preset threshold;
if it does, confirming that the user and the pedestrian are the same person, and extracting the image of the confirmed pedestrian from the history images.
3. The method according to claim 2, characterized in that the human region comprises a head region;
the step of matching each human region of the user against each human region of each pedestrian one by one to obtain the matching score of each human region further comprises:
calculating the Euclidean distance between the user's head image and each pedestrian's head image through a preset FaceNet model to obtain the matching score of the head region.
4. The method according to claim 3, characterized in that the human region further comprises a body region, the body region comprising a torso region, a left leg region and a right leg region;
the step of matching each human region of the user against each human region of each pedestrian one by one to obtain the matching score of each human region further comprises:
converting the image color space of the body region into HSV space;
obtaining, according to the HSV space, the value of each pixel in the user body image and in the pedestrian body image;
obtaining, according to the pixel values, the difference between the histogram of the user body image and the histogram of the pedestrian body image as the matching score.
5. The method according to claim 4, characterized in that the step of calculating the weighted sum of the matching scores of the human regions to obtain the matching value between the user and the pedestrian specifically comprises:
obtaining the weight coefficient of each human region;
obtaining the matching value according to the matching scores and the weight coefficients, the matching value being calculated by the following formula:
Similar_score = α*head_score + β*body_score + γ1*lleg_score + γ2*rleg_score
where [α, β, γ1, γ2] are the weight coefficients of the human regions, Similar_score is the matching value, head_score is the matching score of the head region, body_score is the matching score of the torso region, lleg_score is the matching score of the left leg region, and rleg_score is the matching score of the right leg region.
6. The method according to claim 1, characterized in that the step of restoring the face image of the user in the image according to the preset face restoration algorithm further comprises:
partially erasing the faces in the source images of a public face dataset to obtain erased images;
establishing the correspondence between the erased images and the source images, and training a face restoration model through a deep residual network;
inputting the face image of the user into the face restoration model for calculation to obtain the restored image.
7. The method according to claim 6, characterized in that before the step of determining the identity information of the user according to the restored image, the method further comprises:
calculating a restoration index according to the source image and the restored image;
judging whether the restoration index is greater than or equal to a preset index;
if so, executing the step of determining the identity information of the user according to the restored image.
8. The method according to claim 7, characterized in that in the step of calculating the restoration index according to the source image and the restored image, the restoration index is calculated by the following formulas:
MSE = (1/S) * ∑(I − J)²
PSNR = 10 * log(255² / MSE)
where I is the source image, J is the restored image, S is the area of the source image, MSE is the mean square error between the source image and the restored image, and PSNR is the restoration index.
9. A face recognition device, characterized by comprising:
a first acquisition module, used to obtain an image of a user;
a first judgment module, used to judge whether the face of the user is present in the image;
a second judgment module, used to judge whether the face of the user is occluded when the face of the user is present in the image;
a second acquisition module, used to obtain the masking ratio of the face of the user when the face of the user is occluded;
a third judgment module, used to judge whether the masking ratio is greater than or equal to a preset ratio;
an extraction module, used to extract, when the masking ratio is greater than or equal to the preset ratio, the image of the pedestrian whose walking posture matches that of the user from history images according to a preset pedestrian re-identification algorithm;
a first determining module, used to determine the identity information of the user according to the extracted image;
a recovery module, used to restore the face image of the user in the image according to a preset face restoration algorithm when the masking ratio is less than the preset ratio;
a second determining module, used to determine the identity information of the user according to the restored image.
10. An electronic device, characterized by comprising:
at least one processor; and
a memory communicatively connected with the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can execute the method according to claim 1.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910810373.8A CN110516623B (en) | 2019-08-29 | 2019-08-29 | Face recognition method and device and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110516623A true CN110516623A (en) | 2019-11-29 |
CN110516623B CN110516623B (en) | 2022-03-22 |
Family
ID=68629184
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910810373.8A Active CN110516623B (en) | 2019-08-29 | 2019-08-29 | Face recognition method and device and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110516623B (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111461047A (en) * | 2020-04-10 | 2020-07-28 | 北京爱笔科技有限公司 | Identity recognition method, device, equipment and computer storage medium |
CN111624617A (en) * | 2020-05-28 | 2020-09-04 | 联想(北京)有限公司 | Data processing method and electronic equipment |
CN112132057A (en) * | 2020-09-24 | 2020-12-25 | 天津锋物科技有限公司 | Multi-dimensional identity recognition method and system |
CN112800885A (en) * | 2021-01-16 | 2021-05-14 | 南京众鑫云创软件科技有限公司 | Data processing system and method based on big data |
CN112836655A (en) * | 2021-02-07 | 2021-05-25 | 上海卓繁信息技术股份有限公司 | Method and device for identifying identity of illegal actor and electronic equipment |
CN114554113A (en) * | 2022-04-24 | 2022-05-27 | 浙江华眼视觉科技有限公司 | Express item code recognition machine express item person drawing method and device |
CN115205950A (en) * | 2022-09-16 | 2022-10-18 | 北京吉道尔科技有限公司 | Intelligent traffic subway passenger detection and settlement method and system based on block chain |
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104574555A (en) * | 2015-01-14 | 2015-04-29 | Sichuan University | Remote check-in method using a sparse-representation-based face classification algorithm |
WO2017036115A1 (en) * | 2015-09-01 | 2017-03-09 | BOE Technology Group Co., Ltd. | Identity identifying device and method for manufacturing same, and identity identifying method |
CN106910176A (en) * | 2017-03-02 | 2017-06-30 | SeetaTech (Beijing) Technology Co., Ltd. | Deep-learning-based facial image de-occlusion method |
CN107423678A (en) * | 2017-05-27 | 2017-12-01 | University of Electronic Science and Technology of China | Training method for a feature-extracting convolutional neural network, and face recognition method |
US20190122039A1 (en) * | 2017-10-23 | 2019-04-25 | Wistron Corp. | Image detection method and image detection device for determining posture of a user |
WO2019134246A1 (en) * | 2018-01-03 | 2019-07-11 | Ping An Technology (Shenzhen) Co., Ltd. | Facial recognition-based security monitoring method, device, and storage medium |
CN108062542A (en) * | 2018-01-12 | 2018-05-22 | Hangzhou Zhinuo Technology Co., Ltd. | Method for detecting occluded faces |
CN108710859A (en) * | 2018-05-23 | 2018-10-26 | Yaoke Intelligent Technology (Shanghai) Co., Ltd. | Face detection method and device, face recognition method and device, and storage medium |
CN109344842A (en) * | 2018-08-15 | 2019-02-15 | Tianjin University | Pedestrian re-identification method based on semantic region representation |
CN110032940A (en) * | 2019-03-13 | 2019-07-19 | Huazhong University of Science and Technology | Video pedestrian re-identification method and system |
Non-Patent Citations (2)
Title |
---|
LEI LUO et al.: "Nuclear-L1 Norm Joint Regression for Face Reconstruction and Recognition", 《HTTP://VIGIR.MISSOURI.EDU/~GDESOUZA/RESEARCH/CONFERENCE_CDS/ACCV_2014/PAGES/PDF/84.PDF》 * |
SUN Hui et al.: "Linear Interpolation Method for Bayer Image Color Restoration", Chinese Journal of Liquid Crystals and Displays * |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111461047A (en) * | 2020-04-10 | 2020-07-28 | Beijing Aibee Technology Co., Ltd. | Identity recognition method, device, equipment and computer storage medium |
CN111624617A (en) * | 2020-05-28 | 2020-09-04 | Lenovo (Beijing) Co., Ltd. | Data processing method and electronic equipment |
CN112132057A (en) * | 2020-09-24 | 2020-12-25 | Tianjin Fengwu Technology Co., Ltd. | Multi-dimensional identity recognition method and system |
CN112800885A (en) * | 2021-01-16 | 2021-05-14 | Nanjing Zhongxin Yunchuang Software Technology Co., Ltd. | Data processing system and method based on big data |
CN112800885B (en) * | 2021-01-16 | 2023-09-26 | Nanjing Zhongxin Yunchuang Software Technology Co., Ltd. | Data processing system and method based on big data |
CN112836655A (en) * | 2021-02-07 | 2021-05-25 | Shanghai Zhuofan Information Technology Co., Ltd. | Method and device for identifying the identity of an offender, and electronic equipment |
CN114554113A (en) * | 2022-04-24 | 2022-05-27 | Zhejiang Huayan Vision Technology Co., Ltd. | Express item code recognition machine express item person drawing method and device |
CN115205950A (en) * | 2022-09-16 | 2022-10-18 | Beijing Jidaoer Technology Co., Ltd. | Blockchain-based intelligent traffic subway passenger detection and settlement method and system |
CN115205950B (en) * | 2022-09-16 | 2023-04-25 | Guangzhou Shengchuangda Technology Co., Ltd. | Blockchain-based intelligent traffic subway passenger detection and settlement method and system |
Also Published As
Publication number | Publication date |
---|---|
CN110516623B (en) | 2022-03-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110516623A (en) | Face recognition method and device, and electronic equipment | |
CN109819208B (en) | Dense-crowd security monitoring and management method based on artificial intelligence dynamic monitoring | |
CN108596277B (en) | Vehicle identity recognition method and device and storage medium | |
CN111460962B (en) | Face recognition method and face recognition system for mask | |
US20200012923A1 (en) | Computer device for training a deep neural network | |
CN104881637B (en) | Multimodal information system and its fusion method based on heat transfer agent and target tracking | |
CN109614882A (en) | Violence detection system and method based on human pose estimation | |
CN106919921B (en) | Gait recognition method and system combining subspace learning and tensor neural network | |
CN108229335A (en) | Associated face recognition method and device, electronic equipment, storage medium, and program | |
CN107423690A (en) | Face recognition method and device | |
CN107122744A (en) | Liveness detection system and method based on face recognition | |
CN110414441B (en) | Pedestrian track analysis method and system | |
KR20170006355A (en) | Method of motion vector and feature vector based fake face detection and apparatus for the same | |
CN110070029A (en) | Gait recognition method and device | |
CN111639580A (en) | Gait recognition method combining feature separation model and visual angle conversion model | |
KR20200119425A (en) | Apparatus and method for domain adaptation-based object recognition | |
KR102215535B1 (en) | Partial face image based identity authentication method using neural network and system for the method | |
CN116311400A (en) | Palm print image processing method, electronic device and storage medium | |
KR20200084946A (en) | Smart cctv apparatus for analysis of parking | |
KR100567765B1 (en) | System and Method for face recognition using light and preprocess | |
Park | Face Recognition: face in video, age invariance, and facial marks | |
Bhaumik et al. | Analysis and detection of human faces by using minimum distance classifier for surveillance | |
Menon | Leveraging Facial Recognition Technology in Criminal Identification | |
Khamele et al. | An approach for restoring occluded images for face-recognition | |
CN112183202B (en) | Identity authentication method and device based on tooth structural features |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right | Effective date of registration: 2023-05-08. Patentee after: Shenzhen Weidang Life Technology Co., Ltd., C5, College Industrialization Complex Building, Shenzhen Virtual University Park, No. 2 Yuexing Third Road, High-tech Zone Community, Yuehai Street, Nanshan District, Shenzhen, Guangdong, 518000. Patentee before: INTERNATIONAL INTELLIGENT MACHINES Co., Ltd., Floor 2, Torch Venture Building, No. 22 Yanshan Road, Nanshan District, Shenzhen, Guangdong, 518000. |