CN114373195A - Illumination scene self-adaptive palm anti-counterfeiting method, device, equipment and storage medium - Google Patents

Info

Publication number
CN114373195A
CN114373195A
Authority
CN
China
Prior art keywords
palm
living body
image
network
illumination
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210013295.0A
Other languages
Chinese (zh)
Inventor
徐志通
陈书楷
杨奇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen Entropy Technology Co ltd
Original Assignee
Xiamen Entropy Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen Entropy Technology Co ltd filed Critical Xiamen Entropy Technology Co ltd
Priority to CN202210013295.0A priority Critical patent/CN114373195A/en
Publication of CN114373195A publication Critical patent/CN114373195A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The application provides an illumination-scene-adaptive palm anti-counterfeiting method, apparatus, device, and storage medium. A palm image to be detected is first acquired, and a target-area palm map is extracted from it; the target-area palm map is then input into a pre-trained palm anti-counterfeiting model, which outputs category probability values. The palm anti-counterfeiting model is obtained by training a palm anti-counterfeiting network on image samples labeled with illumination-scene information and live/non-live information; the palm anti-counterfeiting network comprises a feature pyramid network, a fully convolutional network, multiple scale convolution networks, and a palm liveness classifier, with each scale convolution network used to process the image samples of one illumination scene. Finally, whether the palm image to be detected is a live palm image is judged according to the category probability values. The method can perform anti-counterfeiting authentication on palm images to be detected under different illumination scenes, with high recognition accuracy.

Description

Illumination scene self-adaptive palm anti-counterfeiting method, device, equipment and storage medium
Technical Field
The application relates to the technical field of biological characteristic processing, in particular to a palm anti-counterfeiting method, a palm anti-counterfeiting device, palm anti-counterfeiting equipment and a palm anti-counterfeiting storage medium with self-adaptive illumination scenes.
Background
Palm recognition is a high-security identity recognition technology with broad application prospects, for example in military and police use, the education industry, and daily attendance. In attendance checking, a camera of an attendance device is generally used to collect a palm image of a user, and the palm image is then compared with a reference palm image stored in a database to determine whether the user is who they claim to be. However, when palm recognition is adopted, palm anti-counterfeiting (liveness) judgment also needs to be performed.
In currently common attendance devices, after the camera is adjusted to a suitable brightness under a normal illumination scene, the brightness of the collected images is relatively balanced. However, if the attendance device is used under strong illumination (such as backlit sunlight), the image is easily over-exposed and image details are lost; and if the attendance device is in a dark illumination environment (such as dim indoor light), the image is too dark, details are not distinct, and considerable noise appears, so over-exposure or under-exposure degrades the performance of the subsequent algorithms. Existing palm anti-counterfeiting methods therefore work well only for one specific illumination scene, and when switched to other illumination scenes their anti-counterfeiting performance drops sharply.
Disclosure of Invention
In view of this, the present application provides a method, an apparatus, a device, and a storage medium for palm anti-counterfeiting with adaptive illumination scene.
In a first aspect, an embodiment of the present application provides an illumination scene adaptive palm anti-counterfeiting method, including:
acquiring a palm image to be detected, and extracting a target area palm image from the palm image to be detected;
inputting the target area palmogram into a pre-trained palm anti-counterfeiting model, and outputting a category probability value; the palm anti-counterfeiting model is obtained by training a palm anti-counterfeiting network by adopting an image sample marked with illumination scene information and living body/non-living body information, the palm anti-counterfeiting network comprises a characteristic pyramid network, a full convolution network, a plurality of scale convolution networks and a palm living body classifier, and one scale convolution network is used for processing the image sample of one illumination scene;
and judging whether the palm image to be detected is a living body palm image according to the category probability value.
In a second aspect, an embodiment of the present application provides an illumination scene adaptive palm anti-counterfeiting device, where the device includes:
the image acquisition module is used for acquiring a palm image to be detected and extracting a target area palm image from the palm image to be detected;
the category probability value output module is used for inputting the target area palmogram into a pre-trained palm anti-counterfeiting model and outputting a category probability value; the palm anti-counterfeiting model is obtained by training a palm anti-counterfeiting network by adopting an image sample marked with illumination scene information and living body/non-living body information, the palm anti-counterfeiting network comprises a characteristic pyramid network, a full convolution network, a plurality of scale convolution networks and a palm living body classifier, and one scale convolution network is used for processing the image sample of one illumination scene;
and the judging module is used for judging whether the palm image to be detected is a living body palm image according to the category probability value.
In a third aspect, an embodiment of the present application provides a terminal device, including: a memory; one or more processors coupled with the memory; one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, the one or more application programs being configured to perform the illumination scene adaptive palm anti-counterfeiting method provided by the first aspect above.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where a program code is stored in the computer-readable storage medium, and the program code may be invoked by a processor to execute the illumination scene adaptive palm anti-counterfeiting method provided in the first aspect.
According to the illumination scene self-adaptive palm anti-counterfeiting method, the illumination scene self-adaptive palm anti-counterfeiting device, the illumination scene self-adaptive palm anti-counterfeiting equipment and the storage medium, firstly, a to-be-detected palm image is obtained, and a target area palm image is extracted from the to-be-detected palm image; inputting the target area palmogram into a pre-trained palm anti-counterfeiting model, and outputting a category probability value; the palm anti-counterfeiting model is obtained by training a palm anti-counterfeiting network by adopting an image sample marked with illumination scene information and living body/non-living body information, the palm anti-counterfeiting network comprises a characteristic pyramid network, a full convolution network, a plurality of scale convolution networks and a palm living body classifier, and one scale convolution network is used for processing the image sample of one illumination scene; and finally, judging whether the palm image to be detected is the living body palm image according to the category probability value.
According to the illumination-scene-adaptive palm anti-counterfeiting method, the palm anti-counterfeiting network is trained with image samples labeled with illumination-scene information and live/non-live information (i.e., the illumination scene is known, and it is known whether the sample is a live or non-live palm) to obtain the pre-trained palm anti-counterfeiting model. The pre-trained model can therefore identify palm images to be detected under different illumination scenes and judge whether they are live or non-live palms. The method can thus perform anti-counterfeiting authentication on palm images to be detected under different illumination scenes, with high recognition accuracy.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, it is obvious that the drawings in the following description are only embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
Fig. 1 is an application scenario schematic diagram of a lighting scenario adaptive palm anti-counterfeiting method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a method for palm anti-counterfeiting with adaptive illumination scene according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a palm anti-counterfeit network according to an embodiment of the present application;
fig. 4 is a schematic diagram of a palm anti-counterfeiting method adaptive to an illumination scene according to an embodiment of the present application;
FIG. 5 is a schematic flowchart of a palm anti-counterfeit model training process according to an embodiment of the present disclosure;
FIG. 6 is a diagram illustrating brightness adjustment of an image sample according to an embodiment of the present application;
fig. 7 is a block diagram of a lighting scene adaptive palm anti-counterfeiting device according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a terminal device provided in an embodiment of the present application;
fig. 9 is a schematic structural diagram of a computer-readable storage medium provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below, and it should be understood that the described embodiments are only a part of the embodiments of the present application, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
For more detailed explanation of the present application, a lighting scene adaptive palm anti-counterfeiting method, a lighting scene adaptive palm anti-counterfeiting device, a terminal device and a computer storage medium provided by the present application are specifically described below with reference to the accompanying drawings.
Referring to fig. 1, fig. 1 is a schematic diagram of an application scenario of the illumination-scene-adaptive palm anti-counterfeiting method provided in this embodiment of the application. The application scenario includes a terminal device 100 provided in this embodiment; the terminal device 100 may be any of various electronic devices having a display screen (such as devices 102, 104, 106, and 108 shown in the figure), including but not limited to a smartphone, a computer device, and a palm recognition device (e.g., an attendance device), where the computer device may be at least one of a desktop computer, a portable computer, a laptop computer, a tablet computer, and the like. When the palm image of the user is collected, the terminal device 100 executes the illumination-scene-adaptive palm anti-counterfeiting method of the application; for the specific process, refer to the method embodiments below.
Next, the terminal device 100 may be generally referred to as one of a plurality of terminal devices, and the present embodiment is only illustrated by the terminal device 100. Those skilled in the art will appreciate that the number of terminal devices described above may be greater or fewer. For example, the number of the terminal devices may be only a few, or the number of the terminal devices may be tens of or hundreds, or may be more, and the number and the type of the terminal devices are not limited in the embodiment of the present application. The terminal device 100 may be configured to perform an illumination scene adaptive palm anti-counterfeiting method provided in the embodiment of the present application.
In an optional implementation manner, the application scenario may include a server in addition to the terminal device 100 provided in the embodiment of the present application, where a network is disposed between the server and the terminal device. Networks are used as the medium for providing communication links between terminal devices and servers. The network may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
It should be understood that the number of terminal devices, networks, and servers are merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. For example, the server may be a server cluster composed of a plurality of servers. Wherein, the terminal device interacts with the server through the network to receive or send messages and the like. The server may be a server that provides various services. The server may be configured to perform the steps of the illumination scene adaptive palm anti-counterfeiting method provided in the embodiment of the present application. In addition, when the terminal device executes the illumination scene adaptive palm anti-counterfeiting method provided in the embodiment of the present application, a part of the steps may be executed at the terminal device, and a part of the steps may be executed at the server, which is not limited herein.
Based on this, the embodiment of the application provides a palm anti-counterfeiting method with a self-adaptive illumination scene. Referring to fig. 2, fig. 2 is a schematic flowchart illustrating a flow chart of an illumination scene adaptive palm anti-counterfeiting method according to an embodiment of the present application, which is described by taking the method applied to the terminal device in fig. 1 as an example, and includes the following steps:
step S110, acquiring a palm image to be detected, and extracting a target area palm image from the palm image to be detected.
The palm image to be detected refers to any palm image requiring anti-counterfeiting detection. It may be a picture of a user's palm captured by an image acquisition device (such as a smart terminal or a camera device).
In addition, the orientation (i.e., whether the image is tilted), color, size, and resolution of the palm image to be detected are not limited, as long as the minimum requirements for image recognition are met.
The target-area palm map is an ROI (region of interest) of a preset size extracted from the palm image to be detected; in machine vision and image processing, a region of interest is delineated from the processed image with a box, circle, ellipse, irregular polygon, or the like. The preset size may be 122 × 122. Extracting a target-area palm map from the palm image to be detected makes the obtained palm information richer and facilitates the later anti-counterfeiting judgment.
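The ROI extraction above can be sketched as follows. This is a minimal illustration only: the patent specifies the 122 × 122 preset size but not how the ROI is located, so the centre-crop placement and the function name `extract_target_roi` are assumptions.

```python
import numpy as np

def extract_target_roi(palm_image: np.ndarray, size: int = 122) -> np.ndarray:
    """Crop a size x size region of interest from the palm image.

    The 122 x 122 preset size follows the patent; placing the ROI at the
    image centre is an illustrative assumption (a real system would locate
    the palm first with a detection step).
    """
    h, w = palm_image.shape[:2]
    if h < size or w < size:
        raise ValueError("palm image smaller than ROI size")
    top = (h - size) // 2
    left = (w - size) // 2
    return palm_image[top:top + size, left:left + size]
```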
And step S120, inputting the target area palmogram into a pre-trained palm anti-counterfeiting model, and outputting a category probability value.
The palm anti-counterfeiting model is obtained by training a palm anti-counterfeiting network by adopting an image sample marked with illumination scene information and living body/non-living body information, the palm anti-counterfeiting network comprises a characteristic pyramid network, a full convolution network, a plurality of scale convolution networks and a palm living body classifier, and one scale convolution network is used for processing the image sample of one illumination scene.
Specifically, model training consists of giving an input vector and a target output value, feeding the input vector into one or more network structures or functions to obtain an actual output value, computing a deviation from the target and actual output values, and judging whether the deviation is within an allowable range. If it is, training ends and the related parameters are fixed; if not, parameters in the network structure or function are adjusted iteratively until the deviation falls within the allowable range or some stopping condition is reached, after which training ends and the related parameters are fixed. The trained model is then obtained from the fixed parameters.
The training of the palm anti-counterfeiting model in this embodiment actually comprises: inputting image samples labeled with illumination-scene information and live/non-live information into the improved palm anti-counterfeiting network as input vectors, with the image categories of the samples as target output values; computing the hidden layers and the output of each layer of units, and computing the deviation between the target and actual output values. When the deviation is outside the allowable range, the errors of the neurons in the network layers are computed, the error gradients are obtained, and the weights are updated; the hidden layers and layer outputs are then recomputed and the deviation re-evaluated, until the deviation falls within the allowable range, at which point training ends and the weights and thresholds are fixed, yielding the pre-trained palm anti-counterfeiting model. As shown in fig. 3, the palm anti-counterfeiting network comprises a feature pyramid network, a fully convolutional network, multiple scale convolution networks, and a palm liveness classifier; each scale convolution network is used to process the image samples of only one illumination scene, so different scale convolution networks handle image samples of different illumination scenes.
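The train-until-the-deviation-is-allowable loop described above can be sketched with a toy stand-in for the network. Everything here is a simplification for illustration: a single linear layer replaces the palm anti-counterfeiting network, mean squared error is the assumed deviation measure, and the data, learning rate, and tolerance are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the palm anti-counterfeiting network: one linear layer.
w = rng.normal(size=(4,))                   # "related parameters" to be fixed
x = rng.normal(size=(64, 4))                # 64 toy "image samples" (input vectors)
y = x @ np.array([1.0, -2.0, 0.5, 3.0])     # target output values

lr, tol = 0.05, 1e-4
for step in range(10_000):
    pred = x @ w                            # forward pass -> actual output value
    err = pred - y                          # deviation from target output
    if np.mean(err ** 2) < tol:             # deviation within allowable range: stop
        break
    w -= lr * (x.T @ err) / len(x)          # adjust parameters (gradient step)
```

After the loop exits, `w` plays the role of the fixed parameters from which the trained model is obtained.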
Furthermore, the lighting scene information is used to determine a lighting scene in which the image sample is located, such as a high-light scene, a normal-light scene, or a dark-light scene. The live/non-live information is used to determine whether the image sample is a live image or a non-live image.
Image categories include, but are not limited to: high-light scene live, high-light scene non-live, normal-light scene live, normal-light scene non-live, dark-light scene live, and dark-light scene non-live.
And step S130, judging whether the palm image to be detected is the living body palm image according to the category probability value.
Specifically, after the target-area palm map is processed by the palm anti-counterfeiting model, a probability value for each image category is output, and the image category can be determined from these category probability values. Optionally, the category corresponding to the maximum category probability value is selected as the category of the palm image to be detected; whether the palm image to be detected is a live palm image is then judged from this category.
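The max-probability decision above can be sketched as follows. The six categories and their ordering are an assumption for illustration, as are the names `CATEGORIES` and `is_live_palm`; the patent only requires picking the category with the highest probability and reading off liveness.

```python
# Assumed fixed ordering of the six scene/liveness categories.
CATEGORIES = [
    "high_light_live", "high_light_spoof",
    "normal_light_live", "normal_light_spoof",
    "dark_light_live", "dark_light_spoof",
]

def is_live_palm(class_probs):
    """Pick the category with the highest probability; report liveness and category."""
    best = max(range(len(class_probs)), key=lambda i: class_probs[i])
    category = CATEGORIES[best]
    return category.endswith("_live"), category
```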
In one embodiment, in step S120, inputting the target-area palm map into the pre-trained palm anti-counterfeiting model comprises: judging whether the illumination brightness of the target-area palm map is within a preset brightness range; and if so, inputting the target-area palm map into the pre-trained palm anti-counterfeiting model.
Specifically, because the illumination brightness l of the environment in which the target-area palm map was captured is unknown, and severely over-exposed or over-dark palm images cannot be used for identification (so performing anti-counterfeiting on palms under such illumination is of little value), a frame filtering unit may be added in this embodiment. Referring to fig. 4, over-exposed and over-dark palm images (i.e., target-area palm maps) are filtered out, and only palm images whose illumination brightness falls within a certain range are retained for anti-counterfeiting processing; the retention range of the illumination brightness l is set between 20 lux and 10000 lux (i.e., the preset brightness range in this embodiment). If l is lower than 20 lux, the target-area palm map is considered too dark; if l is higher than 10000 lux, it is considered over-exposed; in either case the target-area palm map is discarded. When the illumination brightness l falls within the retention range, the illumination scene of the target-area palm map is predicted by a scene prediction unit (i.e., the structure formed by the feature pyramid network, the fully convolutional network, and the multiple scale convolution networks in the palm anti-counterfeiting model). The prediction result contains prediction scores for three scenes: a high-light scene prediction score S_hw, a normal-light scene prediction score S_sn, and a dark-light scene prediction score S_ag, where
S_hw + S_sn + S_ag = 1, and 0 < S_hw < 1, 0 < S_sn < 1, 0 < S_ag < 1.
The scene category with the highest prediction score is taken as the final classified scene. In this embodiment, a frame filtering unit is provided, and by extracting the illumination brightness of the test image, images of poor brightness quality can be effectively removed according to the set brightness range, especially in outdoor strong-light environments. Before palm anti-counterfeiting prediction, the scene prediction unit makes the palm anti-counterfeiting model suitable for palm liveness prediction under different illumination brightness, improves the model's anti-counterfeiting performance under different illumination scenes, reduces the model's sensitivity to illumination changes, and improves the anti-counterfeiting performance and stability of the algorithm.
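The frame-filtering gate above reduces to a simple range check. The 20–10000 lux bounds come from the patent; the function name and the inclusive treatment of the boundary values are assumptions.

```python
def passes_brightness_gate(lux: float, low: float = 20.0, high: float = 10000.0) -> bool:
    """Keep only palm images whose illumination brightness l lies in [20, 10000] lux.

    Frames below `low` are treated as too dark, frames above `high` as
    over-exposed; both are discarded before anti-counterfeiting processing.
    """
    return low <= lux <= high
```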
The palm features are then input into the palm liveness classifier for liveness prediction, which outputs the prediction probabilities P_l (true, i.e. live palm) and P_f (false, i.e. non-live palm). Combining these with the scene prediction results gives, respectively: high-light-scene live (i.e., outdoor live) P_hw_l, high-light-scene non-live (i.e., outdoor fake) P_hw_f, normal-light-scene live (i.e., indoor live) P_sn_l, normal-light-scene non-live (i.e., indoor fake) P_sn_f, dark-light-scene live (i.e., dim-light live) P_ag_l, and dark-light-scene non-live (i.e., dim-light fake) P_ag_f. The prediction probability scores of live and non-live under each scene are:
P_hw_l = S_hw · P_l,  P_sn_l = S_sn · P_l,  P_ag_l = S_ag · P_l
P_hw_f = S_hw · P_f,  P_sn_f = S_sn · P_f,  P_ag_f = S_ag · P_f
classify_cj_lf = max{P_hw_l, P_hw_f, P_sn_l, P_sn_f, P_ag_l, P_ag_f}
The final illumination scene and liveness category are the scene and category corresponding to classify_cj_lf.
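The score fusion above can be sketched directly: each scene score multiplies each liveness probability, and the largest product wins. The function name `fuse_scores` and the dictionary-based representation are illustrative choices; the six products correspond to the patent's P_hw_l … P_ag_f.

```python
def fuse_scores(scene_scores: dict, p_live: float, p_fake: float):
    """Fuse scene prediction scores with liveness probabilities.

    scene_scores: mapping like {"high": S_hw, "normal": S_sn, "dark": S_ag},
    assumed to sum to 1. Returns the winning (scene, liveness) pair and its
    fused score, mirroring classify_cj_lf = max over the six products.
    """
    fused = {}
    for scene, s in scene_scores.items():
        fused[(scene, "live")] = s * p_live
        fused[(scene, "fake")] = s * p_fake
    best = max(fused, key=fused.get)
    return best, fused[best]
```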
The illumination scene self-adaptive palm anti-counterfeiting method provided by the embodiment of the application comprises the steps of firstly obtaining a palm image to be detected, and extracting a target area palm image from the palm image to be detected; inputting the target area palmogram into a pre-trained palm anti-counterfeiting model, and outputting a category probability value; the palm anti-counterfeiting model is obtained by training a palm anti-counterfeiting network by adopting an image sample marked with illumination scene information and living body/non-living body information, the palm anti-counterfeiting network comprises a characteristic pyramid network, a full convolution network, a plurality of scale convolution networks and a palm living body classifier, and one scale convolution network is used for processing the image sample of one illumination scene; and finally, judging whether the palm image to be detected is the living body palm image according to the category probability value.
According to the illumination-scene-adaptive palm anti-counterfeiting method, the palm anti-counterfeiting network is trained with image samples labeled with illumination-scene information and live/non-live information (i.e., the illumination scene is known, and it is known whether the sample is a live or non-live palm) to obtain the pre-trained palm anti-counterfeiting model. The pre-trained model can therefore identify palm images to be detected under different illumination scenes and judge whether they are live or non-live palms. The method can thus perform anti-counterfeiting authentication on palm images to be detected under different illumination scenes, with high recognition accuracy.
Next, an embodiment of training of the palm anti-counterfeiting model is further given, and the detailed description is as follows:
referring to fig. 5, in an embodiment, a training method of a palm anti-counterfeit model includes:
step S210, acquiring an image sample, and marking the image sample by respectively adopting the illumination scene information and the living body/non-living body information to obtain the image sample marked with the illumination scene information and the living body/non-living body information.
Specifically, a relatively large number (e.g., tens, hundreds, thousands, or tens of thousands) of image samples is prepared first. The image samples may include palm pictures collected from live persons and palm pictures obtained by shooting with a photographing device. Generally, the more image samples, the more accurate the trained model; but too many image samples slow down training. In practice an appropriate number is chosen; for example, 64 image samples may be selected, comprising 32 live samples and 32 non-live samples (i.e., 32 prostheses). A data training set may be established when preparing the image samples, and the image samples are stored in the data training set.
Alternatively, multiple initial palm images may be acquired at the time of acquiring the image sample, and a target region palm map is extracted from each initial palm image to form the image sample, for example, the initial palm images are cropped to 122 × 122 target region palm maps.
After the image samples are acquired, they need to be marked with illumination-scene information and live/non-live information. Optionally, during marking, an illumination-scene label may be used to mark the illumination-scene information of each image sample, and live/non-live labels used to mark the live/non-live information. The specific process is as follows: image-sample marking mainly attaches the corresponding class labels to the collected image samples. A target-area palm map is typically marked with two label categories, namely an illumination-scene label and a live/non-live label. For example, if a target-area palm map is a live palm captured in a high-light scene, a high-light-scene label and a live label are attached to it.
In one embodiment, marking the image samples with illumination-scene information comprises: computing the average pixel values of the three channels using the image within a preset range of the image sample; computing the image brightness value of the image sample from the three channel averages; and comparing the image brightness value with preset brightness thresholds and marking the image sample with illumination-scene information according to the comparison result.
Specifically, the R-channel pixel mean value, the G-channel pixel mean value and the B-channel pixel mean value are calculated according to the image within a preset range (for example, within 0.6 times of the center of the image sample) of the image sample, and then the image average brightness is calculated according to the R-channel pixel mean value, the G-channel pixel mean value and the B-channel pixel mean value, which is specifically calculated as follows
R_mean = (1 / (hc · wc)) · Σ_{i=1..hc} Σ_{j=1..wc} Rt(i, j)
G_mean = (1 / (hc · wc)) · Σ_{i=1..hc} Σ_{j=1..wc} Gt(i, j)
B_mean = (1 / (hc · wc)) · Σ_{i=1..hc} Σ_{j=1..wc} Bt(i, j)
hc = ceil(0.6 * h), wc = ceil(0.6 * w)
where h and w denote the height and width of the image sample (which may be 122 × 122), and hc and wc denote the image height and image width within 0.6 times of the center of the image sample; if hc or wc is a floating-point value, it is rounded up. Rt(i, j), Gt(i, j) and Bt(i, j) denote the R, G and B channel pixel values in the ith row and jth column within 0.6 times of the center of the image sample, and R_mean, G_mean and B_mean denote the R, G and B channel pixel means, respectively.
The channel pixel means are calculated from the image within 0.6 times of the center of the 122 × 122 image sample because this region contains as much palm texture detail as possible while reducing the influence, on the pixel-mean calculation, of background illumination in the palm ROI area detected from the original image by the palm detection algorithm, thereby improving the accuracy of the brightness calculation. The palm image brightness is then calculated from the three-channel pixel means of the palm area within this 0.6-times range as follows:
Limg=0.299*R_mean+0.587*G_mean+0.114*B_mean
where Limg denotes the image brightness value.
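As a concrete illustration, the brightness calculation above can be sketched in Python. The function name and the assumption that the 0.6-times region is centred are ours, not from the patent:

```python
import math

def image_brightness(pixels):
    """Estimate the brightness of an h x w RGB image (nested lists of
    (R, G, B) tuples), using only the central 0.6*h x 0.6*w region to
    reduce the influence of background illumination on the pixel means."""
    h, w = len(pixels), len(pixels[0])
    hc, wc = math.ceil(0.6 * h), math.ceil(0.6 * w)  # round up if fractional
    top, left = (h - hc) // 2, (w - wc) // 2         # assumed: region is centred
    r_sum = g_sum = b_sum = 0
    for i in range(top, top + hc):
        for j in range(left, left + wc):
            r, g, b = pixels[i][j]
            r_sum, g_sum, b_sum = r_sum + r, g_sum + g, b_sum + b
    n = hc * wc
    r_mean, g_mean, b_mean = r_sum / n, g_sum / n, b_sum / n
    # Limg = 0.299*R_mean + 0.587*G_mean + 0.114*B_mean (BT.601 luma weights)
    return 0.299 * r_mean + 0.587 * g_mean + 0.114 * b_mean
```

For a uniform gray image of value 100, the weights sum to 1 and the brightness is 100.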
For the image sample, after the image brightness value is calculated, Limg can be compared with preset brightness thresholds μα and μβ to determine whether the image sample is an (outdoor) high-light scene, an (indoor) normal-light scene, or a dark-light scene. Finally, a scene pseudo label is set for each picture in the image sample according to the determined illumination scene. The specific scene label settings may be as follows:
scene label = (outdoor) high-light scene label, if μα < Limg ≤ 10000
scene label = (indoor) normal-light scene label, if μβ < Limg ≤ μα
scene label = dark-light scene label, if 20 < Limg ≤ μβ
where the preset brightness thresholds comprise μα and μβ: μα is set to 4500 lux to distinguish (outdoor) high-light scenes from (indoor) normal-light scenes, and μβ is set to 300 lux to distinguish (indoor) normal-light scenes from dark-light scenes.
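A minimal sketch of this thresholding rule (the scene names and function interface are illustrative, not from the patent):

```python
MU_ALPHA = 4500  # lux; threshold between strong and normal illumination
MU_BETA = 300    # lux; threshold between normal and dark illumination

def scene_pseudo_label(l_img):
    """Map an image brightness value Limg to an illumination-scene pseudo label."""
    if l_img > MU_ALPHA:
        return "strong"  # (outdoor) high-light scene
    if l_img > MU_BETA:
        return "normal"  # (indoor) normal-light scene
    return "dark"        # dark-light scene
```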
Alternatively, the scene pseudo label may be adaptively determined by an algorithm when the scene pseudo label is set for the image sample.
According to the result of comparing the image brightness value of the image sample with the preset brightness threshold, an illumination scene label can be added to the image sample, and brightness augmentation can be performed to prevent the data of each scene from being unbalanced. The specific method of brightness augmentation is as follows:
in one embodiment, labeling an image sample with illumination scene information and living/non-living information, respectively, to obtain an image sample labeled with illumination scene information and living/non-living information, comprises: respectively adopting illumination scene information and living body/non-living body information to mark the initial image sample so as to obtain the initial image sample marked with the illumination scene information and the living body/non-living body information; adjusting the illumination brightness of the initial image sample, and updating the illumination scene information of the image sample after the illumination brightness adjustment to obtain an adjusted image sample marked with the illumination scene information and living body/non-living body information; and forming the image sample marked with the illumination scene information and the living body/non-living body information according to the initial image sample marked with the illumination scene information and the living body/non-living body information and the adjustment image sample marked with the illumination scene information and the living body/non-living body information.
In one embodiment, performing illumination brightness adjustment on an initial image sample, and updating illumination scene information of the image sample after illumination brightness adjustment includes: and adjusting the brightness of the initial image sample twice, and updating the illumination scene information of the image sample after each illumination brightness adjustment, so that the illumination scene information of the formed image sample comprises strong illumination scene information, normal illumination scene information and dark illumination scene information.
In particular, when the illumination scene label of the image sample is the (outdoor) strong illumination scene, the image brightness L0 ∈ (μα, 10000] is dimmed twice: one dimming yields an (indoor) normal illumination scene, and a second dimming yields a dark illumination scene. When the illumination scene label of the image sample is the (indoor) normal illumination scene, the image brightness L1 ∈ (μβ, μα] is brightened once and dimmed once: brightening once yields an (outdoor) strong illumination scene, and dimming once yields a dark illumination scene. When the illumination scene label of the image sample is the dark illumination scene, the image brightness L2 ∈ (20, μβ] is brightened twice: one brightening yields an (indoor) normal illumination scene, and a second brightening yields an (outdoor) strong illumination scene.
The specific brightness adjustment range requires calculating a corresponding brightness adjustment coefficient from the brightness value of each image. Taking the (outdoor) strong illumination scene as an example, to ensure that the sample falls within the (indoor) normal illumination scene range after one brightness adjustment and within the dark illumination scene range after a second adjustment, the adjustment coefficients and the adjusted brightness are calculated as follows:
λmin = μβ / L0, λmax = μα / L0
L0_1 = [λmin · L0, λmax · L0] ∈ (μβ, μα]
ηmin = 20 / L0_1, ηmax = μβ / L0_1
L0_2 = [ηmin · L0_1, ηmax · L0_1] ∈ (20, μβ]
where ceil denotes rounding a floating-point number up and floor denotes rounding it down. λmin and λmax denote the lower and upper limits of the brightness adjustment coefficient from the (outdoor) strong illumination scene to the (indoor) normal illumination scene; ηmin and ηmax denote the lower and upper limits of the brightness adjustment coefficient from the (indoor) normal illumination scene to the dark illumination scene. L0_1 denotes the palm brightness of the (outdoor) strong illumination scene after one brightness adjustment, falling within the (indoor) normal illumination scene range; L0_2 denotes the palm brightness of the (outdoor) strong illumination scene after a second brightness adjustment, falling within the dark illumination scene range. The upper limit of brightness is set to 10000 lux and the lower limit to 20 lux.
The image brightness adjustment for the (indoor) normal illumination scene and the dark illumination scene is similar to that for the (outdoor) strong illumination scene and is not described again here.
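Under the assumption that the adjustment coefficient is simply the ratio of the target brightness bound to the current brightness (the patent's exact rounding of the coefficients is not reproduced here), the two-step dimming of a strong-illumination sample can be sketched as:

```python
import random

UPPER, MU_ALPHA, MU_BETA, LOWER = 10000, 4500, 300, 20  # lux bounds from the text

def adjust_into_range(l, lo, hi):
    """Scale brightness l into [lo, hi] by sampling an adjustment coefficient
    from [lo / l, hi / l]; works for both dimming (coef < 1) and brightening."""
    coef = random.uniform(lo / l, hi / l)
    return coef * l

# A strong-illumination sample: one dimming lands in the normal range,
# a second dimming lands in the dark range.
l0 = 6000.0
l0_1 = adjust_into_range(l0, MU_BETA, MU_ALPHA)   # ~ L0_1 in (mu_beta, mu_alpha]
l0_2 = adjust_into_range(l0_1, LOWER, MU_BETA)    # ~ L0_2 in (20, mu_beta]
```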
The illumination scene of the image sample with the adjusted brightness is changed, and at this time, the illumination scene label of the image sample is updated, and the updated result is shown in fig. 6.
And step S220, constructing a palm anti-counterfeiting network.
The palm anti-counterfeiting network comprises a characteristic pyramid network, a full convolution network, a plurality of scale convolution networks and a palm living body classifier.
Referring to fig. 3, the palm anti-counterfeiting network includes a feature pyramid network, a full convolution network (i.e., the feature extractor in fig. 3), a plurality of scale convolution networks and a palm living body classifier, wherein the plurality of scale convolution networks refer to convolution modules with different convolution kernel sizes, such as 3 × 3, 5 × 5 and 7 × 7. However, for one image sample, not all three scale convolution operations are performed: when the network structure is designed, a branch switch is used to select one scale of convolution kernel, according to the scene classification loss value of the activated illumination scene classification neuron, to perform convolution processing on the image sample and obtain the subsequent feature map.
A Feature Pyramid Network (FPN) is a feature extractor designed according to the feature pyramid concept, with the aim of improving accuracy and speed. The FPN consists of a bottom-up pathway and a top-down pathway with lateral connections. The bottom-up pathway uses a conventional convolution network for feature extraction; as the convolution deepens, the spatial resolution decreases and spatial information is lost, but more high-level semantic information is detected. Feature maps of the same size in the pyramid (i.e., in the bottom-up and top-down pathways) belong to the same level, and the final layer output of each level forms a feature map of the pyramid. The lateral connections apply a 1 × 1 convolution and sum the result with the upsampled top-down feature map. The top-down pathway generates coarse-grained features, to which the bottom-up pathway adds fine-grained features through the lateral connections.
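The bottom-up, top-down, and lateral-connection idea can be illustrated with a toy one-dimensional sketch (pure Python; a real FPN uses strided convolutions on 2-D feature maps and 1 × 1 convolutions on the lateral connections):

```python
def avg_pool(x):
    """Bottom-up step: halve resolution by averaging adjacent pairs."""
    return [(x[i] + x[i + 1]) / 2 for i in range(0, len(x) - 1, 2)]

def upsample(x):
    """Top-down step: double resolution by nearest-neighbour repetition."""
    return [v for v in x for _ in range(2)]

def lateral_merge(top_down, lateral):
    """Lateral connection: elementwise sum of the upsampled top-down map
    with the same-size bottom-up map (stand-in for the 1x1 convolution)."""
    return [a + b for a, b in zip(upsample(top_down), lateral)]

c1 = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]  # finest bottom-up level
c2 = avg_pool(c1)           # halved resolution, more semantic
c3 = avg_pool(c2)           # coarsest level, top of the pyramid
p2 = lateral_merge(c3, c2)  # coarse semantics plus fine detail
p1 = lateral_merge(p2, c1)  # recovered at the finest resolution
```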
Step S230, inputting the image sample marked with the illumination scene information and the living body/non-living body information into a characteristic pyramid network for up-sampling operation, and outputting a first characteristic diagram; and carrying out down-sampling operation on the first feature map to obtain a second feature map.
Step S240, inputting the first feature map into the full convolution network, and outputting the global feature map.
In one embodiment, the palm anti-counterfeiting network further comprises a fourth scale convolutional network; inputting the image sample marked with the illumination scene information and the living body/non-living body information into a feature pyramid network for up-sampling operation, and outputting a first feature map, wherein the method comprises the following steps: inputting the image sample marked with the illumination scene information and the living body/non-living body information into a fourth scale convolution network, and outputting a fourth feature map; and inputting the fourth feature map into the feature pyramid network for up-sampling operation, and outputting the first feature map.
Specifically, before the image sample is input into the feature pyramid network, a fourth scale convolution network (e.g., a 1 × 1 convolution) may be used to obtain the image input feature map X (i.e., the fourth feature map). The feature pyramid network then performs the up-sampling operation (i.e., three average pooling operations) on the feature map X to generate a new feature map Xp (i.e., the first feature map). On the new feature map Xp, three operations are first carried out in reverse (i.e., the down-sampling operation), each laterally connected with the intermediate feature map of the same scale generated during pooling, finally recovering a feature map at the original resolution (i.e., the second feature map); this is done to highlight the image detail features. Second, a full convolution network is used as a feature extractor on Xp (i.e., the first feature map) to extract global features and obtain the global feature map.
And step S250, carrying out neuron activation of the illumination scene on the global feature map, and calculating classification loss function values under different illumination scenes.
After the global feature map is obtained, a normalization layer and an FReLU linear activation layer are connected on the global features to activate the illumination scene label classification neurons, yielding the classification loss function values loss_cj1, loss_cj2 and loss_cj3 under the three illumination scenes.
Step S260, selecting one scale convolution network from the plurality of scale convolution networks according to the classification loss function value.
And step S270, inputting the second feature map into the selected scale convolution network, and outputting the third feature map.
In one embodiment, the plurality of scale convolutional networks includes a first scale convolutional network, a second scale convolutional network, and a third scale convolutional network; selecting a scale convolution network from a plurality of scale convolution networks based on the classification loss function values, comprising: when the classification loss function value is a dark illumination scene loss function value, selecting a first scale convolution network; when the classification loss function value is the normal illumination scene loss function value, selecting a second scale convolution network; and when the classification loss function value is the loss function value of the high-light scene, selecting a third scale convolution network.
Specifically, the first scale convolutional network may be a 3 × 3 convolutional network, the second scale convolutional network may be a 7 × 7 convolutional network, and the third scale convolutional network may be a 5 × 5 convolutional network. Through analysis, the palm detail features of (indoor) normal illumination scenes are relatively more obvious, so a convolution kernel of 7 × 7 scale is adopted; its larger receptive field can focus on the overall features rather than the details. For (outdoor) high-light scenes, a convolution kernel of 5 × 5 scale is chosen, since bright light reflections may make the palm texture less noticeable, and by contrast more attention to texture details is required. For dark-light scenes, palm texture details are the least obvious, so small-scale convolution is most needed, highlighting detailed features through a 3 × 3 area on the feature map. Therefore, in this embodiment, the 3 × 3 convolution kernel corresponds to dark illumination scene neuron activation, the 5 × 5 convolution kernel corresponds to (outdoor) strong illumination scene neuron activation, and the 7 × 7 convolution kernel corresponds to (indoor) normal illumination scene neuron activation.
After the classification loss function values loss_cj1, loss_cj2 and loss_cj3 under the three illumination scenes are obtained, the illumination classification neuron with the minimum loss value is selected for activation, according to the following formula:
cj_act=min{loss_cj1,loss_cj2,loss_cj3}
if cj _ act is less _ cj1Activating the neurons in the dark illumination scene, closing the branch switch of the 3 x 3 convolution kernel, and extracting the partial features F of the palm in the dark illumination sceneag(ii) a If cj _ act is less _ cj2Activating (outdoor) bright illumination scene neurons, closing 5 x 5 convolution kernel branch switches, and extracting (outdoor) bright illumination scene palm local features Fhw(ii) a If cj _ act is less _ cj3Then activating (indoor) normal illumination scene neurons, closing 7 x 7 convolution kernel branch switches, and extracting indoor normal illumination palm local features Fsn. Wherein, the local characteristics F of the palm in the dark lighting sceneagLocal characteristics F of palm in (outdoor) bright lighting scenehwPartial characteristic F of palm in normal indoor illuminationsnDenoted as a third characteristic diagram.
And step S280, inputting the global feature map and the third feature map into the palm living body classifier, and outputting an actual classification result.
And step S290, adjusting the weight of the palm anti-counterfeiting network until the deviation between the actual classification result and the target classification result is within the allowable range, finishing training and obtaining the palm anti-counterfeiting model.
In particular, the global feature FG is fused with the local features Fag, Fhw or Fsn extracted under each scene to obtain the final palm features. On the basis of these features, a normalization layer and an FReLU linear activation layer are connected, the palm true/false class classification neurons are activated, the final class response output is obtained, and the palm anti-counterfeiting model training is completed.
In this embodiment, through the structural design of the network branch switch, image scene information can be obtained from the training of the illumination scene labels even when the acquisition illumination scene is unknown, and the use of the convolution kernels is controlled according to the scene predicted for each image. In this way, convolution operations with different receptive fields are carried out on palm images of different scenes, mining detail features at different depths, so that the model is adaptive to palm living body classification under different illuminations and the trained palm anti-counterfeiting model is more accurate.
It should be understood that although the steps in the flowcharts of fig. 2 and 5 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in fig. 2 and 5 may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times, and these sub-steps or stages are not necessarily performed sequentially but may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
The embodiment disclosed in the application describes an illumination scene adaptive palm anti-counterfeiting method in detail, and the method disclosed in the application can be realized by adopting equipment in various forms, so that the application also discloses an illumination scene adaptive palm anti-counterfeiting device corresponding to the method, and a specific embodiment is given below for detailed description.
Referring to fig. 7, a palm anti-counterfeiting device adaptive to an illumination scene disclosed in an embodiment of the present application mainly includes:
the image obtaining module 710 obtains a palm image to be detected, and extracts a palm image of a target region from the palm image to be detected.
The category probability value output module 720 is used for inputting the target area palmogram into a pre-trained palm anti-counterfeiting model and outputting a category probability value; the palm anti-counterfeiting model is obtained by training a palm anti-counterfeiting network by adopting an image sample marked with illumination scene information and living body/non-living body information, the palm anti-counterfeiting network comprises a characteristic pyramid network, a full convolution network, a plurality of scale convolution networks and a palm living body classifier, and one scale convolution network is used for processing the image sample of one illumination scene.
The judging module 730 is configured to judge whether the palm image to be detected is a live palm image according to the category probability value.
In one embodiment, an apparatus comprises:
and the sample acquiring and marking module is used for acquiring the image sample, and marking the image sample by respectively adopting the illumination scene information and the living body/non-living body information to obtain the image sample marked with the illumination scene information and the living body/non-living body information.
And the network construction module is used for constructing a palm anti-counterfeiting network, wherein the palm anti-counterfeiting network comprises a characteristic pyramid network, a full convolution network, a plurality of scale convolution networks and a palm living body classifier.
And the first characteristic diagram output module is used for inputting the image sample marked with the illumination scene information and the living body/non-living body information into the characteristic pyramid network for up-sampling operation and outputting a first characteristic diagram.
And the second feature map output module is used for carrying out downsampling operation on the first feature map to obtain a second feature map.
And the global feature map output module is used for inputting the first feature map into the full convolution network and outputting the global feature map.
And the loss function value calculation module is used for activating neurons of the global feature map in the illumination scene and calculating the classification loss function values in different illumination scenes.
And the convolutional network selection module is used for selecting one scale convolutional network from the plurality of scale convolutional networks according to the classification loss function value.
And the third feature map output module is used for inputting the second feature map into the selected scale convolution network and outputting the third feature map.
And the classification result output module is used for inputting the global characteristic diagram and the third characteristic diagram into the palm living body classifier and outputting an actual classification result.
And the palm anti-counterfeiting model obtaining module is used for adjusting the weight of the palm anti-counterfeiting network until the deviation between the actual classification result and the target classification result is within an allowable range, finishing training and obtaining the palm anti-counterfeiting model.
In one embodiment, the system includes a sample acquisition and labeling module for acquiring a plurality of initial palm images, extracting a target region palm map from each of the initial palm images to form an image sample.
In one embodiment, the sample acquiring and marking module is configured to mark the initial image sample with illumination scene information and living body/non-living body information, respectively, to obtain an initial image sample marked with the illumination scene information and the living body/non-living body information; adjusting the illumination brightness of the initial image sample, and updating the illumination scene information of the image sample after the illumination brightness adjustment to obtain an adjusted image sample marked with the illumination scene information and living body/non-living body information; and forming the image sample marked with the illumination scene information and the living body/non-living body information according to the initial image sample marked with the illumination scene information and the living body/non-living body information and the adjustment image sample marked with the illumination scene information and the living body/non-living body information.
In one embodiment, the system comprises a sample acquiring and marking module, a sampling module and a processing module, wherein the sample acquiring and marking module is used for respectively calculating average pixel values of three channels by adopting a preset intra-model image of an image sample; calculating an image brightness value of the image sample according to the average pixel values of the three channels; and comparing the image brightness value with a preset brightness threshold value, and marking the image sample by adopting illumination scene information according to a comparison result.
In one embodiment, the sample acquiring and marking module is configured to perform brightness adjustment on an initial image sample twice, and update illumination scene information of the image sample after each illumination brightness adjustment, so that the illumination scene information of the formed image sample includes strong illumination scene information, normal illumination scene information, and dark illumination scene information.
In one embodiment, the plurality of scale convolutional networks includes a first scale convolutional network, a second scale convolutional network, and a third scale convolutional network; the convolution network selection module is used for selecting a first scale convolution network when the classification loss function value is a dark illumination scene loss function value; when the classification loss function value is the normal illumination scene loss function value, selecting a second scale convolution network; and when the classification loss function value is the loss function value of the high-light scene, selecting a third scale convolution network.
In one embodiment, the palm anti-counterfeiting network further comprises a fourth scale convolutional network; the first feature map output module is used for inputting the image sample marked with the illumination scene information and the living body/non-living body information into a fourth scale convolution network and outputting a fourth feature map; and inputting the fourth feature map into the feature pyramid network for up-sampling operation, and outputting the first feature map.
In an embodiment, the determining module 730 is configured to select a classification category corresponding to the maximum category probability value as the classification category of the to-be-detected palm image; and judging whether the palm image to be detected is a living body palm image according to the classification of the palm image to be detected.
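The judgment described above can be sketched as picking the class with the maximum category probability (the class names are illustrative, not from the patent):

```python
def is_live_palm(class_probs):
    """Select the classification category with the maximum category probability
    and report whether the palm image to be detected is judged a live palm.
    `class_probs` maps class name -> probability value."""
    best = max(class_probs, key=class_probs.get)
    return best == "live"
```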
In an embodiment, the category probability value output module 720 is configured to determine whether the illumination brightness of the target area palmogram is within a preset brightness range, and if so, input the target area palmogram into the pre-trained palm anti-counterfeiting model.
For specific limitations of the palm anti-counterfeiting device with adaptive illumination scene, reference may be made to the above limitations on the method, which are not described herein again. The various modules in the above-described apparatus may be implemented in whole or in part by software, hardware, and combinations thereof. The modules can be embedded in a hardware form or independent of a processor in the terminal device, and can also be stored in a memory in the terminal device in a software form, so that the processor can call and execute operations corresponding to the modules.
Referring to fig. 8, fig. 8 is a block diagram illustrating a structure of a terminal device according to an embodiment of the present application. The terminal device 80 may be a computer device. The terminal device 80 in the present application may include one or more of the following components: a processor 82, a memory 84, and one or more applications, wherein the one or more applications may be stored in the memory 84 and configured to be executed by the one or more processors 82, the one or more applications configured to perform the methods described in the above-described illumination scene adaptive palm anti-counterfeiting method embodiments.
The processor 82 may include one or more processing cores. The processor 82 connects various parts within the overall terminal device 80 using various interfaces and lines, and performs various functions of the terminal device 80 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 84, and calling data stored in the memory 84. Alternatively, the processor 82 may be implemented in hardware using at least one of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 82 may integrate one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, user interface, application programs and the like; the GPU is used for rendering and drawing display content; the modem is used to handle wireless communications. It is understood that the modem may be implemented by a communication chip rather than being integrated into the processor 82.
The Memory 84 may include a Random Access Memory (RAM) or a Read-Only Memory (Read-Only Memory). The memory 84 may be used to store instructions, programs, code sets or instruction sets. The memory 84 may include a stored program area and a stored data area, wherein the stored program area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing various method embodiments described below, and the like. The storage data area may also store data created by the terminal device 80 in use, and the like.
Those skilled in the art will appreciate that the structure shown in fig. 8 is a block diagram of only a portion of the structure relevant to the present application, and does not constitute a limitation on the terminal device to which the present application is applied; a particular terminal device may include more or fewer components than those shown in the drawings, or combine some components, or have a different arrangement of components.
In summary, the terminal device provided in the embodiment of the present application is used to implement the corresponding illumination scene adaptive palm anti-counterfeiting method in the foregoing method embodiment, and has the beneficial effects of the corresponding method embodiment, which are not described herein again.
Referring to fig. 9, a block diagram of a computer-readable storage medium according to an embodiment of the present disclosure is shown. The computer readable storage medium 90 stores a program code, which can be invoked by a processor to perform the method described in the above embodiment of the illumination scene adaptive palm anti-counterfeiting method.
The computer-readable storage medium 90 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read only memory), an EPROM, a hard disk, or a ROM. Alternatively, the computer-readable storage medium 90 includes a non-transitory computer-readable storage medium. The computer readable storage medium 90 has storage space for program code 92 for performing any of the method steps of the method described above. The program code can be read from or written to one or more computer program products. The program code 92 may be compressed, for example, in a suitable form.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (13)

1. An illumination scene adaptive palm anti-counterfeiting method is characterized by comprising the following steps:
acquiring a palm image to be detected, and extracting a target area palm image from the palm image to be detected;
inputting the target area palm image into a pre-trained palm anti-counterfeiting model, and outputting a category probability value; the palm anti-counterfeiting model is obtained by training a palm anti-counterfeiting network with image samples marked with illumination scene information and living body/non-living body information, the palm anti-counterfeiting network comprises a feature pyramid network, a full convolution network, a plurality of scale convolution networks and a palm living body classifier, and each scale convolution network is used for processing the image samples of one illumination scene;
and judging whether the palm image to be detected is a living body palm image according to the category probability value.
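The three claimed steps can be sketched as a minimal pipeline. Everything here is a hypothetical illustration rather than the patent's implementation: the centre-crop ROI rule, the stub model, and the assumption that class index 0 denotes the live palm category.

```python
import numpy as np

def extract_target_region(palm_image):
    # Hypothetical ROI step: crop the central region as the
    # "target area palm image" (a real system would localise the palm).
    h, w = palm_image.shape[:2]
    return palm_image[h // 4: 3 * h // 4, w // 4: 3 * w // 4]

def palm_anti_spoof_pipeline(palm_image, model):
    # Claim 1's three steps: extract the ROI, run the trained model to
    # obtain category probability values, judge live/non-live from them.
    roi = extract_target_region(palm_image)
    probs = model(roi)                  # model returns class probabilities
    return bool(np.argmax(probs) == 0)  # assumption: index 0 = live palm

# Stub standing in for the trained palm anti-counterfeiting model.
stub_model = lambda roi: np.array([0.9, 0.1])
image = np.zeros((128, 128, 3), dtype=np.uint8)
print(palm_anti_spoof_pipeline(image, stub_model))  # True
```

Any real model would replace `stub_model` and return one probability per classification category.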
2. The method according to claim 1, wherein the training method of the palm anti-counterfeiting model comprises the following steps:
acquiring an image sample, and marking the image sample by respectively adopting illumination scene information and living body/non-living body information to obtain the image sample marked with the illumination scene information and the living body/non-living body information;
constructing the palm anti-counterfeiting network, wherein the palm anti-counterfeiting network comprises a feature pyramid network, a full convolution network, a plurality of scale convolution networks and a palm living body classifier;
inputting the image sample marked with the illumination scene information and the living body/non-living body information into the feature pyramid network for up-sampling operation, and outputting a first feature map; performing downsampling operation on the first feature map to obtain a second feature map;
inputting the first feature map into a full convolution network, and outputting a global feature map;
performing illumination scene neuron activation on the global feature map, and calculating the classification loss function values under different illumination scenes;
selecting a scale convolution network from a plurality of scale convolution networks according to the classification loss function value;
inputting the second feature map into the selected scale convolution network, and outputting a third feature map;
inputting the global feature map and the third feature map into a palm living body classifier, and outputting an actual classification result;
and adjusting the weight of the palm anti-counterfeiting network until the deviation between the actual classification result and the target classification result is within an allowable range, and finishing training to obtain the palm anti-counterfeiting model.
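The branch-selection step of the training procedure can be sketched as follows. The branch names and the routing rule (the scene whose classification loss is lowest identifies the sample's illumination scene) are assumptions made for illustration; the claim only says a scale convolution network is selected "according to the classification loss function value".

```python
# Hypothetical scale branches, one per illumination scene.
SCALE_BRANCHES = {"weak": "first_scale_net",
                  "normal": "second_scale_net",
                  "strong": "third_scale_net"}

def select_scale_branch(scene_losses):
    # Route the second feature map to the scale convolution network of
    # the scene with the lowest classification loss -- assuming the
    # correctly recognised illumination scene yields the smallest loss.
    scene = min(scene_losses, key=scene_losses.get)
    return SCALE_BRANCHES[scene]

losses = {"weak": 0.8, "normal": 0.1, "strong": 1.2}
print(select_scale_branch(losses))  # "second_scale_net"
```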
3. The method of claim 2, wherein said obtaining an image sample comprises:
acquiring a plurality of initial palm images, and extracting a target area palm image from each initial palm image to form the image sample.
4. The method of claim 3, wherein labeling the image sample with the illumination scene information and the living/non-living information to obtain an image sample labeled with the illumination scene information and the living/non-living information comprises:
marking the initial image sample by respectively adopting the illumination scene information and the living body/non-living body information to obtain an initial image sample marked with the illumination scene information and the living body/non-living body information;
adjusting the illumination brightness of the initial image sample, and updating the illumination scene information of the image sample after the illumination brightness adjustment to obtain an adjusted image sample marked with the illumination scene information and the living body/non-living body information;
and forming the image sample marked with the illumination scene information and the living body/non-living body information from the initial image sample marked with the illumination scene information and the living body/non-living body information and the adjusted image sample marked with the illumination scene information and the living body/non-living body information.
5. The method of claim 2, wherein said labeling the image sample with the illumination scene information comprises:
calculating the average pixel value of each of the three channels within a preset region of the image sample;
calculating an image brightness value of the image sample from the average pixel values of the three channels;
and comparing the image brightness value with a preset brightness threshold value, and marking the image sample by adopting the illumination scene information according to a comparison result.
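A minimal sketch of the claimed brightness labelling. The Rec. 601 luma weights and the two threshold values are assumptions, since the claim only specifies "a preset brightness threshold value".

```python
import numpy as np

def label_illumination_scene(image, low=85.0, high=170.0):
    # Average each of the three channels, combine the averages into an
    # image brightness value, and compare against preset thresholds.
    # Luma weights and thresholds are assumed, not from the patent.
    r, g, b = (image[..., c].mean() for c in range(3))
    brightness = 0.299 * r + 0.587 * g + 0.114 * b
    if brightness < low:
        return "weak"
    if brightness > high:
        return "strong"
    return "normal"

dark = np.full((64, 64, 3), 30, dtype=np.uint8)
print(label_illumination_scene(dark))  # "weak"
```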
6. The method of claim 4, wherein performing the illumination brightness adjustment on the initial image sample and updating the illumination scene information of the illumination brightness adjusted image sample comprises:
and adjusting the brightness of the initial image sample twice, and updating the illumination scene information of the image sample after each illumination brightness adjustment, so that the illumination scene information of the formed image sample comprises strong illumination scene information, normal illumination scene information and weak illumination scene information.
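The two brightness adjustments that turn one labelled sample into strong-, normal-, and weak-illumination variants could be sketched as simple gain scaling; the gain values below are illustrative assumptions.

```python
import numpy as np

def adjust_brightness(image, gain):
    # Scale pixel intensities and clip to the valid uint8 range.
    return np.clip(image.astype(float) * gain, 0, 255).astype(np.uint8)

base = np.full((64, 64, 3), 128, dtype=np.uint8)  # "normal" illumination sample
weak_sample = adjust_brightness(base, 0.3)    # relabel as weak illumination
strong_sample = adjust_brightness(base, 1.8)  # relabel as strong illumination
print(int(weak_sample.mean()), int(strong_sample.mean()))
```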
7. The method of claim 6, wherein the plurality of scale convolutional networks comprises a first scale convolutional network, a second scale convolutional network, and a third scale convolutional network; selecting a scale convolution network from a plurality of scale convolution networks according to the classification loss function values, comprising:
when the classification loss function value is the weak illumination scene loss function value, selecting the first scale convolution network;
when the classification loss function value is the normal illumination scene loss function value, selecting the second scale convolution network;
and when the classification loss function value is the strong illumination scene loss function value, selecting the third scale convolution network.
8. The method of claim 5, wherein the palm anti-counterfeiting network further comprises a fourth scale convolutional network; inputting the image sample marked with the illumination scene information and the living body/non-living body information into the feature pyramid network for up-sampling operation, and outputting a first feature map, wherein the method comprises the following steps:
inputting the image sample marked with illumination scene information and living body/non-living body information into the fourth scale convolution network, and outputting a fourth feature map;
and inputting the fourth feature map into the feature pyramid network for up-sampling operation, and outputting a first feature map.
9. The method according to any one of claims 1 to 8, wherein determining whether the palm image to be detected is a live palm image according to the category probability value comprises:
selecting the classification category corresponding to the maximum category probability value as the classification category of the palm image to be detected;
and judging whether the palm image to be detected is a living body palm image according to the classification category of the palm image to be detected.
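Claim 9's decision rule, an argmax over the category probability values followed by a live/non-live mapping, can be sketched as follows; treating index 0 as the live palm category is an assumption.

```python
import numpy as np

def judge_live(category_probs, live_categories=(0,)):
    # Select the classification category with the maximum category
    # probability value, then judge live/non-live from that category.
    category = int(np.argmax(category_probs))
    return category in live_categories

print(judge_live([0.7, 0.2, 0.1]))  # True: category 0 has max probability
print(judge_live([0.1, 0.6, 0.3]))  # False
```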
10. The method according to any one of claims 1-8, wherein the inputting the target area palmogram into a pre-trained palm anti-counterfeiting model comprises:
judging whether the illumination brightness of the target area palm image is within a preset brightness range;
and if so, inputting the target area palm image into the pre-trained palm anti-counterfeiting model.
11. An illumination scene adaptive palm anti-counterfeiting device, comprising:
the image acquisition module is used for acquiring a palm image to be detected and extracting a target area palm image from the palm image to be detected;
the category probability value output module is used for inputting the target area palm image into a pre-trained palm anti-counterfeiting model and outputting a category probability value; the palm anti-counterfeiting model is obtained by training a palm anti-counterfeiting network with image samples marked with illumination scene information and living body/non-living body information, the palm anti-counterfeiting network comprises a feature pyramid network, a full convolution network, a plurality of scale convolution networks and a palm living body classifier, and each scale convolution network is used for processing the image samples of one illumination scene;
and the judging module is used for judging whether the palm image to be detected is a living body palm image according to the category probability value.
12. A terminal device, comprising:
a memory; one or more processors coupled with the memory; one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications configured to perform the method of any of claims 1-10.
13. A computer-readable storage medium, having stored thereon program code that can be invoked by a processor to perform the method according to any one of claims 1 to 10.
CN202210013295.0A 2022-01-06 2022-01-06 Illumination scene self-adaptive palm anti-counterfeiting method, device, equipment and storage medium Pending CN114373195A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210013295.0A CN114373195A (en) 2022-01-06 2022-01-06 Illumination scene self-adaptive palm anti-counterfeiting method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210013295.0A CN114373195A (en) 2022-01-06 2022-01-06 Illumination scene self-adaptive palm anti-counterfeiting method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114373195A true CN114373195A (en) 2022-04-19

Family

ID=81142404

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210013295.0A Pending CN114373195A (en) 2022-01-06 2022-01-06 Illumination scene self-adaptive palm anti-counterfeiting method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114373195A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114724258A (en) * 2022-04-24 2022-07-08 厦门熵基科技有限公司 Living body detection method, living body detection device, storage medium and computer equipment
CN117037221A (en) * 2023-10-08 2023-11-10 腾讯科技(深圳)有限公司 Living body detection method, living body detection device, computer equipment and storage medium
CN117037221B (en) * 2023-10-08 2023-12-29 腾讯科技(深圳)有限公司 Living body detection method, living body detection device, computer equipment and storage medium

Similar Documents

Publication Publication Date Title
CN109584248B (en) Infrared target instance segmentation method based on feature fusion and dense connection network
JP6926335B2 (en) Variable rotation object detection in deep learning
CN110909690B (en) Method for detecting occluded face image based on region generation
CN103617432B (en) A kind of scene recognition method and device
CN106897673B (en) Retinex algorithm and convolutional neural network-based pedestrian re-identification method
CN111178183B (en) Face detection method and related device
CN108960404B (en) Image-based crowd counting method and device
CN112651438A (en) Multi-class image classification method and device, terminal equipment and storage medium
CN108280455B (en) Human body key point detection method and apparatus, electronic device, program, and medium
CN112446302B (en) Human body posture detection method, system, electronic equipment and storage medium
CN114373195A (en) Illumination scene self-adaptive palm anti-counterfeiting method, device, equipment and storage medium
CN112818732B (en) Image processing method, device, computer equipment and storage medium
CN109472193A (en) Method for detecting human face and device
CN110263768A (en) A kind of face identification method based on depth residual error network
CN112101359B (en) Text formula positioning method, model training method and related device
CN111027450A (en) Bank card information identification method and device, computer equipment and storage medium
CN114092833A (en) Remote sensing image classification method and device, computer equipment and storage medium
CN112651333B (en) Silence living body detection method, silence living body detection device, terminal equipment and storage medium
CN112465709B (en) Image enhancement method, device, storage medium and equipment
CN112836625A (en) Face living body detection method and device and electronic equipment
CN115424171A (en) Flame and smoke detection method, device and storage medium
CN112115805A (en) Pedestrian re-identification method and system with bimodal hard-excavation ternary-center loss
CN116385430A (en) Machine vision flaw detection method, device, medium and equipment
CN115049675A (en) Generation area determination and light spot generation method, apparatus, medium, and program product
CN113570615A (en) Image processing method based on deep learning, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination