CN114202805A - Living body detection method, living body detection device, electronic apparatus, and storage medium - Google Patents
- Publication number
- CN114202805A (application number CN202111404822.2A)
- Authority
- CN
- China
- Prior art keywords
- feature map
- living body
- feature
- body detection
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The disclosure provides a living body detection method, a living body detection apparatus, an electronic device, and a storage medium, relates to the field of artificial intelligence, specifically to the technical fields of deep learning and computer vision, and can be applied to scenes such as face recognition and living body detection. The scheme is as follows: feature extraction is performed on the acquired input image to obtain a first feature map and a second feature map; feature fusion is performed on the first feature map and the second feature map to obtain a fused feature map; and the fused feature map is resized by the first pooling layer of the living body detection model to obtain a target feature map of a fixed size, on which living body detection is performed. Therefore, the face image of the target object in the input image does not need to be expanded and resized, which avoids loss of facial features; meanwhile, resizing the fused features through the first pooling layer normalizes them, avoids feature misalignment, and improves the reliability and robustness of the living body detection result.
Description
Technical Field
The present disclosure relates to the field of artificial intelligence, in particular to the technical fields of deep learning and computer vision, which can be applied to scenes such as face recognition and living body detection, and specifically to a living body detection method and apparatus, an electronic device, and a storage medium.
Background
With the continuous development of face recognition technology, using face recognition to authenticate user identity in various identity authentication systems is becoming increasingly popular. A system that performs identity authentication through face recognition generally requires both face verification and living body detection of the user. Living body detection confirms whether acquired data such as a face image comes from the user himself or herself, rather than from replayed or forged material.
Disclosure of Invention
The present disclosure provides a living body detection method and apparatus, an electronic device, and a storage medium.
According to an aspect of the present disclosure, there is provided a method of living body detection, including: acquiring an input image; performing feature extraction on the input image at a first down-sampling rate to obtain a first feature map, and performing feature extraction at a second down-sampling rate to obtain a second feature map; performing feature fusion on the first feature map and the second feature map to obtain a fused feature map; adopting a first pooling layer of a living body detection model to carry out size adjustment on the fused feature map so as to obtain a target feature map conforming to a fixed size; and performing living body detection on the target characteristic diagram to determine whether the target object in the input image is a living body.
According to another aspect of the present disclosure, there is provided a living body detection apparatus including: the acquisition module is used for acquiring an input image; the extraction module is used for performing feature extraction on the input image at a first down-sampling rate to obtain a first feature map and performing feature extraction at a second down-sampling rate to obtain a second feature map; the fusion module is used for carrying out feature fusion on the first feature map and the second feature map to obtain a fusion feature map; the adjustment module is used for adjusting the size of the fused feature map by adopting a first pooling layer of the living body detection model to obtain a target feature map conforming to a fixed size; and the detection module is used for carrying out living body detection on the target characteristic diagram so as to determine whether the target object in the input image is a living body.
According to another aspect of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first aspect of the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of the first aspect of the present disclosure.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method of an embodiment of the first aspect of the present disclosure.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a schematic flow chart of a living body detection method according to an embodiment of the present disclosure;
FIG. 2 is a schematic flow chart of a living body detection method according to a second embodiment of the present disclosure;
FIG. 3 is a schematic flow chart of a living body detection method according to a third embodiment of the present disclosure;
FIG. 4 is a schematic flow chart of a living body detection method according to a fourth embodiment of the present disclosure;
FIG. 5 is a schematic flow chart of a living body detection method according to a fifth embodiment of the present disclosure;
FIG. 6 is a schematic flow chart of a living body detection method according to an embodiment of the present disclosure;
FIG. 7 is a schematic structural diagram of a living body detection apparatus according to a sixth embodiment of the present disclosure;
FIG. 8 shows a schematic block diagram of an example electronic device that may be used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the related art, living body detection expands the detected face region by a certain multiple, resizes it to a fixed size, and then inputs it into a living body detection model for detection. The expansion multiple and the final size affect the performance of the living body detection model to a great extent, and manually set hyperparameters may not fully adapt to data under various conditions, so high robustness cannot be achieved.
In view of the above problems, the present disclosure provides a method and an apparatus for detecting a living body, an electronic device, and a storage medium.
A living body detection method, an apparatus, an electronic device, and a storage medium of the embodiments of the present disclosure are described below with reference to the drawings.
Fig. 1 is a schematic flow chart of a living body detection method according to an embodiment of the present disclosure.
In the embodiments of the present disclosure, the living body detection method is described as being configured in a living body detection apparatus, which can be applied to any electronic device so that the electronic device can perform the living body detection function.
The electronic device may be any device with computing capability, for example, a personal computer, a mobile terminal, a server, and the like, and the mobile terminal may be a hardware device with various operating systems, touch screens, and/or display screens, such as an in-vehicle device, a mobile phone, a tablet computer, a personal digital assistant, a wearable device, and the like.
As shown in fig. 1, the living body detection method may include the following steps.
In step 101, an input image is acquired. In the embodiment of the present disclosure, the input image is an image including the face of a target object, where the target object may be a human or an animal, which is not limited by the present disclosure.
In the embodiment of the present disclosure, the type of the input image is not limited, for example, the input image may be a NIR (near infrared) image, or the input image may also be an RGB image, a TIR (thermal infrared) image, or the like.
In this disclosure, the input image may be obtained from an existing test set, or acquired online, for example, the face image of the target object may be collected online through web crawler technology; alternatively, the input image may be a face image of the target object acquired in real time, or an artificially synthesized image, and the like, which is not limited by this disclosure.
In step 102, in order to obtain feature maps of different receptive fields for the input image, feature extraction may be performed on the input image at a first down-sampling rate to obtain a first feature map, and at a second down-sampling rate to obtain a second feature map.
And 103, performing feature fusion on the first feature map and the second feature map to obtain a fused feature map.
In the embodiment of the present disclosure, feature fusion may be performed on the first feature map and the second feature map to obtain a fused feature map.
And 104, adopting the first pooling layer of the living body detection model to carry out size adjustment on the fused feature map so as to obtain a target feature map conforming to a fixed size.
In the embodiment of the present disclosure, the fused feature map may be input into the first pooling layer of the living body detection model, and the first pooling layer may perform maximum pooling on the fused feature map and resize it to obtain a target feature map of a fixed size.
And 105, performing living body detection on the target characteristic diagram to determine whether the target object in the input image is a living body.
In the embodiment of the present disclosure, living body detection may be performed according to the target feature map to determine whether the target object in the input image is a living body. For example, in order to improve the accuracy of the detection result, a deep learning technique may be used to perform living body detection on the input image.
In summary, an input image is acquired; feature extraction is performed on the input image at a first down-sampling rate to obtain a first feature map and at a second down-sampling rate to obtain a second feature map; feature fusion is performed on the first feature map and the second feature map to obtain a fused feature map; the fused feature map is resized by the first pooling layer of the living body detection model to obtain a target feature map of a fixed size; and living body detection is performed on the target feature map to determine whether the target object in the input image is a living body. Therefore, the face image of the target object in the input image does not need to be expanded and resized, which avoids loss of facial features; meanwhile, resizing the fused features through the first pooling layer of the living body detection model normalizes them, avoids feature misalignment, and improves the reliability and robustness of the living body detection result.
In order to clearly illustrate how the pooling layer of the living body detection model is used in the above embodiments to resize the fused feature map to obtain a fixed-size target feature map, the present disclosure further provides a living body detection method.
Fig. 2 is a schematic flow chart of a living body detection method according to a second embodiment of the present disclosure.
As shown in fig. 2, the living body detection method may include the following steps.
And step 203, performing feature fusion on the first feature map and the second feature map to obtain a fused feature map.
In a possible implementation manner of the embodiment of the present disclosure, the first feature map and the second feature map may be spliced to obtain a fused feature map. For example, the first feature map and the second feature map may be added or spliced in a channel (channel) dimension direction to obtain a fused feature map.
In another possible implementation manner of the embodiment of the present disclosure, the first feature map and the second feature map may be spliced to obtain a spliced feature map, for example, the first feature map and the second feature map may be spliced in a channel (channel) dimension direction to obtain a spliced feature map, and then the spliced feature map may be input into the convolutional layer to be fused to obtain the fused feature map.
Therefore, the first feature map and the second feature map can be fused in various manners, which improves the flexibility and applicability of the method.
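The two fusion options above (element-wise addition, or splicing along the channel dimension) can be sketched in a few lines. This is a minimal illustration, not the disclosure's implementation; the shapes are hypothetical and a channel-first (C, H, W) layout is assumed:

```python
import numpy as np

# Hypothetical feature maps in channel-first (C, H, W) layout.
first_map = np.ones((128, 36, 64), dtype=np.float32)
second_map = np.full((128, 36, 64), 2.0, dtype=np.float32)

# Option 1: element-wise addition (shapes must match exactly).
added = first_map + second_map

# Option 2: splicing (concatenation) along the channel dimension.
spliced = np.concatenate([first_map, second_map], axis=0)

print(added.shape)    # (128, 36, 64)
print(spliced.shape)  # (256, 36, 64)
```

Addition keeps the channel count unchanged, while splicing doubles it; a spliced map would then pass through a convolution layer for fusion, as described above.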
And step 204, inputting the fused feature map into a region proposal network (RPN) of the living body detection model for region-of-interest prediction, so as to determine feature subgraphs of a plurality of regions of interest from the fused feature map.
In the embodiment of the present disclosure, the fused feature map may be input into the RPN of the living body detection model to perform region-of-interest prediction, so as to determine the feature subgraphs of the plurality of regions of interest from the fused feature map.
In step 205, the feature subgraphs of the plurality of regions of interest are input into the first pooling layer of the living body detection model, and the pooling layer may perform maximum pooling on the feature subgraphs of the non-uniformly sized regions of interest and resize them to obtain a target feature map of a fixed size for each region of interest.
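The resizing performed by the pooling layer can be illustrated with a simplified single-channel sketch of RoI-style max pooling. The bin-splitting scheme below is an assumption for illustration and not the exact implementation from this disclosure:

```python
import numpy as np

def roi_max_pool(feature_map, out_h, out_w):
    """Max-pool a single-channel region of arbitrary size into a fixed
    out_h x out_w grid, as RoI pooling does for each region of interest.
    Simplified sketch: one channel, the whole map treated as the region."""
    h, w = feature_map.shape
    # Bin edges that split the region into out_h x out_w cells.
    ys = np.linspace(0, h, out_h + 1).astype(int)
    xs = np.linspace(0, w, out_w + 1).astype(int)
    out = np.empty((out_h, out_w), feature_map.dtype)
    for i in range(out_h):
        for j in range(out_w):
            # Guard against empty cells when the region is small.
            cell = feature_map[ys[i]:max(ys[i + 1], ys[i] + 1),
                               xs[j]:max(xs[j + 1], xs[j] + 1)]
            out[i, j] = cell.max()
    return out

# Regions of different sizes both come out 7 x 7.
a = roi_max_pool(np.random.rand(21, 13), 7, 7)
b = roi_max_pool(np.random.rand(10, 33), 7, 7)
print(a.shape, b.shape)  # (7, 7) (7, 7)
```

Regions of interest of different sizes all come out at the same fixed size, which is what allows the subsequent network to operate on uniform inputs.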
In step 206, the living body detection is performed on the target feature map to determine whether the target object in the input image is a living body.
It should be noted that the execution process of steps 201 to 202 may refer to the execution process of any embodiment of the present disclosure, and is not described herein again.
In conclusion, the fused feature map is input into the RPN of the living body detection model for region-of-interest prediction, so that feature subgraphs of a plurality of regions of interest are determined from the fused feature map; the feature subgraphs of the plurality of regions of interest are then input into the pooling layer of the living body detection model for resizing, to obtain a target feature map of a fixed size for each region of interest. Therefore, the fused feature map can be resized through the pooling layer of the living body detection model, normalizing the fused features, avoiding feature misalignment, and improving the reliability and robustness of the living body detection result.
In order to clearly illustrate how to perform feature extraction on an input image at a first down-sampling rate to obtain a first feature map and perform feature extraction at a second down-sampling rate to obtain a second feature map in the embodiment of the present disclosure, the present disclosure also provides a living body detection method.
Fig. 3 is a schematic flow chart of a living body detection method according to a third embodiment of the present disclosure.
In step 302, the input image is input to the first convolution layer of the living body detection model for feature extraction. To avoid feature loss, in the embodiments of the present disclosure, the first convolution layer may include a plurality of stacked convolution layers, which perform shallow feature extraction on the input image to obtain an intermediate feature map.
And step 303, performing feature extraction on the intermediate feature map by using the second convolution layer at a first down-sampling rate to obtain a first feature map.
In order to obtain the feature maps of different receptive fields, further, the intermediate feature map may be subjected to feature extraction at a first down-sampling rate by using a second convolution layer to obtain a first feature map.
And step 304, performing feature extraction on the intermediate feature map at a second down-sampling rate by using a second pooling layer to obtain a second feature map.
And similarly, performing feature extraction on the intermediate feature map at a second down-sampling rate by adopting a second pooling layer to obtain a second feature map.
It should be noted that the first down-sampling rate may be greater than the second down-sampling rate, and the second down-sampling rate may also be greater than the first down-sampling rate, which is not specifically limited in this disclosure.
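The effect of two different down-sampling rates can be shown with a toy stand-in for the strided layers: plain strided subsampling, with rates of 4 and 2 chosen purely for illustration (the actual rates are not fixed by the text):

```python
import numpy as np

def downsample(feature_map, rate):
    """Stand-in for a strided convolution or pooling layer: keep every
    `rate`-th row and column, shrinking the map by `rate` per axis."""
    return feature_map[::rate, ::rate]

intermediate = np.arange(64 * 64, dtype=np.float32).reshape(64, 64)
first = downsample(intermediate, 4)   # assumed first down-sampling rate
second = downsample(intermediate, 2)  # assumed second down-sampling rate
print(first.shape, second.shape)  # (16, 16) (32, 32)
```

Each element of the smaller map summarizes a larger patch of the input, which is why the two branches see different receptive fields.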
And 305, performing feature fusion on the first feature map and the second feature map to obtain a fused feature map.
And step 306, adjusting the size of the fused feature map by using the first pooling layer of the living body detection model to obtain a target feature map conforming to a fixed size.
In step 307, the living body detection is performed on the target feature map to determine whether the target object in the input image is a living body.
It should be noted that the execution processes of step 301 and steps 305 to 307 may refer to the execution process of any embodiment of the present disclosure, and are not described herein again.
In conclusion, the input image is input to the first convolution layer for feature extraction to obtain an intermediate feature map; feature extraction is performed on the intermediate feature map at the first down-sampling rate by the second convolution layer to obtain the first feature map; and feature extraction is performed on the intermediate feature map at the second down-sampling rate by the second pooling layer to obtain the second feature map. In this way, feature extraction at different down-sampling rates yields feature maps of different receptive fields.
In order to clearly illustrate how the living body detection is performed on the target feature map in the above embodiments of the present disclosure to determine whether the target object in the input image is a living body, the present disclosure also proposes a living body detection method.
Fig. 4 is a schematic flow chart of a living body detection method according to a fourth embodiment of the present disclosure.
And step 403, performing feature fusion on the first feature map and the second feature map to obtain a fused feature map.
And step 404, adopting the first pooling layer of the living body detection model to carry out size adjustment on the fused feature map so as to obtain a target feature map conforming to a fixed size.
And 405, performing depth feature extraction on the target feature map by using a depth feature extraction network of the living body detection model to obtain a depth feature map.
In the embodiment of the present disclosure, a depth feature extraction network in the living body detection model may be used to perform depth feature extraction on the target feature map to obtain a depth feature map. For example, the depth feature extraction network may be a lightweight network such as MobileNetV3.
And 406, classifying the depth feature map by using a prediction layer in the living body detection model to obtain the classification probability of the target object in the input image.
In the embodiment of the present disclosure, the depth feature map may be classified by a prediction layer in the living body detection model to obtain the classification probability of the target object in the input image. For example, the prediction layer may include a classifier, and the depth feature map is classified by the classifier to obtain the classification probability.
In the disclosed embodiment, it may be determined whether the target object is a living body according to the classification probability. For example, it may be determined whether the classification probability is greater than a set probability threshold (e.g., 0.5), and in response to the classification probability being greater than the set probability threshold, the target object is determined to be a living body, and in response to the classification probability not being greater than the set probability threshold, the target object is determined to be a non-living body.
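The decision rule described above is simple thresholding; a minimal sketch using the example threshold of 0.5 from the text (the function name is illustrative):

```python
def decide_living(probability, threshold=0.5):
    """Map a classification probability to a living/non-living decision.
    A probability strictly greater than the threshold means living."""
    return "living" if probability > threshold else "non-living"

print(decide_living(0.87))  # living
print(decide_living(0.31))  # non-living
```

The threshold is a tunable hyperparameter; 0.5 is only the example value given above.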
It should be noted that the execution process of steps 401 to 404 may refer to the execution process of any embodiment of the present disclosure, and is not described herein again.
In conclusion, the depth feature extraction is carried out on the target feature map by adopting a depth feature extraction network of the living body detection model to obtain a depth feature map; classifying the depth feature map by adopting a prediction layer in a living body detection model to obtain the classification probability of a target object in an input image; and determining whether the target object is a living body according to the classification probability. Therefore, the living body detection is carried out on the target characteristic diagram based on the deep learning technology, and the accuracy and the reliability of the detection result can be improved. Moreover, the face image of the target object does not need to be expanded and adjusted, and the target feature map is obtained by adjusting the size of the fusion feature map according to the first pooling layer, so that the situations of feature loss and misalignment are avoided, and the accuracy and robustness of model detection are improved.
In order to clearly illustrate how the input image is acquired in any of the above embodiments of the present disclosure, the present disclosure also provides a method of detecting a living body.
Fig. 5 is a schematic flow chart of a living body detection method according to a fifth embodiment of the present disclosure.
As shown in fig. 5, the living body detecting method may include the steps of:
In this disclosure, the target image may be obtained from an existing test set, or acquired online, for example, a target image including the face of the target object may be collected online through web crawler technology; alternatively, the target image may be an image including the face of the target object acquired in real time, or an artificially synthesized image, and the like, which is not limited by this disclosure.
In a possible implementation manner of the embodiment of the present disclosure, the input size of the living body detection model may have a setting requirement, and for facilitating the living body detection, the input image needs to conform to the set image size, and the living body detection model may be trained according to the input image with the set size.
Accordingly, before the input image is input to the living body detection model, the target image may be subjected to scaling processing according to the set image size to obtain the input image.
For example, if the size of the target image is 1920 × 1080, the target image can be scaled so that its long side is 512, yielding 512 × 288, where 512 is the size set for the long side.
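The scaling in this example is simple proportional arithmetic; a small sketch (the helper name is hypothetical):

```python
def scale_to_long_side(width, height, target_long=512):
    """Scale (width, height) so the longer side equals target_long,
    preserving the aspect ratio; 512 is the example value from the text."""
    scale = target_long / max(width, height)
    return round(width * scale), round(height * scale)

print(scale_to_long_side(1920, 1080))  # (512, 288)
```

Because 1080 × (512 / 1920) = 288 exactly, the example in the text comes out with no rounding error.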
It should be noted that, in the embodiment of the present disclosure, scaling the target image differs from merely expanding and resizing the face of the target object within it: scaling the whole target image does not cause loss of facial features.
And step 504, performing feature fusion on the first feature map and the second feature map to obtain a fused feature map.
And 505, adopting the first pooling layer of the living body detection model to perform size adjustment on the fusion characteristic diagram so as to obtain a target characteristic diagram conforming to a fixed size.
In step 506, the living body detection is performed on the target feature map to determine whether the target object in the input image is a living body.
It should be noted that the execution process of steps 503 to 506 may refer to the execution process of any embodiment of the present disclosure, and is not described herein again.
In conclusion, the target image to be detected is acquired, and the target image is scaled according to the set image size to obtain the input image. In this way, an input image of the set size can be obtained by scaling the target image, avoiding the feature loss caused by expanding and resizing the face image of the target object, and improving the reliability and robustness of the living body detection result.
In order to more clearly illustrate the above embodiments, the description will now be made by way of example.
As shown in fig. 6, to meet the input size requirement of the living body detection model, the full image containing the face may first be scaled so that its long side is 512, and the scaled full image is used as the input of the living body detection model. Two stacked convolution layers, "conv2d, channel=64, kernel_size=5, stride=2" and "conv2d, channel=128, kernel_size=3, stride=2", serve as the first convolution layer of the living body detection model and perform feature extraction on the input image to obtain an intermediate feature map. The intermediate feature map is then processed along two branches: in one branch, it is passed through the second convolution layer, composed of "conv2d, channel=512, kernel_size=1, stride=1" and "conv2d, channel=256, kernel_size=3, stride=2", to obtain the first feature map; in the other branch, it is down-sampled through the second pooling layer (stride 2) to obtain the second feature map. The first feature map and the second feature map are spliced in the channel dimension direction through a "concat" layer to obtain a spliced feature map, and the spliced feature map may be input into the convolution layer "conv2d, channel=256, kernel_size=3, stride=1" for fusion to obtain a fused feature map. The fused feature map is then resized through the first pooling layer "RoI pooling" to obtain a target feature map "Image_RPN" conforming to a fixed size, depth feature extraction is performed on the target feature map through the depth feature extraction network "MobileNetV3", the depth feature map is classified by the prediction layer of the living body detection model to obtain the classification probability of the target object in the input image, and whether the target object is a living body is determined according to the classification probability.
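The spatial sizes flowing through the pipeline above can be checked with simple shape arithmetic. This is a sketch only: the input resolution, the "same"-style padding of kernel // 2 for the strided convolutions, and the 2x2 pooling kernel are assumptions not stated in the description.

```python
def conv2d_out(hw, kernel, stride, padding):
    """Spatial output size of a 2-D convolution (or pooling) layer."""
    return tuple((d + 2 * padding - kernel) // stride + 1 for d in hw)

# Hypothetical 1024x768 capture scaled so the long side is 512 -> 512x384.
hw = (512, 384)

# First convolution layer: two stacked strided convs (k=5,s=2 then k=3,s=2).
hw = conv2d_out(hw, kernel=5, stride=2, padding=2)            # (256, 192)
intermediate = conv2d_out(hw, kernel=3, stride=2, padding=1)  # (128, 96)

# Branch 1 (second convolution layer): 1x1 conv, then strided 3x3 conv.
b1 = conv2d_out(intermediate, kernel=1, stride=1, padding=0)
first_map = conv2d_out(b1, kernel=3, stride=2, padding=1)     # (64, 48)

# Branch 2 (second pooling layer): 2x2 pooling with stride 2.
second_map = conv2d_out(intermediate, kernel=2, stride=2, padding=0)  # (64, 48)

# Matching spatial sizes allow channel-dimension concatenation.
assert first_map == second_map == (64, 48)
```

Because the convolutional branch and the pooling branch apply the same effective down-sampling rate, the two resulting maps align spatially and can be spliced without resampling.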
The living body detection method of the embodiment of the present disclosure includes: acquiring an input image; performing feature extraction on the input image at a first down-sampling rate to obtain a first feature map, and performing feature extraction at a second down-sampling rate to obtain a second feature map; performing feature fusion on the first feature map and the second feature map to obtain a fused feature map; adjusting the size of the fused feature map by using a first pooling layer of the living body detection model to obtain a target feature map conforming to a fixed size; and performing living body detection on the target feature map to determine whether the target object in the input image is a living body. In this way, the face image of the target object does not need to be expanded or resized, so the loss of facial features is avoided; meanwhile, the size of the fused features is adjusted through the first pooling layer of the living body detection model, which normalizes the fused features, avoids feature misalignment, and improves the reliability and robustness of the living body detection result.
Corresponding to the living body detection method provided in the embodiments of fig. 1 to 6, the present disclosure further provides a living body detection apparatus. Since the living body detection apparatus provided in the embodiments of the present disclosure corresponds to the living body detection method provided in the embodiments of fig. 1 to 6, the implementation manners of the living body detection method are also applicable to the living body detection apparatus and will not be described in detail here.
Fig. 7 is a schematic structural diagram of a living body detection apparatus according to a sixth embodiment of the present disclosure.
As shown in fig. 7, the living body detecting apparatus 700 may include: an acquisition module 710, an extraction module 720, a fusion module 730, an adjustment module 740, and a detection module 750.
The acquiring module 710 is configured to acquire an input image; an extracting module 720, configured to perform feature extraction on the input image at a first down-sampling rate to obtain a first feature map, and perform feature extraction at a second down-sampling rate to obtain a second feature map; a fusion module 730, configured to perform feature fusion on the first feature map and the second feature map to obtain a fused feature map; an adjusting module 740, configured to perform size adjustment on the fused feature map by using the first pooling layer of the living body detection model to obtain a target feature map meeting a fixed size; and a detecting module 750, configured to perform living body detection on the target feature map to determine whether the target object in the input image is a living body.
As a possible implementation manner of the embodiment of the present disclosure, the adjusting module 740 is configured to: inputting the fusion characteristic diagram into a region generation network RPN of a living body detection model to predict the region of interest so as to determine a plurality of characteristic subgraphs of the region of interest from the fusion characteristic diagram; and inputting the characteristic subgraphs of the multiple interested areas into a pooling layer of the living body detection model for size adjustment to obtain a target characteristic graph of which each interested area accords with a fixed size.
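The size adjustment performed by the pooling layer on each region-of-interest feature subgraph can be sketched in NumPy as a minimal RoI max pooling. The 7x7 output size, the function name, and the RoI coordinate convention are assumptions of this sketch, not the model's actual implementation.

```python
import numpy as np

def roi_pool(feature_map, roi, out_size=7):
    """Max-pool one region of interest of a (C, H, W) feature map into a fixed
    out_size x out_size grid per channel, so every RoI yields the same shape."""
    x0, y0, x1, y1 = roi                      # RoI in feature-map coordinates
    region = feature_map[:, y0:y1, x0:x1]
    c, rh, rw = region.shape
    ys = np.linspace(0, rh, out_size + 1).astype(int)
    xs = np.linspace(0, rw, out_size + 1).astype(int)
    out = np.empty((c, out_size, out_size))
    for i in range(out_size):
        for j in range(out_size):
            # Each output cell takes the max over its (possibly uneven) bin.
            cell = region[:, ys[i]:max(ys[i + 1], ys[i] + 1),
                             xs[j]:max(xs[j + 1], xs[j] + 1)]
            out[:, i, j] = cell.max(axis=(1, 2))
    return out

fm = np.arange(2 * 12 * 16, dtype=float).reshape(2, 12, 16)
full = roi_pool(fm, (0, 0, 16, 12))    # whole map as one RoI
small = roi_pool(fm, (2, 3, 13, 10))   # a smaller RoI, same output size
```

Regions of interest of different sizes all come out as fixed-size target feature maps, which is what allows the subsequent layers to operate on a uniform input.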
As a possible implementation manner of the embodiment of the present disclosure, the living body detection model further includes a first convolution layer, a second convolution layer, and a second pooling layer, and the extracting module 720 is configured to: inputting an input image into the first convolution layer for feature extraction to obtain an intermediate feature map; performing feature extraction on the intermediate feature map at a first down-sampling rate by adopting a second convolution layer to obtain a first feature map; and performing feature extraction on the intermediate feature map at a second down-sampling rate by adopting a second pooling layer to obtain a second feature map.
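The two down-sampling branches described above (a strided convolution producing the first feature map, a pooling layer producing the second) can be illustrated with minimal single-channel NumPy implementations. The kernel sizes and the "valid" (no-padding) convention here are illustrative assumptions.

```python
import numpy as np

def max_pool2d(x, k=2, s=2):
    """k x k max pooling with stride s over a single-channel (H, W) map."""
    h = (x.shape[0] - k) // s + 1
    w = (x.shape[1] - k) // s + 1
    return np.array([[x[i*s:i*s+k, j*s:j*s+k].max() for j in range(w)]
                     for i in range(h)])

def strided_conv2d(x, kernel, s=2):
    """'valid' cross-correlation with stride s over a single-channel map."""
    k = kernel.shape[0]
    h = (x.shape[0] - k) // s + 1
    w = (x.shape[1] - k) // s + 1
    return np.array([[(x[i*s:i*s+k, j*s:j*s+k] * kernel).sum() for j in range(w)]
                     for i in range(h)])

x = np.arange(64, dtype=float).reshape(8, 8)
pooled = max_pool2d(x)                               # pooling branch: 8x8 -> 4x4
convolved = strided_conv2d(x, np.ones((3, 3)) / 9)   # conv branch:    8x8 -> 3x3
```

With appropriate padding the two branches produce the same spatial size (as in the fig. 6 example), so their outputs can later be concatenated channel-wise.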
As a possible implementation manner of the embodiment of the present disclosure, the detecting module 750 is configured to: performing depth feature extraction on the target feature map by adopting a depth feature extraction network of the living body detection model to obtain a depth feature map; classifying the depth feature map by adopting a prediction layer in a living body detection model to obtain the classification probability of a target object in an input image; and determining whether the target object is a living body according to the classification probability.
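The final step, turning the prediction layer's output into a live/spoof decision, can be sketched as a two-class softmax followed by a threshold. The logit ordering, function names, and the 0.5 threshold are assumptions of this sketch; the description only states that the decision follows from the classification probability.

```python
import numpy as np

def live_probability(logits):
    """Softmax over two logits ordered [spoof, live]; returns P(live)."""
    z = np.exp(np.asarray(logits, dtype=float) - np.max(logits))
    return float((z / z.sum())[1])

def is_live(logits, threshold=0.5):
    # Threshold of 0.5 is an illustrative assumption.
    return live_probability(logits) >= threshold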
As a possible implementation manner of the embodiment of the present disclosure, the fusion module 730 is configured to: and splicing the first characteristic diagram and the second characteristic diagram to obtain a fused characteristic diagram.
As a possible implementation manner of the embodiment of the present disclosure, the fusion module 730 is configured to splice the first feature map and the second feature map to obtain a spliced image; and inputting the spliced image into the convolution layer to be fused to obtain a fusion characteristic diagram.
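The splice-then-fuse step handled by the fusion module can be sketched in NumPy: concatenate along the channel axis, then mix channels with a fusion convolution. The channel counts are illustrative, and the 1x1 fusion convolution here is a simplification of the 3x3 fusion convolution mentioned in the description.

```python
import numpy as np

rng = np.random.default_rng(0)
first = rng.random((256, 64, 48))    # first feature map  (channels, H, W)
second = rng.random((128, 64, 48))   # second feature map; sizes illustrative

# "concat" layer: splice along the channel dimension.
stitched = np.concatenate([first, second], axis=0)        # (384, 64, 48)

# Fusion as a 1x1 convolution: a weight matrix mixing all input channels
# into 256 output channels at every spatial location.
w_fuse = rng.random((256, stitched.shape[0]))
fused = np.tensordot(w_fuse, stitched, axes=([1], [0]))   # (256, 64, 48)
```

The concatenation requires only that the two maps share spatial dimensions; the fusion convolution then lets the model learn how to weight information from the two down-sampling branches.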
As a possible implementation manner of the embodiment of the present disclosure, the obtaining module 710 is configured to: acquiring a target image to be detected; and carrying out scaling processing on the target image according to the set image size to obtain an input image.
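The scaling performed by the acquisition module, resizing the target image so its long side matches the set size while preserving aspect ratio, can be sketched as follows; the function name and the rounding choice are assumptions of this sketch.

```python
def scaled_size(width, height, long_side=512):
    """Target size after scaling so the longer side equals long_side,
    preserving the aspect ratio."""
    s = long_side / max(width, height)
    return round(width * s), round(height * s)
```

For example, a 1024x768 capture scales to 512x384, matching the input used in the fig. 6 walkthrough.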
The living body detection apparatus of the embodiment of the present disclosure acquires an input image; performs feature extraction on the input image at a first down-sampling rate to obtain a first feature map, and performs feature extraction at a second down-sampling rate to obtain a second feature map; performs feature fusion on the first feature map and the second feature map to obtain a fused feature map; adjusts the size of the fused feature map by using a first pooling layer of the living body detection model to obtain a target feature map conforming to a fixed size; and performs living body detection on the target feature map to determine whether the target object in the input image is a living body. In this way, the face image of the target object does not need to be expanded or resized, so the loss of facial features is avoided; meanwhile, the size of the fused features is adjusted through the first pooling layer of the living body detection model, which normalizes the fused features, avoids feature misalignment, and improves the reliability and robustness of the living body detection result.
In the technical solutions of the present disclosure, the collection, storage, use, processing, transmission, provision, and disclosure of the personal information involved are all performed with the consent of the user, comply with the relevant laws and regulations, and do not violate public order and good morals.
To implement the above embodiments, the present disclosure also provides an electronic device, which may include at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the living body detection method set forth in any of the above embodiments of the present disclosure.
In order to achieve the above embodiments, the present disclosure also provides a non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the living body detection method proposed by any one of the above embodiments of the present disclosure.
In order to implement the above-mentioned embodiments, the present disclosure also provides a computer program product comprising a computer program which, when executed by a processor, implements the living body detection method set forth in any of the above-mentioned embodiments of the present disclosure.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 8 illustrates a schematic block diagram of an example electronic device 800 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 8, the apparatus 800 includes a computing unit 801 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM)802 or a computer program loaded from a storage unit 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data required for the operation of the device 800 can also be stored. The calculation unit 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to bus 804.
A number of components in the device 800 are connected to the I/O interface 805, including: an input unit 806, such as a keyboard, a mouse, or the like; an output unit 807 such as various types of displays, speakers, and the like; a storage unit 808, such as a magnetic disk, optical disk, or the like; and a communication unit 809 such as a network card, modem, wireless communication transceiver, etc. The communication unit 809 allows the device 800 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: being implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), the internet, and blockchain networks.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may also be a server of a distributed system, or a server incorporating a blockchain.
It should be noted that artificial intelligence is a discipline that studies how to make computers simulate certain human thinking processes and intelligent behaviors (such as learning, reasoning, thinking, and planning), and covers both hardware-level and software-level technologies. Artificial intelligence hardware technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, and the like; artificial intelligence software technologies mainly include computer vision, speech recognition, natural language processing, machine learning/deep learning, big data processing, and knowledge graph technologies.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.
Claims (17)
1. A method of in vivo detection comprising:
acquiring an input image;
performing feature extraction on the input image at a first down-sampling rate to obtain a first feature map, and performing feature extraction at a second down-sampling rate to obtain a second feature map;
performing feature fusion on the first feature map and the second feature map to obtain a fused feature map;
adopting a first pooling layer of a living body detection model to carry out size adjustment on the fused feature map so as to obtain a target feature map conforming to a fixed size;
and performing living body detection on the target characteristic diagram to determine whether the target object in the input image is a living body.
2. The method of claim 1, wherein the resizing the fused feature map with the first pooling layer of the living body detection model to obtain a target feature map conforming to a fixed size comprises:
inputting the fusion feature map into a region generation network (RPN) of the living body detection model for region-of-interest prediction so as to determine a plurality of feature subgraphs of regions-of-interest from the fusion feature map;
and inputting the characteristic subgraphs of the multiple interested areas into a pooling layer of the living body detection model for size adjustment to obtain a target characteristic graph of which each interested area conforms to a fixed size.
3. The method of claim 1, wherein the liveness detection model further comprises a first convolutional layer, a second convolutional layer, and a second pooling layer, and the performing feature extraction on the input image at a first down-sampling rate to obtain a first feature map and at a second down-sampling rate to obtain a second feature map comprises:
inputting the input image into the first convolution layer for feature extraction to obtain an intermediate feature map;
performing feature extraction on the intermediate feature map by using the second convolution layer at a first down-sampling rate to obtain a first feature map;
and performing feature extraction on the intermediate feature map at a second down-sampling rate by using the second pooling layer to obtain a second feature map.
4. The method of claim 1, wherein the performing living body detection on the target feature map to determine whether the target object in the input image is a living body comprises:
performing depth feature extraction on the target feature map by adopting a depth feature extraction network of the living body detection model to obtain a depth feature map;
classifying the depth feature map by adopting a prediction layer in the living body detection model to obtain the classification probability of a target object in the input image;
and determining whether the target object is a living body according to the classification probability.
5. The method according to any one of claims 1-4, wherein said feature fusing the first feature map and the second feature map to obtain a fused feature map comprises:
and splicing the first characteristic diagram and the second characteristic diagram to obtain the fused characteristic diagram.
6. The method according to any one of claims 1-4, wherein said feature fusing the first feature map and the second feature map to obtain a fused feature map comprises:
splicing the first characteristic diagram and the second characteristic diagram to obtain a spliced image;
and inputting the spliced image into a convolution layer to be fused to obtain the fusion characteristic diagram.
7. The method of any of claims 1-4, wherein the acquiring an input image comprises:
acquiring a target image to be detected;
and according to the set image size, carrying out scaling processing on the target image to obtain the input image.
8. A living body detection apparatus comprising:
the acquisition module is used for acquiring an input image;
the extraction module is used for performing feature extraction on the input image at a first down-sampling rate to obtain a first feature map and performing feature extraction at a second down-sampling rate to obtain a second feature map;
the fusion module is used for carrying out feature fusion on the first feature map and the second feature map to obtain a fusion feature map;
the adjustment module is used for adjusting the size of the fused feature map by adopting a first pooling layer of the living body detection model to obtain a target feature map conforming to a fixed size;
and the detection module is used for carrying out living body detection on the target characteristic diagram so as to determine whether the target object in the input image is a living body.
9. The apparatus of claim 8, wherein the adjustment module is to:
inputting the fusion feature map into a region generation network (RPN) of the living body detection model for region-of-interest prediction so as to determine a plurality of feature subgraphs of regions-of-interest from the fusion feature map;
and inputting the characteristic subgraphs of the multiple interested areas into a pooling layer of the living body detection model for size adjustment to obtain a target characteristic graph of which each interested area conforms to a fixed size.
10. The apparatus of claim 8, wherein the liveness detection model further comprises a first convolutional layer, a second convolutional layer, and a second pooling layer, the extraction module to:
inputting the input image into the first convolution layer for feature extraction to obtain an intermediate feature map;
performing feature extraction on the intermediate feature map by using the second convolution layer at a first down-sampling rate to obtain a first feature map;
and performing feature extraction on the intermediate feature map at a second down-sampling rate by using the second pooling layer to obtain a second feature map.
11. The apparatus of claim 8, wherein the detection module is to:
performing depth feature extraction on the target feature map by adopting a depth feature extraction network of the living body detection model to obtain a depth feature map;
classifying the depth feature map by adopting a prediction layer in the living body detection model to obtain the classification probability of a target object in the input image;
and determining whether the target object is a living body according to the classification probability.
12. The apparatus of any one of claims 8-11, wherein the fusion module is to:
and splicing the first characteristic diagram and the second characteristic diagram to obtain the fused characteristic diagram.
13. The apparatus of any one of claims 8-11, wherein the fusion module is to:
splicing the first characteristic diagram and the second characteristic diagram to obtain a spliced image;
and inputting the spliced image into a convolution layer to be fused to obtain the fusion characteristic diagram.
14. The apparatus of any one of claims 8-11, wherein the means for obtaining is configured to:
acquiring a target image to be detected;
and according to the set image size, carrying out scaling processing on the target image to obtain the input image.
15. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.
16. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-7.
17. A computer program product comprising a computer program which, when being executed by a processor, carries out the steps of the method according to any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111404822.2A CN114202805A (en) | 2021-11-24 | 2021-11-24 | Living body detection method, living body detection device, electronic apparatus, and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114202805A true CN114202805A (en) | 2022-03-18 |
Family
ID=80648723
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111404822.2A Pending CN114202805A (en) | 2021-11-24 | 2021-11-24 | Living body detection method, living body detection device, electronic apparatus, and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114202805A (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109740686A (en) * | 2019-01-09 | 2019-05-10 | 中南大学 | A kind of deep learning image multiple labeling classification method based on pool area and Fusion Features |
WO2020088029A1 (en) * | 2018-10-29 | 2020-05-07 | 北京三快在线科技有限公司 | Liveness detection method, storage medium, and electronic device |
CN111666901A (en) * | 2020-06-09 | 2020-09-15 | 创新奇智(北京)科技有限公司 | Living body face detection method and device, electronic equipment and storage medium |
WO2020199593A1 (en) * | 2019-04-04 | 2020-10-08 | 平安科技(深圳)有限公司 | Image segmentation model training method and apparatus, image segmentation method and apparatus, and device and medium |
CN112070041A (en) * | 2020-09-14 | 2020-12-11 | 北京印刷学院 | Living body face detection method and device based on CNN deep learning model |
WO2021068322A1 (en) * | 2019-10-10 | 2021-04-15 | 平安科技(深圳)有限公司 | Training method and apparatus for living body detection model, computer device, and storage medium |
CN113033519A (en) * | 2021-05-25 | 2021-06-25 | 腾讯科技(深圳)有限公司 | Living body detection method, estimation network processing method, device and computer equipment |
CN113435408A (en) * | 2021-07-21 | 2021-09-24 | 北京百度网讯科技有限公司 | Face living body detection method and device, electronic equipment and storage medium |
CN113642639A (en) * | 2021-08-12 | 2021-11-12 | 云知声智能科技股份有限公司 | Living body detection method, living body detection device, living body detection apparatus, and storage medium |
CN113657245A (en) * | 2021-08-13 | 2021-11-16 | 亮风台(上海)信息科技有限公司 | Method, device, medium and program product for human face living body detection |
Non-Patent Citations (2)
Title |
---|
ZHANG YUN; LI LAN: "Implementation of a facial feature point recognition algorithm based on cascaded convolutional neural networks", Journal of Lanzhou University of Technology, no. 03, 15 June 2020 (2020-06-15) *
LI LI: "Face liveness detection based on Gabor wavelets and dynamic LBP", Electronics World, no. 01, 15 January 2020 (2020-01-15) *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11436739B2 (en) | Method, apparatus, and storage medium for processing video image | |
US11321593B2 (en) | Method and apparatus for detecting object, method and apparatus for training neural network, and electronic device | |
CN113971751A (en) | Training feature extraction model, and method and device for detecting similar images | |
CN113343826A (en) | Training method of human face living body detection model, human face living body detection method and device | |
CN113378712B (en) | Training method of object detection model, image detection method and device thereof | |
CN112861885A (en) | Image recognition method and device, electronic equipment and storage medium | |
CN113221771A (en) | Living body face recognition method, living body face recognition device, living body face recognition equipment, storage medium and program product | |
CN113177892A (en) | Method, apparatus, medium, and program product for generating image inpainting model | |
CN114120454A (en) | Training method and device of living body detection model, electronic equipment and storage medium | |
CN114333038B (en) | Training method of object recognition model, object recognition method, device and equipment | |
CN116052288A (en) | Living body detection model training method, living body detection device and electronic equipment | |
CN113343997B (en) | Optical character recognition method, device, electronic equipment and storage medium | |
CN114842541A (en) | Model training and face recognition method, device, equipment and storage medium | |
CN114202805A (en) | Living body detection method, living body detection device, electronic apparatus, and storage medium | |
CN115249281A (en) | Image occlusion and model training method, device, equipment and storage medium | |
CN114067394A (en) | Face living body detection method and device, electronic equipment and storage medium | |
CN114119990A (en) | Method, apparatus and computer program product for image feature point matching | |
CN114140320A (en) | Image migration method and training method and device of image migration model | |
CN113903071A (en) | Face recognition method and device, electronic equipment and storage medium | |
CN113032071A (en) | Page element positioning method, page testing method, device, equipment and medium | |
CN113378774A (en) | Gesture recognition method, device, equipment, storage medium and program product | |
CN112070022A (en) | Face image recognition method and device, electronic equipment and computer readable medium | |
CN116128863B (en) | Medical image processing method, device and equipment | |
CN113378773B (en) | Gesture recognition method, gesture recognition device, gesture recognition apparatus, gesture recognition storage medium, and gesture recognition program product | |
CN111311616B (en) | Method and apparatus for segmenting an image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||