CN112329574A - Automatic acquisition method and device applied to cat nose prints - Google Patents

Automatic acquisition method and device applied to cat nose prints

Info

Publication number
CN112329574A
CN112329574A
Authority
CN
China
Prior art keywords
image
nose
clear
basic
noise ratio
Prior art date
Legal status
Pending
Application number
CN202011162222.5A
Other languages
Chinese (zh)
Inventor
徐强
李凌
宋凯旋
喻辉
陈宇桥
Current Assignee
Suzhou Zhongkehuaying Health Technology Co ltd
Suzhou Zhongke Advanced Technology Research Institute Co Ltd
Original Assignee
Suzhou Zhongkehuaying Health Technology Co ltd
Suzhou Zhongke Advanced Technology Research Institute Co Ltd
Priority date
Filing date
Publication date
Application filed by Suzhou Zhongkehuaying Health Technology Co ltd, Suzhou Zhongke Advanced Technology Research Institute Co Ltd filed Critical Suzhou Zhongkehuaying Health Technology Co ltd
Priority to CN202011162222.5A
Publication of CN112329574A
Legal status: Pending

Classifications

    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands (under G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data)
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches (under G06F 18/00 Pattern recognition)
    • G06V 10/267 Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds (under G06V 10/20 Image preprocessing)
    • G06N 3/045 Combinations of networks (under G06N 3/04 Architecture, e.g. interconnection topology)
    • G06N 3/08 Learning methods (under G06N 3/02 Neural networks)

Abstract

The invention relates to a method and device for automatically acquiring cat nose prints. The method and device preprocess an image to be acquired to obtain a basic nose-print image; perform sharpness evaluation on the basic nose-print image to obtain a sharp nose-print image and its sharpness value; compute the texture signal-to-noise ratio (SNR) of the sharp nose-print image to obtain its nose-print texture SNR value; and judge whether the sharpness value and the texture SNR value satisfy a preset first comparison condition. If either the sharpness value or the texture SNR value fails the first comparison condition, information that no nose print was acquired in the image is output; if both values satisfy the condition, the sharp nose-print image is stored in a database.

Description

Automatic acquisition method and device applied to cat nose prints
Technical Field
The invention relates to the technical field of image recognition, in particular to a method and device for automatically acquiring cat nose prints.
Background
With the growing number of pets and people's increasing attachment to them, new identity-authentication scenarios keep emerging; for reasons of safety and hygiene, many large Chinese cities now require identity authentication for pet cats.
To ensure the accuracy of cat identity authentication, a cat's identity can be confirmed by recognizing its nose print. However, nose-print recognition demands high image sharpness, which an ordinary cat photograph rarely provides; because cats are restless, it is difficult to capture a sharp nose-print image during shooting, and the accuracy of subsequent nose-print recognition is easily reduced.
Disclosure of Invention
The embodiments of the invention provide a method and device for automatically acquiring cat nose prints, which at least solve the technical problem of low image sharpness in conventional acquisition approaches.
According to an embodiment of the invention, an automatic acquisition method applied to cat nose prints is provided, comprising the following steps:
receiving a nose-print acquisition request, the request carrying at least an image to be acquired;
preprocessing the image to be acquired to obtain a basic nose-print image;
performing sharpness evaluation on the basic nose-print image to obtain a sharp nose-print image and its sharpness value;
calculating the texture signal-to-noise ratio of the sharp nose-print image to obtain its nose-print texture SNR value;
judging whether the sharpness value and the nose-print texture SNR value satisfy a preset first comparison condition;
if either the sharpness value or the nose-print texture SNR value fails the preset first comparison condition, outputting information that no nose print was acquired in the sharp nose-print image;
if both the sharpness value and the nose-print texture SNR value satisfy the preset first comparison condition, storing the sharp nose-print image in a database.
Further, the step of performing sharpness evaluation on the basic nose-print image comprises:
evaluating the sharpness of the basic nose-print image with a Laplacian gradient function to obtain a sharp nose-print image and its sharpness value.
Further, the step of calculating the texture signal-to-noise ratio of the sharp nose-print image comprises:
dividing the sharp nose-print image into N × N sub-regions, where N is a positive integer;
calculating the signal-to-noise ratio of each sub-region;
summing the signal-to-noise ratios of the N × N sub-regions and averaging to obtain the nose-print texture SNR.
Further, the step of preprocessing the image to be acquired comprises:
inputting the image to be acquired into a trained nose-print segmentation model to obtain a basic segmented image;
judging whether the basic segmented image satisfies a preset second comparison condition;
if the basic segmented image does not satisfy the preset second comparison condition, outputting information that no cat-nose region was found in the basic segmented image;
if the basic segmented image satisfies the preset second comparison condition, taking the basic segmented image as the basic nose-print image.
Further, the method further comprises:
constructing a deep-learning semantic segmentation model;
labelling the cat images in the database to obtain a segmentation data set for training;
inputting the segmentation data set into the deep-learning semantic segmentation model for iterative training to obtain the trained nose-print segmentation model.
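As a minimal, runnable illustration of the iterative-training step, the sketch below fits a toy segmenter whose only parameter is a brightness threshold, chosen by grid search against the labelled masks. The function name, the IoU selection criterion, and the threshold "model" are all stand-ins for the deep semantic segmentation model the patent actually describes.

```python
def train_threshold_segmenter(images, masks, candidates=range(0, 256, 8)):
    """Toy stand-in for training the nose-print segmentation model:
    grid-search a brightness threshold that best reproduces the
    labelled nose-region masks, scored by intersection-over-union."""
    def iou(pred, truth):
        inter = sum(p and t for rp, rt in zip(pred, truth)
                    for p, t in zip(rp, rt))
        union = sum(p or t for rp, rt in zip(pred, truth)
                    for p, t in zip(rp, rt))
        return inter / union if union else 1.0

    def segment(img, thr):
        # pixel-wise "segmentation": 1 inside the predicted region
        return [[1 if px >= thr else 0 for px in row] for row in img]

    best_thr, best_score = None, -1.0
    for thr in candidates:               # the "iterative training operation"
        score = sum(iou(segment(img, thr), m)
                    for img, m in zip(images, masks)) / len(images)
        if score > best_score:
            best_thr, best_score = thr, score
    return best_thr, best_score
```

The same loop structure (iterate over the training set, score predictions against labels, keep the best parameters) is what a deep-learning training run performs, with gradient descent replacing the grid search.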
According to another embodiment of the invention, an automatic acquisition device applied to cat nose prints is provided, comprising:
a request-receiving module for receiving a nose-print acquisition request carrying at least an image to be acquired;
an image-preprocessing module for preprocessing the image to be acquired to obtain a basic nose-print image;
an image-sharpness module for evaluating the sharpness of the basic nose-print image to obtain a sharp nose-print image and its sharpness value;
an SNR-calculation module for computing the texture signal-to-noise ratio of the sharp nose-print image to obtain its nose-print texture SNR value;
a first-comparison module for judging whether the sharpness value and the nose-print texture SNR value satisfy a preset first comparison condition;
a first-information-output module for outputting information that no nose print was acquired in the sharp nose-print image when either the sharpness value or the texture SNR value fails the preset first comparison condition;
an image-storage module for storing the sharp nose-print image in a database when both the sharpness value and the texture SNR value satisfy the preset first comparison condition.
Further, the image-sharpness module evaluates the sharpness of the basic nose-print image with a Laplacian gradient function to obtain a sharp nose-print image and its sharpness value.
Further, the SNR-calculation module comprises:
a region-division unit for dividing the sharp nose-print image into N × N sub-regions, where N is a positive integer;
a sub-region SNR unit for calculating the signal-to-noise ratio of each sub-region;
a texture-SNR unit for summing the signal-to-noise ratios of the N × N sub-regions and averaging to obtain the nose-print texture SNR.
Further, the image-preprocessing module comprises:
an image-segmentation unit for inputting the image to be acquired into the trained nose-print segmentation model to obtain a basic segmented image;
a second-comparison unit for judging whether the basic segmented image satisfies a preset second comparison condition;
a second-information-output unit for outputting information that no cat-nose region was found when the basic segmented image does not satisfy the preset second comparison condition;
a basic-image unit for taking the basic segmented image as the basic nose-print image when the preset second comparison condition is satisfied.
Further, the device further comprises:
a model-construction module for constructing a deep-learning semantic segmentation model;
a data-set module for labelling the cat images in the database to obtain a segmentation data set for training;
a model-training module for iteratively training the deep-learning semantic segmentation model on the segmentation data set to obtain the trained nose-print segmentation model.
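The modules above can be sketched as a single device object. All names and signatures below are illustrative, since the patent does not prescribe an implementation; each module becomes a callable injected into the device.

```python
class NosePrintAcquisitionDevice:
    """Skeleton of the acquisition device: one callable per module."""
    def __init__(self, preprocess, evaluate_sharpness, texture_snr,
                 sharpness_threshold, snr_threshold, database):
        self.preprocess = preprocess                   # image-preprocessing module
        self.evaluate_sharpness = evaluate_sharpness   # image-sharpness module
        self.texture_snr = texture_snr                 # SNR-calculation module
        self.sharpness_threshold = sharpness_threshold
        self.snr_threshold = snr_threshold
        self.database = database                       # image-storage module

    def handle_request(self, image):
        """Request-receiving module: run the full acquisition pipeline."""
        base = self.preprocess(image)
        if base is None:                               # second comparison condition failed
            return "no cat nose region found"
        sharp_image, sharpness = self.evaluate_sharpness(base)
        snr = self.texture_snr(sharp_image)
        # first comparison condition
        if sharpness > self.sharpness_threshold and snr > self.snr_threshold:
            self.database.append(sharp_image)
            return "nose print acquired"
        return "no nose print acquired"
```

Injecting the modules keeps the device deployable in different modes (on-device or device-cloud) by swapping implementations without changing the pipeline.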
According to the automatic acquisition method and device applied to cat nose prints, preprocessing the image to be acquired quickly and accurately yields a basic nose-print image containing the cat-nose region; sharpness evaluation of the basic nose-print image then yields a sharp nose-print image with a clear nose print and an accurate sharpness value; the texture-SNR calculation quickly and accurately yields the nose-print texture SNR, suppressing noise present in the sharp nose-print image and further safeguarding the quality of the nose print; finally, the sharpness value and texture SNR are checked against the conditions, and the sharp, high-definition, high-texture-quality nose-print images that meet them are stored in the database.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a flow chart of the automatic acquisition method applied to cat nose prints according to the invention;
FIG. 2 is a flow chart of the image SNR calculation in the method;
FIG. 3 is a flow chart of the image preprocessing in the method;
FIG. 4 is a flow chart of training the nose-print segmentation model in the method;
FIG. 5 is a block diagram of the automatic acquisition device applied to cat nose prints according to the invention;
FIG. 6 is a block diagram of the SNR-calculation module of the device;
FIG. 7 is a block diagram of the image-preprocessing module of the device;
FIG. 8 is a block diagram of the model-training modules of the device;
FIG. 9 is a schematic diagram of the image region division used in the method;
FIG. 10 illustrates the effect of nose-region segmentation and texture analysis in the method.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
According to an embodiment of the invention, an automatic acquisition method applied to cat nose prints is provided; referring to FIG. 1, it comprises the following steps:
S1: receiving a nose-print acquisition request, the request carrying at least an image to be acquired.
In this embodiment, deep networks can mine the latent features of an image and excel at image classification, detection and segmentation. The method therefore uses a deep-learning network to semantically segment the cat image and accurately extract the region containing the cat's nose, then quantifies the sharpness and analyses the texture of the nose-print region in the segmented image; the resulting quantified values are used to decide whether the nose print meets the requirements, so that high-definition, high-texture-quality nose-print images are acquired and the cat's nose print is captured accurately.
The nose-print acquisition request is an operation request entered by a user according to the specific acquisition scenario and the acquisition operation required; it carries at least the image to be acquired, i.e. the image data supplied in that scenario, so that the image can subsequently be analysed and the nose-print data in it accurately extracted.
Specifically, a nose-print acquisition request entered by the user at a client is received; since it carries at least an image to be acquired, the image can subsequently be analysed to obtain a high-definition, high-texture-quality nose-print image and thus an accurate acquisition of the cat's nose print.
S2: preprocessing the image to be acquired to obtain a basic nose-print image.
In this embodiment, to support the subsequent processing and the acquisition of a high-definition cat nose-print image, the image must be confirmed to contain the cat's nose; the image to be acquired is therefore preprocessed to obtain a basic nose-print image of the region containing the cat's nose.
The preprocessing may include rotation, resizing, brightness adjustment and nose-print segmentation, or other image-processing operations chosen according to the application; no limitation is imposed here.
Specifically, operations such as rotation, resizing, brightness adjustment and nose-print segmentation are applied to the image to be acquired to obtain a basic nose-print image of the region containing the cat's nose.
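A minimal sketch of two of the named preprocessing operations, brightness adjustment and resizing, on a plain 2-D list image (the nose-print segmentation step would be performed by the trained segmentation model). Function names and the nearest-neighbour choice are illustrative, not prescribed by the patent.

```python
def adjust_brightness(img, gain):
    """Scale pixel intensities by `gain`, clamping to the 0-255 range."""
    return [[min(255, max(0, int(px * gain))) for px in row] for row in img]

def resize_nearest(img, new_h, new_w):
    """Nearest-neighbour resize of a 2-D grayscale image."""
    h, w = len(img), len(img[0])
    return [[img[i * h // new_h][j * w // new_w] for j in range(new_w)]
            for i in range(new_h)]
```

In practice these steps normalise the input so that the segmentation model and the later sharpness and SNR measures see images of consistent size and exposure.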
S3: evaluating the sharpness of the basic nose-print image to obtain a sharp nose-print image and its sharpness value.
In this embodiment, sharpness evaluation quantifies a sharpness index of the basic nose-print image that can be used to assess how sharp the image is: the higher the value, the better focused and sharper the image; the lower the value, the more blurred. To ensure that a high-definition cat nose-print image can be collected, and thus a high-quality, high-definition nose print, the basic nose-print image obtained in step S2 is evaluated so that its sharpness is quantified.
Specifically, the sharpness of the basic nose-print image may be quantified with an energy-gradient method, a Laplacian gradient function, or another algorithm such as the variance method; no specific limitation is imposed here. This yields a sharp nose-print image, sharper than the basic one, together with its sharpness value, which can be evaluated further later so that a high-quality, high-definition cat nose print is acquired.
S4: calculating the texture signal-to-noise ratio of the sharp nose-print image to obtain its nose-print texture SNR value.
In this embodiment, the texture-SNR calculation removes noise introduced during imaging, transmission or storage (such as Gaussian noise, salt-and-pepper noise, and additive or multiplicative noise), reducing its effect on edges, texture and resolution and ensuring a cat nose-print image of high texture quality.
In general, the higher the signal-to-noise ratio, the higher the texture quality of the image, and the lower the ratio, the lower the quality. The nose-print texture may be quantified with the peak signal-to-noise ratio (PSNR), a regional signal-to-noise ratio (SNR), or another measure such as the mean-square error (MSE); no specific limitation is imposed here. This yields the nose-print texture SNR value of the sharp nose-print image, which can be evaluated subsequently so that a cat nose print of high texture quality and high definition is acquired.
S5: judging whether the sharpness value and the nose-print texture SNR value satisfy a preset first comparison condition.
In this embodiment, the first comparison condition measures whether the sharpness value and the nose-print texture SNR value reach the standard for judging the image sharp and its nose-print texture good; it may be set according to the application and is not specifically limited here.
Specifically, assume the first comparison condition is that the sharpness value exceeds a preset sharpness threshold and the nose-print texture SNR value exceeds a preset texture threshold. The sharpness value and texture SNR value obtained in steps S3 and S4 are then compared with these thresholds to decide whether the preset first comparison condition is met.
S51: if either the sharpness value or the nose-print texture SNR value fails the preset first comparison condition, outputting information that no nose print was acquired in the sharp nose-print image.
Specifically, if the comparison in step S5 finds the sharpness value at or below the preset sharpness threshold, or the texture SNR value at or below the texture threshold, then the first comparison condition is not met: the corresponding image is not sharp enough, or its nose-print texture quality is too low, to be useful in subsequent nose-print recognition. Information that no nose print was acquired can then be output to the client for the user.
S52: if both the sharpness value and the nose-print texture SNR value satisfy the preset first comparison condition, storing the sharp nose-print image in a database.
Specifically, if the comparison in step S5 finds the sharpness value above the preset sharpness threshold and the texture SNR value above the texture threshold, then the first comparison condition is met: the sharp nose-print image reaches the standard for sharpness and its nose-print texture reaches the standard for quality, so it can be used in subsequent nose-print recognition. The sharp nose-print image is then stored in a database for later use; at the same time, information that a high-definition, high-texture-quality cat nose print was collected can be output to the client for the user to use or manage.
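The decision logic of steps S5, S51 and S52 can be sketched as follows. The threshold values below are placeholders, since the patent leaves the first comparison condition to be set per application.

```python
def check_first_condition(sharpness, texture_snr,
                          sharpness_threshold=100.0, snr_threshold=0.6):
    """Apply the preset first comparison condition (S5): both values
    must exceed their thresholds.  Threshold defaults are illustrative."""
    if sharpness > sharpness_threshold and texture_snr > snr_threshold:
        return "store sharp nose-print image in database"          # S52
    return "no nose print acquired in the sharp nose-print image"  # S51
```

Note that a single failing value is enough to reject the image, matching S51's "either ... fails" wording.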
According to the automatic acquisition method applied to cat nose prints, preprocessing the image to be acquired quickly and accurately yields a basic nose-print image containing the cat-nose region; sharpness evaluation of the basic nose-print image then yields a sharp nose-print image with a clear nose print and an accurate sharpness value; the texture-SNR calculation quickly and accurately yields the nose-print texture SNR, suppressing noise present in the sharp nose-print image and further safeguarding the quality of the nose print; finally, the sharpness value and texture SNR are checked, and the sharp, high-definition, high-texture-quality images that meet the conditions are stored in the database. The method has low computational complexity and is simple, practical and inexpensive.
It should be noted that when the method is applied, no auxiliary equipment such as a nose-print sticker or a high-definition camera is needed, so costs are saved; the embodiment also supports various deployment modes, such as on-device deployment and device-cloud collaborative deployment, making the technique easy to port.
In a preferred embodiment, step S3 of evaluating the sharpness of the basic nose-print image to obtain a sharp nose-print image and its sharpness value comprises:
evaluating the sharpness of the basic nose-print image with a Laplacian gradient function to obtain a sharp nose-print image and its sharpness value.
Specifically, a good image-sharpness evaluation function should be unimodal, unbiased and sensitive. This embodiment therefore applies the Laplacian gradient function: the second-order derivatives in x and y are computed with Sobel operators and summed, and the summed value is taken as the sharpness value of the sharp nose-print image.
The Laplacian gradient function is:

D(I) = Σx Σy | ∂²I(x, y)/∂x² + ∂²I(x, y)/∂y² |

where I(x, y) is the two-dimensional image and (x, y) indexes its pixels.
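A pure-Python sketch of this sharpness measure, using the common 4-neighbour discrete Laplacian in place of the Sobel-based second derivatives the text mentions (an assumption made for brevity; both approximate the same sum of second derivatives):

```python
def laplacian_sharpness(img):
    """Sum of absolute Laplacian responses over the image interior,
    D(I) = sum |d2I/dx2 + d2I/dy2|, using the 4-neighbour discrete
    Laplacian.  Sharper (better focused) images score higher."""
    h, w = len(img), len(img[0])
    total = 0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y][x - 1] + img[y][x + 1] +
                   img[y - 1][x] + img[y + 1][x] - 4 * img[y][x])
            total += abs(lap)
    return total
```

A flat (defocused) region contributes nothing, while strong edges contribute large responses, which is why the value rises with focus quality.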
In a preferred technical solution, referring to fig. 2, the step S4 of performing texture signal-to-noise ratio calculation on the clear nose print image to obtain the nose print texture signal-to-noise ratio of the clear nose print image includes:
s21: and carrying out region division processing operation on the clear nose pattern image to obtain N × N sub-regions, wherein N is a positive integer greater than 0.
Specifically, after the basic nose pattern image is processed into the clear nose pattern image, the image is presented as a binary texture image, and the ideal image should have a clear texture and alternate black and white, without a large number of nose pattern images with pure black or pure white patterns, as shown in fig. 9, in order to accurately obtain the texture signal-to-noise ratio of the clear nose pattern image, the embodiment performs nose pattern texture signal-to-noise ratio calculation on the clear nose pattern image by using the SNR.
Further, when calculating the SNR in this embodiment, in order for the texture of the clear nose print image to be evaluated uniformly over each small region, the clear nose print image may first be divided into N × N sub-regions. Here N is taken as 20, that is, the clear nose print image is divided into 20 rows and 20 columns of sub-regions.
S22: performing a signal-to-noise ratio calculation on each sub-region to obtain the sub-region signal-to-noise ratio corresponding to each sub-region.
In this embodiment, since the clear nose print image appears as a binary texture image, each pixel in the image may take one of three values: 0, 1 and -1. The value -1 represents the part outside the mask, which generally appears at the edge of the nose print image; this value is excluded when gathering statistics inside the sub-regions for the texture quality of the nose print. Meanwhile, if all pixel values in a sub-region are -1, that sub-region is excluded from the final whole-image statistics.
Further, for each sub-region, the numbers of pixels with values 0 and 1 inside the sub-region are counted and denoted N0 and N1, respectively, and used to calculate the sub-region signal-to-noise ratio, which on a binary texture image reflects the balance between black and white pixels. If the texture in the sub-region is clear and black and white alternate, the obtained sub-region SNR value is close to 1; if there is little texture in the region and it appears on the image as a large black or white patch, the obtained sub-region SNR value is low.
The sub-region SNR value may be expressed, for example, as the balance ratio of the two pixel counts:

SNR = min(N0, N1) / max(N0, N1)

so that a sub-region with evenly alternating black and white pixels yields a value near 1, while a nearly pure black or pure white sub-region yields a value near 0.
S23: summing the sub-region signal-to-noise ratios of the N × N sub-regions and averaging to obtain the nose print texture signal-to-noise ratio.
Specifically, in order to guarantee the quality of the nose print texture obtained from the clear nose print image, this embodiment excludes sub-regions in which all pixel values are -1, then sums the sub-region SNR values of the remaining sub-regions among the 400 (20 rows by 20 columns) and averages them, yielding the nose print texture signal-to-noise ratio value of the clear nose print image.
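The three sub-steps S21–S23 (region division, per-region SNR, averaging) can be sketched as follows. This is a minimal illustration: the min/max pixel-count ratio stands in for the patent's sub-region SNR expression, and the handling of -1 (outside-mask) pixels follows the description above:

```python
import numpy as np

def texture_snr(binary: np.ndarray, n: int = 20) -> float:
    """Average sub-region SNR of a binarized nose-print image.

    `binary` holds texture values 0/1 and -1 for pixels outside the
    nose mask. The image is divided into n x n blocks; each block's
    SNR is taken as min(N0, N1) / max(N0, N1), which is 1 for evenly
    alternating black/white texture and near 0 for a nearly pure
    black or white patch. Blocks made up entirely of -1 pixels
    (fully outside the mask) are excluded from the average.
    """
    h, w = binary.shape
    snrs = []
    for i in range(n):
        for j in range(n):
            block = binary[i * h // n:(i + 1) * h // n,
                           j * w // n:(j + 1) * w // n]
            n0 = int(np.count_nonzero(block == 0))
            n1 = int(np.count_nonzero(block == 1))
            if n0 == 0 and n1 == 0:  # block lies entirely outside the mask
                continue
            snrs.append(min(n0, n1) / max(n0, n1))
    return float(np.mean(snrs)) if snrs else 0.0
```

A perfectly alternating checkerboard scores 1.0; a solid black or solid white image scores 0.0.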
In a preferred technical solution, referring to fig. 3, the step S2 of performing a preprocessing operation on the image to be acquired to obtain a processed basic nose print image includes:
S31: inputting the image to be acquired into the trained nose print segmentation model for segmentation to obtain a basic segmentation image.
In this embodiment, the trained nose print segmentation model is a high-accuracy model obtained by iteratively training a pre-constructed deep learning semantic segmentation model on a segmentation data set prepared from a large number of cat nose print images.
Specifically, in this embodiment, the image to be acquired is input into the trained nose print segmentation model to perform the nose print segmentation operation; the output basic segmentation image of the region possibly containing the cat nose can then undergo further nose print confirmation in subsequent steps, ensuring the later acquisition of a nose print image with high definition and high texture quality.
S32: judging whether the basic segmentation image meets a preset second comparison condition.
In this embodiment, the second comparison condition is the criterion for judging whether the basic segmentation image contains a valid cat nose region; it may be set according to actual application requirements and is not specifically limited here.
Specifically, assume that the second comparison condition requires that the aspect ratio of the cat nose in the basic segmentation image is normal, for example within a preset range of 0.5 to 2, and that the segmented region occupies a sufficient proportion of the screen, for example not lower than one tenth. Judging whether the basic segmentation image meets the preset second comparison condition then means comparing the aspect ratio of the cat nose in the basic segmentation image obtained in step S31, and the proportion of its segmented region relative to the screen, against the preset aspect ratio range and the preset proportion, respectively.
S321: if the basic segmentation image does not meet the preset second comparison condition, outputting information that no cat nose region is found in the basic segmentation image.
Specifically, according to the comparison in step S32, when the aspect ratio of the cat nose in the basic segmentation image falls outside the preset range, or the proportion of the segmented region relative to the screen is less than or equal to the preset proportion, the basic segmentation image does not meet the preset second comparison condition. That is, the image contains no cat nose region usable for subsequent nose print processing, so information that no cat nose region is found in the basic segmentation image may be output to the client for the user to use or manage.
S322: if the basic segmentation image meets the preset second comparison condition, taking the basic segmentation image as the basic nose print image.
Specifically, according to the comparison in step S32, when the aspect ratio of the cat nose in the basic segmentation image lies within the preset range and the proportion of the segmented region relative to the screen is greater than the preset proportion, the basic segmentation image meets the preset second comparison condition. That is, the image contains a cat nose region usable for subsequent nose print processing, so the basic segmentation image can be saved as the basic nose print image; meanwhile, this embodiment may also output information that a cat nose region has been found in the basic segmentation image to the client for the user to use or manage.
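The second comparison condition of steps S32–S322 can be sketched as follows; the helper name, parameters and default thresholds (aspect ratio within 0.5–2, area proportion of at least one tenth) are taken from the example values above and are otherwise illustrative:

```python
def meets_second_condition(nose_w: float, nose_h: float,
                           seg_area: float, screen_area: float,
                           ratio_range: tuple = (0.5, 2.0),
                           min_area_frac: float = 0.1) -> bool:
    """True only when BOTH checks pass: the cat-nose bounding box has
    a normal aspect ratio (within ratio_range) and the segmented
    region covers at least min_area_frac of the screen."""
    aspect = nose_w / nose_h
    ratio_ok = ratio_range[0] < aspect < ratio_range[1]
    area_ok = (seg_area / screen_area) >= min_area_frac
    return ratio_ok and area_ok
```

When the function returns False, the caller reports that no cat nose region was found; when True, the segmentation image is kept as the basic nose print image.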
In a preferred embodiment, referring to fig. 4, before step S3, the method further includes:
S41: constructing a deep learning semantic segmentation model.
Specifically, in order to obtain a model with high nose print segmentation accuracy, the deep learning semantic segmentation model constructed in this embodiment adopts a DeepLabV3+ network with a MobileNetV2 backbone; other networks may also be adopted according to actual application requirements, which is not specifically limited here, as long as the nose print segmentation accuracy can be guaranteed to a certain extent.
S42: labeling the cat images in the database to obtain a segmentation data set for training.
Specifically, in this embodiment, the segmentation data set consists of a nose print segmentation training set and a nose print segmentation test set that are processed and prepared in advance.

Specifically, this embodiment may index the database according to the data types required by the actual application, so as to quickly and accurately retrieve the preprocessed nose print segmentation training set and test set for subsequent training.
S43: inputting the segmentation data set into the deep learning semantic segmentation model for iterative training to obtain the trained nose print segmentation model.
In this embodiment, the trained nose print segmentation model is a model that recognizes the nose print features of a cat image and performs nose print segmentation based on the extracted features.
Specifically, this embodiment performs extensive iterative training of the deep learning semantic segmentation model using the preprocessed nose print segmentation training set and test set, thereby obtaining a nose print segmentation model able to identify the nose print features of cat images and perform segmentation according to the extracted features.
Example 2
According to another embodiment of the present invention, there is provided an automatic acquisition device for application to cat's nose print, see fig. 5, comprising:
A request receiving module 501, configured to receive a nose print acquisition request, where the request carries at least an image to be acquired;
In this embodiment, since deep learning networks with sufficient depth can mine the latent features of an image and excel at image classification, detection and segmentation, the device in this embodiment uses a deep learning network to perform semantic segmentation of the cat image and accurately obtain the image containing the cat nose region. It then performs sharpness quantization and texture analysis on the nose print region of the segmented image to obtain quantized values, which are used to judge whether the nose print meets the requirements, so that a nose print image of high definition and high texture quality is collected and the cat nose print is accurately acquired.
The nose print acquisition request is an operation request input by a user according to the actual nose print acquisition scenario and the acquisition operation to be carried out. The request carries at least an image to be acquired, which is the image data provided in the actual acquisition scenario, so that it can subsequently be analyzed and processed and the nose print data in the image accurately acquired.
Specifically, a nose print acquisition request input by a user at the client is received; since the request carries at least an image to be acquired, subsequent data analysis of that image can yield a nose print image with high definition and high texture quality, realizing accurate acquisition of the cat nose print.
The image preprocessing module 502 is configured to perform a preprocessing operation on the image to be acquired to obtain a processed basic nose print image;
In this embodiment, in order to support subsequent processing and the acquisition of a high-definition cat nose print image, the image must be guaranteed to contain a cat nose region; the preprocessing operation is therefore performed on the image to be acquired to obtain a processed basic nose print image containing the cat nose region.
The preprocessing operation may specifically include image preprocessing means such as rotation, resizing, brightness adjustment and nose print segmentation; other image processing means may also be used according to actual application requirements, which is not limited here.

Specifically, rotation, resizing, brightness adjustment, nose print segmentation and other preprocessing are applied to the image to be acquired so as to obtain a processed basic nose print image containing the cat nose region.
The image sharpness processing module 503 is configured to perform sharpness evaluation processing on the basic nose pattern image to obtain a processed sharp nose pattern image and a sharpness value of the sharp nose pattern image;
In this embodiment, the sharpness evaluation processing quantizes a sharpness index of the basic nose print image, which can be used to evaluate how well the image is focused: the higher the sharpness value, the better focused and clearer the image; the lower the value, the more blurred. Therefore, to ensure that a high-definition cat nose print image can be collected, this embodiment performs the sharpness evaluation processing on the basic nose print image obtained by the image preprocessing module 502, realizing sharpness quantization of the basic nose print image.
Specifically, the sharpness of the basic nose print image may be quantized by an energy gradient method, a Laplacian gradient function, or other algorithms such as a variance method, which is not specifically limited here. A clear nose print image with higher sharpness than the basic nose print image is then obtained together with its sharpness value, so that the sharpness can be further evaluated later and a high-definition, high-quality cat nose print image acquired.
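Of the alternative sharpness measures mentioned (energy gradient, variance method), the variance method is the simplest to sketch; this is an illustrative formulation, not the patent's exact one:

```python
import numpy as np

def variance_sharpness(image: np.ndarray) -> float:
    """Variance-method focus measure: a well-focused image has higher
    gray-level variance than a defocused image of the same scene;
    a perfectly flat image scores 0."""
    img = image.astype(np.float64)
    return float(((img - img.mean()) ** 2).mean())
```

Like the Laplacian measure, this value is meaningful as a comparison between frames of the same scene or against a preset threshold, not as an absolute score.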
A signal-to-noise ratio calculation module 504, configured to perform texture signal-to-noise ratio calculation on the clear nose pattern image to obtain a nose pattern texture signal-to-noise ratio value of the clear nose pattern image;
In this embodiment, the texture signal-to-noise ratio calculation serves to assess and suppress noise interference, such as Gaussian noise, salt-and-pepper noise, additive noise or multiplicative noise introduced during imaging, transmission or storage, reducing its influence on image edges, texture and resolution, and ensuring that a cat nose print image of high texture quality is obtained.
Specifically, it is generally understood that the higher the signal-to-noise ratio, the higher the texture quality of the image, and the lower the signal-to-noise ratio, the lower the texture quality. This embodiment may quantize the nose print texture of the basic nose print image by the peak signal-to-noise ratio (PSNR), a regional signal-to-noise ratio (SNR), or other algorithms such as the mean square error (MSE), which is not specifically limited here. A clear nose print image of higher definition and texture quality relative to the basic nose print image is thereby obtained together with its nose print texture signal-to-noise ratio value, so that the ratio can be further evaluated later and a cat nose print of high texture quality and high definition acquired.
A first comparison condition module 505, configured to determine whether the sharpness value and the nose print texture signal-to-noise ratio value meet a preset first comparison condition;
In this embodiment, the first comparison condition is the criterion for judging whether the sharpness value and the nose print texture signal-to-noise ratio value indicate that the clear nose print image is sharp and its nose print texture is of good quality; it may be set according to actual application requirements and is not specifically limited here.
Specifically, suppose the first comparison condition is that the sharpness value is greater than a preset sharpness threshold and the nose print texture signal-to-noise ratio value is greater than a preset texture threshold. Judging whether the two values meet the preset first comparison condition then means comparing the values obtained by the image sharpness processing module 503 and the signal-to-noise ratio calculation module 504 against the preset sharpness threshold and texture threshold, respectively.
The first information output module 5051 is configured to output information that no nose print has been acquired from the clear nose print image if the sharpness value or the nose print texture signal-to-noise ratio value does not meet the preset first comparison condition;
Specifically, according to the comparison performed by the first comparison condition module 505, when the sharpness value is less than or equal to the preset sharpness threshold, or the texture signal-to-noise ratio value is less than or equal to the texture threshold, the first comparison condition is not met. This means the corresponding clear nose print image is not sharp enough, or its nose print texture quality is not high enough, for use in subsequent nose print recognition; information that no qualified nose print image has been collected may then be output to the client for the user to use or manage.
The image saving module 5052 is configured to save the clear nose print image to the database if the sharpness value and the nose print texture signal-to-noise ratio value both meet the preset first comparison condition.
Specifically, according to the comparison performed by the first comparison condition module 505, when the sharpness value is greater than the preset sharpness threshold and the texture signal-to-noise ratio value is greater than the texture threshold, both conditions are satisfied and the first comparison condition is met. This means the corresponding clear nose print image is of qualified or high definition and its nose print texture is of qualified or high quality, so the image can be used in subsequent nose print recognition and saved to the database for the user; meanwhile, information that a cat nose print of high definition and high texture quality has been collected may be output to the client for the user to use or manage.
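The threshold check performed by modules 505, 5051 and 5052 can be sketched as a single predicate; the threshold values are application-specific and the names here are illustrative:

```python
def passes_first_condition(sharpness: float, snr: float,
                           sharp_thresh: float, snr_thresh: float) -> bool:
    """Keep the clear nose-print image only when the sharpness value
    AND the texture signal-to-noise ratio both exceed their preset
    thresholds; otherwise report that no nose print was acquired."""
    return sharpness > sharp_thresh and snr > snr_thresh
```

A True result routes the image to the database (module 5052); a False result routes to the first information output module 5051.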
By preprocessing the image to be acquired, the automatic acquisition device applied to cat nose prints in the embodiment of the invention can quickly and accurately obtain a basic nose print image containing the cat nose region. Further, by evaluating the sharpness of the basic nose print image, a clear nose print image with distinct nose print texture and an accurate sharpness value are obtained. The texture signal-to-noise ratio of the clear nose print image is then calculated, quickly and accurately yielding the nose print texture signal-to-noise ratio while suppressing the influence of noise in the image and further guaranteeing nose print quality. Finally, the sharpness value and texture signal-to-noise ratio are judged, and qualifying clear nose print images of high definition and high texture quality are saved to the database. The method has low computational complexity and is simple, practical and low in cost.
It should be noted that the automatic acquisition device applied to cat nose prints in the embodiment of the invention requires no auxiliary equipment, such as a nose print sticker or a dedicated high-definition camera, which saves cost. The embodiment also supports various deployment modes, such as device-side standalone deployment and device-cloud collaborative deployment, for convenient porting of the technology.
In a preferred embodiment, the image sharpness processing module 503 includes:
Performing sharpness evaluation processing on the basic nose print image by using a Laplacian gradient function to obtain a processed clear nose print image and a sharpness value of the clear nose print image.
Specifically, a good image sharpness evaluation function should be unimodal, unbiased and sensitive. This embodiment therefore performs the sharpness evaluation processing on the basic nose print image by using the Laplacian gradient function, obtaining a clear nose print image of higher sharpness than the basic nose print image together with its sharpness value. The Laplacian gradient function may be implemented by computing the second-order derivatives in x and y with Sobel operators, summing the derivative responses over the image, and taking the summed value as the sharpness value of the clear nose print image.
Wherein, the Laplacian gradient function may be expressed as:

D(I) = Σx Σy | ∂²I/∂x² + ∂²I/∂y² |

where I(x, y) is the two-dimensional image and (x, y) are the image pixel coordinates.
In a preferred embodiment, referring to fig. 6, the snr calculating module 504 includes:
the region dividing unit 601 is configured to perform region dividing processing operation on the clear nose pattern image to obtain N × N sub-regions, where N is a positive integer greater than 0;
Specifically, after the basic nose print image is processed into the clear nose print image, the image appears as a binary texture image. An ideal image should show clear, alternating black-and-white texture without large pure-black or pure-white patches, as shown in fig. 9. Therefore, in order to accurately obtain the texture signal-to-noise ratio of the clear nose print image, this embodiment performs a nose print texture signal-to-noise ratio (SNR) calculation on the clear nose print image.
Further, when calculating the SNR in this embodiment, in order for the texture of the clear nose print image to be evaluated uniformly over each small region, the clear nose print image may first be divided into N × N sub-regions. Here N is taken as 20, that is, the clear nose print image is divided into 20 rows and 20 columns of sub-regions.
A region signal-to-noise ratio calculation unit 602, configured to perform a signal-to-noise ratio calculation on each sub-region to obtain the sub-region signal-to-noise ratio corresponding to each sub-region;
In this embodiment, since the clear nose print image appears as a binary texture image, each pixel in the image may take one of three values: 0, 1 and -1. The value -1 represents the part outside the mask, which generally appears at the edge of the nose print image; this value is excluded when gathering statistics inside the sub-regions for the texture quality of the nose print. Meanwhile, if all pixel values in a sub-region are -1, that sub-region is excluded from the final whole-image statistics.
Further, referring to fig. 10, for each sub-region, the numbers of pixels with values 0 and 1 inside the sub-region are counted and denoted N0 and N1, respectively, and used to calculate the sub-region signal-to-noise ratio, which on a binary texture image reflects the balance between black and white pixels. If the texture in the sub-region is clear and black and white alternate, the obtained sub-region SNR value is close to 1; if there is little texture in the region and it appears on the image as a large black or white patch, the obtained sub-region SNR value is low.
The sub-region SNR value may be expressed, for example, as the balance ratio of the two pixel counts:

SNR = min(N0, N1) / max(N0, N1)

so that a sub-region with evenly alternating black and white pixels yields a value near 1, while a nearly pure black or pure white sub-region yields a value near 0.
A texture signal-to-noise ratio calculation unit 603, configured to sum and average the signal-to-noise ratios of the N × N sub-regions to obtain the nose print texture signal-to-noise ratio.
Specifically, in order to guarantee the quality of the nose print texture obtained from the clear nose print image, this embodiment excludes sub-regions in which all pixel values are -1, then sums the sub-region SNR values of the remaining sub-regions among the 400 (20 rows by 20 columns) and averages them, yielding the nose print texture signal-to-noise ratio value of the clear nose print image.
In a preferred embodiment, referring to fig. 7, the image preprocessing module 502 includes:
the image segmentation unit 701 is used for inputting an image to be acquired into a trained nose print segmentation model for segmentation processing to obtain a basic segmentation image;
In this embodiment, the trained nose print segmentation model is a high-accuracy model obtained by iteratively training a pre-constructed deep learning semantic segmentation model on a segmentation data set prepared from a large number of cat nose print images.
Specifically, in this embodiment, the image to be acquired is input into the trained nose print segmentation model to perform the nose print segmentation operation; the output basic segmentation image of the region possibly containing the cat nose can then undergo further nose print confirmation in subsequent steps, ensuring the later acquisition of a nose print image with high definition and high texture quality.
A second comparison condition determining unit 702, configured to determine whether the basic segmentation image meets a preset second comparison condition;
In this embodiment, the second comparison condition is the criterion for judging whether the basic segmentation image contains a valid cat nose region; it may be set according to actual application requirements and is not specifically limited here.
Specifically, assume that the second comparison condition requires that the aspect ratio of the cat nose in the basic segmentation image is normal, for example within a preset range of 0.5 to 2, and that the segmented region occupies a sufficient proportion of the screen, for example not lower than one tenth. Judging whether the basic segmentation image meets the preset second comparison condition then means comparing the aspect ratio of the cat nose in the basic segmentation image obtained by the image segmentation unit 701, and the proportion of its segmented region relative to the screen, against the preset aspect ratio range and the preset proportion, respectively.
A second information output unit 7021, configured to output information that no cat nose region is found in the basic segmentation image if the basic segmentation image does not meet the preset second comparison condition;
Specifically, according to the comparison performed by the second comparison condition determination unit 702, when the aspect ratio of the cat nose in the basic segmentation image falls outside the preset range, or the proportion of the segmented region relative to the screen is less than or equal to the preset proportion, the basic segmentation image does not meet the preset second comparison condition. That is, the image contains no cat nose region usable for subsequent nose print processing, so information that no cat nose region is found in the basic segmentation image may be output to the client for the user to use or manage.
A basic image obtaining unit 7022, configured to take the basic segmentation image as the basic nose print image if the basic segmentation image meets the preset second comparison condition.
Specifically, based on the comparison performed by the second comparison condition determination unit 702, if both conditions hold — that is, the aspect ratio of the cat nose in the basic segmented image is within the preset aspect-ratio range and the proportion of the segmented region relative to the frame is larger than the preset proportion — the basic segmented image meets the preset second comparison condition. In other words, the cat nose has a normal aspect ratio and the segmented region occupies a sufficient share of the frame, so the image contains a cat nose region usable for subsequent nose print processing, and the basic segmented image can be saved as the basic nose print image. Meanwhile, this embodiment may also output information indicating that a cat nose region was found in the basic segmented image to the client for the user to use or manage.
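As an illustration of the check just described, a minimal sketch of the second comparison condition might look like the following. The function name and signature are assumptions for illustration, not part of the original disclosure; the 0.5–2 aspect-ratio range and the one-tenth frame share are the example values given in the text, and the "not lower than" reading of the frame-share test is assumed.

```python
def meets_second_condition(nose_w, nose_h, region_area, frame_area,
                           ratio_range=(0.5, 2.0), min_frame_share=0.1):
    """Second comparison condition (illustrative sketch):
    the cat-nose aspect ratio must fall within ratio_range, and the
    segmented region must cover at least min_frame_share of the frame."""
    if nose_h == 0 or frame_area == 0:
        return False
    aspect_ratio = nose_w / nose_h
    frame_share = region_area / frame_area
    return (ratio_range[0] <= aspect_ratio <= ratio_range[1]
            and frame_share >= min_frame_share)
```

A 100×80-pixel nose box covering 20% of the frame passes; an elongated box (ratio above 2) or a region covering only 5% of the frame fails, triggering the "no cat nose region found" output path.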
As a preferred technical solution, referring to fig. 8, the apparatus further includes:
A model construction module 801, configured to construct a deep learning semantic segmentation model;
Specifically, in order to obtain a model with high nose print segmentation accuracy, the deep learning semantic segmentation model constructed in this embodiment adopts a DeepLabV3+ network with a MobileNetV2 backbone; other networks may also be adopted according to actual application requirements, which is not specifically limited here, as long as the nose print segmentation accuracy is ensured to a certain extent.
A data set acquisition module 802, configured to label cat images in a database to obtain a segmentation data set for training;
Specifically, in this embodiment, the segmentation data set consists of a nose print segmentation training set and a nose print segmentation test set that are preprocessed and prepared in advance.
Specifically, this embodiment may build an index in the database according to the data type required by the actual application, so as to quickly and accurately retrieve the preprocessed nose print segmentation training set and test set for subsequent training.
A model training module 803, configured to input the segmentation data set into the deep learning semantic segmentation model for iterative training, so as to obtain a trained nose print segmentation model.
In this embodiment, the trained nose print segmentation model is a model that recognizes the nose print features of a cat image and performs nose print segmentation based on the extracted features.
Specifically, in this embodiment, the deep learning semantic segmentation model is trained iteratively over many passes using the preprocessed nose print segmentation training set and test set, thereby obtaining a nose print segmentation model that can identify the nose print features of cat images and segment the nose print according to the extracted features.
Compared with existing nose print acquisition methods, the automatic acquisition method and device applied to the cat nose print have the following advantages:
1. the method segments the image to be acquired with a nose print segmentation model, performs sharpness evaluation on the resulting basic nose print image, performs texture signal-to-noise ratio analysis on the resulting clear nose print image, and then judges whether the sharpness value and the nose print texture signal-to-noise ratio value of the clear nose print image meet the threshold requirements, thereby obtaining a nose print image with high sharpness and high texture quality; the method has low computational complexity and is simple, practical, and low in cost;
2. no auxiliary equipment such as nose print patches or high-definition cameras is needed during use, which saves cost;
3. multiple deployment modes are supported, such as standalone on-device deployment and device-cloud collaborative deployment, enabling convenient porting of the technology.
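The sharpness evaluation and texture signal-to-noise ratio analysis summarized in point 1 can be sketched in pure Python as follows. The variance-of-Laplacian measure and the per-tile mean/standard-deviation SNR are common readings of the "Laplacian gradient function" and "texture signal-to-noise ratio" named in the text, not formulas taken from the original; all function names and thresholds are illustrative.

```python
import statistics

def laplacian_sharpness(img):
    """Variance of the 4-neighbour Laplacian response over the interior
    pixels of a grayscale image (list of lists) -- one common sharpness
    measure assumed here for the 'Laplacian gradient function'."""
    h, w = len(img), len(img[0])
    responses = [
        img[y - 1][x] + img[y + 1][x] + img[y][x - 1] + img[y][x + 1] - 4 * img[y][x]
        for y in range(1, h - 1) for x in range(1, w - 1)
    ]
    return statistics.pvariance(responses)

def texture_snr(img, n=2):
    """Average per-tile SNR over an n x n grid of sub-regions; SNR per tile
    is taken as mean/std, since the excerpt does not fix the exact formula."""
    h, w = len(img), len(img[0])
    th, tw = h // n, w // n
    snrs = []
    for i in range(n):
        for j in range(n):
            tile = [img[y][x] for y in range(i * th, (i + 1) * th)
                              for x in range(j * tw, (j + 1) * tw)]
            mu = statistics.mean(tile)
            sigma = statistics.pstdev(tile)
            snrs.append(mu / sigma if sigma else float("inf"))
    return sum(snrs) / len(snrs)

def accept_nose_print(img, sharp_thresh, snr_thresh, n=2):
    """First comparison condition: both metrics must clear their thresholds
    before the clear nose print image is stored in the database."""
    return laplacian_sharpness(img) > sharp_thresh and texture_snr(img, n) > snr_thresh
```

A perfectly flat image has zero Laplacian variance and is rejected as blurry, while a textured image yields a positive sharpness value; only an image clearing both thresholds would be saved.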
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described system embodiments are merely illustrative, and for example, a division of a unit may be a logical division, and an actual implementation may have another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that it is obvious to those skilled in the art that various modifications and improvements can be made without departing from the principle of the present invention, and these modifications and improvements should also be considered as the protection scope of the present invention.

Claims (10)

1. An automatic acquisition method applied to a cat nose print, characterized by comprising the following steps:
receiving a nose print acquisition request, wherein the nose print acquisition request at least carries an image to be acquired;
preprocessing the image to be acquired to obtain a processed basic nose print image;
performing sharpness evaluation processing on the basic nose print image to obtain a processed clear nose print image and a sharpness value of the clear nose print image;
performing texture signal-to-noise ratio calculation on the clear nose print image to obtain a nose print texture signal-to-noise ratio value of the clear nose print image;
judging whether the sharpness value and the nose print texture signal-to-noise ratio value meet a preset first comparison condition;
if the sharpness value or the nose print texture signal-to-noise ratio value does not meet the preset first comparison condition, outputting information that no nose print is acquired from the clear nose print image;
and if the sharpness value and the nose print texture signal-to-noise ratio value both meet the preset first comparison condition, storing the clear nose print image in a database.
2. The automatic acquisition method applied to the cat nose print according to claim 1, wherein the step of performing sharpness evaluation processing on the basic nose print image to obtain the processed clear nose print image and the sharpness value of the clear nose print image comprises:
performing sharpness evaluation processing on the basic nose print image by using a Laplacian gradient function to obtain the processed clear nose print image and the sharpness value of the clear nose print image.
3. The automatic acquisition method applied to the cat nose print according to claim 1, wherein the step of performing texture signal-to-noise ratio calculation on the clear nose print image to obtain the nose print texture signal-to-noise ratio value of the clear nose print image comprises:
performing a region division operation on the clear nose print image to obtain N × N sub-regions, wherein N is a positive integer;
calculating the signal-to-noise ratio of each sub-region to obtain the signal-to-noise ratio value corresponding to that sub-region;
and summing the signal-to-noise ratio values of the N × N sub-regions and averaging them to obtain the nose print texture signal-to-noise ratio value.
4. The method according to claim 1, wherein the step of preprocessing the image to be acquired to obtain the processed basic nose print image comprises:
inputting the image to be acquired into a trained nose print segmentation model for segmentation processing to obtain a basic segmented image;
judging whether the basic segmented image meets a preset second comparison condition;
if the basic segmented image does not meet the preset second comparison condition, outputting information that no cat nose region is found in the basic segmented image;
and if the basic segmented image meets the preset second comparison condition, taking the basic segmented image as the basic nose print image.
5. The method according to claim 4, wherein before the step of preprocessing the image to be acquired to obtain the processed basic nose print image, the method further comprises:
constructing a deep learning semantic segmentation model;
labeling cat images in a database to obtain a segmentation data set for training;
and inputting the segmentation data set into the deep learning semantic segmentation model for iterative training to obtain the trained nose print segmentation model.
6. An automatic acquisition device applied to a cat nose print, comprising:
a request receiving module, configured to receive a nose print acquisition request, wherein the nose print acquisition request at least carries an image to be acquired;
an image preprocessing module, configured to preprocess the image to be acquired to obtain a processed basic nose print image;
an image sharpness processing module, configured to perform sharpness evaluation on the basic nose print image to obtain a processed clear nose print image and a sharpness value of the clear nose print image;
a signal-to-noise ratio calculation module, configured to perform texture signal-to-noise ratio calculation on the clear nose print image to obtain a nose print texture signal-to-noise ratio value of the clear nose print image;
a first comparison condition module, configured to judge whether the sharpness value and the nose print texture signal-to-noise ratio value meet a preset first comparison condition;
a first information output module, configured to output information that no nose print is acquired from the clear nose print image if the sharpness value or the nose print texture signal-to-noise ratio value does not meet the preset first comparison condition;
and an image storage module, configured to store the clear nose print image in a database if the sharpness value and the nose print texture signal-to-noise ratio value both meet the preset first comparison condition.
7. The automatic acquisition device applied to the cat nose print according to claim 6, wherein the image sharpness processing module is configured to:
perform sharpness evaluation processing on the basic nose print image by using a Laplacian gradient function to obtain the processed clear nose print image and the sharpness value of the clear nose print image.
8. The automatic acquisition device applied to the cat nose print according to claim 6, wherein the signal-to-noise ratio calculation module comprises:
a region division unit, configured to perform a region division operation on the clear nose print image to obtain N × N sub-regions, wherein N is a positive integer;
a regional signal-to-noise ratio calculation unit, configured to calculate the signal-to-noise ratio of each sub-region to obtain the signal-to-noise ratio value corresponding to that sub-region;
and a texture signal-to-noise ratio calculation unit, configured to sum the signal-to-noise ratio values of the N × N sub-regions and average them to obtain the nose print texture signal-to-noise ratio value.
9. The automatic acquisition device applied to the cat nose print according to claim 6, wherein the image preprocessing module comprises:
an image segmentation unit, configured to input the image to be acquired into a trained nose print segmentation model for segmentation processing to obtain a basic segmented image;
a second comparison condition judgment unit, configured to judge whether the basic segmented image meets a preset second comparison condition;
a second information output unit, configured to output information that no cat nose region is found in the basic segmented image if the basic segmented image does not meet the preset second comparison condition;
and a basic image obtaining unit, configured to take the basic segmented image as the basic nose print image if the basic segmented image meets the preset second comparison condition.
10. The automatic acquisition device applied to the cat nose print according to claim 9, wherein the device further comprises:
a model construction module, configured to construct a deep learning semantic segmentation model;
a data set acquisition module, configured to label cat images in a database to obtain a segmentation data set for training;
and a model training module, configured to input the segmentation data set into the deep learning semantic segmentation model for iterative training to obtain the trained nose print segmentation model.
CN202011162222.5A 2020-10-27 2020-10-27 Automatic acquisition method and device applied to cat nose line Pending CN112329574A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011162222.5A CN112329574A (en) 2020-10-27 2020-10-27 Automatic acquisition method and device applied to cat nose line


Publications (1)

Publication Number Publication Date
CN112329574A true CN112329574A (en) 2021-02-05

Family

ID=74312071


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20140138103A * 2014-10-31 2014-12-03 iSciLab Corporation Apparatus of Animal Recognition System using nose patterns
CN109727194A (en) * 2018-11-20 2019-05-07 广东智媒云图科技股份有限公司 A kind of method, electronic equipment and storage medium obtaining pet noseprint
CN110458901A (en) * 2019-06-26 2019-11-15 西安电子科技大学 A kind of optimum design method of overall importance based on the photo electric imaging system for calculating imaging
CN111079701A (en) * 2019-12-30 2020-04-28 河南中原大数据研究院有限公司 Face anti-counterfeiting method based on image quality
CN111383429A (en) * 2020-03-04 2020-07-07 西安咏圣达电子科技有限公司 Method, system, device and storage medium for detecting dress of workers in construction site


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LIU Chengmiao et al., "Signal-to-Noise Ratio Analysis of a Dual-Channel Shearing Interference Hyperspectral Imaging Method", Acta Optica Sinica, vol. 38, no. 5, pages 0511001-1 *
XU Jieping, "Research on Blind Deconvolution Technology for Space Target Images", China Master's Theses Full-text Database, Information Science and Technology, no. 12, pages 138-361 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination