CN117079339B - Animal iris recognition method, prediction model training method, electronic equipment and medium - Google Patents

Animal iris recognition method, prediction model training method, electronic equipment and medium

Info

Publication number: CN117079339B
Authority: CN (China)
Prior art keywords: picture, iris, outer boundary, prediction
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Application number: CN202311040576.6A
Other languages: Chinese (zh)
Other versions: CN117079339A
Inventors: 张小亮, 王明魁, 李茂林, 魏衍召, 杨占金
Current Assignee: Beijing Superred Technology Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Original Assignee: Beijing Superred Technology Co Ltd
Application filed by Beijing Superred Technology Co Ltd
Priority to CN202311040576.6A

Abstract

The application discloses an animal iris recognition method, a prediction model training method, electronic equipment and a medium. The prediction model training method comprises the following steps: marking the animal eye picture to obtain a marked picture; inputting the animal eye picture to a downsampling module and outputting a plurality of downsampled pictures; inputting each downsampled picture into an upsampling module and outputting a plurality of upsampled pictures and an iris outer boundary mask picture; performing transposed convolution on the upsampled picture with the largest feature scale to obtain at least one first prediction picture; inputting the iris outer boundary mask picture and each first prediction picture to a prediction module and outputting a prediction result picture; determining a loss value of the prediction result picture; and adjusting parameters of the prediction model based on the loss value to obtain a trained prediction model, wherein the prediction model is used for predicting the iris outer boundary of an animal eye picture. With the method and the device, an iris region with a clear and complete outer boundary can be determined, and the accuracy of animal iris recognition is improved.

Description

Animal iris recognition method, prediction model training method, electronic equipment and medium
Technical Field
The application relates to the technical field of iris recognition, in particular to an animal iris recognition method, a prediction model training method, electronic equipment and a medium.
Background
The animal iris recognition technology is a biological recognition technology for carrying out identity recognition based on animal iris features. The iris recognition technology has higher accuracy and reliability due to the unique structural characteristics and stability of the animal iris. The animal iris recognition technology has wide application value in the fields of animal management, vaccine tracking, provenance tracing and the like.
The animal iris picture generally comprises a sclera area, an iris area and a pupil area, and the key task of iris recognition technology is to extract texture features from the iris area and compare them with texture features in a database so as to identify individual animals. When monitoring equipment acquires an animal iris image in practice, the animal's eyeball may be large or the animal may be poorly cooperative, so blinking, strabismus and similar conditions occur easily and the iris area in the image is blocked by the eyelid, with the result that the outer boundary of the animal iris area cannot be completely determined and the accuracy of animal iris recognition is affected.
Disclosure of Invention
The application provides an animal iris recognition method, a prediction model training method, electronic equipment and a medium, which can determine an iris region with a clear and complete outer boundary and improve the accuracy of animal iris recognition.
In a first aspect of the present application, the present application provides a prediction model training method, where the prediction model includes a downsampling module, an upsampling module, and a prediction module, and the prediction model training method includes:
acquiring an animal eye picture, and performing marking processing on the animal eye picture to obtain a marked picture;
inputting the animal eye picture to the downsampling module, and outputting a plurality of downsampled pictures with feature scales from high to low; inputting each downsampled picture into the upsampling module, and outputting a plurality of upsampled pictures with feature scales from low to high and an iris outer boundary mask picture, wherein the feature scale of the iris outer boundary mask picture is the feature scale of the iris outer boundary;
performing transposed convolution on the upsampled picture with the largest feature scale to obtain at least one first prediction picture;
inputting the iris outer boundary mask picture and each first prediction picture to the prediction module, and outputting a prediction result picture; determining a loss value of the predicted result picture according to the marked picture;
and adjusting parameters of the prediction model based on the loss value to obtain a trained prediction model, wherein the trained prediction model is used for predicting an iris outer boundary heat map of an animal eye picture, and the iris outer boundary heat map comprises the iris outer boundary of the animal eye picture.
By adopting the technical scheme, when the iris region in the animal eye picture is blocked, the outer boundary of the iris region in the iris outer boundary mask picture is incomplete. The iris outer boundary mask picture is input to the prediction module, which outputs a prediction result picture. A loss function is designed for the iris outer boundary mask picture, the loss value of the corresponding prediction result picture is calculated, and the prediction model is trained through this loss value, improving the model's ability to predict from animal eye pictures whose iris region is blocked, so that an iris region with a clear and complete outer boundary can be predicted from the animal eye picture. Further, the outer boundary of the iris region can be located more accurately according to the iris outer boundary heat map output by the prediction model.
Optionally, the marking pictures include an iris region marking picture, a pupil region marking picture, an iris outer boundary mask marking picture, an iris outer boundary thermal marking picture and a strabismus tag, and performing the marking processing on the animal eye picture to obtain the marked picture includes:

marking an iris region in the animal eye picture to obtain the iris region marking picture;

marking a pupil area in the animal eye picture to obtain the pupil area marking picture;

marking a transition region between the iris region and the sclera region in the animal eye picture to obtain the iris outer boundary mask marking picture;

obtaining the iris outer boundary thermal marking picture according to the iris region marking picture;

and obtaining the strabismus tag according to the iris outer boundary in the iris region marking picture.
By adopting the technical scheme, the marked pictures form supervision signals in the training of the prediction model and guide the model toward the correct update direction during training. The difference between the prediction results and the marked pictures reflects the training effect of the model; by minimizing the loss value, an optimized prediction model is finally obtained, which indirectly improves its prediction capability.
Optionally, obtaining the iris outer boundary thermal marking picture according to the iris region marking picture includes: taking the central point of the iris region in the iris region marking picture as an origin, emitting a ray in any direction that intersects the iris outer boundary curve, and rotating the ray a plurality of times by a preset angle until it completes one full rotation, to obtain a plurality of first intersection points where the ray intersects the iris outer boundary curve;
normalizing pixel points between the first intersection point and the second intersection point to obtain first pixel points, and normalizing pixel points between the first intersection point and the third intersection point to obtain second pixel points, wherein the second intersection point is a point which is a preset distance away from the first intersection point along the positive direction of the ray, and the third intersection point is a point which is a preset distance away from the first intersection point along the negative direction of the ray;
and generating the iris outer boundary thermal mark picture according to each first pixel point and each second pixel point.
By adopting the technical scheme, the iris boundary area is mapped with normalized pixel distances, which compensates for the loss of iris edge contour information in animal eye pictures where the eye is occluded or the iris boundary is unclear. By rotating a ray to obtain the intersection points with the outer boundary curve and highlighting the pixels around the iris outer boundary through normalization, a clear iris outer boundary thermal marking picture can be output directly. It intuitively reflects the distribution of the iris outer boundary, makes the extracted outer boundary features more distinct, and provides accurate position information for subsequent positioning.
Optionally, obtaining the strabismus tag according to the iris outer boundary in the iris region marking picture includes: generating an elliptic equation of the iris outer boundary in the iris region marking picture to obtain the major axis and the minor axis of the elliptic equation; and obtaining the strabismus tag according to the ratio between the major axis and the minor axis.

By adopting the technical scheme, fitting the elliptic equation and computing the ratio between its major and minor axes yields a label L representing the degree of strabismus. Its value lies in the range 0-1 and quantitatively represents the severity of strabismus; rather than giving only a classification result, it also quantitatively reflects the eyeball shape.
Optionally, the marking pictures comprise iris region marking pictures, pupil region marking pictures, iris outer boundary mask marking pictures, iris outer boundary thermal marking pictures and strabismus labels,
The prediction result picture comprises an iris region prediction result picture, a pupil region prediction result picture, an iris outer boundary mask prediction result picture, an iris outer boundary thermal prediction result picture and a strabismus prediction result. The prediction module comprises a strabismus prediction unit, and the strabismus prediction result is the output obtained by inputting the downsampled picture with the smallest feature scale into the strabismus prediction unit. Determining the loss value of the prediction result picture according to the marked picture includes the following steps:
determining a first loss value according to the iris region marking picture and the iris region prediction result picture;

determining a second loss value according to the pupil area marking picture and the pupil area prediction result picture;

determining a strabismus coefficient and a third loss value according to the strabismus label and the strabismus prediction result;

determining a fourth loss value according to the iris outer boundary thermal mark picture and the iris outer boundary thermal prediction result picture;

and determining a fifth loss value according to the iris outer boundary mask mark picture and the iris outer boundary mask prediction result picture.
By adopting the technical scheme, in the process of training the prediction model, a plurality of loss functions act together, and the model training is supervised from different aspects such as regional semantics, shape constraint, strabismus degree and the like, so that the accuracy of the prediction model on animal eye picture prediction is improved.
Optionally, the determining the fifth loss value according to the iris outer boundary mask mark picture and the iris outer boundary mask prediction result picture includes:
obtaining a shape loss value and a similarity loss value according to the iris outer boundary mask mark picture and the iris outer boundary mask prediction result picture;

and summing the product of the strabismus coefficient and the shape loss value with the similarity loss value to obtain the fifth loss value.

By adopting the technical scheme, calculating the shape loss value constrains the shape of the iris outer boundary and handles blurred boundaries, while calculating the similarity loss value constrains the similarity between the prediction result and the marked picture. By summing the product of the strabismus coefficient and the shape loss value with the similarity loss value, the prediction model pays more attention to the accuracy of iris region shape prediction under strabismus conditions.
Optionally, the obtaining a shape loss value according to the iris outer boundary mask mark picture and the iris outer boundary mask prediction result picture includes:
extracting a first iris outer boundary in the iris outer boundary mask prediction result picture and a second iris outer boundary in the iris outer boundary mask mark picture;

calculating the Euclidean distance between the first iris outer boundary and the second iris outer boundary to obtain an iris outer boundary position offset picture; comparing the iris outer boundary mask mark picture with the iris outer boundary mask prediction result picture to obtain an iris outer boundary mask probability picture;

carrying out a Hadamard product operation on the iris outer boundary position offset picture and the iris outer boundary mask probability picture to obtain a first shape loss value;

comparing the ratio of the perimeter to the area of the first iris outer boundary with the ratio of the perimeter to the area of the second iris outer boundary to obtain a second shape loss value;

and calculating the sum of the first shape loss value and the second shape loss value to obtain the shape loss value.
By adopting the technical scheme, calculating the Euclidean distance between the first iris outer boundary and the second iris outer boundary accurately quantifies the spatial deviation between the predicted boundary and the actual boundary. On this basis, the deviation information is combined with the prediction result to obtain a first shape loss value. Then the shape similarity is quantified by comparing the perimeter-to-area ratios of the two iris outer boundaries, which is constructed as a second shape loss value. Finally, the first and second shape loss values are integrated into a shape loss value that constrains both the boundary position and the boundary shape.
Optionally, inputting each downsampled picture to the upsampling module and outputting a plurality of upsampled pictures with feature scales from low to high and an iris outer boundary mask picture, where the feature scale of the iris outer boundary mask picture is the feature scale of the iris outer boundary, includes:

taking the downsampled picture with the smallest feature scale as a target downsampled picture, upsampling the target downsampled picture through the upsampling module, and splicing the upsampled target downsampled picture with the downsampled picture of the corresponding feature scale to obtain a first upsampled picture;

taking the first upsampled picture as the target downsampled picture and repeating the upsampling and splicing steps until an upsampled picture with the same feature scale as the animal eye picture is obtained, so as to obtain a plurality of upsampled pictures with feature scales from low to high;

taking the downsampled picture with the smallest feature scale as a target downsampled picture, upsampling the target downsampled picture through the upsampling module, and splicing the upsampled target downsampled picture with the downsampled picture of the corresponding feature scale to obtain a second upsampled picture;

and taking the second upsampled picture as the target downsampled picture and repeating the upsampling and splicing steps until an upsampled picture with the same feature scale as the iris outer boundary is obtained, which is the iris outer boundary mask picture.
By adopting the technical scheme, feature information at different scales coexists in the animal eye picture, and these multi-scale features are very valuable for the identification and prediction performed by the prediction model. However, during the downsampling process described above, some detail features of the animal eye picture are lost, so the features at each scale of the original image must be gradually recovered by upsampling. Moreover, directly upsampling the image of the smallest feature scale at a large ratio may cause information loss and reduce the quality of the finally recovered features. Therefore, a gradual, progressive upsampling scheme can be adopted: each step upsamples by a reasonable ratio and then splices with the original image of the corresponding scale, so that the features of different scales are restored smoothly, the upsampled features are fused with the original feature information, and the robustness of the prediction model is improved.
In a second aspect of the application there is provided a method of iris recognition of an animal comprising:
Inputting an obtained eye picture of an animal to be identified into a prediction model, and outputting a first iris region and an iris outer boundary heat map of the eye picture, wherein the prediction model is a prediction model obtained after training by the prediction model training method; acquiring an iris outer boundary in the iris outer boundary heat map;
and taking the iris outer boundary as a boundary, and removing pixel values of the first iris region outside the iris outer boundary to obtain a target iris region.
By adopting the technical scheme, the first iris region and the iris outer boundary heat map of the eye picture of the animal to be identified are directly predicted by adopting the prediction model. The iris outer boundary heat map comprises rich boundary detail information, and more accurate iris outer boundaries are further extracted according to the iris outer boundary heat map. And finally, taking the outer iris boundary as a reference, and removing pixel values of the first iris region outside the outer iris boundary to obtain a clear and complete target iris region.
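For illustration, a minimal post-processing sketch follows, assuming the model's outputs are NumPy arrays and using OpenCV contours; the binarization threshold and the largest-contour heuristic are assumptions, not taken from the application:

```python
import cv2
import numpy as np

def extract_target_iris(iris_mask: np.ndarray, boundary_heatmap: np.ndarray,
                        thresh: float = 0.5) -> np.ndarray:
    """Keep only iris pixels inside the predicted outer boundary (a sketch).

    iris_mask        -- H x W binary map of the first iris region
    boundary_heatmap -- H x W iris outer boundary heat map in [0, 1]
    """
    # Binarize the heat map and take the largest contour as the iris outer boundary.
    boundary = (boundary_heatmap >= thresh).astype(np.uint8)
    contours, _ = cv2.findContours(boundary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    outer = max(contours, key=cv2.contourArea)
    # Fill the boundary to get the "inside" region, then remove pixel values
    # of the first iris region that fall outside the iris outer boundary.
    inside = np.zeros_like(boundary)
    cv2.drawContours(inside, [outer], -1, color=1, thickness=-1)
    return iris_mask * inside
```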
In a third aspect the application provides a computer storage medium storing a plurality of instructions adapted to be loaded by a processor and to perform the above-described method steps.
In a fourth aspect of the application there is provided an electronic device comprising: a processor, a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the above-mentioned method steps.
In summary, one or more technical solutions provided in the embodiments of the present application at least have the following technical effects or advantages:
1. By adopting the embodiment of the application, when the iris region in the animal eye picture is blocked, the outer boundary of the iris region in the iris outer boundary mask picture is incomplete. The iris outer boundary mask picture is input to the prediction module, which outputs a prediction result picture. A loss function is designed for the iris outer boundary mask picture, the loss value of the corresponding prediction result picture is calculated, and the prediction model is trained through this loss value, improving the model's ability to predict from animal eye pictures whose iris region is blocked, so that an iris region with a clear and complete outer boundary can be predicted from the animal eye picture. Further, the outer boundary of the iris region can be located more accurately according to the iris outer boundary heat map output by the prediction model.
2. The prediction model directly predicts the first iris region and the iris outer boundary heat map of the eye picture of the animal to be identified. The iris outer boundary heat map contains rich boundary detail information, from which a more accurate iris outer boundary is extracted. Finally, taking the iris outer boundary as a reference, pixel values of the first iris region outside the iris outer boundary are removed to obtain a clear and complete target iris region.
Drawings
FIG. 1 is a schematic illustration of an implementation environment provided by an embodiment of the present application;
FIG. 2 is a schematic diagram of a system architecture of a prediction model according to an embodiment of the present application;
FIG. 3 is a schematic flow chart of a predictive model training method according to an embodiment of the application;
FIG. 4 is a schematic diagram showing interaction of each module in a predictive model training process according to an embodiment of the present application;
fig. 5 is a schematic diagram of generating an iris outer boundary thermal mark picture according to an embodiment of the present application;
Fig. 6 is a schematic flow chart of an animal iris recognition method according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Reference numerals illustrate: 700. an electronic device; 701. a processor; 702. a memory; 703. a user interface; 704. a network interface; 705. a communication bus.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions in the present specification, the technical solutions in the embodiments of the present specification will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present specification, and it is obvious that the described embodiments are only some embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Artificial Intelligence (AI) is the theory, method, technique, and application system that uses a digital computer or a machine controlled by a digital computer to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use the knowledge to obtain optimal results. Artificial intelligence is a comprehensive technology of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a way similar to human intelligence. Artificial intelligence is thus the study of the design principles and implementation methods of various intelligent machines, enabling machines to perceive, reason and make decisions.
Artificial intelligence is a comprehensive discipline covering a wide range of fields, including both hardware-level and software-level technologies. Artificial intelligence infrastructure technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems, mechatronics, and the like. Artificial intelligence software technologies mainly include computer vision, speech processing, natural language processing, machine learning/deep learning, automatic driving, intelligent traffic and other directions.
Machine Learning (ML) is a multi-field interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory and other disciplines. It specially studies how a computer simulates or implements human learning behavior to acquire new knowledge or skills, and reorganizes existing knowledge structures to continuously improve its own performance. Machine learning is the core of artificial intelligence and the fundamental way to endow computers with intelligence; it is applied throughout all areas of artificial intelligence. Machine learning and deep learning typically include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning and teaching-based learning.
Animal iris recognition is a biological recognition technology that uses digital image processing to analyze and compare the line patterns of the iris in an animal's eye to confirm the animal's identity. The iris is a ring-shaped structure in the animal eye; its line pattern is unique, and the number and shape of the lines vary with the species and the individual. The technology therefore has high accuracy, is difficult to tamper with, and can be applied to animal identity authentication, data acquisition, behavior monitoring and other fields.
The animal eye picture generally comprises a sclera area, an iris area and a pupil area, and the core task of iris recognition technology is to extract texture features from the iris area and compare them with texture features in a database so as to identify individual animals. When monitoring equipment acquires an animal eye picture in practice, the animal's eyeball may be large or the animal may be poorly cooperative, so blinking, strabismus and similar conditions occur easily and the iris area in the picture is blocked by the eyelid. As a result, the outer boundary of the animal iris area cannot be completely determined, which affects the accuracy of animal iris recognition.
Based on the above, the embodiment of the application provides an animal iris recognition method, a prediction model training method, electronic equipment and a medium, wherein an iris region and an iris outer boundary in an animal eye picture are output through a prediction model, and the iris outer boundary is combined with the iris region, so that a clear and complete iris region is obtained, and the accuracy of animal iris recognition can be further improved.
Referring to fig. 1, fig. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present application, where the implementation environment includes a monitoring device and an executing device, where the monitoring device and the executing device are connected through a communication network, and a computer program for recognizing an animal iris is stored in the executing device.
As shown in fig. 1, in an exemplary embodiment, when an animal to be identified is within a monitoring range of a monitoring device, the monitoring device may acquire an eye picture of the animal to be identified and transmit the eye picture to an execution device. The execution device inputs the eye picture into a prediction model, and the prediction model outputs an iris region in the eye picture.
Further, the execution device carries out post-processing on the iris region to obtain an iris region with clear and complete outer boundary. And further extracting texture information in the iris region through a computer program of animal iris recognition, and comparing the texture information of the animal to be recognized with texture information in a database to determine the identity of the animal to be recognized.
Among them, the execution devices include, but are not limited to, servers, computer devices, Android system devices, iOS devices developed by Apple Inc., personal computers (PCs), World Wide Web (Web) devices, Virtual Reality (VR) devices, Augmented Reality (AR) devices, and the like.
With the above description of the application scenario, in order to enable a person skilled in the art to better understand the implementation principle of the animal iris recognition method, the system architecture of the prediction model is described first, and referring to fig. 2, fig. 2 shows a schematic diagram of the system architecture of the prediction model according to an embodiment of the present application. As shown in fig. 2, the prediction model includes a downsampling module, an upsampling module, and a prediction module.
The prediction model can be used for image segmentation of iris areas, sclera areas, pupils and other areas in the eye picture of the animal to be identified. In the image segmentation task, the downsampling module may be regarded as an encoder, extracts semantic information and global context of the image by downsampling, and the upsampling module may be regarded as a decoder, gradually restores spatial resolution by upsampling to obtain a fine segmentation result. The up-sampling module and the down-sampling module may constitute an encoder-decoder structure, focusing on the extraction of semantic information and the recovery of detail information, respectively.
Specifically, the downsampling module comprises a plurality of downsampling units, and each downsampling unit comprises a convolution layer, normalization and an activation function; the corresponding up-sampling module comprises a plurality of up-sampling units, each up-sampling unit comprises an up-sampling layer, a splicing layer, a convolution layer, normalization and an activation function, and the up-sampling units are spliced with pictures of corresponding feature scales in the process of up-sampling the images so as to transmit spatial information.
Further, after the upsampling unit in the upsampling module completes the stitching of the pictures with different feature scales, the stitched pictures can be respectively output to the prediction unit in the prediction module, so that the prediction unit outputs a plurality of segmentation prediction results of different areas in the eye image.
The system architecture of the prediction model and the interaction between the modules and the units under the system architecture are described above. It should be noted that the number of units set by each module in the above prediction model is merely exemplary, and in practical application, it is necessary to adaptively set the number according to different image segmentation tasks. When the prediction model is actually applied to the iris recognition of animals, parameters of each module in the prediction model are adjusted according to the loss value of the output prediction result of each prediction unit so as to train the prediction model and improve the accuracy of the iris region segmentation of the animals.
On the basis of the above embodiment, a training process of the predictive model will be described below. Further, please refer to fig. 3, a flowchart of a predictive model training method is specifically provided, which can be implemented by a computer program, a single-chip microcomputer, or the execution device. For easy understanding, please refer to fig. 4 on the basis of fig. 3, fig. 4 is a schematic diagram illustrating interactions between modules in a predictive model training process according to an embodiment of the present application. Specifically, the method for training the prediction model may include steps 301 to 307, where the steps are as follows:
step 301, obtaining an animal eye picture, and performing marking treatment on the animal eye picture to obtain a marked picture.
The animal eye pictures are training pictures for training the prediction model. To improve training accuracy, the animal eye pictures may form a picture set of a plurality of training pictures containing eye pictures of animals of different species, breeds and ages. In addition, because animals readily squint, blink and so on while their eye pictures are taken, the training picture set also includes pictures in which the boundary of the iris area is incomplete. Constructing a comprehensive and diversified animal eye training picture set improves the prediction model's ability to accurately identify and predict eye images under various conditions.
Further, in the process of training the prediction model, a standard result must be compared with the prediction result output by the model so that a loss value between them can be calculated, and the parameters of the model are adjusted through the loss value to improve prediction accuracy. To obtain the standard results required for training, the animal eye images must be marked in advance: the characteristic boundaries of the animal eye in each image are marked manually or by an image segmentation algorithm to obtain marked images, which provide the standard results required during training.
In one possible embodiment, the marker pictures may include an iris region marker picture, a pupil region marker picture, an iris outer boundary mask marker picture, an iris outer boundary thermal marker picture, and a strabismus tag.
The iris region marking picture is a binarized picture obtained by marking an iris region in an animal eye picture, wherein the iris region is white, and the rest regions are black.
The pupil area marking picture is a binarized picture obtained by marking pupil areas in animal eye pictures, wherein only the pupil areas are white, and the rest areas are black.
The iris outer boundary mask mark picture is obtained by marking the transition region between the iris region and the sclera region in the animal eye picture: the transition region may be marked in gray, the iris region inside the transition region may be filled in white, and the non-iris region outside the transition region may be filled in black, where the non-iris region may include the pupil, eyelashes, light spots and other regions.
The iris outer boundary thermal mark picture is obtained by marking the iris outer boundary region in a thermodynamic diagram form, and emphasizes the iris outer boundary information.
The strabismus label is a numerical label designed according to the strabismus degree of the animal eye picture, and a specific numerical value represents the strabismus degree in the animal eye picture.
It should be noted that, although the iris region mark picture and the iris outer boundary mask mark picture are similar concepts, both being binary images representing the iris region, their purposes differ: the iris region mark picture is used for representing and extracting features within the iris region, while the iris outer boundary mask mark picture is used for determining the iris region.
For example, after the marking of the animal eye pictures is completed, the prediction model can be trained sequentially on eye pictures of different breeds within the same species to improve its suitability for predicting eye pictures of animals of that species. The specific training process is described in steps 302 to 307 below.
Step 302: and inputting the animal eye pictures into a downsampling module, and outputting a plurality of downsampled pictures with characteristic scales from high to low.
Illustratively, as shown in FIG. 4, the downsampling module comprises 4 downsampling units connected in series. The animal eye picture is input as an original input picture to the downsampling unit 1. The downsampling unit 1 downsamples an original picture and outputs the downsampled picture 1. The downsampled picture 1 is passed as input to the next downsampling unit 2, and the downsampling unit 2 continues downsampling the downsampled picture 1, outputting the downsampled picture 2. And so on, the resolution of the picture is reduced and the feature scale is lower and lower after each downsampling unit. And finally, outputting 4 downsampled pictures with characteristic scales from high to low from a downsampling module.
It should be noted that the downsampled pictures with the feature scale from high to low contain information with different granularities, and parameters of each downsampling unit can be adjusted to a proper downsampling proportion, so that information loss is avoided.
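For illustration, a minimal sketch of such a downsampling module in PyTorch follows; the channel widths and the use of stride-2 convolutions, batch normalization and ReLU are assumptions chosen for the sketch, not values specified by the application:

```python
import torch.nn as nn

class DownUnit(nn.Module):
    """One downsampling unit: convolution + normalization + activation."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

class DownsamplingModule(nn.Module):
    """Four chained units; returns four feature maps with scales from high to low."""
    def __init__(self, channels=(3, 32, 64, 128, 256)):
        super().__init__()
        self.units = nn.ModuleList(
            [DownUnit(channels[i], channels[i + 1]) for i in range(4)]
        )

    def forward(self, x):
        features = []
        for unit in self.units:
            x = unit(x)          # resolution halves after every unit
            features.append(x)   # features[0] has the highest feature scale
        return features
```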
Step 303: and inputting each downsampled picture into an upsampling module, outputting a plurality of upsampled pictures with characteristic scales from low to high, and an iris outer boundary mask picture, wherein the characteristic scales of the iris outer boundary mask picture are the characteristic scales of the iris outer boundary.
Specifically, referring to fig. 4, in one possible implementation, the downsampled picture with the smallest feature scale (downsampled picture 4) is taken as the target picture, upsampled, and then spliced with the downsampled picture of the corresponding feature scale (downsampled picture 3) to obtain upsampled picture 1. Then, taking upsampled picture 1 as the new target picture, the upsampling and splicing operations are repeated, outputting a plurality of upsampled pictures with feature scales from low to high.
Further, an iris outer boundary mask picture can be understood as an up-sampled picture focusing on the iris outer boundary mask features, which is mainly used for determining the iris region in an animal eye picture. Therefore, the feature scale of the iris outer boundary mask picture should be the same as the feature scale of the iris outer boundary in the eye picture, and in the embodiment of the application, the feature scale of the iris outer boundary is a preset value.
In the process of obtaining the iris outer boundary mask picture by upsampling the downsampled picture of the minimum feature scale, the feature scale of the iris outer boundary should be used as a stop condition for upsampling. It can be understood that the up-sampling and splicing operations are performed in the above manner until the feature scale of the up-sampled picture is the same as the feature scale of the iris outer boundary, so as to obtain the iris outer boundary mask picture.
It should be noted that, feature information of different scales exists in the animal eye picture at the same time, and these multi-scale features are very valuable for the identification and prediction of the prediction model. But during the downsampling process described above, some of the detail features of the animal's eye picture are lost. It is therefore necessary to gradually recover the various scale features in the original image by means of upsampling.
Second, if the image of the smallest feature scale is directly upsampled on a large scale, information loss may be caused, thereby reducing the quality of the final recovered feature. Therefore, a gradual progressive up-sampling mode can be adopted, a reasonable scale is up-sampled each time, and then the original corresponding scale images are spliced, so that different scale features are restored smoothly, up-sampling features and original feature information are fused, and the robustness of a prediction model is improved.
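As an illustrative sketch only, one upsampling unit of this gradual, progressive decoder could look as follows in PyTorch; bilinear upsampling and the 3x3 convolution are assumed choices:

```python
import torch
import torch.nn as nn

class UpUnit(nn.Module):
    """One upsampling unit: upsampling layer, splicing layer, then conv-norm-act."""
    def __init__(self, in_ch: int, skip_ch: int, out_ch: int):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.block = nn.Sequential(
            nn.Conv2d(in_ch + skip_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x, skip):
        x = self.up(x)                   # recover one scale step at a time
        x = torch.cat([x, skip], dim=1)  # splice with the same-scale downsampled picture
        return self.block(x)
```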
Step 304: performing transposed convolution on the upsampled picture with the largest feature scale to obtain at least one first prediction picture.
The transposed convolution is a convolution operation commonly used for image up-sampling, and in the embodiment of the present application, a multi-channel transposed convolution may be used, where each channel is responsible for generating a predicted picture with different characteristics.
For example, 3 different transpose convolution kernels may be designed, respectively for generating predictions of iris region, pupil region, and iris outer boundary heat map. In transpose convolution, the 3 convolution kernels are convolved with the input features, respectively, to output the features of the 3 channels. And then each channel represents a prediction result of different semantics, and a prediction picture of the iris region, the pupil region and the iris outer boundary heat map can be obtained through post-processing.
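A minimal sketch of such a multi-channel transposed convolution is shown below; the 64-channel input and the kernel size are illustrative assumptions:

```python
import torch.nn as nn

# One multi-channel transposed convolution: each output channel is produced by
# its own kernel and carries one semantic prediction.
transpose_head = nn.ConvTranspose2d(in_channels=64, out_channels=3,
                                    kernel_size=2, stride=2)

def first_prediction_pictures(feature):
    """feature: N x 64 x H x W upsampled picture with the largest feature scale."""
    out = transpose_head(feature)  # N x 3 x 2H x 2W
    iris, pupil, boundary_heatmap = out[:, 0], out[:, 1], out[:, 2]
    return iris, pupil, boundary_heatmap
```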
It should be noted that, in the embodiment of the present application, the first predicted picture refers to a picture used for prediction, and is not a predicted picture.
Step 305: and inputting the iris outer boundary mask picture and each first prediction picture into a prediction module, and outputting a prediction result picture.
Specifically, the prediction module is provided with a plurality of prediction units, and the iris outer boundary mask picture and each first prediction picture can be respectively input into the corresponding prediction unit, so that the prediction result picture focused by each prediction unit is output.
Step 306: and determining the loss value of the predicted result picture according to the marked picture.
Step 307: and adjusting parameters of the prediction model based on the loss value to obtain the prediction model after training.
Specifically, the loss value reflects the effect of the current prediction model, so that parameters of the prediction model can be adjusted according to the loss value, and the prediction model can be iteratively trained. In the training iteration process, the processes of calculating the loss value, adjusting the parameters and reducing the loss are repeated. When the loss value meets the condition of iteration stopping (if the loss value is smaller than the loss threshold value or the iteration number reaches the preset number), training of the prediction model is finished, a prediction model with optimized parameters is obtained, and the training process of the prediction model is completed.
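Schematically, the training iteration with this stopping condition can be sketched as follows; loss_fn stands for the combination of the five loss values described later, and the threshold and iteration cap are placeholders:

```python
def train(model, loader, optimizer, loss_fn, max_iters=10000, loss_thresh=1e-3):
    """Iterate until the loss value meets the condition for stopping (a sketch)."""
    for step, (eye_picture, marked_pictures) in enumerate(loader):
        predictions = model(eye_picture)              # prediction result pictures
        loss = loss_fn(predictions, marked_pictures)  # loss value from marked pictures
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()                              # adjust the model parameters
        if loss.item() < loss_thresh or step + 1 >= max_iters:
            break
    return model
```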
The above embodiment describes the training process of the prediction model. In practice, when collecting animal iris features, the degree of cooperation of animals is low, so many collected animal eye pictures exhibit strabismus; moreover, some animals have large eyeballs, so the iris area is often blocked by the eyelid, making it difficult to locate the iris outer boundary. To improve the prediction effect of the prediction model on eye pictures with an incomplete iris outer boundary, the embodiment of the application designs the marked pictures and the loss functions for these conditions when training the model, improving the accuracy with which the prediction model predicts animal eye pictures with an incomplete iris boundary area. This process is described below in connection with the above embodiments.
On the basis of the above embodiment, as an optional implementation, the marking processing performed on the animal eye picture to obtain the marked picture may specifically include the following steps:
Step 401, marking an iris region in the animal eye picture to obtain an iris region marking picture.

Step 402, marking the pupil area in the animal eye picture to obtain a pupil area marking picture.

Step 403, marking a transition region between the iris region and the sclera region in the animal eye picture to obtain an iris outer boundary mask marking picture.

Step 404, obtaining an iris outer boundary thermal marking picture according to the iris region marking picture.
Referring to fig. 5, fig. 5 is a schematic diagram illustrating generation of an iris outer boundary thermal mark picture according to an embodiment of the present application, and specifically, the process may include the following steps:
Step 501, taking the central point of the iris region in the iris region marking picture as an origin, emitting a ray in any direction that intersects the iris outer boundary curve, and rotating the ray a plurality of times by a preset angle until it completes one full rotation, to obtain a plurality of first intersection points where the ray intersects the iris outer boundary curve.

Specifically, as shown in fig. 5, the center point Q of the iris region is determined in the iris region marking picture. The outer boundary of the iris region may be irregularly shaped; in the embodiment of the application it is defined as the iris outer boundary curve. A ray in an arbitrary direction is initialized with the center point Q as its starting point. Along the ray direction, the intersection point with the iris outer boundary curve is found and set as a first intersection point P0. The ray is rotated a plurality of times by a fixed angle until it has turned a full circle, and the intersection-finding process is repeated to obtain a plurality of first intersection points P0 where the ray intersects the iris outer boundary curve.
Step 502, performing normalization processing on pixel points between each first intersection point and each second intersection point to obtain first pixel points, and performing normalization processing on pixel points between each first intersection point and each third intersection point to obtain second pixel points, wherein the second intersection point is a point which is a preset distance away from the first intersection point along the positive direction of the ray, and the third intersection point is a point which is a preset distance away from the first intersection point along the negative direction of the ray.
Specifically, for each first intersection point P0, a point at a preset distance from P0 along the ray direction is set as the second intersection point P1, and a point at the same distance from P0 in the direction opposite to the ray is set as the third intersection point P2. The normalized pixel value of each pixel point on the segment between P0 and P1 is calculated to obtain the first pixel points, and the normalized pixel value of each pixel point on the segment between P0 and P2 is calculated to obtain the second pixel points.

For example, for an iris region marking picture, the center point Q of the iris is determined. A ray is emitted from the center point in any direction, and its intersection with the iris outer boundary curve in the iris region marking picture is set as the first intersection point P0. In the ray direction, the point at a fixed distance D from P0 is taken as the second intersection point P1; in the opposite direction, the point at the fixed distance D from P0 is taken as the third intersection point P2. Let d_i denote the distance from the i-th pixel point on the segment between P0 and P1 to P0; the normalized distance of that pixel point is then S_i = 1 - d_i/D, and S_i is assigned as a coefficient to its pixel value. The pixel points between P0 and P1 assigned in this way are defined as the first pixel points. The second pixel points between P0 and P2 are obtained by the same principle, and redundant description is omitted.
Step 503, generating the iris outer boundary thermal marking picture according to each first pixel point and each second pixel point.
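For illustration, steps 501 to 503 can be sketched in NumPy as follows, assuming the iris region marking picture is a binary mask; the number of rays and the preset distance D are illustrative values:

```python
import numpy as np

def boundary_heatmap(iris_mask: np.ndarray, num_rays: int = 360, D: int = 8) -> np.ndarray:
    """Build the iris outer boundary thermal marking picture from a binary iris mask.

    For each ray from the iris center Q, find the first intersection P0 with the
    outer boundary, then assign every pixel within distance D of P0 along the
    ray the normalized value S_i = 1 - d_i / D (d_i = distance to P0).
    """
    h, w = iris_mask.shape
    ys, xs = np.nonzero(iris_mask)
    qy, qx = ys.mean(), xs.mean()                      # center point Q
    heat = np.zeros((h, w), dtype=np.float32)
    for angle in np.linspace(0.0, 2 * np.pi, num_rays, endpoint=False):
        dy, dx = np.sin(angle), np.cos(angle)
        # March along the ray until we step off the iris region: that is P0.
        r = 0.0
        while True:
            y, x = int(round(qy + r * dy)), int(round(qx + r * dx))
            if not (0 <= y < h and 0 <= x < w) or iris_mask[y, x] == 0:
                break
            r += 1.0
        # Normalize pixels between P2 = P0 - D and P1 = P0 + D along the ray.
        for d in range(-D, D + 1):
            y = int(round(qy + (r + d) * dy))
            x = int(round(qx + (r + d) * dx))
            if 0 <= y < h and 0 <= x < w:
                heat[y, x] = max(heat[y, x], 1.0 - abs(d) / D)
    return heat
```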
Further, the iris outer boundary thermal marking picture obtained in this way contains an iris outer boundary with a more accurate position; multi-ray sampling adapts effectively to changes in the shape of the iris outer boundary and is robust to irregular outer boundaries.
Compared with human eyes, a shadow area exists between the animal iris area and the sclera area, so the boundary between the iris region and the non-iris region is not obvious, which affects the segmentation accuracy of the iris region. Mapping the iris outer boundary area with normalized pixel distances therefore compensates for the loss of iris outer boundary contour information in animal eye pictures where the eye is occluded or the iris boundary is unclear. By rotating a ray to obtain the intersection points with the outer boundary curve and highlighting the pixels around the iris outer boundary through normalization, a clear iris outer boundary thermal marking picture can be output directly. It intuitively reflects the distribution of the iris outer boundary, makes the extracted outer boundary features more distinct, and provides accurate position information for subsequent positioning.
Step 405, obtaining the strabismus tag according to the iris outer boundary in the iris region marking picture.
Specifically, the process may include the steps of:
Step 601, generating an elliptic equation of the iris outer boundary in the iris region marking picture, and obtaining the major axis and the minor axis of the elliptic equation.
Specifically, the iris outer boundary curve is extracted from the iris region marking picture, and an elliptic parametric equation is fitted to the curve, yielding the major axis a and the minor axis b.
Step 602, obtaining a strabismus tag according to the ratio of the major axis to the minor axis.
Specifically, the ratio of the minor axis b to the major axis a is calculated as r = b/a; since the boundary is an ellipse, r cannot exceed 1. The smaller r is, the flatter the ellipse and the more oblique the eyeball. Further, the transformation L = 1 - r may be applied, where L represents the degree of strabismus: when r is smaller, L is larger, indicating more severe strabismus.
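A sketch of steps 601 and 602 using OpenCV's ellipse fitting (an assumed implementation choice; the application does not prescribe a particular fitting routine):

```python
import cv2
import numpy as np

def strabismus_label(iris_mask: np.ndarray) -> float:
    """Fit an ellipse to the iris outer boundary and return L = 1 - b/a."""
    contours, _ = cv2.findContours(iris_mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    boundary = max(contours, key=cv2.contourArea)   # iris outer boundary curve
    (_, _), (axis1, axis2), _ = cv2.fitEllipse(boundary)
    a, b = max(axis1, axis2), min(axis1, axis2)     # major axis a, minor axis b
    r = b / a                                       # r <= 1; smaller r = flatter ellipse
    return 1.0 - r                                  # larger L = more severe strabismus
```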
In summary, by fitting an ellipse equation and computing the ratio between its minor and major axes, a label L representing the degree of strabismus can be obtained. Its value lies in the range 0-1 and quantitatively represents the severity of strabismus; rather than giving only a classification result, it quantitatively reflects the eyeball shape, which facilitates subsequent processing.
The above embodiments describe the generation of the marked pictures. On that basis, step 306 - determining the loss value of the prediction result picture according to the marked picture - is described below. Specifically, step 306 may further include the following steps:

Step 701, determining a first loss value according to the iris region marking picture and the iris region prediction result picture.
Specifically, the iris region marker picture and the iris region prediction result picture can be compared to obtain the probability of predicting the iris region, and the probability of predicting the iris region can be substituted into the first formula to obtain the first loss value. Wherein, the first formula is:
L1 = -(1 - p_t)^γ · log(p_t);

wherein L1 represents the first loss value, p_t represents the probability value of predicting the iris region, and γ is a hyperparameter.
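The first formula has the form of the focal loss; a direct PyTorch rendering, with γ = 2.0 as an assumed default for the hyperparameter, is:

```python
import torch

def first_loss(p_t: torch.Tensor, gamma: float = 2.0) -> torch.Tensor:
    """L1 = -(1 - p_t)^gamma * log(p_t), averaged over pixels (a sketch)."""
    return (-(1.0 - p_t) ** gamma * torch.log(p_t.clamp_min(1e-7))).mean()
```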
Step 702, determining a second loss value according to the pupil area marking picture and the pupil area prediction result picture.
Specifically, the pupil region marking picture and the pupil region prediction result picture can be compared to obtain the prediction probability of the pupil region, and the binary cross entropy of this prediction probability is calculated to obtain the second loss value.
Step 703, determining a strabismus coefficient and a third loss value according to the strabismus label and the strabismus prediction result.

Specifically, the strabismus prediction unit uses a fully connected layer as its output layer, takes deep encoded features as input, and outputs a scalar value representing the strabismus coefficient G. The closer G is to 1, the higher the predicted probability of strabismus; the closer G is to 0, the higher the predicted probability of non-strabismus.

The strabismus prediction result is compared with the strabismus label, and the smooth mean absolute error loss between them is calculated to obtain the third loss value.
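Assuming the smooth mean absolute error corresponds to the smooth L1 loss, a one-line PyTorch sketch is:

```python
import torch.nn.functional as F

def third_loss(g_pred, strabismus_tag):
    """Smooth mean absolute error between the strabismus prediction and the tag."""
    return F.smooth_l1_loss(g_pred, strabismus_tag)
```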
Step 704, determining a fourth loss value according to the iris outer boundary thermal mark picture and the iris outer boundary thermal prediction result picture.
Specifically, substituting the iris outer boundary thermal mark picture and the iris outer boundary thermal prediction result picture into a second formula to obtain a fourth loss value, wherein the second formula is as follows:
L4 = MSE(x_hm, Y_3);

wherein L4 is the fourth loss value, x_hm represents the iris outer boundary thermal prediction result picture, Y_3 is the iris outer boundary thermal marking picture, and MSE represents the mean square error loss function.
Step 705, determining a fifth loss value according to the iris outer boundary mask mark picture and the iris outer boundary mask prediction result picture.
Specifically, this step may further include the following steps:
Step 801, obtaining a shape loss value and a similarity loss value according to the iris outer boundary mask marking picture and the iris outer boundary mask prediction result picture.
Specifically, this step may further include the following steps:
Step 901, extracting the first iris outer boundary from the iris outer boundary mask prediction result picture and the second iris outer boundary from the iris outer boundary mask marking picture.
Specifically, the distance D1 between each pixel point in the predicted mask Q1 and its nearest background point (pixel value 0) is calculated, and the pixel points of Q1 whose distance in D1 equals 1 are taken as the contour, that is, Q_contour = Q1(D1 == 1), which gives the first iris outer boundary. The contour of the iris outer boundary mask marking picture Q2 is extracted in the same way to obtain the second iris outer boundary.
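This contour extraction can be sketched with a Euclidean distance transform (SciPy assumed):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def extract_contour(mask: np.ndarray) -> np.ndarray:
    """Contour = foreground pixels whose distance to the nearest background
    pixel equals 1, i.e. Q_contour = Q(D == 1)."""
    d = distance_transform_edt(mask)  # distance of each pixel to the nearest 0
    return (d == 1).astype(np.uint8)
```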
Step 902, calculating the Euclidean distance between the first iris outer boundary and the second iris outer boundary to obtain the iris outer boundary position offset picture.
Specifically, both the first iris outer boundary and the second iris outer boundary may be converted into sequences of coordinate points. Let the first iris outer boundary give a first sequence (x_n1, y_n1) and the second iris outer boundary a second sequence (x_n2, y_n2), where n is the number of coordinate points in each sequence. The two sequences are traversed: with each coordinate point (x_j1, y_j1) of the first sequence as a center, the corresponding coordinate point (x_j2, y_j2) of the second sequence is found, where j is the index of the j-th of the n coordinate points, and the Euclidean distance d between the two coordinates is calculated.
In the preset position offset picture, d is assigned to an area taking (x j1,yj1) as the center, and the process is repeated for each point of the first sequence, so that an iris outer boundary position offset picture M offset can be obtained.
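The following sketch illustrates one way to build M_offset under these rules; the neighbourhood radius and nearest-point matching are assumptions for illustration:

```python
import numpy as np

def offset_picture(seq1: np.ndarray, seq2: np.ndarray, shape, radius: int = 1) -> np.ndarray:
    """seq1, seq2: (n, 2) integer arrays of (x, y) boundary points; shape: (H, W)."""
    m_offset = np.zeros(shape, dtype=np.float32)
    for (xj1, yj1) in seq1:
        # Euclidean distance d to the nearest corresponding point on the second boundary.
        d = np.sqrt(((seq2 - (xj1, yj1)) ** 2).sum(axis=1)).min()
        y0, y1 = max(yj1 - radius, 0), min(yj1 + radius + 1, shape[0])
        x0, x1 = max(xj1 - radius, 0), min(xj1 + radius + 1, shape[1])
        m_offset[y0:y1, x0:x1] = d      # assign d to the area centered on (x_j1, y_j1)
    return m_offset
```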
Step 903, comparing the iris outer boundary mask mark picture with the iris outer boundary mask prediction result picture to obtain an iris outer boundary mask probability picture.
Step 904, carrying out a Hadamard product operation on the iris outer boundary position offset picture and the iris outer boundary mask probability picture to obtain a first shape loss value.
Step 905, comparing the ratio of the perimeter and the area of the outer boundary of the first iris with the ratio of the perimeter and the area of the outer boundary of the second iris to obtain a second shape loss value.
In particular, the outer boundary mask of an animal's iris region is typically an ellipse, and whether its shape is correct has some effect on subsequent normalization; a constraint on the shape therefore needs to be added. Given Q1, the perimeter s1 and the area s2 are obtained, giving a perimeter-to-area ratio s1/s2. The Euclidean distance between this ratio and the corresponding ratio of the label is calculated, thereby realizing the constraint on the shape.
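One way to compute this second shape loss is sketched below with OpenCV's perimeter and area helpers; for two scalar ratios, the Euclidean distance reduces to their absolute difference. The contour format assumed is the (n, 1, 2) int32 array returned by cv2.findContours:

```python
import numpy as np
import cv2

def ratio(contour: np.ndarray) -> float:
    s1 = cv2.arcLength(contour, True)   # perimeter of the boundary
    s2 = cv2.contourArea(contour)       # enclosed area
    return s1 / max(s2, 1e-6)           # guard against division by zero

def second_shape_loss(c_pred: np.ndarray, c_label: np.ndarray) -> float:
    # Euclidean distance between the predicted and labeled perimeter-to-area ratios.
    return abs(ratio(c_pred) - ratio(c_label))
```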
Step 906, calculating a sum of the first shape loss value and the second shape loss value to obtain a shape loss value.
In a possible implementation, the calculation process from step 901 to step 905 may be expressed by a third formula, where the third formula is as follows:

L_shape_metric = L_shape1 + L_shape2;

L_shape1 = (1/N) Σ_{j=1}^{N} M_offset(j) ⊙ ( −Σ_{k=1}^{M} y_k·log(ŷ_k) );

L_shape2 = ‖ s1/s2 − s1′/s2′ ‖₂;

wherein L_shape_metric represents the shape loss value; L_shape1 represents the first shape loss value, obtained by carrying out the Hadamard product operation on the iris outer boundary position offset picture and the iris outer boundary mask probability picture, j represents the j-th coordinate point in the iris outer boundary position offset picture M_offset, N represents the total number of coordinate points, M represents the total number of classes of the binary classification, k represents the k-th class, y_k represents the pixel point corresponding to the k-th class in the iris outer boundary mask marker picture, and ŷ_k represents the pixel point corresponding to the k-th class in the iris outer boundary mask prediction result picture; L_shape2 represents the second shape loss value, where s1/s2 is the perimeter-to-area ratio of the first iris outer boundary and s1′/s2′ is the perimeter-to-area ratio of the second iris outer boundary.
Step 802, summing the product of the squint coefficient and the shape loss value with the similarity loss value to obtain a fifth loss value.
In a possible implementation, the calculation process from step 801 to step 802 may be expressed by a fourth formula, where the fourth formula is as follows:
L5 = dice(x, Y_1) + G × L_shape_metric;

wherein L5 is the fifth loss value; dice(x, Y_1) represents the similarity loss value, where x is the iris outer boundary mask mark picture and Y_1 is the iris outer boundary mask prediction result picture; G is the strabismus coefficient; L_shape_metric is the shape loss value.
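A minimal sketch of the fourth formula follows, using a standard soft Dice loss; the smoothing constant and placeholder values are assumptions:

```python
import torch

def dice_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    inter = (pred * target).sum()
    return 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)

pred = torch.rand(1, 1, 64, 64)            # iris outer boundary mask prediction probabilities
label = (torch.rand(1, 1, 64, 64) > 0.5).float()
G, l_shape = 0.8, torch.tensor(0.12)       # strabismus coefficient, shape loss value
l5 = dice_loss(pred, label) + G * l_shape  # L5 = dice(x, Y_1) + G * L_shape_metric
print(l5)
```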
Based on the above embodiment, as an alternative embodiment, step 303 (inputting each downsampled picture into the upsampling module, and outputting a plurality of upsampled pictures with feature scales from low to high as well as an iris outer boundary mask picture whose feature scale is that of the iris outer boundary) may specifically include the following steps; a code sketch of the loop follows this list:

Step 1001, taking the downsampled picture with the smallest feature scale as the target downsampled picture, upsampling the target downsampled picture through the upsampling module, and splicing the upsampled target downsampled picture with the downsampled picture of the corresponding feature scale to obtain a first upsampled picture.
Step 1002, taking the first upsampled picture as the target downsampled picture, and repeatedly performing the operations of upsampling the target downsampled picture through the upsampling module and splicing the result with the downsampled picture of the corresponding feature scale to obtain a new first upsampled picture, until an upsampled picture with the same feature scale as the animal eye picture is obtained, thereby obtaining a plurality of upsampled pictures with feature scales from low to high.
Step 1003, taking the downsampled picture with the smallest feature scale as the target downsampled picture, upsampling it through the upsampling module, and splicing the result with the downsampled picture of the corresponding feature scale to obtain a second upsampled picture.
Step 1004, taking the second upsampled picture as the target downsampled picture, and repeatedly performing the upsampling and splicing operations to obtain a new second upsampled picture, until an upsampled picture with the same feature scale as the iris outer boundary is obtained, thereby obtaining the iris outer boundary mask picture.
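The following minimal PyTorch sketch mirrors this upsample-and-splice loop of steps 1001 to 1004 in U-Net style; the convolution blocks a full model would apply after each splice are omitted, and the channel counts are assumptions:

```python
import torch
import torch.nn as nn

def decode(down_pictures):
    """down_pictures: encoder outputs ordered from high to low feature scale."""
    target = down_pictures[-1]                       # smallest feature scale
    for skip in reversed(down_pictures[:-1]):
        up = nn.functional.interpolate(target, scale_factor=2, mode="bilinear",
                                       align_corners=False)
        target = torch.cat([up, skip], dim=1)        # splice with matching-scale picture
    return target

feats = [torch.randn(1, c, s, s) for c, s in [(64, 64), (128, 32), (256, 16)]]
print(decode(feats).shape)                           # torch.Size([1, 448, 64, 64])
```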
The above embodiment describes the training process of the prediction model, and on the basis of the above embodiment, the following describes the application process of the prediction model after the training is completed. Referring to fig. 6, a flow chart of an animal iris recognition method according to an embodiment of the application is shown. The process may specifically include the steps of:
Step 1101: inputting the obtained eye picture of the animal to be identified into a prediction model, and outputting a first iris region and an iris outer boundary heat map of the eye picture.
Specifically, as shown in fig. 4, when the animal to be identified is within the monitoring range of the monitoring device, the monitoring device captures an eye picture of the animal to be identified and sends it to the execution device. The execution device predicts each region in the eye picture of the animal to be identified using the prediction model trained in the above embodiment, and outputs a first iris region and an iris outer boundary heat map of the eye picture.
Step 1102: an iris outer boundary in an iris outer boundary heat map is obtained.
Specifically, the iris outer boundary heat map contains rich boundary-related information: each pixel on the heat map reflects the probability or confidence that it belongs to the iris boundary. Directly thresholding the heat map does not necessarily yield the optimal iris outer boundary, because the heat map may contain noise in irrelevant areas as well as actual boundary points with low thermal values. The iris outer boundary heat map therefore also needs post-processing.
Further, for the post-processing of the iris outer boundary heat map, a ray is cast in an arbitrary direction from the center point of the iris outer boundary ellipse as the origin; the ray intersects the heat map at a series of points, the point with the maximum value among these intersections is found, and its coordinate is taken as one point of the outer boundary of the first iris region. The ray is rotated by a preset angle at a time until a full circle is completed, yielding the iris outer boundary.
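A minimal sketch of this ray-based search follows; the angular step, maximum radius, and sampling scheme are assumptions for illustration:

```python
import numpy as np

def boundary_from_heatmap(hm: np.ndarray, cx: float, cy: float,
                          angle_step_deg: float = 5.0, r_max: int = 100):
    points = []
    for a in np.deg2rad(np.arange(0, 360, angle_step_deg)):
        rs = np.arange(1, r_max)
        # Sample the heat map along the ray from the ellipse center.
        xs = np.clip((cx + rs * np.cos(a)).astype(int), 0, hm.shape[1] - 1)
        ys = np.clip((cy + rs * np.sin(a)).astype(int), 0, hm.shape[0] - 1)
        k = np.argmax(hm[ys, xs])           # strongest boundary response on this ray
        points.append((xs[k], ys[k]))
    return np.array(points)                 # one outer-boundary point per rotation step
```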
Step 1103: taking the iris outer boundary as a boundary, and removing the pixel values of the first iris region outside the iris outer boundary to obtain a target iris region.
Specifically, the post-processing of the first iris region takes the iris outer boundary obtained above as a boundary and removes the pixel values of the first iris region that lie outside it, yielding the final target iris region. As shown in fig. 4, after the target iris region is determined, iris texture features can be extracted and compared with the texture features in a database to determine the identity of the animal to be identified.
Further, the pupil picture output by the prediction model is post-processed in the same manner as the iris outer boundary mask picture: boundary points can be obtained with an OpenCV contour search function, and ellipse fitting is then performed on these boundary points to finally obtain the iris inner boundary.
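A sketch of this inner-boundary post-processing with OpenCV follows; the synthetic pupil mask stands in for the model's pupil prediction:

```python
import numpy as np
import cv2

pupil_mask = np.zeros((256, 256), dtype=np.uint8)
cv2.circle(pupil_mask, (128, 128), 40, 255, -1)      # stand-in pupil prediction

# Contour search on the binary pupil mask, then ellipse fitting on the boundary points.
contours, _ = cv2.findContours(pupil_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
largest = max(contours, key=cv2.contourArea)
(inner_cx, inner_cy), (axis_a, axis_b), angle = cv2.fitEllipse(largest)
print((inner_cx, inner_cy), (axis_a, axis_b), angle)  # fitted iris inner boundary
```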
The application also discloses electronic equipment. Referring to fig. 7, fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. The electronic device 700 may include: at least one processor 701, at least one network interface 704, a user interface 703, a memory 702, at least one communication bus 705.
Wherein a communication bus 705 is used to enable connected communication between these components.
The user interface 703 may include a Display screen (Display), a Camera (Camera), and the optional user interface 703 may further include a standard wired interface, and a wireless interface.
The network interface 704 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface), among others.
Wherein the processor 701 may include one or more processing cores. The processor 701 connects various parts of the overall server using various interfaces and lines, and performs various functions of the server and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 702 and invoking data stored in the memory 702. Alternatively, the processor 701 may be implemented in at least one hardware form of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), or Programmable Logic Array (PLA). The processor 701 may integrate one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and the like; the GPU is responsible for rendering and drawing the content to be displayed on the display screen; the modem is used to handle wireless communications. It will be appreciated that the modem may also not be integrated into the processor 701 and may instead be implemented by a single chip.
The memory 702 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). Optionally, the memory 702 includes a non-transitory computer-readable storage medium. The memory 702 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 702 may include a stored program area and a stored data area, wherein the stored program area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the above method embodiments, and the like; the stored data area may store the data involved in the above method embodiments. Optionally, the memory 702 may also be at least one storage device located remotely from the processor 701. Referring to fig. 7, the memory 702, as a computer storage medium, may include an operating system, a network communication module, a user interface module, and an application program of the animal iris recognition method or the predictive model training method.
In the electronic device 700 shown in fig. 7, the user interface 703 is mainly used to provide an input interface for the user and to acquire data input by the user, while the processor 701 may be configured to invoke the application program of the animal iris recognition method or the predictive model training method stored in the memory 702; when executed by one or more processors 701, this application program causes the electronic device 700 to perform the method described in one or more of the above embodiments. It should be noted that, for simplicity of description, the foregoing method embodiments are described as a series of actions, but those skilled in the art should understand that the present application is not limited by the order of actions described, as some steps may be performed in other orders or concurrently in accordance with the present application. Furthermore, those skilled in the art should also appreciate that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the present application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division into units is merely a division by logical function, and there may be other divisions in actual implementation, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Furthermore, the couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through service interfaces, devices, or units, and may be electrical or take other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable memory. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or the whole or part of the technical solution, may be embodied in the form of a software product stored in a memory, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods of the various embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk.
The foregoing is merely exemplary embodiments of the present disclosure and is not intended to limit the scope of the present disclosure. That is, equivalent changes and modifications are contemplated by the teachings of this disclosure, which fall within the scope of the present disclosure. Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure.
This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a scope and spirit of the disclosure being indicated by the claims.

Claims (11)

1. A predictive model training method, wherein the predictive model includes a downsampling module, an upsampling module, and a predictive module, the predictive model training method comprising:
acquiring an animal eye picture, and performing marking treatment on the animal eye picture to obtain a marked picture;
inputting the animal eye picture to the downsampling module, and outputting a plurality of downsampled pictures with characteristic scales from high to low;
inputting each downsampled picture into the upsampling module, and outputting a plurality of upsampled pictures with characteristic scales from low to high and an iris outer boundary mask picture, wherein the characteristic scales of the iris outer boundary mask picture are characteristic scales of an iris outer boundary;
performing a transposed convolution on the upsampled picture with the largest convolution feature scale to obtain at least one first prediction picture;
inputting the iris outer boundary mask picture and each first prediction picture to the prediction module, and outputting a prediction result picture;
determining a loss value of the predicted result picture according to the marked picture;
And adjusting parameters of the prediction model based on the loss value to obtain a trained prediction model, wherein the trained prediction model is used for predicting an iris outer boundary heat map in an animal eye picture, and the iris outer boundary heat map comprises an iris outer boundary of the animal eye picture.
2. The method for training a prediction model according to claim 1, wherein the marked pictures include an iris region marked picture, a pupil region marked picture, an iris outer boundary mask marked picture, an iris outer boundary thermal marked picture and a strabismus tag, and the marking the animal eye picture to obtain the marked picture comprises:
Marking an iris region in the animal eye picture to obtain the iris region marking picture;
Marking a pupil area in the animal eye picture to obtain the pupil area marking picture;
Marking a transition region between the iris region and the sclera region in the animal eye picture to obtain the iris outer boundary mask mark picture;
according to the iris region marking picture, obtaining the iris outer boundary thermal marking picture;
and obtaining the strabismus tag according to the iris outer boundary in the iris region marking picture.
3. The prediction model training method according to claim 2, wherein the obtaining the iris outer boundary thermal marker picture according to the iris region marking picture comprises:
taking the central point of the iris region in the iris region marking picture, casting a ray in an arbitrary direction that intersects the iris outer boundary curve, and rotating the ray by a preset angle at a time until a full circle is completed, to obtain a plurality of first intersection points where the rays intersect the iris outer boundary curve;
normalizing pixel points between the first intersection point and the second intersection point to obtain first pixel points, and normalizing pixel points between the first intersection point and the third intersection point to obtain second pixel points, wherein the second intersection point is a point which is a preset distance away from the first intersection point along the positive direction of the ray, and the third intersection point is a point which is a preset distance away from the first intersection point along the negative direction of the ray;
and generating the iris outer boundary thermal mark picture according to each first pixel point and each second pixel point.
4. The prediction model training method according to claim 2, wherein the obtaining the strabismus tag according to the iris outer boundary in the iris region marking picture comprises:
generating an elliptic equation of the outer boundary of the iris in the iris region mark picture to obtain a major axis and a minor axis of the elliptic equation;
and obtaining the strabismus tag according to the ratio of the long axis to the short axis.
5. The method of claim 1, wherein the marked pictures comprise an iris region marked picture, a pupil region marked picture, an iris outer boundary mask marked picture, an iris outer boundary thermal marked picture and a strabismus tag,
The prediction result picture comprises an iris region prediction result picture, a pupil region prediction result picture, an iris outer boundary mask prediction result picture, an iris outer boundary thermal prediction result picture and a strabismus prediction result, the prediction module comprises a strabismus prediction unit, and the strabismus prediction result is the output obtained by inputting the downsampled picture with the minimum feature scale into the strabismus prediction unit,
The determining the loss value of the predicted result picture according to the marked picture comprises the following steps:
Determining a first loss value according to the iris region marking picture and the iris region prediction result picture;
Determining a second loss value according to the pupil area marking picture and the pupil area prediction result picture;
determining a strabismus coefficient and a third loss value according to the strabismus label and the strabismus prediction result;
Determining a fourth loss value according to the iris outer boundary thermal mark picture and the iris outer boundary thermal prediction result picture;
And determining a fifth loss value according to the iris outer boundary mask mark picture and the iris outer boundary mask prediction result picture.
6. The method according to claim 5, wherein determining the fifth loss value according to the iris outer boundary mask label picture and the iris outer boundary mask prediction result picture comprises:
Obtaining a shape loss value and a similarity loss value according to the iris outer boundary mask mark picture and the iris outer boundary mask prediction result picture;
And summing the product of the strabismus coefficient and the shape loss value with the similarity loss value to obtain the fifth loss value.
7. The method according to claim 6, wherein obtaining the shape loss value according to the iris outer boundary mask label picture and the iris outer boundary mask prediction result picture comprises:
Extracting a first iris outer boundary in the iris outer boundary mask prediction result picture and a second iris outer boundary in the iris outer boundary mask mark picture;
Calculating the Euclidean distance between the first iris outer boundary and the second iris outer boundary to obtain an iris outer boundary position offset picture;
Comparing the iris outer boundary mask mark picture with the iris outer boundary mask prediction result picture to obtain an iris outer boundary mask probability picture;
carrying out Hadamard product operation on the iris outer boundary position offset picture and the iris outer boundary mask probability picture to obtain a first shape loss value;
Comparing the ratio of the perimeter and the area of the outer boundary of the first iris with the ratio of the perimeter and the area of the outer boundary of the second iris to obtain a second shape loss value;
and calculating the sum of the first shape loss value and the second shape loss value to obtain the shape loss value.
8. The method according to claim 1, wherein inputting each downsampled picture to the upsampling module outputs a plurality of upsampled pictures with feature scales from low to high, and an iris outer boundary mask picture, the feature scales of the iris outer boundary mask picture being feature scales of an iris outer boundary, comprising:
taking the downsampled picture with the minimum feature scale as a target downsampled picture, upsampling the target downsampled picture through an upsampling module, and splicing the upsampled target downsampled picture with the corresponding feature scale to obtain a first upsampled picture;
Taking the first up-sampling picture as a target down-sampling picture, repeatedly executing up-sampling of the target down-sampling picture through an up-sampling module, and splicing the up-sampled target down-sampling picture with a down-sampling picture with corresponding scale characteristics to obtain a first up-sampling picture, wherein the execution is finished until the up-sampling picture with the same characteristic scale as the animal eye picture is obtained, so that a plurality of up-sampling pictures with characteristic scales from low to high are obtained;
taking the downsampled picture with the minimum feature scale as a target downsampled picture, upsampling the target downsampled picture through an upsampling module, and splicing the upsampled target downsampled picture with the corresponding feature scale to obtain a second upsampled picture;
And taking the second up-sampling picture as a target down-sampling picture, repeatedly executing up-sampling of the target down-sampling picture through an up-sampling module, and splicing the up-sampled target down-sampling picture with the corresponding scale characteristic to obtain a second up-sampling picture, wherein the executing is finished until the up-sampling picture with the same characteristic scale as the iris outer boundary is obtained, and the iris outer boundary mask picture is obtained.
9. A method for iris recognition of an animal, comprising:
Inputting an obtained eye picture of an animal to be identified into a prediction model, and outputting a first iris region and an iris outer boundary heat map of the eye picture, wherein the prediction model is a prediction model obtained after training by the prediction model training method according to any one of claims 1-8;
Acquiring an iris outer boundary in the iris outer boundary heat map;
and taking the iris outer boundary as a boundary, and removing pixel values of the first iris region outside the iris outer boundary to obtain a target iris region.
10. An electronic device comprising a processor, a memory, a user interface, and a network interface, the memory for storing instructions, the user interface and the network interface for communicating to other devices, the processor for executing the instructions stored in the memory to cause the electronic device to perform the method of any one of claims 1-8 or 9.
11. A computer readable storage medium storing instructions which, when executed, perform the method of any one of claims 1-8 or claim 9.
CN202311040576.6A 2023-08-17 Animal iris recognition method, prediction model training method, electronic equipment and medium Active CN117079339B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311040576.6A CN117079339B (en) 2023-08-17 Animal iris recognition method, prediction model training method, electronic equipment and medium


Publications (2)

Publication Number Publication Date
CN117079339A CN117079339A (en) 2023-11-17
CN117079339B true CN117079339B (en) 2024-07-05


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111340087A (en) * 2020-02-21 2020-06-26 腾讯医疗健康(深圳)有限公司 Image recognition method, image recognition device, computer-readable storage medium and computer equipment
CN114445904A (en) * 2021-12-20 2022-05-06 北京无线电计量测试研究所 Iris segmentation method, apparatus, medium, and device based on full convolution neural network


Similar Documents

Publication Publication Date Title
CN110728209B (en) Gesture recognition method and device, electronic equipment and storage medium
CN111709409B (en) Face living body detection method, device, equipment and medium
WO2021139324A1 (en) Image recognition method and apparatus, computer-readable storage medium and electronic device
CN110689025B (en) Image recognition method, device and system and endoscope image recognition method and device
CN111062328B (en) Image processing method and device and intelligent robot
WO2022089257A1 (en) Medical image processing method, apparatus, device, storage medium, and product
WO2024109374A1 (en) Training method and apparatus for face swapping model, and device, storage medium and program product
WO2021238586A1 (en) Training method and apparatus, device, and computer readable storage medium
US20240087368A1 (en) Companion animal life management system and method therefor
US11893773B2 (en) Finger vein comparison method, computer equipment, and storage medium
CN114519877A (en) Face recognition method, face recognition device, computer equipment and storage medium
CN113392741A (en) Video clip extraction method and device, electronic equipment and storage medium
CN114549557A (en) Portrait segmentation network training method, device, equipment and medium
CN111291695B (en) Training method and recognition method for recognition model of personnel illegal behaviors and computer equipment
CN115761834A (en) Multi-task mixed model for face recognition and face recognition method
CN110472673B (en) Parameter adjustment method, fundus image processing device, fundus image processing medium and fundus image processing apparatus
CN112115790A (en) Face recognition method and device, readable storage medium and electronic equipment
CN116704264B (en) Animal classification method, classification model training method, storage medium, and electronic device
CN116758622A (en) Data processing method, device, system and medium for attendance management system
Wang et al. Optic disc detection based on fully convolutional neural network and structured matrix decomposition
CN116884045A (en) Identity recognition method, identity recognition device, computer equipment and storage medium
Li et al. Location and model reconstruction algorithm for overlapped and sheltered spherical fruits based on geometry
CN117079339B (en) Animal iris recognition method, prediction model training method, electronic equipment and medium
CN111753618A (en) Image recognition method and device, computer equipment and computer readable storage medium
CN117079339A (en) Animal iris recognition method, prediction model training method, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant