CN117636400A - Method and system for identifying animal identity based on image - Google Patents

Method and system for identifying animal identity based on image

Info

Publication number
CN117636400A
CN117636400A (application number CN202410050259.0A)
Authority
CN
China
Prior art keywords
animal
feature
identity
image
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410050259.0A
Other languages
Chinese (zh)
Other versions
CN117636400B (en)
Inventor
张维斌
张天行
张嘉译
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongnong Huamu Group Co ltd
Original Assignee
Zhongnong Huamu Group Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhongnong Huamu Group Co ltd filed Critical Zhongnong Huamu Group Co ltd
Priority to CN202410050259.0A priority Critical patent/CN117636400B/en
Publication of CN117636400A publication Critical patent/CN117636400A/en
Application granted granted Critical
Publication of CN117636400B publication Critical patent/CN117636400B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/0455 Auto-encoder networks; Encoder-decoder networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/30 Noise filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/54 Extraction of image or video features relating to texture
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The application provides an image-based method and system for recognizing animal identity, relating to the field of animal recognition and adopting an artificial-intelligence recognition technology based on machine vision. The accuracy of animal identity recognition can thereby be effectively improved.

Description

Method and system for identifying animal identity based on image
Technical Field
The present application relates to the field of animal identification, and more particularly, to a method and system for identifying an animal identity based on an image.
Background
Wild animal identification is a key technology and plays a vital role in protection work. With the continued expansion of human activities and environmental changes, wild animals are exposed to increasingly severe threats including habitat loss, illegal hunting, climate change, and the like. Thus, understanding the number, distribution and population structure of wild animals is critical to the establishment of effective protective measures.
Through wild animal identification, the identity information of each individual, including unique characteristics, genetic information and behavior patterns, can be accurately identified and recorded. This enables accurate population censuses and an understanding of the number and distribution of different species. Meanwhile, by analyzing population structure, the degree of endangerment can be estimated and targeted protection strategies can be formulated to ensure the survival and reproduction of endangered species.

In addition, wild animal identification plays an irreplaceable role in monitoring wild animal numbers and distribution. Knowledge of the dynamic changes in wild animal populations is critical to assessing the health of ecosystems, monitoring environmental changes, and predicting future trends. Through long-term, systematic monitoring and statistics, a large amount of data can be obtained, giving deep insight into the ecological characteristics, migration patterns and population dynamics of a population. This makes it possible to discover problems faced by endangered species or populations in a timely manner and to quickly take effective protective measures, such as establishing protected areas, limiting hunting and restoring habitat.

Beyond its significance for protecting species diversity and ecosystems, wild animal identification also helps to drive scientific research and education. By tracking and recording individual wild animals, their behavioral habits, social structures and reproductive ecology can be studied in depth. This provides valuable research material for biologists, ecologists and behaviorists, and advances the understanding of wild animal behavior and ecosystem function. The research results can also be used for public education, strengthening understanding and awareness of wild animal protection and promoting harmony between people and nature.
Thus, there is a need for an image-based scheme that can identify the identity of an animal.
Disclosure of Invention
The present application has been made in order to solve the above technical problems. The embodiments of the application provide a method and system for identifying the identity of an animal based on an image, adopting an artificial-intelligence recognition technology based on machine vision. First, feature extraction is performed on images of the animal to be identified to obtain body-shape features, body-texture features and facial features of the animal; these features are then fused to obtain an identity recognition feature; finally, the identity recognition feature is classified to obtain a classification result representing the identity of the animal. In this way, the accuracy of animal identity recognition can be effectively improved.
According to one aspect of the present application, there is provided a method of image-based identification of an animal, comprising:
acquiring images of animals to be identified, which are shot at a plurality of angles;
the images of the animals to be identified, which are shot at the plurality of angles, pass through a noise reducer to obtain a plurality of noise reduction images;
extracting features of the plurality of noise reduction images to obtain an identity recognition feature matrix;
and judging the identity of the animal to be identified based on the identity recognition feature matrix.
In the method for identifying the identity of the animal based on the image, the feature extraction of the plurality of noise reduction images to obtain an identity identification feature matrix comprises the following steps: extracting features of the plurality of noise reduction images to obtain an animal feature map; and carrying out channel characteristic enhancement on the animal characteristic diagram to obtain the identity recognition characteristic matrix.
In the method for identifying the identity of the animal based on the image, the feature extraction of the plurality of noise reduction images to obtain an animal feature map includes: extracting texture features of the plurality of noise reduction images to obtain a body texture feature map; extracting body type characteristics of the plurality of noise reduction images to obtain a body type characteristic diagram; extracting facial features of one noise reduction image in the plurality of noise reduction images to obtain a facial feature map; and fusing the facial feature map, the body figure feature map and the body texture feature map to obtain the animal feature map.
In the above method for identifying the identity of the animal based on the image, extracting texture features from the plurality of noise reduction images to obtain a body texture feature map includes: passing the plurality of noise reduction images through a gray level co-occurrence matrix to obtain a plurality of texture feature maps, and aggregating the plurality of texture feature maps into the body texture feature map.
In the above method for identifying the identity of the animal based on the image, extracting body type features from the plurality of noise reduction images to obtain a body type feature map includes: passing the plurality of noise reduction images through an image encoder to obtain a plurality of body type feature maps, and aggregating the plurality of body type feature maps into the body type feature map.
In the above method for identifying an identity of an animal based on an image, the extracting facial features of one of the plurality of noise-reduced images to obtain a facial feature map includes: extracting a noise-reduced image containing an animal basic face region from the plurality of noise-reduced images; the noise reduction image containing the animal basic facial area passes through a facial interest area detection network to obtain a facial interest area; the facial region of interest is passed through a facial feature extractor using a spatial attention mechanism to obtain the facial feature map.
In the method for identifying the identity of the animal based on the image, the step of judging the identity of the animal to be identified based on the identity identification feature matrix includes: performing inter-feature node topology aggregation based on an objective function on the identity recognition feature matrix to obtain an optimized identity recognition feature matrix; and the optimized identity recognition feature matrix passes through a classifier to obtain a classification result, wherein the classification result is used for representing the identity of the animal to be recognized.
In the method for identifying the identity of the animal based on the image, the step of performing the topological aggregation between feature nodes based on the objective function on the identity identification feature matrix to obtain an optimized identity identification feature matrix comprises the following steps: calculating characteristic node factors based on an objective function of each row vector in the identity recognition characteristic matrix; and weighting each row vector in the identity recognition feature matrix by using a feature node factor corresponding to each row vector and based on an objective function so as to obtain the optimized identity recognition feature matrix.
In the method for identifying the identity of the animal based on the image, the calculating the characteristic node factors based on the objective function of each row vector in the identity identification characteristic matrix comprises the following steps: calculating characteristic node factors based on an objective function of each row vector in the identity recognition characteristic matrix according to the following characteristic node factor calculation formula; the characteristic node factor calculation formula is as follows:
w_i = -log(|softmax(V_i) - τ|) × bool[softmax(V_i) - τ] + α‖V_i‖_F
wherein V_i represents the i-th row vector in the identity recognition feature matrix, softmax(V_i) represents the class probability value obtained by passing the i-th row vector through the classifier alone, α represents a preset hyper-parameter, τ is a hyper-parameter representing a shift value, bool denotes the Boolean function, log denotes the base-2 logarithm, ‖V_i‖_F denotes the Frobenius norm of the i-th row vector in the identity recognition feature matrix, and w_i denotes the objective-function-based feature node factor of the i-th row vector.
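As a numerical illustration only, the feature node factor formula can be evaluated as below. Two points are assumptions not pinned down by the patent: the scalar softmax(V_i) is taken to be the maximum class probability of the row vector, and the classifier is taken to be a plain linear map.

```python
import numpy as np

def feature_node_factor(v, classifier_w, tau=0.5, alpha=0.01):
    """Objective-function-based feature node factor w_i for one row vector v.

    classifier_w: weight matrix of a hypothetical linear classifier, standing
    in for "passing the row vector through the classifier alone".
    """
    logits = classifier_w @ v
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                       # softmax over classes
    p = probs.max()                            # scalar class probability (assumption)
    boolean = 1.0 if (p - tau) > 0 else 0.0    # bool[softmax(V_i) - tau]
    fro = np.linalg.norm(v)                    # Frobenius norm of a vector = L2 norm
    return -np.log2(abs(p - tau)) * boolean + alpha * fro

# worked example with a hand-set 2-class linear classifier
w_example = feature_node_factor(np.array([1.0, 0.0, 0.0]),
                                np.array([[2.0, 0.0, 0.0],
                                          [0.0, 0.0, 0.0]]))
```

Rows whose lone-classifier confidence sits near the shift value τ get a large factor, so the subsequent weighting emphasizes exactly the ambiguous feature nodes.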
According to another aspect of the present application, there is provided a system for image-based identification of an animal, comprising:
the animal image data acquisition module is used for acquiring images of the animals to be identified, which are shot at a plurality of angles;
the animal image noise reduction module is used for enabling the images of the animals to be identified, which are shot at the plurality of angles, to pass through a noise reducer so as to obtain a plurality of noise reduction images;
the image feature extraction module is used for carrying out feature extraction on the plurality of noise reduction images to obtain an identity recognition feature matrix;
and the animal identity recognition result generation module is used for judging the identity of the animal to be recognized based on the identity recognition feature matrix.
It should be understood that the description of this section is not intended to identify key or critical features of the embodiments of the application or to delineate the scope of the application. Other features of the present application will become apparent from the description that follows.
Drawings
The foregoing and other objects, features and advantages of the present application will become more apparent from the following more particular description of embodiments of the present application, as illustrated in the accompanying drawings. The accompanying drawings are included to provide a further understanding of embodiments of the application and are incorporated in and constitute a part of this specification, illustrate the application and not constitute a limitation to the application. In the drawings, like reference numerals generally refer to like parts or steps.
Fig. 1 is a flow chart of a method for image-based identification of an animal identity according to an embodiment of the present application.
Fig. 2 is a block diagram of a method for image-based identification of an animal identity according to an embodiment of the present application.
Fig. 3 is a flowchart of a method for identifying an identity of an animal based on an image according to an embodiment of the present application, wherein feature extraction is performed on the plurality of noise reduction images to obtain an identity recognition feature matrix.
Fig. 4 is a flowchart of a method for identifying an animal identity based on an image according to an embodiment of the present application, wherein feature extraction is performed on the plurality of noise reduction images to obtain an animal feature map.
Fig. 5 is a flowchart of a method for identifying an identity of an animal based on an image according to an embodiment of the present application, in which facial feature extraction is performed on one noise reduction image of the plurality of noise reduction images to obtain a facial feature map.
Fig. 6 is a flowchart of determining the identity of the animal to be identified based on the identity recognition feature matrix in the method for recognizing the identity of the animal based on the image according to the embodiment of the application.
Fig. 7 is a flowchart of a method for identifying an identity of an animal based on an image according to an embodiment of the present application, in which topology aggregation between feature nodes based on an objective function is performed on the identity recognition feature matrix to obtain an optimized identity recognition feature matrix.
Fig. 8 is a system block diagram of an image-based system for identifying an identity of an animal in accordance with an embodiment of the present application.
Detailed Description
Hereinafter, example embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application and not all of the embodiments of the present application, and it should be understood that the present application is not limited by the example embodiments described herein.
As noted above in the background, wild animal identification plays a vital role in protection work. First, it has profound implications for maintaining species diversity. By identifying and recording the individual identities of wild animals, the number, distribution and population structure of different species can be accurately mastered, so that the endangered degree is scientifically evaluated, and a targeted protection scheme is prepared. This not only helps to protect endangered species, but also helps to maintain the balance and stability of the ecosystem. In addition, wild animal identification also has an irreplaceable role in monitoring the number and distribution of wild animals. The number and distribution of wild animal populations is an important basis for assessing ecosystem health and environmental changes. By identifying the identity of the individual, the system can monitor and count for a long time and systematically, understand the dynamic change of the population in depth, discover the endangered species or the problems faced by the population in time, and take effective protective measures rapidly. Thus, a solution is desired that can identify the identity of an animal based on an image.
In recent years, deep learning and neural networks have been widely used in the fields of computer vision, natural language processing, text signal processing, and the like. In addition, deep learning and neural networks have also shown levels approaching and even exceeding humans in the fields of image classification, object detection, semantic segmentation, text translation, and the like. The development of deep learning and neural networks provides new solutions and schemes for the identification of animals.
Fig. 1 is a flow chart of a method for image-based identification of an animal identity according to an embodiment of the present application. Fig. 2 is a block diagram of a method for image-based identification of an animal identity according to an embodiment of the present application. As shown in fig. 1 and 2, a method for identifying an animal identity based on an image according to an embodiment of the present application includes: s110, acquiring images of animals to be identified, which are shot at a plurality of angles; s120, enabling the images of the animals to be identified, which are shot at the plurality of angles, to pass through a noise reducer so as to obtain a plurality of noise reduction images; s130, extracting features of the plurality of noise reduction images to obtain an identity recognition feature matrix; and S140, judging the identity of the animal to be identified based on the identity recognition feature matrix.
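As an illustrative sketch only, not the patent's implementation, steps S110 to S140 can be strung together as a pipeline. Every function below is a hypothetical stand-in: random arrays for the multi-angle capture, a box blur in place of the learned noise reducer, summary statistics in place of the identity recognition feature matrix, and a trivial classifier.

```python
import numpy as np

def acquire_images(n_angles=3, h=32, w=32, seed=0):
    # S110: stand-in for multi-angle capture -- random images here
    rng = np.random.default_rng(seed)
    return [rng.random((h, w)) for _ in range(n_angles)]

def denoise(img):
    # S120: placeholder noise reducer (3x3 box blur, not the
    # auto-encoder-decoder described in the patent)
    pad = np.pad(img, 1, mode="edge")
    return sum(pad[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def extract_features(imgs):
    # S130: stand-in feature extraction -- one row vector per image
    return np.stack([np.array([im.mean(), im.std(), im.max(), im.min()])
                     for im in imgs])

def classify(feature_matrix, n_ids=5):
    # S140: stand-in classifier returning an identity index
    score = feature_matrix.sum()
    return int(score * 1000) % n_ids

images = acquire_images()
denoised = [denoise(im) for im in images]
features = extract_features(denoised)      # the "identity recognition feature matrix"
identity = classify(features)
```

Each stage maps one-to-one onto S110-S140, so the real modules described later in the text can be dropped in behind the same four function boundaries.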
Specifically, in the technical scheme of the application, first, images of an animal to be identified, which are shot at a plurality of angles, are acquired. It should be understood that when we try to identify the identity of a wild animal, the appearance characteristics of each individual may change at different angles. For example, the body texture, body shape, and facial characteristics of an animal may exhibit subtle changes due to changes in viewing angle. In order to more accurately identify the identity of an animal, it is desirable to acquire images taken at multiple angles to cover the appearance of the animal at different angles. The images may be from different angles of view, for example, from different angles of the side, front, back, etc. The images shot at multiple angles are the basis of subsequent data processing, and different characteristics of animals can be comprehensively utilized for identification through the subsequent processing and characteristic extraction of the images at multiple angles, so that the accuracy and the reliability of wild animal identification are improved.
In this embodiment, one way to acquire images of the animal to be identified photographed at a plurality of angles is to place multiple high-quality cameras in areas that wild animals frequent, so that clear images can be captured. The cameras should be spaced a certain distance apart and, ideally, oriented in different directions.
The details and features of an image may be blurred by ambient light changes and image sensor noise, which would affect subsequent feature extraction and identification. The images of the animal to be identified captured by the image capturing devices are therefore passed through a noise reducer to obtain a plurality of noise-reduced images. In the technical scheme of the application, the noise reducer adopts a network architecture based on an automatic encoder-decoder.
In an embodiment of the present application, the method for obtaining a plurality of noise reduction images by passing the images of the animal to be identified, which are captured at the plurality of angles, through the noise reducer may be: inputting the images of the animals to be identified, which are shot by the angles, into an encoder of the noise reducer based on the automatic coder-decoder, wherein the encoder uses a convolution layer to perform explicit spatial encoding on the images of the animals shot by the angles so as to obtain a plurality of animal image characteristics; inputting the plurality of animal image features into a decoder of the automatic codec-based noise reducer, wherein the decoder deconvolves the plurality of animal image features using a deconvolution layer to obtain the plurality of noise reduced images.
Fig. 3 is a flowchart of a method for identifying an identity of an animal based on an image according to an embodiment of the present application, wherein feature extraction is performed on the plurality of noise reduction images to obtain an identity recognition feature matrix. As shown in fig. 3, the feature extraction of the plurality of noise reduction images to obtain an identification feature matrix includes: s131, carrying out feature extraction on the plurality of noise reduction images to obtain an animal feature map; s132, carrying out channel characteristic enhancement on the animal characteristic diagram to obtain the identity recognition characteristic matrix.
Fig. 4 is a flowchart of a method for identifying an animal identity based on an image according to an embodiment of the present application, wherein feature extraction is performed on the plurality of noise reduction images to obtain an animal feature map. As shown in fig. 4, the feature extraction of the plurality of noise reduction images to obtain an animal feature map includes: s1311, extracting texture features of the plurality of noise reduction images to obtain a body texture feature map; s1312, extracting body type characteristics of the plurality of noise reduction images to obtain a body type characteristic diagram; s1313, extracting facial features of one noise reduction image in the plurality of noise reduction images to obtain a facial feature map; s1314, fusing the facial feature map, the body figure feature map and the body texture feature map to obtain the animal feature map.
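One plausible reading of the fusion in step S1314 is stacking the three maps along a new channel dimension; weighted summation would be an equally plausible reading, and the maps below are random placeholders for the real facial, body-shape and body-texture features.

```python
import numpy as np

H, W = 8, 8
face_map = np.random.default_rng(2).random((H, W))      # placeholder facial feature map
shape_map = np.random.default_rng(3).random((H, W))     # placeholder body type feature map
texture_map = np.random.default_rng(4).random((H, W))   # placeholder body texture feature map

# concatenation-style fusion: one (3, H, W) animal feature map whose
# channels keep the three sources separable for later channel enhancement
animal_feature_map = np.stack([face_map, shape_map, texture_map], axis=0)
```

Keeping the sources in separate channels fits the later step of channel feature enhancement, which presupposes per-channel structure to reweight.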
Texture is a combination of the spatial relationships between pixels in an image and its gray-scale distribution, and can provide information about the surface details and structure of an object. In animal identification, body texture features such as fur, speckles and patterns are often an important basis for distinguishing individuals. Therefore, in the technical scheme of the application, the plurality of noise reduction images are processed through a gray level co-occurrence matrix to obtain a plurality of texture feature maps, and the plurality of texture feature maps are aggregated into the body texture feature map. A gray-level co-occurrence matrix (GLCM) is a commonly used texture feature extraction method that describes the relative relationship of gray values between different pixels in an image. By calculating the gray level co-occurrence matrix, co-occurrence statistics such as contrast, correlation, energy and entropy can be obtained for pixel pairs at different directions and distances. These statistical features reflect the texture of the image and can thus be used to distinguish the body textures of different individuals.
In an embodiment of the present application, one implementation of passing the plurality of noise reduction images through a gray level co-occurrence matrix to obtain a plurality of texture feature maps and aggregating them into the body texture feature map may be:
1. Convert each noise reduction image into a gray scale image.
2. Select direction and distance parameters: the gray level co-occurrence matrix is calculated for chosen direction and distance parameters; the direction parameter determines the relative position of the pixel pairs considered and the distance parameter their separation. Common directions are horizontal (0 degrees), vertical (90 degrees) and diagonal (45 and 135 degrees), and the distance may be chosen according to the image size and the scale of the texture features.
3. Calculate the gray level co-occurrence matrix: for each combination of direction and distance parameters, traverse the pixel pairs in the image and record, in a square matrix, the frequency with which pairs of particular gray values occur.
4. Calculate texture features: from each gray level co-occurrence matrix, a series of statistics such as contrast, correlation, energy and entropy can be computed. For example, contrast reflects the degree of difference between gray levels in the image, correlation describes the linear correlation between pixel pairs, energy represents the intensity of the texture, and entropy reflects the complexity of the image.
5. Aggregate the texture feature maps: the plurality of texture feature maps are aggregated into the body texture feature map, for instance by simple feature stitching, i.e. connecting the maps together in a fixed order to form a larger feature vector.
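The GLCM steps above can be sketched as follows. The quantization to a small number of gray levels and the single horizontal offset are simplifying assumptions chosen to keep the example short.

```python
import numpy as np

def glcm(gray, dx=1, dy=0, levels=8):
    # step 3: count co-occurring gray-level pairs at offset (dx, dy)
    q = (gray.astype(float) / gray.max() * (levels - 1)).astype(int)  # quantize
    m = np.zeros((levels, levels))
    H, W = q.shape
    for i in range(max(0, -dy), min(H, H - dy)):
        for j in range(max(0, -dx), min(W, W - dx)):
            m[q[i, j], q[i + dy, j + dx]] += 1
    return m / m.sum()          # normalize to joint frequencies

def texture_stats(p):
    # step 4: contrast, energy, entropy from the normalized GLCM
    idx = np.arange(p.shape[0])
    di, dj = np.meshgrid(idx, idx, indexing="ij")
    contrast = np.sum(p * (di - dj) ** 2)
    energy = np.sum(p ** 2)
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return contrast, energy, entropy

# a tiny image: uniform left half, uniform right half, one vertical edge
example = np.array([[0, 0, 255, 255]] * 4, dtype=float)
p = glcm(example, levels=2)
contrast, energy, entropy = texture_stats(p)
```

On this example only one pixel pair per row straddles the edge, so the contrast stays low while the entropy reflects the three distinct pair types.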
An animal's body type is one of the important features for identifying it. Body type features may include the size, shape, proportions and contour of the animal; by analyzing and comparing them, different individuals can be distinguished, providing important clues for identification. The noise-reduced images obtained in the previous step remove some of the noise and interference of the original images, making details clearer and more identifiable, so body-shape features can be extracted from them more reliably. To this end, the plurality of noise reduction images are passed through an image encoder to obtain a plurality of body type feature maps, and the plurality of body type feature maps are aggregated into a body type feature map. An image encoder is a deep learning model that converts an input image into a feature representation in a high-dimensional space. By inputting the plurality of noise-reduced images into the image encoder, a plurality of body type feature maps are obtained, each corresponding to one noise-reduced image and capturing body type information at a different angle and viewing angle. After the plurality of body type feature maps are obtained, their information is integrated by aggregation into a comprehensive body type feature map.
In an embodiment of the present application, one implementation of passing the plurality of noise reduction images through an image encoder to obtain a plurality of body type feature maps and aggregating the plurality of body type feature maps into a body type feature map may be: each layer of the image encoder performs the following operations on the input data in the forward pass of the layer: performing convolution processing on the input data based on a two-dimensional convolution kernel to obtain a convolution feature map; performing mean pooling based on a local feature matrix on the convolution feature map to obtain a pooled feature map; performing nonlinear activation on the pooled feature map to obtain an activated feature map; wherein the output of the last layer of the image encoder is the plurality of body type feature maps, and the input of the first layer of the image encoder is the plurality of noise reduction images. The plurality of body type feature maps are then aggregated into a body type feature map along a channel dimension.
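A minimal sketch of one such encoder layer, assuming a single-channel image, a hypothetical 3x3 averaging kernel, and 2x2 mean pooling (the patent does not fix these sizes):

```python
import numpy as np

def conv2d(x, k):
    """'Valid' 2-D convolution of a single-channel image with kernel k."""
    kh, kw = k.shape
    out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def mean_pool(x, s=2):
    """Mean pooling over non-overlapping s x s local feature matrices."""
    h, w = x.shape[0] // s * s, x.shape[1] // s * s
    return x[:h, :w].reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def encoder_layer(x, k):
    """One encoder layer: convolution -> mean pooling -> ReLU activation."""
    return np.maximum(mean_pool(conv2d(x, k)), 0.0)

kernel = np.ones((3, 3)) / 9.0                    # hypothetical kernel
images = [np.random.rand(32, 32), np.random.rand(32, 32)]
feature_maps = [encoder_layer(img, kernel) for img in images]
# aggregate the per-image body type feature maps along a channel dimension
body_type_features = np.stack(feature_maps, axis=0)
```

A real encoder would stack several such layers with learned kernels; the channel-dimension stacking at the end mirrors the aggregation step described above.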
Fig. 5 is a flowchart of a method for identifying an identity of an animal based on an image according to an embodiment of the present application, in which facial feature extraction is performed on one noise reduction image of the plurality of noise reduction images to obtain a facial feature map. As shown in fig. 5, the extracting facial features of one of the noise reduction images to obtain a facial feature map includes: s13131, extracting a noise reduction image containing an animal basic face area from the plurality of noise reduction images; s13132, passing the noise reduction image containing the animal basic face region through a face region of interest detection network to obtain a face region of interest; s13133, passing the facial region of interest through a facial feature extractor using a spatial attention mechanism to obtain the facial feature map.
Animal facial features often contain rich information, such as the eyes, nose, mouth, and ears. These features play an important role in animal identity recognition because of their high recognizability and individual variability. By focusing on the facial features of animals, the accuracy and reliability of identity recognition can be improved. Thus, a noise-reduced image containing the animal's basic face region is extracted from the plurality of noise-reduced images. Selecting the noise-reduced image containing the basic facial area of the animal provides more facial details, such as the shape of the eyes, the characteristics of the mouth, and the position of the ears. These details can be located and extracted in a subsequent step using a face detection algorithm, which focuses on the facial characteristics of the animal to be identified, reduces interference from other parts, and provides a more targeted feature representation.
Because the location and size of the animal's facial region in the image may vary, it is desirable to use a facial region of interest detection network to accurately locate the facial region. The facial region of interest detection network may learn by training to identify and locate the position of the animal's face, thereby providing an accurate bounding box or mask of the face region. That is, the noise reduction image containing the animal basic face area is passed through a face region of interest detection network to obtain a face region of interest.
In an embodiment of the present application, the step of passing the noise-reduced image containing the animal basic face region through the face region-of-interest detection network to obtain the face region of interest may be: 1. Data preparation: an image dataset containing animal faces is prepared, and a corresponding facial region-of-interest label is provided for each image; these labels may be bounding boxes or masks of the facial regions. 2. Constructing a face region-of-interest detection network: a suitable detection network model is selected, for example a convolutional neural network (CNN) based object detection model such as Faster R-CNN, YOLO, or SSD. 3. Data preprocessing: the noise-reduced image containing the basic facial region of the animal is preprocessed for input into the detection network. Preprocessing may include image resizing, normalization, channel adjustment, and similar operations to ensure that the input meets the network requirements. 4. Training the face region-of-interest detection network: the network is trained using the prepared dataset and the preprocessed images as training data. During training, the loss function is optimized so that the network can accurately locate and detect the facial area of the animal. 5. Predicting the facial region of interest: the trained network is used to predict on a new noise-reduced image containing the basic facial area of the animal. By inputting the image into the network, location information of the facial region of interest, such as a bounding box or mask of the facial region, can be obtained. 6. Extracting the facial region of interest: the corresponding facial region of interest is extracted from the original image or the noise-reduced image according to the location information. The region of interest may be extracted from the image using image processing techniques, such as a cropping operation.
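Step 6 above reduces to simple array slicing; the image size and bounding box below are hypothetical stand-ins for a detector's output.

```python
import numpy as np

def crop_roi(image, box):
    """Crop a facial region of interest given an (x, y, w, h) bounding box."""
    x, y, w, h = box
    return image[y:y + h, x:x + w]

img = np.random.rand(128, 128)   # stand-in noise-reduced image
box = (40, 30, 64, 48)           # hypothetical (x, y, width, height) from the detector
face_roi = crop_roi(img, box)
```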
In order to further extract and characterize the key features of the animal's face, the facial region of interest is therefore passed through a facial feature extractor using a spatial attention mechanism in the solution of the present application to obtain a facial feature map. One of ordinary skill in the art will appreciate that the spatial attention mechanism is a technique for adjusting the degree of attention a model pays to different areas of an image. It directs the model's attention allocation by learning spatial weights. In a facial feature extractor, a spatial attention mechanism can help the model focus more on different parts of the facial region of interest in order to better extract facial features. The spatial attention mechanism is typically based on the principle of the attention mechanism, in which it is critical to calculate the attention weight of each location. These weights can be adaptively adjusted according to the content of the input data. In facial feature extraction, the attention weights can be used to adjust the degree of attention the model pays to the facial region. By inputting the facial region-of-interest image into a facial feature extractor using a spatial attention mechanism, a facial feature map can be obtained. A facial feature map is a tensor with spatial dimensions, where each location corresponds to a particular facial feature.
In an embodiment of the present application, one implementation of passing the facial region of interest through a facial feature extractor using a spatial attention mechanism to obtain a facial feature map may be: each layer of the facial feature extractor performs the following operations on the input data in the forward pass of the layer: performing convolution processing based on a convolution kernel on the input data to obtain a convolution feature map; passing the convolution feature map through a spatial attention unit to obtain a spatial attention map; multiplying the convolution feature map by the spatial attention map position-wise to obtain a spatial attention feature map; inputting the spatial attention feature map into a nonlinear activation unit to obtain an activation feature map; wherein the input of the first layer of the facial feature extractor is the facial region of interest and the output of the last layer of the facial feature extractor is the facial feature map.
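A rough sketch of one forward pass through such a layer; the attention unit here derives the spatial map from channel-wise mean and max pooling, which is one common formulation rather than the patent's specified unit, and the feature map shape is an assumption.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def spatial_attention(feat):
    """Spatial attention over a (C, H, W) convolution feature map.

    The (H, W) attention map is derived from channel-wise mean and max
    descriptors, then multiplied position-wise with every channel.
    """
    avg = feat.mean(axis=0)            # (H, W) channel-mean descriptor
    mx = feat.max(axis=0)              # (H, W) channel-max descriptor
    attn = sigmoid(avg + mx)           # (H, W) spatial attention map in (0, 1)
    return feat * attn[None, :, :]     # broadcast position-wise product

conv_feat = np.random.rand(8, 16, 16)  # hypothetical convolution feature map
attended = spatial_attention(conv_feat)
activated = np.maximum(attended, 0.0)  # nonlinear activation unit (ReLU)
```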
The facial feature map mainly contains key features of the animal's facial area, such as the eyes, nose, and mouth. Facial features are important in animal identity recognition because they are generally highly identifiable and distinguishable. The body type feature map mainly contains shape and contour information of the animal body. These features can be used to distinguish between different species, as different species of animals typically have different body type characteristics. The body texture feature map mainly contains texture information of the animal body, such as spots and stripes. These texture features can be used to distinguish between different individuals of the same species. If the identity is recognized from facial features, body type features, or body texture features alone, the accuracy is limited. In order to describe the features of the animal more comprehensively and improve the accuracy of identity recognition, the facial feature map, the body type feature map, and the body texture feature map are fused in the technical solution of the present application to obtain the animal feature map.
In this embodiment, an implementation of fusing the facial feature map, the body type feature map, and the body texture feature map to obtain the animal feature map may be: fusing the facial feature map, the body type feature map, and the body texture feature map with a fusion formula to obtain the animal feature map; wherein the fusion formula is:

F = α·F_a + β·F_b + γ·F_c

wherein F is the animal feature map, F_a is the facial feature map, F_b is the body type feature map, F_c is the body texture feature map, "+" denotes adding the elements of the facial feature map, the body type feature map, and the body texture feature map at corresponding positions, and α, β, and γ are weighting parameters for controlling the balance among the facial feature map, the body type feature map, and the body texture feature map in the animal feature map.
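The fusion described above can be sketched as a weighted position-wise sum; the weighting parameters and feature map shapes below are hypothetical.

```python
import numpy as np

def fuse(f_face, f_body, f_texture, alpha=0.4, beta=0.3, gamma=0.3):
    """Weighted position-wise sum: F = alpha*Fa + beta*Fb + gamma*Fc."""
    return alpha * f_face + beta * f_body + gamma * f_texture

fa = np.random.rand(16, 16)   # facial feature map (hypothetical shape)
fb = np.random.rand(16, 16)   # body type feature map
fc = np.random.rand(16, 16)   # body texture feature map
animal_feature_map = fuse(fa, fb, fc)
```

In practice the three maps must first be brought to a common spatial size (e.g. by resizing or pooling) before the position-wise addition is meaningful.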
It should be understood that the animal feature map is a comprehensive feature map obtained by fusing the facial, body type, and body texture feature maps. It contains animal feature information of many kinds, but some of these features may be more discriminative for identity recognition. The animal feature map is therefore passed through a convolutional neural network model using a channel attention mechanism to obtain an identity recognition feature matrix. The channel attention mechanism is a technique for adjusting the degree of attention a model pays to the different channels of a feature map. It directs the model's attention allocation by learning channel weights. In the identity recognition task, the channel attention mechanism can help the model pay more attention to the feature channels that are most discriminative for identity recognition. The principle of the channel attention mechanism is similar to the spatial attention mechanism, except that it adjusts the channel weights of the feature map instead of the weights of spatial locations. The channel attention mechanism can learn to generate attention weights matching the number of feature map channels, where the value for each channel represents the attention weight of the corresponding channel.
In this embodiment, one implementation of passing the animal feature map through the convolutional neural network model using the channel attention mechanism to obtain the identity recognition feature matrix may be: each layer of the convolutional neural network model using the channel attention mechanism performs the following steps on the input data in the forward pass of the layer: performing convolution processing on the input data based on a two-dimensional convolution kernel to generate a convolution feature map; pooling the convolution feature map to generate a pooled feature map; performing activation processing on the pooled feature map to generate an activated feature map; calculating, as the weighting coefficient of the feature matrix corresponding to each channel, the quotient of the mean feature value of the feature matrix corresponding to that channel and the sum of the mean feature values of the feature matrices corresponding to all channels; and weighting the feature matrix of each channel in the activated feature map by the weighting coefficient of that channel to generate a channel attention feature map; wherein the input of the first layer of the convolutional neural network model is the animal feature map, and the output of the last layer of the convolutional neural network model is the identity recognition feature matrix.
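The channel weighting step described above (each channel's mean divided by the sum of all channel means) can be sketched as follows; the activation feature map shape is an assumption.

```python
import numpy as np

def channel_attention(feat):
    """Weight each channel of a (C, H, W) map by (its mean) / (sum of channel means)."""
    means = feat.mean(axis=(1, 2))        # per-channel feature-value mean
    weights = means / means.sum()         # the quotient described in the text
    return feat * weights[:, None, None], weights

act = np.random.rand(4, 8, 8) + 0.1       # hypothetical activated feature map
channel_attn_map, w = channel_attention(act)
```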
Fig. 6 is a flowchart of determining the identity of the animal to be identified based on the identity recognition feature matrix in the method for recognizing the identity of the animal based on the image according to the embodiment of the application. As shown in fig. 6, the determining the identity of the animal to be identified based on the identity recognition feature matrix includes: s141, performing inter-feature node topology aggregation based on an objective function on the identity recognition feature matrix to obtain an optimized identity recognition feature matrix; and S142, the optimized identity recognition feature matrix passes through a classifier to obtain a classification result, wherein the classification result is used for representing the identity of the animal to be recognized.
In particular, in the technical solution of the present application, the image of the animal to be identified is processed using a noise reducer, a gray level co-occurrence matrix, an image encoder, a facial region-of-interest feature extraction network, and the like, so as to obtain a body texture feature map, a body type feature map, and a facial feature map. Then, topology aggregation between feature nodes based on an objective function can be performed on the identity recognition feature matrix to obtain an optimized identity recognition feature matrix. When computing feature weights, the attention mechanism typically performs a weighted aggregation based on the relationships between the elements of the input sequence. However, such weighted aggregation may cause feature information to disperse in different directions, thereby reducing the performance and generalization ability of the model. Through topology aggregation between feature nodes based on an objective function, the relationships between the feature nodes in the identity recognition feature matrix can be constrained, and the objective function can be optimized to maintain the correlation between features. The purpose of this is to avoid losing important information while the model parameters are updated, and at the same time to improve the performance and generalization ability of the model. The optimized identity recognition feature matrix is obtained by performing this topology aggregation on the identity recognition feature matrix. The optimized feature matrix better preserves the correlation between features, improving the accuracy and robustness of the model with respect to the identity of the animal to be recognized.
That is, the topology aggregation among feature nodes based on the objective function can solve the problem of information degradation of feature distribution in different directions in the model parameter updating process due to the attention mechanism. By optimizing the identity recognition feature matrix, the correlation among the features can be maintained, and the performance and generalization capability of the model are improved, so that the identity information of the animal to be recognized can be more accurately represented.
Fig. 7 is a flowchart of a method for identifying an identity of an animal based on an image according to an embodiment of the present application, in which topology aggregation between feature nodes based on an objective function is performed on the identity recognition feature matrix to obtain an optimized identity recognition feature matrix. As shown in fig. 7, the performing topology aggregation between feature nodes based on an objective function on the identification feature matrix to obtain an optimized identification feature matrix includes: s1411, calculating characteristic node factors based on an objective function of each row vector in the identity recognition characteristic matrix; and S1412, weighting each row vector in the identity recognition feature matrix by a feature node factor corresponding to each row vector based on an objective function so as to obtain the optimized identity recognition feature matrix.
Specifically, the objective-function-based feature node factor of each row vector in the identity recognition feature matrix is calculated according to the following feature node factor calculation formula:

w_i = -log[|softmax(V_i) - τ|] × bool[softmax(V_i) - τ] + α·||V_i||_F

wherein V_i represents the i-th row vector in the identity recognition feature matrix, softmax(V_i) represents the class probability value obtained by passing the i-th row vector alone through a classifier, α represents a preset hyperparameter, τ is a hyperparameter representing a shift value, bool represents the Boolean function, log represents the base-2 logarithm, ||V_i||_F represents the Frobenius norm of the i-th row vector in the identity recognition feature matrix, and w_i represents the objective-function-based feature node factor of the i-th row vector.
Here, the bool function acts as a step function: it takes the value 1 when its argument is greater than zero and the value 0 otherwise.
that is, in order to avoid the problem of information degradation of feature distribution in different directions in the parameter updating process of the model due to the attention mechanism, a method for topological aggregation between feature nodes based on an objective function is provided herein, and the identity recognition feature matrix is optimized to obtain a feature representation with more discriminant. Specifically, the method considers that the class probability values of each row vector in the identity recognition feature matrix obtained based on the Softmax function can follow the probability distribution of the self under different attention mechanisms, so that the probability values of each class are more similar to the real class distribution through the information compensation by shifting the probability distribution, the information entropy brought by the compensation is maximized through the bool function and the F norm, and the information degradation problem can be effectively solved. Therefore, the accuracy of classification judgment of the identity recognition feature matrix through the classifier is improved, and meanwhile generalization capability and robustness of the model are enhanced.
In order to convert the abstract feature representation into a corresponding identity, that is, to map the feature representation to a specific representation of the animal's identity, the optimized identity recognition feature matrix is finally passed through a classifier to obtain a classification result, where the classification result is used to represent the identity of the animal to be recognized. The classifier referred to here is a multi-label classifier. In the identity recognition task, the classifier converts the optimized identity recognition feature matrix into a corresponding identity. The classification result is the output of the classifier, which represents the identity of the animal to be recognized, such as the species name of the animal.
In this embodiment of the present application, one possible implementation of passing the optimized identity recognition feature matrix through a classifier to obtain a classification result representing the identity of the animal to be recognized may be: processing the optimized identity recognition feature matrix with the classifier according to the following classification formula to generate the classification result; wherein the classification formula is:

O_i = exp(W_i·v + b_i) / Σ_j exp(W_j·v + b_j)

wherein O is the classification result, v is the optimized identity recognition feature matrix flattened into a vector, W_i and b_i are respectively the weight vector and bias corresponding to the i-th class, and exp(·) represents the natural exponential function applied to the value at each position of the vector.
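The classifier step can be sketched as a linear layer followed by a softmax; the feature dimension, class count, and parameter values below are hypothetical.

```python
import numpy as np

def classify(v, W, b):
    """Linear layer followed by softmax: O_i = exp(W_i.v + b_i) / sum_j exp(W_j.v + b_j)."""
    z = W @ v + b
    e = np.exp(z - z.max())      # subtract max for numerical stability
    return e / e.sum()

rng = np.random.default_rng(0)
v = rng.random(12)               # flattened optimized identity recognition features
W = rng.random((3, 12))          # hypothetical weights for 3 candidate identities
b = rng.random(3)
probs = classify(v, W, b)
predicted_identity = int(np.argmax(probs))  # index of the recognized identity
```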
In summary, the method for identifying the identity of an animal based on an image according to the embodiment of the present application has been explained. It adopts an artificial-intelligence recognition technology based on machine vision: first, the body type features, body texture features, and facial features of the animal are obtained by performing feature extraction on the image of the animal to be identified; then, the body type, body texture, and facial features are fused to obtain the identity recognition features; and finally, the identity recognition features are classified to obtain a classification result representing the identity of the animal. In this way, the accuracy of animal identity recognition can be effectively improved.
Fig. 8 is a system block diagram of an image-based system for identifying an identity of an animal in accordance with an embodiment of the present application. As shown in fig. 8, an image-based system 100 for identifying an identity of an animal in accordance with an embodiment of the present application includes: an animal image data acquisition module 110, configured to acquire images of animals to be identified photographed at a plurality of angles; the animal image noise reduction module 120 is configured to pass the images of the animal to be identified, which are captured at the plurality of angles, through a noise reducer to obtain a plurality of noise reduction images; the image feature extraction module 130 is configured to perform feature extraction on the plurality of noise reduction images to obtain an identity recognition feature matrix; and the animal identification result generating module 140 is configured to determine the identity of the animal to be identified based on the identification feature matrix.
Here, it will be understood by those skilled in the art that the specific functions and operations of the respective units and modules in the above-described image-based animal identity recognizing system 100 have been described in detail in the above description of the image-based animal identity recognizing method with reference to fig. 1 to 7, and thus, repetitive descriptions thereof will be omitted.
In summary, the system 100 for identifying the identity of an animal based on an image according to the embodiment of the present application has been illustrated. It adopts an artificial-intelligence recognition technology based on machine vision: first, the body type features, body texture features, and facial features of the animal are obtained by performing feature extraction on the image of the animal to be identified; then, the body type, body texture, and facial features are fused to obtain the identity recognition features; and finally, the identity recognition features are classified to obtain a classification result representing the identity of the animal. In this way, the accuracy of animal identity recognition can be effectively improved.
As described above, the image-based system 100 for recognizing the identity of an animal according to the embodiment of the present application may be implemented in various wireless terminals, such as a server for recognizing the identity of an animal based on an image. In one example, the image-based animal identity recognition system 100 according to embodiments of the present application may be integrated into a wireless terminal as a software module and/or a hardware module. For example, the system 100 may be a software module in the operating system of the wireless terminal, or may be an application developed for the wireless terminal; of course, the system 100 can equally be one of the many hardware modules of the wireless terminal.
Alternatively, in another example, the image-based animal identity recognition system 100 and the wireless terminal may be separate devices, and the system 100 may be connected to the wireless terminal via a wired and/or wireless network and communicate interactive information in an agreed-upon data format.
In the several embodiments provided in this application, it should be understood that the disclosed method, system, or apparatus may be implemented in other manners. For example, the system embodiments described above are merely illustrative, e.g., the division of the modules is merely a logical function division, and other manners of division may be implemented in practice.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical units, may be located in one place, or may be distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be realized in the form of hardware, or in the form of hardware plus software functional modules.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential characteristics thereof.
The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
Furthermore, it is evident that the word "comprising" does not exclude other elements or steps, and that the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by one unit or means through software or hardware. Terms such as first and second are used to denote names and do not denote any particular order.
Finally, it should be noted that the above embodiments are merely for illustrating the technical solution of the present application and not for limiting, and although the present application has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that the technical solution of the present application may be modified or substituted equally without departing from the spirit of the technical solution of the present application.

Claims (10)

1. A method for identifying an identity of an animal based on an image, comprising:
acquiring images of animals to be identified, which are shot at a plurality of angles;
the images of the animals to be identified, which are shot at the plurality of angles, pass through a noise reducer to obtain a plurality of noise reduction images;
extracting features of the plurality of noise reduction images to obtain an identity recognition feature matrix;
and judging the identity of the animal to be identified based on the identity recognition feature matrix.
2. The method of claim 1, wherein performing feature extraction on the plurality of noise reduced images to obtain an identity recognition feature matrix comprises:
extracting features of the plurality of noise reduction images to obtain an animal feature map;
and carrying out channel characteristic enhancement on the animal characteristic diagram to obtain the identity recognition characteristic matrix.
3. The method for identifying an identity of an animal based on an image of claim 2, wherein performing feature extraction on the plurality of noise reduction images to obtain an animal feature map comprises:
extracting texture features of the plurality of noise reduction images to obtain a body texture feature map;
extracting body type characteristics of the plurality of noise reduction images to obtain a body type characteristic diagram;
Extracting facial features of one noise reduction image in the plurality of noise reduction images to obtain a facial feature map;
and fusing the facial feature map, the body figure feature map and the body texture feature map to obtain the animal feature map.
4. The method for identifying an identity of an animal based on an image of claim 3, wherein performing texture feature extraction on the plurality of noise reduction images to obtain a body texture feature map comprises: passing the plurality of noise reduction images through a gray level co-occurrence matrix to obtain a plurality of texture feature maps, and aggregating the plurality of texture feature maps into the body texture feature map.
5. The method of claim 4, wherein performing body type feature extraction on the plurality of noise reduction images to obtain a body type feature map comprises: passing the plurality of noise reduction images through an image encoder to obtain a plurality of body type feature maps, and aggregating the plurality of body type feature maps into the body type feature map.
6. The method for identifying an identity of an animal based on an image of claim 5, wherein performing facial feature extraction on one of the plurality of noise-reduced images to obtain a facial feature map comprises:
Extracting a noise-reduced image containing an animal basic face region from the plurality of noise-reduced images;
the noise reduction image containing the animal basic facial area passes through a facial interest area detection network to obtain a facial interest area;
the facial region of interest is passed through a facial feature extractor using a spatial attention mechanism to obtain the facial feature map.
7. The image-based method of identifying an identity of an animal of claim 6, wherein determining the identity of the animal to be identified based on the identification feature matrix comprises:
performing inter-feature node topology aggregation based on an objective function on the identity recognition feature matrix to obtain an optimized identity recognition feature matrix;
and the optimized identity recognition feature matrix passes through a classifier to obtain a classification result, wherein the classification result is used for representing the identity of the animal to be recognized.
8. The image-based method of identifying an identity of an animal of claim 7, wherein performing an inter-feature node topology aggregation of the identity feature matrix based on an objective function to obtain an optimized identity feature matrix comprises:
calculating characteristic node factors based on an objective function of each row vector in the identity recognition characteristic matrix;
And weighting each row vector in the identity recognition feature matrix by using a feature node factor corresponding to each row vector and based on an objective function so as to obtain the optimized identity recognition feature matrix.
9. The method for identifying an identity of an animal based on an image of claim 8, wherein calculating the objective-function-based feature node factor of each row vector in the identity recognition feature matrix comprises: calculating the feature node factor of each row vector in the identity recognition feature matrix according to the following feature node factor calculation formula;
the characteristic node factor calculation formula is as follows:
w_i = -log₂(|softmax(V_i) - τ|) × bool[softmax(V_i) - τ] + α‖V_i‖_F
wherein V_i represents the i-th row vector in the identification feature matrix, softmax(V_i) represents the class probability value obtained by passing the i-th row vector alone through the classifier, α represents a preset hyperparameter, τ is a hyperparameter representing a shift value, bool[·] represents a Boolean function, log₂ denotes the base-2 logarithm, ‖V_i‖_F represents the Frobenius norm of the i-th row vector, and w_i represents the objective-function-based feature node factor of the i-th row vector.
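The factor above can be sketched in numpy. The claim does not pin down every detail, so this sketch assumes that softmax(V_i) means the maximum class probability of the row passed alone through a softmax classifier, that bool[·] is an indicator of positivity, and that the values of τ and α are placeholders:

```python
import numpy as np

def feature_node_factor(v, tau=0.5, alpha=0.01):
    """Objective-function-based feature node factor w_i for one row vector v."""
    p = np.exp(v - v.max())
    p = (p / p.sum()).max()                      # standalone class probability (assumed reading)
    indicator = 1.0 if p - tau > 0 else 0.0      # bool[softmax(V_i) - tau] (assumed reading)
    frob = np.linalg.norm(v)                     # Frobenius norm of a row vector
    return -np.log2(abs(p - tau)) * indicator + alpha * frob

def optimize_matrix(M, tau=0.5, alpha=0.01):
    """Weight each row of the identification feature matrix by its factor."""
    w = np.array([feature_node_factor(row, tau, alpha) for row in M])
    return M * w[:, None]

M = np.random.rand(4, 16)                        # toy identification feature matrix
M_opt = optimize_matrix(M)
```

Rows whose standalone class probability barely exceeds τ receive a large -log₂ term (they sit near the decision boundary), while the α‖V_i‖_F term keeps every factor strictly positive, so no row is zeroed out entirely.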
10. A system for identifying the identity of an animal based on images, comprising:
an animal image data acquisition module, configured to acquire images of an animal to be identified captured from a plurality of angles;
an animal image noise reduction module, configured to pass the images of the animal to be identified captured from the plurality of angles through a noise reducer to obtain a plurality of noise-reduced images;
an image feature extraction module, configured to perform feature extraction on the plurality of noise-reduced images to obtain an identification feature matrix; and
an animal identity recognition result generation module, configured to determine the identity of the animal to be identified based on the identification feature matrix.
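The four claimed modules form a linear pipeline: multi-angle capture → noise reduction → feature extraction → identity decision. A minimal skeleton of that chain (every stage here is a placeholder stub — a mean-filter denoiser, a flatten-and-truncate feature extractor, and a nearest-value classifier over a hypothetical enrolled gallery — none of which is the patented network) might be:

```python
import numpy as np

def denoise(img):
    """Placeholder noise reducer: 3x3 mean filter over an edge-padded image."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def extract_features(imgs):
    """Placeholder feature extraction: one row vector per denoised view."""
    return np.stack([img.ravel()[:16] for img in imgs])  # toy identification feature matrix

def classify(matrix):
    """Placeholder classifier: nearest enrolled identity by mean feature value."""
    gallery = {"cow_001": 0.4, "cow_002": 0.6}           # hypothetical enrolled identities
    q = matrix.mean()
    return min(gallery, key=lambda k: abs(gallery[k] - q))

views = [np.random.rand(8, 8) for _ in range(3)]         # multi-angle captures
identity = classify(extract_features([denoise(v) for v in views]))
```

The point of the sketch is the module boundaries: each stage consumes exactly what the previous module produces, mirroring the data flow between the four claimed modules.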
CN202410050259.0A 2024-01-11 2024-01-11 Method and system for identifying animal identity based on image Active CN117636400B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410050259.0A CN117636400B (en) 2024-01-11 2024-01-11 Method and system for identifying animal identity based on image


Publications (2)

Publication Number Publication Date
CN117636400A true CN117636400A (en) 2024-03-01
CN117636400B CN117636400B (en) 2024-07-23

Family

ID=90016591

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410050259.0A Active CN117636400B (en) 2024-01-11 2024-01-11 Method and system for identifying animal identity based on image

Country Status (1)

Country Link
CN (1) CN117636400B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118089899A (en) * 2024-04-19 2024-05-28 中农吉牧(吉林)农业发展有限公司 Intelligent weighing system of intelligent cattle farm

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GR1008049B (en) * 2012-07-19 2013-12-03 Θεοδωρος Παντελη Χατζηπαντελης Recognition,detection, location and information system
CN109165681A (en) * 2018-08-01 2019-01-08 长兴曼尔申机械科技有限公司 A kind of recognition methods of animal species
US20210068371A1 (en) * 2019-09-09 2021-03-11 Council Of Agriculture Method and system for distinguishing identities based on nose prints of animals
CN113657231A (en) * 2021-08-09 2021-11-16 广州中科智云科技有限公司 Image identification method and device based on multi-rotor unmanned aerial vehicle
CN116596900A (en) * 2023-05-25 2023-08-15 宁波同耀新材料科技有限公司 Method and system for manufacturing graphite crucible
CN116704264A (en) * 2023-07-12 2023-09-05 北京万里红科技有限公司 Animal classification method, classification model training method, storage medium, and electronic device
CN116884031A (en) * 2023-06-07 2023-10-13 中国农业科学院草原研究所 Artificial intelligence-based cow face recognition method and device


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhou Xiaojian: "Application of Information-Based Multimodal Fusion Technology in Animal Recognition Models", China High-Tech, 10 February 2023 (2023-02-10) *



Similar Documents

Publication Publication Date Title
Atoum et al. Face anti-spoofing using patch and depth-based CNNs
CN112446270B (en) Training method of pedestrian re-recognition network, pedestrian re-recognition method and device
CN111738064B (en) Haze concentration identification method for haze image
Mathur et al. Crosspooled FishNet: transfer learning based fish species classification model
CN109948566B (en) Double-flow face anti-fraud detection method based on weight fusion and feature selection
Fourati et al. Anti-spoofing in face recognition-based biometric authentication using image quality assessment
CN111797683A (en) Video expression recognition method based on depth residual error attention network
CN117636400B (en) Method and system for identifying animal identity based on image
CN114241548A (en) Small target detection algorithm based on improved YOLOv5
CN112801057A (en) Image processing method, image processing device, computer equipment and storage medium
CN106650617A (en) Pedestrian abnormity identification method based on probabilistic latent semantic analysis
Kimura et al. CNN hyperparameter tuning applied to iris liveness detection
CN108334870A (en) The remote monitoring system of AR device data server states
Chen et al. Generalized face antispoofing by learning to fuse features from high-and low-frequency domains
Aldhamari et al. Abnormal behavior detection using sparse representations through sequential generalization of k-means
Huang et al. Multi-Teacher Single-Student Visual Transformer with Multi-Level Attention for Face Spoofing Detection.
CN110858304A (en) Method and equipment for identifying identity card image
CN114036553A (en) K-anonymity-combined pedestrian identity privacy protection method
CN117475353A (en) Video-based abnormal smoke identification method and system
CN108446639A (en) Low-power consumption augmented reality equipment
Aparna Swarm intelligence for automatic video image contrast adjustment
CN111598144A (en) Training method and device of image recognition model
CN115965613A (en) Cross-layer connection construction scene crowd counting method based on cavity convolution
CN113723310B (en) Image recognition method and related device based on neural network
CN115546828A (en) Method for recognizing cow faces in complex cattle farm environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant