CN110705512A - Method and device for detecting identity features of a kept animal - Google Patents

Method and device for detecting identity features of a kept animal

Info

Publication number
CN110705512A
CN110705512A (application CN201910985343.0A)
Authority
CN
China
Prior art keywords
feature
identity
characteristic
kept animal
detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910985343.0A
Other languages
Chinese (zh)
Inventor
王萌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alipay Hangzhou Information Technology Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alipay Hangzhou Information Technology Co Ltd filed Critical Alipay Hangzhou Information Technology Co Ltd
Priority to CN201910985343.0A priority Critical patent/CN110705512A/en
Publication of CN110705512A publication Critical patent/CN110705512A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/70Multimodal biometrics, e.g. combining information from different biometric modalities
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of this specification provide a method and a device for detecting identity features of a kept animal. The method comprises: first, detecting the key features contained in identity feature information collected for the kept animal in multiple modalities; then, determining, from the identity feature information, a feature value for each key feature under the feature dimensions of the corresponding modality; and finally, inputting a multi-modal feature vector, constructed from those feature values, into a classification model for identity feature qualification detection, and outputting the model's result as the detection result of the identity feature detection for the kept animal.

Description

Method and device for detecting identity features of a kept animal
Technical Field
Embodiments of this specification relate to the technical field of data processing, and in particular to a method and a device for detecting identity features of a kept animal, a computing device, and a computer-readable storage medium.
Background
As the pace of social development accelerates, the work and life pressures on every participant in society increase. To avoid adding further burdens to their lives, more and more people choose to keep pets: pets enrich daily life and benefit their owners' physical and mental health. For sedentary elderly people whose children live far away, the companionship of a pet makes life happier, and a well-trained pet can even raise an alarm when an accident, such as a sudden illness, befalls its owner. A pet is therefore very important to its owner, effectively a member of the family, and many pet-oriented services have emerged as a result, such as pet hospitals, pet bathing, and pet insurance.
When providing such services, in order to manage pets conveniently and serve them better, personalized service must be built on accurate identification of each pet's identity, which in turn improves satisfaction with the pet service.
Disclosure of Invention
In view of this, embodiments of the present specification provide a method for detecting identity features of a kept animal, so as to overcome technical defects in the prior art. One or more embodiments of the present specification also provide a device for detecting identity features of a kept animal, a computing device, and a computer-readable storage medium.
One embodiment of the specification provides a method for detecting identity features of a kept animal, comprising:
detecting key features contained in identity feature information collected for the kept animal in multiple modalities;
determining, from the identity feature information, a feature value for each key feature under the feature dimensions of the corresponding modality;
and inputting a multi-modal feature vector, constructed from the feature values, into a classification model for identity feature qualification detection, and outputting the model's result as the detection result of the identity feature detection for the kept animal.
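The three claimed steps can be sketched end to end as follows. This is a minimal illustration of the data flow only: the function names are hypothetical, and the rule-based detector and classifier stand in for the trained models the claim assumes.

```python
# Hypothetical sketch of the three-step detection pipeline: detect key
# features per modality, turn them into feature values, and feed the
# concatenated multi-modal vector to a qualification classifier.

def detect_key_features(modality_info):
    """Step 1: detect the key features in each modality's identity information."""
    return {m: info["features"] for m, info in modality_info.items()}

def compute_feature_values(modality_info, key_features):
    """Step 2: one numeric value per (modality, feature dimension).
    Here simply the feature count; a real system derives richer values."""
    return {m: [float(len(feats))] for m, feats in key_features.items()}

def classify(vector):
    """Step 3: qualification detection; a trained model would go here."""
    return "pass" if all(v > 0 for v in vector) else "fail"

def detect_identity(modality_info):
    feats = detect_key_features(modality_info)
    values = compute_feature_values(modality_info, feats)
    # Concatenate per-modality values in a fixed (alphabetical) order to
    # build the multi-modal feature vector.
    vector = [v for m in sorted(values) for v in values[m]]
    return classify(vector)

info = {
    "visual":   {"features": ["head_point", "nose_point"]},
    "acoustic": {"features": ["voiceprint"]},
    "physical": {"features": ["gait"]},
}
print(detect_identity(info))  # -> pass
```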
Optionally, determining, from the identity feature information, a feature value for each key feature under the feature dimensions of the corresponding modality includes:
for the identity feature information of at least one modality, extracting the detected key features from that modality's identity feature information;
and calculating the feature value of each key feature under that modality's feature dimensions from the modality's identity feature information, the extracted key features, and the association relations between the key features.
Optionally, after the step of determining the feature values and before the step of inputting the multi-modal feature vector into the classification model, the method includes:
constructing a feature vector that takes the feature values under each modality's feature dimensions as its vector elements, to obtain the multi-modal feature vector;
where the vector elements of the multi-modal feature vector correspond one-to-one with the feature values under the feature dimensions of each modality.
Optionally, the modalities include at least one of: a visual modality, an acoustic modality, and a physical modality;
wherein the identity feature information collected for the kept animal in the visual modality comprises identity image information collected for the kept animal with an image collection device;
the identity feature information collected for the kept animal in the acoustic modality comprises identity sound information collected for the kept animal with a sound collection device;
and the identity feature information collected for the kept animal in the physical modality comprises identity body information collected for the kept animal with a biological feature collection device.
Optionally, the key features contained in the identity image information collected for the kept animal in the visual modality include at least one of:
head feature points, nose feature points, eye feature points, and mouth feature points of the kept animal; head, nose, eye, mouth, and whole-body feature frames of the kept animal; and the face feature frame of the owner to whom the kept animal belongs;
correspondingly, the feature dimensions of the visual modality include at least one of:
a feature-point number dimension for the head, nose, eye, and mouth feature points of the kept animal; a feature position dimension for the head, nose, eye, and mouth feature points, the head, nose, eye, mouth, and whole-body feature frames of the kept animal, and the owner's face feature frame; a feature area dimension for the head, nose, eye, mouth, and whole-body feature frames of the kept animal and the owner's face feature frame; and a feature overlap dimension for at least two of the head, nose, eye, mouth, and whole-body feature frames of the kept animal and the owner's face feature frame.
Optionally, the feature values of the key features under the feature dimensions of the visual modality are determined as follows:
under the feature-point number dimension, counting the number of head, nose, eye, and/or mouth feature points of the kept animal contained in the identity image information;
under the feature position dimension, calculating the feature position information of the head, nose, eye, and/or mouth feature points, the head, nose, eye, mouth, and/or whole-body feature frames of the kept animal, and/or the owner's face feature frame, contained in the identity image information;
under the feature area dimension, calculating the feature area information of the head, nose, eye, mouth, and/or whole-body feature frames of the kept animal and/or the owner's face feature frame, contained in the identity image information;
and/or,
under the feature overlap dimension, analyzing the feature overlap relation information between at least two of the head, nose, eye, mouth, and whole-body feature frames of the kept animal and the owner's face feature frame, contained in the identity image information.
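Two of the visual-modality feature values named above, the feature area of a frame and the feature overlap between frames, can be sketched as follows. The `(x1, y1, x2, y2)` box representation and the use of intersection-over-union for the overlap dimension are assumptions; the claim does not fix either.

```python
# Sketch of two visual-modality feature values: the area of a feature frame
# and the overlap (IoU) between two frames. Boxes are (x1, y1, x2, y2).

def frame_area(box):
    """Area of a feature frame; degenerate boxes yield 0."""
    x1, y1, x2, y2 = box
    return max(0.0, x2 - x1) * max(0.0, y2 - y1)

def frame_overlap(a, b):
    """Intersection-over-union between two feature frames."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = frame_area((ix1, iy1, ix2, iy2))
    union = frame_area(a) + frame_area(b) - inter
    return inter / union if union else 0.0

head = (0.0, 0.0, 4.0, 4.0)  # head feature frame
eye = (1.0, 1.0, 3.0, 3.0)   # eye feature frame, fully inside the head frame
print(frame_area(head))          # 16.0
print(frame_overlap(head, eye))  # 4 / 16 = 0.25
```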
Optionally, the key feature contained in the identity sound information collected for the kept animal in the acoustic modality includes a voiceprint feature of the kept animal;
correspondingly, the feature dimensions of the acoustic modality include at least one of: the voiceprint volume dimension and the voiceprint timbre dimension to which the voiceprint feature of the kept animal belongs.
Optionally, the key features contained in the identity body information collected for the kept animal in the physical modality include at least one of: a walking posture feature, a paw print feature, and a nose print feature of the kept animal;
correspondingly, the feature dimensions of the physical modality include at least one of: the posture dimension to which the walking posture feature belongs, and the feature clarity dimension to which the paw print feature and the nose print feature belong.
Optionally, the classification model includes at least one of: a binary classification model and a multi-class classification model.
Optionally, if the classification model is a multi-class classification model, then after the multi-modal feature vector is input into it, the model performs identity feature qualification detection separately on the key features of each modality, based on the feature values under that modality's feature dimensions contained in the multi-modal feature vector;
and if the per-modality detection result of at least one modality is a detection failure, the model outputs a multi-modal detection result, carrying the at least one modality whose qualification detection failed, as its result of the identity feature detection for the kept animal.
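The per-modality behaviour of the multi-class model described above can be sketched as follows. The slicing of the multi-modal vector into per-modality dimensions and the pass thresholds are illustrative assumptions standing in for a trained model.

```python
# Sketch of per-modality qualification detection: each modality's slice of
# the multi-modal vector is checked separately, and the output carries the
# modalities that failed. Slices and thresholds are illustrative.

MODALITY_SLICES = {"visual": slice(0, 4), "acoustic": slice(4, 6), "physical": slice(6, 8)}
THRESHOLDS = {"visual": 0.5, "acoustic": 0.3, "physical": 0.4}

def multi_class_detect(vector):
    failed = [m for m, s in MODALITY_SLICES.items()
              if min(vector[s]) < THRESHOLDS[m]]
    return {"result": "fail" if failed else "pass", "failed_modalities": failed}

vec = [0.9, 0.8, 0.7, 0.6,  # visual dims: all above 0.5 -> pass
       0.2, 0.5,            # acoustic dims: 0.2 below 0.3 -> fail
       0.6, 0.7]            # physical dims: above 0.4 -> pass
print(multi_class_detect(vec))
```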
Optionally, after performing the step of inputting the multi-modal feature vector into the classification model and outputting the result as the detection result, the method includes:
determining, from the multi-modal detection result output by the multi-class classification model, at least one modality whose identity feature qualification detection failed;
comparing the feature values of that failed modality's key features, under its feature dimensions, with the qualified threshold intervals under those feature dimensions, and determining an identity feature failure description for the failed modality from the comparison result;
determining the collection prompt corresponding to that failure description according to a preset correspondence between the failed modality's identity feature failure descriptions and collection prompts;
and issuing the collection prompt in the prompt mode corresponding to the failed modality.
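The re-collection prompt flow above can be sketched as follows. The qualified interval, the failure descriptions, and the prompt texts are all illustrative assumptions; the claim only fixes the mapping structure.

```python
# Sketch of the prompt flow: compare a failed modality's feature value with
# its qualified interval, derive a failure description, and map it to a
# preset collection prompt. All intervals and texts are illustrative.

QUALIFIED_INTERVALS = {"visual": (0.5, 1.0)}  # (low, high) per modality
PROMPTS = {
    "value below interval": "Move closer and retake the photo.",
    "value above interval": "Move further away and retake the photo.",
}

def prompt_for_failure(modality, value):
    low, high = QUALIFIED_INTERVALS[modality]
    if value < low:
        description = "value below interval"
    elif value > high:
        description = "value above interval"
    else:
        return None  # value is qualified; no prompt needed
    return PROMPTS[description]

print(prompt_for_failure("visual", 0.2))  # -> Move closer and retake the photo.
```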
Optionally, the key features contained in each modality's identity feature information are detected by a key feature detection model corresponding to that modality, which also extracts the detected key features:
the identity image information of the visual modality is input into the image key feature detection model corresponding to the visual modality, which detects and extracts image key features and outputs the image key features extracted from the identity image information;
the identity sound information of the acoustic modality is input into the sound key feature detection model corresponding to the acoustic modality, which detects and extracts sound key features and outputs the sound key features extracted from the identity sound information;
and the identity body information of the physical modality is input into the body key feature detection model corresponding to the physical modality, which detects and extracts body key features and outputs the body key features extracted from the identity body information.
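The per-modality routing described above can be sketched as follows; the stub detectors are hypothetical stand-ins for the trained key feature detection models.

```python
# Sketch of routing each modality's identity information to its own key
# feature detection model. The stub models return fixed features purely
# to illustrate the dispatch.

def image_model(info):
    return ["head_point", "head_frame"]

def sound_model(info):
    return ["voiceprint"]

def body_model(info):
    return ["gait"]

DETECTORS = {"visual": image_model, "acoustic": sound_model, "physical": body_model}

def extract_key_features(modality, identity_info):
    """Dispatch to the detection model matching the modality."""
    return DETECTORS[modality](identity_info)

print(extract_key_features("acoustic", b"raw-audio-bytes"))  # -> ['voiceprint']
```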
One embodiment of the present specification provides a device for detecting identity features of a kept animal, comprising:
a key feature detection module configured to detect key features contained in identity feature information collected for the kept animal in multiple modalities;
a feature value determination module configured to determine, from the identity feature information, a feature value for each key feature under the feature dimensions of the corresponding modality;
and an identity feature detection module configured to input a multi-modal feature vector, constructed from the feature values, into a classification model for identity feature qualification detection, and to output the model's result as the detection result of the identity feature detection for the kept animal.
One embodiment of the present specification provides a computing device comprising:
a memory and a processor;
the memory is to store computer-executable instructions, and the processor is to execute the computer-executable instructions to:
detecting key features contained in identity feature information acquired by various modes aiming at the feeder;
determining a characteristic numerical value of the key characteristic under the characteristic dimension of the corresponding mode according to the identity characteristic information;
and inputting the multi-modal feature vector obtained by feature construction based on the feature numerical value into a classification model for identity feature qualification detection, and outputting the result as a detection result for identity feature detection of the feeder.
One embodiment of the present specification provides a computer-readable storage medium storing computer instructions that, when executed by a processor, implement the steps of the method for detecting identity features of a kept animal.
In the embodiments of this specification, identity feature information for the kept animal is collected in multiple modalities, the key features contained in each modality's identity feature information are detected and extracted, a multi-modal feature vector for the kept animal is determined on that basis, and the vector is input into a classification model for identity feature qualification detection. The multi-modal feature vector thus reflects the identity features of the kept animal more comprehensively and accurately, making identity feature detection for the kept animal more accurate and, on that basis, identity recognition of the kept animal more accurate and comprehensive.
Drawings
FIG. 1 is a process flow diagram of a method for detecting identity features of a kept animal according to an embodiment of the present disclosure;
FIG. 2 is a process flow diagram of the method applied to a pet identity recognition scenario according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a device for detecting identity features of a kept animal according to an embodiment of the present disclosure;
FIG. 4 is a block diagram of a computing device according to an embodiment of the present disclosure.
Detailed Description
In the following description, numerous specific details are set forth to provide a thorough understanding of the present specification. However, this specification may be embodied in many forms other than those described herein, and those skilled in the art can make similar extensions without departing from its spirit; the specification is therefore not limited to the specific embodiments disclosed below.
The terminology used in the description of the one or more embodiments is for the purpose of describing the particular embodiments only and is not intended to be limiting of the description of the one or more embodiments. As used in one or more embodiments of the present specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in one or more embodiments of the present specification refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It will be understood that, although the terms first, second, etc. may be used in one or more embodiments to describe various information, this information should not be limited by these terms, which are only used to distinguish one type of information from another. For example, without departing from the scope of one or more embodiments of the present description, a "first" may also be referred to as a "second" and, similarly, a "second" as a "first". Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
One embodiment of the specification provides a method for detecting identity features of a kept animal, and one or more embodiments further provide a device for detecting identity features of a kept animal, a computing device, and a computer-readable storage medium. These are described in detail, step by step, with reference to the drawings of the embodiments provided in this specification.
An embodiment of the method for detecting identity features of a kept animal provided by this specification is as follows:
FIG. 1 shows a process flow diagram of the method for detecting identity features of a kept animal provided by an embodiment of this specification, and FIG. 2 shows a process flow diagram of the method applied to a pet identity recognition scenario.
Step S102: detect key features contained in identity feature information collected for the kept animal in multiple modalities.
In practice, animals such as pets need to be identified in many scenarios. For example, when providing services such as medical care, bathing, or boarding, identifying a pet and establishing a pet file enables more accurate service and management; when purchasing insurance for a pet, online application likewise rests on identifying the pet and establishing its identity file; and a lost or stray pet can be recovered online or remotely through pet identity recognition.
In the method for detecting identity features of a kept animal provided by this embodiment, identity feature information for the kept animal is collected in multiple modalities, the key features contained in each modality's identity feature information are detected and extracted, and a multi-modal feature vector is determined on that basis and input into a classification model for identity feature qualification detection. The multi-modal feature vector reflects the identity features of the kept animal more accurately and comprehensively, so identity feature detection is more accurate, identity recognition built on it is more accurate and comprehensive, and its application in different scenarios is promoted.
The kept animals described in this embodiment include pets that bring their owners pleasure on an emotional level (canine pets, feline pets, amphibian pets, etc.), animals kept for economic purposes (e.g., poultry and livestock), and animals kept for social public benefit or environmental protection purposes (e.g., protected animals raised in conservation areas, or animals raised by public-benefit organizations for public-benefit purposes).
This embodiment takes a pet as the example of a kept animal and explains the feature collection and detection process for the pet in particular. The feature collection and detection processes for the other two categories, animals kept for economic purposes and animals kept for social public benefit or environmental protection purposes, can be implemented by reference to the specific implementation of the pet feature collection and detection process provided in this embodiment, and are not described again here.
A modality in this embodiment refers to a sensing dimension or sensing field used to recognize the identity of the kept animal. In this embodiment, the modalities for identity recognition include three: a visual modality, an acoustic modality, and a physical modality. Identity feature detection for the kept animal may also be combined with other modalities, which is not limited here.
Specifically, the identity feature information collected for the kept animal in the visual modality is identity image information collected with an image collection device. The key features contained in the identity image information specifically include: head feature points, nose feature points, eye feature points, and mouth feature points of the kept animal; head, nose, eye, mouth, and whole-body feature frames of the kept animal; and the face feature frame of the owner to whom the kept animal belongs.
Similarly, the identity feature information collected for the kept animal in the acoustic modality is identity sound information collected with a sound collection device; the key feature contained in the identity sound information is the voiceprint feature of the kept animal.
The identity feature information collected for the kept animal in the physical modality is identity body information collected with a biological feature collection device; the key features contained in the identity body information specifically include: a walking posture feature, a paw print feature, and a nose print feature of the kept animal.
It should be noted that the key features contained in the identity image information of the visual modality are not limited to the 10 key features listed above; they may also be limb features or other key features of the kept animal contained in the identity image information. Similarly, the identity sound information of the acoustic modality is not limited to the voiceprint feature listed above, nor is the identity body information of the physical modality limited to the 3 listed key features of walking posture, paw print, and nose print.
In specific implementation, for the identity feature information collected in each modality for the kept animal, the key features contained in that information are detected in preparation for the subsequent identity feature detection. For example, when a user purchases insurance such as accidental injury or accidental loss insurance for a pet dog D that the user keeps, identity feature information of the pet dog is collected in the visual, acoustic, and physical modalities, and key feature detection is performed on the identity feature information of each of the 3 modalities:
(1) Visual modality.
Upon receiving identity image information of pet dog D that the user collected with a smartphone and uploaded, an image detection algorithm or a pre-trained image detection model detects 4 key features contained in the identity image information: a head feature point, a head feature frame, a whole-body feature frame, and the owner's face feature frame.
(2) Acoustic modality.
Upon receiving identity sound information of pet dog D that the user collected with a smartphone and uploaded, a sound detection algorithm or a pre-trained sound recognition model recognizes the voiceprint feature contained in the identity sound information.
(3) Physical modality.
Upon receiving identity body information (a body image) of pet dog D that the user collected with a smartphone and uploaded, an image recognition algorithm or a pre-trained image recognition model recognizes the walking posture feature of pet dog D contained in the identity body information.
Step S104: determine, from the identity feature information, a feature value for each key feature under the feature dimensions of the corresponding modality.
On the basis of the key features detected in the identity feature information of the 3 modalities (visual, acoustic, and physical), the feature values of the visual-modality key features under the visual feature dimensions are determined from the identity image information, the feature values of the acoustic-modality key features under the acoustic feature dimensions are determined from the identity sound information, and the feature values of the physical-modality key features under the physical feature dimensions are determined from the identity body information.
As described above, the key features contained in the identity image information of the visual modality include 10 items: the head, nose, eye, and mouth feature points of the kept animal; the head, nose, eye, mouth, and whole-body feature frames of the kept animal; and the owner's face feature frame. Accordingly, the feature dimensions of the visual modality in this embodiment specifically include the following 4: the feature-point number dimension for the head, nose, eye, and mouth feature points of the kept animal; the feature position dimension for the feature points, the feature frames of the kept animal, and the owner's face feature frame; the feature area dimension for the feature frames of the kept animal and the owner's face feature frame; and the feature overlap dimension for at least two of the feature frames of the kept animal and the owner's face feature frame.
Similarly, the key feature contained in the identity sound information of the acoustic modality is the kept animal's voiceprint feature. Based on this, the feature dimensions of the acoustic modality in this embodiment specifically include 2 feature dimensions: the voiceprint volume dimension and the voiceprint timbre dimension, to which the voiceprint feature belongs.
The key features contained in the identity body information of the body modality include 3 items: the kept animal's walking posture feature, paw print feature and nose print feature. Based on this, the feature dimensions of the body modality in this embodiment specifically include the following 2 feature dimensions: the posture dimension, to which the walking posture feature belongs, and the feature definition dimension, to which the paw print feature and the nose print feature belong.
In a specific implementation process, the feature types and numbers of the key features of the different modalities differ. Therefore, in determining the feature value of each modality under its feature dimensions, the feature value is determined from both the association between that modality's key features and the identity feature information and the association among those key features, so that the determined feature values more accurately reflect the identity features of the kept animal, and the identity feature detection performed for the kept animal on this basis is more accurate and more comprehensive.
In an optional implementation manner provided by this embodiment, the feature value under the feature dimension of each modality is specifically determined in the following manner:
1) for the identity feature information of at least one modality, extracting the detected key features from that identity feature information;
2) calculating the feature values of the key features under the feature dimensions of the modality according to the identity feature information of the modality, the extracted key features and the association among the key features.
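The two steps above can be sketched as a filter followed by a per-dimension computation. This is a minimal illustration; the feature names, the box format (x1, y1, x2, y2) and the chosen dimensions are assumptions, not specified by the patent:

```python
def extract_key_features(detections):
    """Step 1: keep only the key features that were actually detected."""
    return {name: value for name, value in detections.items() if value is not None}

def compute_feature_values(extracted):
    """Step 2: derive per-dimension feature values from the extracted key
    features (here: a feature point count and feature frame areas)."""
    values = {}
    # feature point number dimension: count the detected head feature points
    values["point_count"] = len(extracted.get("head_points", []))
    # feature area dimension: area of each detected feature frame
    for name, box in extracted.items():
        if name.endswith("_frame"):
            x1, y1, x2, y2 = box
            values[name + "_area"] = (x2 - x1) * (y2 - y1)
    return values

detections = {
    "head_points": [(10, 12), (14, 12), (12, 16)],
    "head_frame": (5, 5, 25, 25),
    "eye_frame": None,  # not detected in this capture
}
extracted = extract_key_features(detections)
print(compute_feature_values(extracted))  # {'point_count': 3, 'head_frame_area': 400}
```

The undetected `eye_frame` is dropped in step 1, so step 2 only computes values for features that are actually present.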
Specifically, for the identity feature information of the 3 modalities, namely the visual, acoustic and body modalities, the key features detected in the identity image information, in the identity sound information and in the identity body information are first extracted respectively. Then, the feature values of the visual-modality key features under the visual-modality feature dimensions are calculated according to the identity image information, the key features extracted from it and the association among those key features; the feature values of the acoustic-modality key features under the acoustic-modality feature dimensions are calculated according to the identity sound information, the key features extracted from it and the association among those key features; and the feature values of the body-modality key features under the body-modality feature dimensions are calculated according to the identity body information, the key features extracted from it and the association among those key features.
In this embodiment, the calculation of the feature values under the feature dimensions of the visual modality is taken as an example. The calculation of the feature values under the feature dimensions of the acoustic modality and of the body modality is similar; refer to the calculation process for the visual modality provided below, which is not repeated here.
Specifically, the calculation of the feature values under the feature dimensions of the visual modality is realized as follows:
(a) counting, in the feature point number dimension, the number of the kept animal's head feature points, nose feature points, eye feature points and/or mouth feature points contained in the identity image information;
It should be noted that the number of feature points of the key features contained in the identity image information is counted here in order to prepare the data input for the identity feature eligibility detection subsequently performed by the classification model from the perspective of those feature points. Specifically, when the multi-modal feature vector containing the feature point numbers is subsequently input into the classification model to perform identity feature eligibility detection on the kept animal, it can be judged whether the eligibility detection of the identity image information under the feature point number dimension passes. For example, if the number of feature points of the kept animal's left eye is 1 and the number of feature points of the right eye is 0 (that is, no right-eye feature point was detected), the kept animal may not have been completely captured during image collection, or the collection angle may have deviated, i.e. the image shows a side view rather than a front view. Finally, the overall identity feature eligibility detection result of the identity image information in the visual modality is determined by combining the eligibility detection results under the feature position dimension, the feature area dimension and the feature overlap dimension.
Similarly, the feature eligibility detection of the identity image information can also draw on the number of feature frames of the key features contained in the identity image information. Correspondingly, when the multi-modal feature vector containing the feature frame numbers is subsequently input into the classification model to perform identity feature eligibility detection on the kept animal, it can be judged whether the number of feature frames of the key features meets the requirement and whether the feature frames are complete. For example, if the number of head feature frames of the target kept animal is 2, the head of another animal may have been captured along with the target, which indicates that the image collection device was too far from the target or that its collection view angle was not directly facing the target. The eligibility detection of the identity image information under the feature frame number dimension is judged accordingly, and the overall eligibility detection result in the visual modality is finally determined by combining the eligibility detection results under the feature position dimension, the feature area dimension and the feature overlap dimension.
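The two count-based checks just described can be illustrated as follows. The function name, argument names and the specific rules are illustrative assumptions; the patent leaves the concrete judgment to the classification model:

```python
def count_eligibility(left_eye_points, right_eye_points, head_frames):
    """Return (passed, reasons) for the count-based eligibility checks."""
    reasons = []
    # a missing eye feature point suggests a side view or incomplete capture
    if left_eye_points == 0 or right_eye_points == 0:
        reasons.append("possible side view: an eye feature point was not detected")
    # more than one head feature frame suggests another animal entered the frame
    if head_frames > 1:
        reasons.append("multiple head frames: another animal may have been captured")
    return (len(reasons) == 0, reasons)

print(count_eligibility(1, 0, 1))  # fails: right-eye feature point missing
print(count_eligibility(1, 1, 2))  # fails: two head feature frames
print(count_eligibility(1, 1, 1))  # passes
```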
(b) calculating, in the feature position dimension, the feature position information of the kept animal's head feature points, nose feature points, eye feature points, mouth feature points, head feature frame, nose feature frame, eye feature frame, mouth feature frame, whole-body feature frame and/or the face feature frame of the owner to whom the kept animal belongs, contained in the identity image information;
It should be noted that the feature position information of the key features contained in the identity image information is calculated here, likewise to prepare the data input for the identity feature eligibility detection subsequently performed by the classification model from the perspective of feature position. Specifically, when the multi-modal feature vector containing the feature position information is input into the classification model to perform identity feature eligibility detection on the kept animal, the image collection angle between the kept animal and the collection device can be determined by analyzing the position information among the key features (for example, the relative positions between the feature points and the feature frames differ between the case where the kept animal directly faces the collection device and the case where it faces the device sideways). The eligibility detection of the identity image information under the feature position dimension is determined using this collection angle relationship, and the overall eligibility detection result in the visual modality is finally determined by combining the eligibility detection results under the feature point number dimension, the feature area dimension and the feature overlap dimension.
(c) calculating, in the feature area dimension, the feature area information of the kept animal's head feature frame, nose feature frame, eye feature frame, mouth feature frame, whole-body feature frame and/or the face feature frame of the owner to whom the kept animal belongs, contained in the identity image information;
(d) analyzing, in the feature overlap dimension, the feature overlap relation information of at least two of the kept animal's head feature frame, nose feature frame, eye feature frame, mouth feature frame, whole-body feature frame and the face feature frame of the owner to whom the kept animal belongs, contained in the identity image information.
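Steps (b) through (d) amount to geometry on the detected feature frames. A minimal sketch, assuming each frame is an axis-aligned box (x1, y1, x2, y2); the box format and function names are illustrative, not from the patent:

```python
def center(box):
    """Step (b): feature position taken as the box center."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def area(box):
    """Step (c): feature area of the box."""
    x1, y1, x2, y2 = box
    return (x2 - x1) * (y2 - y1)

def overlap(box_a, box_b):
    """Step (d): area of the intersection of two boxes (0 if disjoint)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    w = min(ax2, bx2) - max(ax1, bx1)
    h = min(ay2, by2) - max(ay1, by1)
    return w * h if w > 0 and h > 0 else 0

head_frame = (10, 10, 30, 30)
whole_body_frame = (0, 0, 100, 80)
owner_face_frame = (200, 0, 240, 40)

print(center(head_frame))                     # (20.0, 20.0)
print(area(head_frame))                       # 400
print(overlap(head_frame, whole_body_frame))  # head lies inside the body frame
print(overlap(head_frame, owner_face_frame))  # 0: the two frames are disjoint
```

A head frame fully inside the whole-body frame overlapping by its own area, and a disjoint owner face frame, are exactly the kind of relations the feature overlap dimension captures.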
Continuing the example in use: the 4 key features contained in the identity image information of the pet dog D, namely the head feature points, the head feature frame, the whole-body feature frame and the owner's face feature frame, are detected; the voiceprint feature contained in the identity sound information of the pet dog D is recognized; and the walking posture feature contained in the identity body information (body image) of the pet dog D is recognized;
on this basis, the 4 detected key features are first extracted from the identity image information of the pet dog D, the recognized voiceprint feature is extracted from the identity sound information, and the recognized walking posture feature is extracted from the identity body information (body image);
then, combining the extracted 4 key features of the visual modality, their association with the identity image information of the pet dog D and the association among the 4 key features, the feature values of the 4 key features under the feature dimensions of the visual modality (the feature point number dimension, feature position dimension, feature area dimension and feature overlap dimension) are calculated. The specific calculation process comprises the following 4 steps:
Step 1: counting the number of head feature points contained in the identity image information of the pet dog D;
Step 2: calculating, from the size information of the identity image information of the pet dog D, the positions of the head feature points, the head feature frame, the whole-body feature frame and the owner's face feature frame in the image corresponding to the identity image information, as well as the relative position of each head feature point within that image;
Step 3: on the basis of the positions calculated in step 2, further calculating the areas of the head feature frame, the whole-body feature frame and the owner's face feature frame in the image corresponding to the identity image information;
Step 4: on the basis of the positions calculated in step 2 and the areas calculated in step 3, analyzing whether the 3 feature frames, namely the head feature frame, the whole-body feature frame and the owner's face feature frame, overlap in the image corresponding to the identity image information.
Similarly, the feature values of the acoustic-modality key feature under the acoustic-modality feature dimensions (the voiceprint volume dimension and the voiceprint timbre dimension) are calculated by combining the extracted voiceprint feature and its association with the identity sound information of the pet dog D; and the feature values of the body-modality key features under the body-modality feature dimensions (the posture dimension and the feature definition dimension) are calculated by combining the extracted 3 key features of the body modality and their association with the identity body information of the pet dog D.
In practical applications, in the process of detecting and extracting the key features contained in the identity feature information of the 3 modalities, in order to improve the detection and extraction efficiency of the key features, an optional implementation of this embodiment uses pre-trained models to perform the detection and extraction. Specifically, the identity image information of the visual modality is input into an image key feature detection model corresponding to the visual modality, which detects and extracts the image key features and outputs those extracted from the identity image information. Similarly, the identity sound information of the acoustic modality is input into a sound key feature detection model corresponding to the acoustic modality, which outputs the sound key features extracted from the identity sound information; and the identity body information of the body modality is input into a body key feature detection model corresponding to the body modality, which outputs the body key features extracted from the identity body information.
In addition, the detection and extraction of the key features from the identity feature information of the 3 modalities can also be realized with a single model: the identity feature information of the 3 modalities is input together into a pre-trained feature detection and extraction model, which detects and extracts the key features in the identity feature information of the 3 modalities and finally outputs them. Alternatively, the detection stage and the extraction stage of the key features can be realized with different models: the identity feature information of the 3 modalities is input into a pre-trained feature detection model, whose output serves as the input of a feature extraction model that subsequently extracts the key features and finally outputs the key features extracted from the identity feature information of the 3 modalities.
In this embodiment, identity feature detection for the kept animal involves the 3 modalities, namely the visual, acoustic and body modalities. To make the feature values of the key features extracted from the identity feature information of the different modalities better compatible under the feature dimensions of their corresponding modalities, the feature values under the feature dimensions of the 3 modalities are integrated into one multi-modal feature vector representing the identity features of the kept animal across the modalities, so that the identity features of the different modalities are made compatible and the identity feature detection performed for the kept animal on this basis is more efficient and convenient. In an optional implementation of this embodiment, after the feature values of the key features under the feature dimensions of the corresponding modalities are determined from the identity feature information, a feature vector is constructed with the feature values under the feature dimensions of each modality as vector elements, giving the multi-modal feature vector; the vector elements of the multi-modal feature vector correspond one to one to the feature values under the feature dimensions of the 3 modalities.
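The construction just described is a concatenation of the per-modality feature values in a fixed dimension order. A sketch with hypothetical dimension names and values (the ordering and the scalar encodings are assumptions):

```python
# fixed ordering of feature dimensions across the 3 modalities
VISUAL_DIMS = ["point_count", "position", "area", "overlap"]
ACOUSTIC_DIMS = ["voiceprint_volume", "voiceprint_timbre"]
BODY_DIMS = ["posture", "definition"]

def build_multimodal_vector(visual, acoustic, body):
    """Concatenate per-modality feature values into one vector; each
    vector element corresponds to exactly one feature dimension."""
    return ([visual[d] for d in VISUAL_DIMS]
            + [acoustic[d] for d in ACOUSTIC_DIMS]
            + [body[d] for d in BODY_DIMS])

vec = build_multimodal_vector(
    visual={"point_count": 3, "position": 0.4, "area": 0.25, "overlap": 0.9},
    acoustic={"voiceprint_volume": 0.1, "voiceprint_timbre": 0.7},
    body={"posture": 0.8, "definition": 0.6},
)
print(vec)  # 8 elements, one per feature dimension
```

Because the ordering is fixed, a downstream classifier can map every vector element back to the modality and feature dimension it came from.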
S106, inputting the multi-modal feature vector constructed from the feature values into a classification model for identity feature eligibility detection, and taking the model output as the detection result of the identity feature detection of the kept animal.
The classification model in this embodiment refers to a classifier that performs eligibility detection on the identity features of the kept animal. Its input is the multi-modal feature vector constructed above; after the vector is input, the classification model analyzes and judges it and finally outputs an eligibility detection result for the vector. Since the multi-modal feature vector characterizes the kept animal through the key features extracted from the identity feature information of the multiple modalities, the eligibility detection result that the classification model finally outputs for the vector is the eligibility detection result of the kept animal's identity feature information across the multiple modalities.
In an optional implementation of this embodiment, the classification model is a multi-classification model. After the multi-modal feature vector is input, the multi-classification model performs identity feature eligibility detection on the key features contained in the feature information of each modality, based on the feature values under the feature dimensions of each modality contained in the vector. If the modality detection result of at least one modality's eligibility detection is a detection failure, a multi-modal detection result marking that modality's eligibility detection as failed is output and taken as the detection result of the identity feature detection of the kept animal by the multi-classification model.
It should be noted that the vector elements of the multi-modal feature vector input into the multi-classification model correspond one to one to the feature values under the feature dimensions of the 3 modalities, namely the visual, acoustic and body modalities. Accordingly, the multi-modal detection result output by the multi-classification model contains, for each of the 3 modalities, a result indicating whether that modality's identity feature eligibility detection passed.
For example, a multi-modal feature vector constructed from the feature values of the pet dog D under the feature dimensions of the 3 modalities, namely the visual, acoustic and body modalities, is input into a pre-trained multi-classification model, and the multi-classification model outputs a multi-modal detection result after performing identity feature eligibility detection for the visual, acoustic and body modalities according to the input vector: the result is '1, 0, 1', where the '1' in the first position indicates that the identity feature detection of the pet dog D in the visual modality is eligible (passed), the '0' in the second position indicates that the identity feature detection in the acoustic modality is ineligible (failed), and the '1' in the third position indicates that the identity feature detection in the body modality is eligible (passed).
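Decoding such a per-modality result string can be sketched as follows; the '1'/'0' encoding follows the example above, while the function and modality names are illustrative:

```python
MODALITIES = ["visual", "acoustic", "body"]

def decode_result(result):
    """Map a multi-modal detection result like '1, 0, 1' to
    per-modality pass/fail flags."""
    bits = [b.strip() for b in result.split(",")]
    return {m: bit == "1" for m, bit in zip(MODALITIES, bits)}

def failed_modalities(result):
    """List the modalities whose identity feature eligibility detection failed."""
    return [m for m, ok in decode_result(result).items() if not ok]

print(decode_result("1, 0, 1"))
print(failed_modalities("1, 0, 1"))  # ['acoustic']
```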
Besides, the classification model can also be a binary classification model, which differs from the multi-classification model above in that its output takes only two values: the identity feature eligibility detection performed on the multi-modal feature vector passes, or it fails.
In practical application, if the identity feature detections of the visual, acoustic and body modalities in the multi-modal result all pass, the identity feature information collected for the kept animal in the visual, acoustic and body modalities is eligible;
if the identity feature detection of at least one of the 3 modalities, namely the visual, acoustic and body modalities, fails in the multi-modal result, the identity feature information collected for the kept animal in that modality is ineligible, in which case the identity feature information of the failed modality should be supplemented. In an optional implementation of this embodiment, the supplementation of the identity feature information of the failed modality is guided by sending a corresponding reminder, specifically implemented as follows:
1) determining, from the multi-modal detection result output by the multi-classification model, the at least one modality whose identity feature eligibility detection failed as the failed detection modality;
2) comparing the feature values of the key features of the failed detection modality under its feature dimensions against the feature eligibility threshold intervals under those feature dimensions, and determining the identity feature ineligibility description of the failed detection modality from the comparison result;
3) determining the collection reminder corresponding to the identity feature ineligibility description according to the preset correspondence between the ineligibility descriptions of the failed detection modality and collection reminders;
4) executing the collection reminder in the reminder mode corresponding to the failed detection modality.
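Steps 1) through 4) reduce to a threshold-interval check followed by a table lookup. In this sketch the threshold interval, the ineligibility descriptions and the reminder texts are illustrative assumptions, not values from the patent:

```python
# eligibility threshold interval for the voiceprint volume dimension (assumed)
VOLUME_INTERVAL = (0.3, 0.9)

# preset correspondence between ineligibility descriptions and reminders (assumed)
REMINDERS = {
    "voiceprint volume too low": "please increase the collection volume",
    "voiceprint volume too high": "please decrease the collection volume",
}

def ineligibility_description(volume):
    """Step 2): compare the feature value against the threshold interval."""
    lo, hi = VOLUME_INTERVAL
    if volume < lo:
        return "voiceprint volume too low"
    if volume > hi:
        return "voiceprint volume too high"
    return None  # within the eligible interval

def collection_reminder(volume):
    """Steps 3) and 4): look up the reminder to present to the user."""
    desc = ineligibility_description(volume)
    return REMINDERS.get(desc) if desc else None

print(collection_reminder(0.1))  # below the interval: ask for more volume
print(collection_reminder(0.5))  # eligible: no reminder needed
```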
Following the above example, a multi-modal feature vector constructed from the feature values of the pet dog D under the feature dimensions of the 3 modalities is input into the pre-trained multi-classification model, and the multi-classification model outputs the multi-modal detection result '1, 0, 1' after performing identity feature eligibility detection for the visual, acoustic and body modalities: the '1' in the first position indicates that the identity feature detection in the visual modality passed, the '0' in the second position indicates that the identity feature detection in the acoustic modality failed, and the '1' in the third position indicates that the identity feature detection in the body modality passed. Since the modality whose identity feature detection failed in the multi-modal detection result is the acoustic modality, the acoustic modality is first determined as the failed detection modality;
then, the volume value of the voiceprint feature contained in the identity sound information of the pet dog D under the voiceprint volume dimension is compared against the voiceprint volume threshold interval under that dimension, with the result that the volume value is smaller than the lower limit of the interval; the timbre value of the voiceprint feature under the voiceprint timbre dimension is likewise compared against the voiceprint timbre threshold interval; and based on the two comparison results the identity feature ineligibility of the acoustic modality is described as 'the voiceprint volume is too low';
next, in the preset correspondence table between the ineligibility descriptions of the acoustic modality and collection reminders, the collection reminder corresponding to the ineligibility description 'the voiceprint volume is too low' is found to be 'increase the collection volume';
finally, according to the voice reminder mode corresponding to the acoustic modality, a voice prompt corresponding to the collection reminder 'increase the collection volume' is played from the user's smartphone.
The method for detecting the identity features of a kept animal provided by this embodiment is further described below, taking its application in a pet identity recognition scenario as an example with reference to fig. 2. Referring to fig. 2, the method for detecting the identity features of a kept animal in the pet identity recognition scenario specifically includes steps S202 to S220.
Step S202, detecting the key features contained in the identity image information collected in the visual modality for the pet dog D, the key features contained in the identity sound information collected in the acoustic modality, and the key features contained in the identity body information collected in the body modality.
In the process of the user purchasing insurance such as accidental injury or accidental loss cover for the pet dog D that the user keeps, identity feature information of the pet dog is collected in the visual, acoustic and body modalities, and key feature detection is performed on the identity feature information of each of the 3 modalities on the basis of the collected information:
(1) the visual modality;
the method comprises the steps that a user detects 4 key features, namely a head feature point, a head feature frame, a whole body feature frame and a face feature frame of an owner, contained in identity image information of a pet dog D on the basis of receiving the identity image information uploaded by the user through identity image information acquired by the pet dog D which is fed by the user through a smart phone by using an image detection algorithm or a pre-trained image detection model;
(2) the acoustic modality;
the method comprises the steps that a user identifies voiceprint features (voiceprint features) contained in identity voice information of a pet dog D which is housed by the user through a smart phone on the basis of receiving the identity voice information uploaded by the user through a voice detection algorithm or a pre-trained voice recognition model;
(3) the body modality.
On the basis of receiving the identity body information (body image) of the pet dog D that the user keeps, collected and uploaded by the user through the smartphone, the walking posture feature of the pet dog D contained in the identity body information (body image) is recognized using an image recognition algorithm or a pre-trained image recognition model.
Step S204, extracting the detected key features from the identity image information, the identity sound information and the identity body information of the pet dog D.
For the visual modality, the 4 detected key features are extracted from the identity image information of the pet dog D; the recognized voiceprint feature is extracted from the identity sound information; and the recognized walking posture feature is extracted from the identity body information (body image).
Step S206, calculating the feature values of the key features under the feature dimensions of the visual, acoustic and body modalities according to the identity feature information of the 3 modalities, the extracted key features and the association among the key features.
The feature dimensions of the visual modality specifically include the following 4 feature dimensions: the feature point number dimension, to which the head feature points belong; the feature position dimension, to which the head feature points, the head feature frame, the whole-body feature frame and the owner's face feature frame belong; the feature area dimension, to which the head feature frame, the whole-body feature frame and the owner's face feature frame belong; and the feature overlap dimension, which covers the overlap between at least two of the head feature frame, the whole-body feature frame and the owner's face feature frame.
The feature dimensions of the acoustic modality specifically include 2 feature dimensions: the voiceprint volume dimension and the voiceprint timbre dimension, to which the voiceprint feature belongs.
The feature dimensions of the body modality specifically include the following 2 feature dimensions: the posture dimension, to which the walking posture feature belongs, and the feature definition dimension, to which the paw print feature and the nose print feature belong.
Combining the extracted 4 key features of the visual modality, the associations between the 4 key features and the identity image information of the pet dog D, and the associations among the 4 key features, the feature values of the 4 key features under the feature dimensions of the visual modality (the feature point number dimension, the feature position dimension, the feature area dimension and the feature overlapping dimension) are calculated; the specific calculation process comprises the following 4 steps:
the first step: counting the number of head feature points contained in the identity image information of the pet dog D;
the second step: calculating, according to the size information of the identity image information of the pet dog D, the relative positions of each head feature point, the head feature frame, the whole-body feature frame and the face feature frame of the owner in the image corresponding to the identity image information;
the third step: on the basis of the positions calculated in the second step, further calculating the areas of the head feature frame, the whole-body feature frame and the face feature frame of the owner in the image corresponding to the identity image information;
the fourth step: on the basis of the positions calculated in the second step and the areas calculated in the third step, analyzing whether any two of the 3 feature frames, namely the head feature frame, the whole-body feature frame and the face feature frame of the owner, overlap in the image corresponding to the identity image information.
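The 4 calculation steps above can be sketched as follows. This is an illustrative sketch only; the function and variable names (`visual_feature_values`, `head_points`, the frame names) are assumptions for the example, not part of this specification, and positions and areas are normalized by the image size as the second and third steps describe.

```python
def iou_overlap(a, b):
    """Return True if two axis-aligned boxes (x1, y1, x2, y2) overlap at all."""
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def visual_feature_values(image_w, image_h, head_points, frames):
    """frames: dict name -> (x1, y1, x2, y2) for head / whole_body / owner_face."""
    # First step: count the head feature points.
    point_count = len(head_points)
    # Second step: relative positions of each point and of each frame centre.
    positions = {f"point_{i}": (x / image_w, y / image_h)
                 for i, (x, y) in enumerate(head_points)}
    for name, (x1, y1, x2, y2) in frames.items():
        positions[name] = ((x1 + x2) / 2 / image_w, (y1 + y2) / 2 / image_h)
    # Third step: area of each frame as a fraction of the image area.
    areas = {name: (x2 - x1) * (y2 - y1) / (image_w * image_h)
             for name, (x1, y1, x2, y2) in frames.items()}
    # Fourth step: pairwise overlap flags among the frames.
    names = sorted(frames)
    overlaps = {(a, b): iou_overlap(frames[a], frames[b])
                for i, a in enumerate(names) for b in names[i + 1:]}
    return point_count, positions, areas, overlaps
```

The four returned values correspond to the four visual feature dimensions: the point count, the relative positions, the normalized areas, and the pairwise overlap relations of the feature frames.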
Similarly, the feature value of the voiceprint key feature under the feature dimensions of the acoustic modality (the voiceprint volume dimension and the voiceprint timbre dimension) is calculated by combining the extracted voiceprint key feature of the acoustic modality and the association between this key feature and the identity sound information of the pet dog D; and the feature values of the key features of the body modality under the feature dimensions of the body modality (the posture dimension and the feature definition dimension) are calculated by combining the extracted 3 key features of the body modality and the associations between the 3 key features and the identity body information of the pet dog D.
Step S208: constructing a feature vector by taking the feature values under the feature dimensions of the visual modality, the acoustic modality and the body modality as vector elements, to obtain the multi-modal feature vector of the pet dog D.
The vector elements of the multi-modal feature vector of the pet dog D correspond one to one to the feature values under the feature dimensions of the 3 modalities of the pet dog D, namely the visual modality, the acoustic modality and the body modality.
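The one-to-one correspondence between vector elements and feature dimensions can be sketched as follows. The dimension names in `MODALITY_DIMS` are illustrative assumptions; the point is only that flattening the per-modality feature values in a fixed dimension order yields a vector whose positions are unambiguous.

```python
# Fixed ordering of modalities and their feature dimensions (names assumed).
MODALITY_DIMS = {
    "visual":   ["point_count", "position", "area", "overlap"],
    "acoustic": ["voiceprint_volume", "voiceprint_timbre"],
    "body":     ["posture", "feature_definition"],
}

def build_multimodal_vector(values_by_modality):
    """values_by_modality: dict modality -> dict dimension -> float.

    Returns a flat vector whose elements correspond one to one to the
    feature dimensions listed in MODALITY_DIMS, in that order.
    """
    vector = []
    for modality, dims in MODALITY_DIMS.items():
        for dim in dims:
            vector.append(values_by_modality[modality][dim])
    return vector
```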
Step S210: inputting the multi-modal feature vector of the pet dog D into a multi-classification model for identity feature qualification detection, and outputting the multi-modal detection result of the identity feature detection of the pet dog D.
If the multi-modal detection result output by the multi-classification model includes at least one modality for which the feature qualification detection fails, that is, the output multi-modal detection result includes a modality failing the identity feature qualification detection, steps S212 to S218 are executed.
For example, the multi-modal detection result output by the multi-classification model is "1, 0, 1": the first position being "1" indicates that the identity feature of the pet dog D in the visual modality is qualified, i.e. passes the detection; the second position being "0" indicates that the identity feature of the pet dog D in the acoustic modality is unqualified, i.e. fails the detection; and the third position being "1" indicates that the identity feature of the pet dog D in the body modality is qualified, i.e. passes the detection.
If the modality detection results of the feature qualification detection of the visual modality, the acoustic modality and the body modality in the multi-modal detection result output by the multi-classification model all pass, that is, the output multi-modal detection result does not include any modality failing the identity feature qualification detection, step S220 is executed and no further processing is required.
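Interpreting a multi-modal detection result such as "1, 0, 1" can be sketched as follows. The modality ordering and the string format are assumptions taken from the example above; a "0" at a position marks that modality as failing the identity feature qualification detection.

```python
# Assumed positional meaning of the multi-modal detection result.
MODALITIES = ["visual", "acoustic", "body"]

def failed_modalities(result):
    """result: a string like '1, 0, 1' or a sequence of 0/1 flags.

    Returns the list of modalities whose qualification detection failed.
    """
    if isinstance(result, str):
        flags = [int(x) for x in result.replace(",", " ").split()]
    else:
        flags = list(result)
    return [m for m, flag in zip(MODALITIES, flags) if flag == 0]
```

An empty return value corresponds to the all-pass case in which no re-collection is required; a non-empty return value gives the detection failure modalities to handle in steps S212 to S218.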
Step S212: determining the acoustic modality as the detection failure modality.
Step S214: comparing the feature values of the key feature contained in the identity feature information of the acoustic modality with the feature qualification threshold intervals under the corresponding feature dimensions, and determining the disqualified identity feature description of the acoustic modality according to the comparison results.
Specifically, the volume value of the voiceprint feature contained in the identity sound information of the pet dog D under the voiceprint volume dimension is compared with the voiceprint volume threshold interval of that dimension, and the comparison result is that the volume value is smaller than the lower limit of the voiceprint volume threshold interval; the timbre value of the voiceprint feature under the voiceprint timbre dimension is compared with the voiceprint timbre threshold interval of that dimension, and the comparison result is that the timbre value lies within the voiceprint timbre threshold interval. Based on these two comparison results, the disqualified identity feature description of the acoustic modality is determined as "the voiceprint volume is too low".
Step S216: looking up, in the correspondence table between disqualified identity feature descriptions and collection reminders of the acoustic modality, the collection reminder corresponding to the disqualified identity feature description.
Specifically, in the correspondence table between disqualified identity feature descriptions and collection reminders of the acoustic modality, the collection reminder corresponding to the disqualified identity feature description "the voiceprint volume is too low" is found to be "increase the collection volume".
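Steps S214 and S216 can be sketched together as follows. The interval bounds and the table entries below are illustrative assumptions, not values from this specification; the mechanism shown is the one described: compare a feature value against its qualification threshold interval, derive a disqualified description, then look up the matching collection reminder.

```python
# Assumed qualification threshold intervals per feature dimension.
QUALIFIED_INTERVALS = {          # dimension -> (lower limit, upper limit)
    "voiceprint_volume": (40.0, 90.0),
    "voiceprint_timbre": (0.2, 0.8),
}

# Assumed correspondence table: disqualified description -> collection reminder.
REMINDER_TABLE = {
    "voiceprint volume too low": "increase the collection volume",
    "voiceprint volume too high": "decrease the collection volume",
}

def disqualified_description(dimension, value):
    """Compare a feature value with its threshold interval (step S214)."""
    lower, upper = QUALIFIED_INTERVALS[dimension]
    if value < lower:
        return f"{dimension.replace('_', ' ')} too low"
    if value > upper:
        return f"{dimension.replace('_', ' ')} too high"
    return None  # value lies within the qualified interval

def collection_reminder(description):
    """Look up the collection reminder for a description (step S216)."""
    return REMINDER_TABLE.get(description)
```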
Step S218: issuing the collection reminder by voice on the smart phone of the user according to the voice reminder mode corresponding to the acoustic modality.
In conclusion, the method for detecting identity features of a feeder determines the multi-modal feature vector of the feeder on the basis of collecting identity feature information of the feeder in multiple modalities and detecting and extracting the key features contained in the identity feature information of the multiple modalities, and inputs the multi-modal feature vector into a classification model for identity feature qualification detection, so that the identity features of the feeder are reflected more accurately and comprehensively through the multi-modal feature vector, more accurate identity feature detection of the feeder is realized, and the identity feature recognition of the feeder on this basis is more accurate and comprehensive.
An embodiment of the apparatus for detecting identity features of a feeder provided by this specification is as follows:
in the above embodiments, a method for detecting identity features of a feeder is provided; correspondingly, an apparatus for detecting identity features of a feeder is also provided, which is described below with reference to the accompanying drawings.
Referring to FIG. 3, a schematic diagram of an apparatus for detecting identity features of a feeder according to this embodiment is shown.
Since the apparatus embodiments are substantially similar to the method embodiments, they are described in a relatively simple manner, and reference may be made to the corresponding description of the method embodiments provided above for relevant portions. The device embodiments described below are merely illustrative.
This specification provides an apparatus for detecting identity features of a feeder, comprising:
a key feature detection module 302, configured to detect key features contained in identity feature information collected for the feeder in various modalities;
a feature value determination module 304 configured to determine, according to the identity feature information, a feature value of the key feature in a feature dimension of a corresponding modality;
and an identity feature detection module 306, configured to input the multi-modal feature vector obtained by feature vector construction based on the feature values into a classification model for identity feature qualification detection, and output the result as the detection result of the identity feature detection of the feeder.
Optionally, the feature value determination module 304 includes:
a key feature extraction submodule, configured to extract the detected key features from the identity feature information of at least one modality;
and a feature value calculation submodule, configured to calculate the feature value of the key feature under the feature dimension of the modality according to the identity feature information of the modality, the extracted key features and the association relationships among the key features.
Optionally, the apparatus for detecting identity features of a feeder comprises:
a multi-modal feature vector construction module, configured to construct a feature vector by taking the feature values under the feature dimensions of each modality as vector elements, to obtain the multi-modal feature vector;
wherein the vector elements of the multi-modal feature vector correspond one to one to the feature values under the feature dimensions of each modality.
Optionally, the modality includes at least one of the following: a visual modality, an acoustic modality and a body modality;
wherein the identity feature information collected in the visual modality for the feeder comprises: identity image information collected for the feeder by using an image collection device;
the identity feature information collected in the acoustic modality for the feeder comprises: identity sound information collected for the feeder by using a sound collection device;
and the identity feature information collected in the body modality for the feeder comprises: identity body information collected for the feeder by using a biological feature collection device.
Optionally, the key features contained in the identity image information collected for the feeder in the visual modality include at least one of the following:
a feeder head feature point, a feeder nose feature point, a feeder eye feature point, a feeder mouth feature point, a feeder head feature frame, a feeder nose feature frame, a feeder eye feature frame, a feeder mouth feature frame, a feeder whole-body feature frame, and a face feature frame of the owner of the feeder;
accordingly, the feature dimensions of the visual modality include at least one of the following:
the feature point number dimension to which the feeder head feature point, the feeder nose feature point, the feeder eye feature point and the feeder mouth feature point belong; the feature position dimension to which the feeder head feature point, the feeder nose feature point, the feeder eye feature point, the feeder mouth feature point, the feeder head feature frame, the feeder nose feature frame, the feeder eye feature frame, the feeder mouth feature frame, the feeder whole-body feature frame and the face feature frame of the owner of the feeder belong; the feature area dimension to which the feeder head feature frame, the feeder nose feature frame, the feeder eye feature frame, the feeder mouth feature frame, the feeder whole-body feature frame and the face feature frame of the owner of the feeder belong; and the feature overlapping dimension to which at least two of the feeder head feature frame, the feeder nose feature frame, the feeder eye feature frame, the feeder mouth feature frame, the feeder whole-body feature frame and the face feature frame of the owner of the feeder belong.
Optionally, the feature value determination module 304 includes:
a number counting submodule, configured to count, in the feature point number dimension, the number of the feeder head feature points, the feeder nose feature points, the feeder eye feature points and/or the feeder mouth feature points contained in the identity image information;
a position calculation submodule, configured to calculate, in the feature position dimension, the feature position information of the feeder head feature point, the feeder nose feature point, the feeder eye feature point, the feeder mouth feature point, the feeder head feature frame, the feeder nose feature frame, the feeder eye feature frame, the feeder mouth feature frame, the feeder whole-body feature frame and/or the face feature frame of the owner of the feeder contained in the identity image information;
an area calculation submodule, configured to calculate, in the feature area dimension, the feature area information of the feeder head feature frame, the feeder nose feature frame, the feeder eye feature frame, the feeder mouth feature frame, the feeder whole-body feature frame and/or the face feature frame of the owner of the feeder contained in the identity image information;
and/or,
an overlapping relation analysis submodule, configured to analyze, in the feature overlapping dimension, the feature overlapping relation information of at least two of the feeder head feature frame, the feeder nose feature frame, the feeder eye feature frame, the feeder mouth feature frame, the feeder whole-body feature frame and the face feature frame of the owner of the feeder contained in the identity image information.
Optionally, the key features contained in the identity sound information collected for the feeder in the acoustic modality include: a feeder voiceprint feature;
accordingly, the feature dimensions of the acoustic modality include at least one of the following: the voiceprint volume dimension to which the feeder voiceprint feature belongs and the voiceprint timbre dimension to which the feeder voiceprint feature belongs.
Optionally, the key features contained in the identity body information collected for the feeder in the body modality include at least one of the following: a walking posture feature, a feeder palm print feature and a feeder nose print feature;
accordingly, the feature dimensions of the body modality include at least one of the following: the posture dimension to which the walking posture feature belongs, and the feature definition dimension to which the feeder palm print feature and the feeder nose print feature belong.
Optionally, the classification model includes at least one of the following: a binary classification model and a multi-classification model.
Optionally, if the classification model is a multi-classification model, after the multi-modal feature vector is input into the multi-classification model, the multi-classification model performs identity feature qualification detection on the key features contained in the identity feature information of each modality respectively, based on the feature values under the feature dimensions of each modality contained in the multi-modal feature vector;
and if the modality detection result of the feature qualification detection of at least one modality is a detection failure, a multi-modal detection result carrying the at least one modality whose feature qualification detection failed is output as the result of the identity feature detection of the feeder by the multi-classification model.
Optionally, the apparatus for detecting identity features of a feeder comprises:
a detection failure modality determination module, configured to determine, as a detection failure modality, at least one modality for which the identity feature qualification detection contained in the multi-modal detection result output by the multi-classification model failed;
a disqualified identity feature description determination module, configured to compare the feature value of the key feature of the detection failure modality under a feature dimension with the feature qualification threshold interval of that key feature under the feature dimension, and determine the disqualified identity feature description of the detection failure modality according to the comparison result;
a collection reminder determination module, configured to determine the collection reminder corresponding to the disqualified identity feature description according to a preset correspondence between disqualified identity feature descriptions of the detection failure modality and collection reminders;
and a collection reminder execution module, configured to execute the collection reminder according to the reminder mode corresponding to the detection failure modality.
Optionally, the key features contained in the identity feature information of each modality are detected based on a key feature detection model corresponding to that modality, and the detected key features of the corresponding modality are extracted based on that key feature detection model;
the identity image information of the visual modality is input into an image key feature detection model corresponding to the visual modality for image key feature detection and extraction, and the image key features extracted from the identity image information are output;
the identity sound information of the acoustic modality is input into a sound key feature detection model corresponding to the acoustic modality for sound key feature detection and extraction, and the sound key features extracted from the identity sound information are output;
and the identity body information of the body modality is input into a body key feature detection model corresponding to the body modality for body key feature detection and extraction, and the body key features extracted from the identity body information are output.
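The per-modality dispatch described above can be sketched as follows. The `KeyFeatureModel` class and the feature names in `MODELS` are stand-ins for the real detection models (which would run actual inference); the sketch only illustrates routing each modality's identity feature information to its own model.

```python
class KeyFeatureModel:
    """Stand-in for a per-modality key feature detection model."""
    def __init__(self, feature_names):
        self.feature_names = feature_names

    def detect(self, identity_info):
        # A real model would run inference on identity_info; this stub
        # just maps each expected key feature name to the raw input.
        return {name: identity_info for name in self.feature_names}

# One detection model per modality (feature names are assumptions).
MODELS = {
    "visual":   KeyFeatureModel(["head_points", "head_frame",
                                 "whole_body_frame", "owner_face_frame"]),
    "acoustic": KeyFeatureModel(["voiceprint"]),
    "body":     KeyFeatureModel(["walking_posture", "palm_print", "nose_print"]),
}

def detect_key_features(identity_info_by_modality):
    """Route each modality's identity information to its own model."""
    return {m: MODELS[m].detect(info)
            for m, info in identity_info_by_modality.items()}
```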
The present specification provides an embodiment of a computing device as follows:
FIG. 4 is a block diagram illustrating a configuration of a computing device 400 provided according to one embodiment of the present description. The components of the computing device 400 include, but are not limited to, a memory 410 and a processor 420. Processor 420 is coupled to memory 410 via bus 430 and database 450 is used to store data.
Computing device 400 also includes access device 440, access device 440 enabling computing device 400 to communicate via one or more networks 460. Examples of such networks include the Public Switched Telephone Network (PSTN), a Local Area Network (LAN), a Wide Area Network (WAN), a Personal Area Network (PAN), or a combination of communication networks such as the internet. The access device 440 may include one or more of any type of network interface (e.g., a Network Interface Card (NIC)) whether wired or wireless, such as an IEEE802.11 Wireless Local Area Network (WLAN) wireless interface, a worldwide interoperability for microwave access (Wi-MAX) interface, an ethernet interface, a Universal Serial Bus (USB) interface, a cellular network interface, a bluetooth interface, a Near Field Communication (NFC) interface, and so forth.
In one embodiment of the present description, the above-described components of computing device 400, as well as other components not shown in FIG. 4, may also be connected to each other, such as by a bus. It should be understood that the block diagram of the computing device architecture shown in FIG. 4 is for purposes of example only and is not limiting as to the scope of the present description. Those skilled in the art may add or replace other components as desired.
Computing device 400 may be any type of stationary or mobile computing device, including a mobile computer or mobile computing device (e.g., tablet, personal digital assistant, laptop, notebook, netbook, etc.), mobile phone (e.g., smartphone), wearable computing device (e.g., smartwatch, smartglasses, etc.), or other type of mobile device, or a stationary computing device such as a desktop computer or PC. Computing device 400 may also be a mobile or stationary server.
The present specification provides a computing device comprising a memory 410, a processor 420, and computer instructions stored on the memory and executable on the processor, the processor 420 being configured to execute the following computer-executable instructions:
detecting key features contained in identity feature information collected for the feeder in various modalities;
determining feature values of the key features under the feature dimensions of the corresponding modalities according to the identity feature information;
and inputting the multi-modal feature vector obtained by feature vector construction based on the feature values into a classification model for identity feature qualification detection, and outputting the result as the detection result of the identity feature detection of the feeder.
This specification provides one example of a computer-readable storage medium, comprising:
one embodiment of the present specification provides a computer-readable storage medium storing computer instructions that, when executed by a processor, implement the steps of the method for detecting identity features of a feeder.
The above is an illustrative scheme of the computer-readable storage medium of this embodiment. It should be noted that the technical solution of the storage medium and the technical solution of the method for detecting identity features of a feeder belong to the same concept; for details not described in detail in the technical solution of the storage medium, reference may be made to the description of the technical solution of the method for detecting identity features of a feeder.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The computer instructions comprise computer program code which may be in the form of source code, object code, an executable file or some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like. It should be noted that the computer readable medium may contain content that is subject to appropriate increase or decrease as required by legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer readable media does not include electrical carrier signals and telecommunications signals as is required by legislation and patent practice.
It should be noted that, for the sake of simplicity, the foregoing method embodiments are described as a series of acts, but those skilled in the art should understand that the present embodiment is not limited by the described acts, because some steps may be performed in other sequences or simultaneously according to the present embodiment. Further, those skilled in the art should also appreciate that the embodiments described in this specification are preferred embodiments and that acts and modules referred to are not necessarily required for an embodiment of the specification.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The preferred embodiments of the present specification disclosed above are intended only to aid in the description of the specification. Alternative embodiments are not exhaustive and do not limit the invention to the precise embodiments described. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the embodiments and the practical application, to thereby enable others skilled in the art to best understand and utilize the embodiments. The specification is limited only by the claims and their full scope and equivalents.

Claims (15)

1. A method for detecting identity features of a feeder, comprising:
detecting key features contained in identity feature information collected for the feeder in various modalities;
determining feature values of the key features under the feature dimensions of the corresponding modalities according to the identity feature information;
and inputting the multi-modal feature vector obtained by feature vector construction based on the feature values into a classification model for identity feature qualification detection, and outputting the result as the detection result of the identity feature detection of the feeder.
2. The feeder identity feature detection method of claim 1, wherein the determining a feature value of the key feature in a feature dimension of a corresponding modality according to the identity feature information comprises:
for the identity feature information of at least one modality, extracting the detected key features from the identity feature information of the modality;
and calculating the feature value of the key feature under the feature dimension of the modality according to the identity feature information of the modality, the extracted key features and the association relationships among the key features.
3. The method for detecting identity features of a feeder according to claim 2, wherein after the step of determining the feature value of the key feature under the feature dimension of the corresponding modality according to the identity feature information is executed, and before the step of inputting the multi-modal feature vector obtained by feature vector construction based on the feature values into the classification model for identity feature qualification detection and outputting the result as the detection result of the identity feature detection of the feeder is executed, the method further comprises:
constructing a feature vector by taking the feature values under the feature dimensions of each modality as vector elements, to obtain the multi-modal feature vector;
wherein the vector elements of the multi-modal feature vector correspond one to one to the feature values under the feature dimensions of each modality.
4. The method for detecting identity features of a feeder according to any one of claims 1 to 3, wherein the modality includes at least one of the following: a visual modality, an acoustic modality and a body modality;
wherein the identity feature information collected in the visual modality for the feeder comprises: identity image information collected for the feeder by using an image collection device;
the identity feature information collected in the acoustic modality for the feeder comprises: identity sound information collected for the feeder by using a sound collection device;
and the identity feature information collected in the body modality for the feeder comprises: identity body information collected for the feeder by using a biological feature collection device.
5. The method for detecting identity features of a feeder according to claim 4, wherein the key features contained in the identity image information collected for the feeder in the visual modality include at least one of the following:
a feeder head feature point, a feeder nose feature point, a feeder eye feature point, a feeder mouth feature point, a feeder head feature frame, a feeder nose feature frame, a feeder eye feature frame, a feeder mouth feature frame, a feeder whole-body feature frame, and a face feature frame of the owner of the feeder;
accordingly, the feature dimensions of the visual modality include at least one of the following:
the feature point number dimension to which the feeder head feature point, the feeder nose feature point, the feeder eye feature point and the feeder mouth feature point belong; the feature position dimension to which the feeder head feature point, the feeder nose feature point, the feeder eye feature point, the feeder mouth feature point, the feeder head feature frame, the feeder nose feature frame, the feeder eye feature frame, the feeder mouth feature frame, the feeder whole-body feature frame and the face feature frame of the owner of the feeder belong; the feature area dimension to which the feeder head feature frame, the feeder nose feature frame, the feeder eye feature frame, the feeder mouth feature frame, the feeder whole-body feature frame and the face feature frame of the owner of the feeder belong; and the feature overlapping dimension to which at least two of the feeder head feature frame, the feeder nose feature frame, the feeder eye feature frame, the feeder mouth feature frame, the feeder whole-body feature frame and the face feature frame of the owner of the feeder belong.
6. The method for detecting identity features of a feeder according to claim 5, wherein the feature value of the key feature under the feature dimension of the visual modality is determined as follows:
counting, in the feature point number dimension, the number of feeder head feature points, feeder nose feature points, feeder eye feature points and/or feeder mouth feature points contained in the identity image information;
calculating, in the feature position dimension, feature position information of the feeder head feature point, the feeder nose feature point, the feeder eye feature point, the feeder mouth feature point, the feeder head feature frame, the feeder nose feature frame, the feeder eye feature frame, the feeder mouth feature frame, the feeder whole-body feature frame and/or the face feature frame of the owner to whom the feeder belongs, which are contained in the identity image information;
calculating, in the feature area dimension, feature area information of the feeder head feature frame, the feeder nose feature frame, the feeder eye feature frame, the feeder mouth feature frame, the feeder whole-body feature frame and/or the face feature frame of the owner to whom the feeder belongs, which are contained in the identity image information;
and/or,
analyzing, in the feature overlap dimension, feature overlap relation information of at least two of the feeder head feature frame, the feeder nose feature frame, the feeder eye feature frame, the feeder mouth feature frame, the feeder whole-body feature frame and the face feature frame of the owner to whom the feeder belongs, which are contained in the identity image information.
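The visual feature-value computation enumerated above (point counting, position, area and overlap) can be sketched in code. The function and field names, and the use of intersection-over-union as the overlap measure, are illustrative assumptions rather than anything specified by the patent:

```python
# Hypothetical sketch of the claim-6 feature-value computation: given key-point
# and bounding-box detections from the identity image, derive values in the
# four visual feature dimensions (point count, position, area, overlap).

def box_area(box):
    """Area of a box given as (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return max(0.0, x2 - x1) * max(0.0, y2 - y1)

def box_iou(a, b):
    """Intersection-over-union, one possible feature-overlap measure."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = box_area(a) + box_area(b) - inter
    return inter / union if union > 0 else 0.0

def visual_feature_values(points, boxes):
    """points: {name: (x, y)}; boxes: {name: (x1, y1, x2, y2)}."""
    values = {
        "point_count": len(points),                           # number dimension
        "positions": dict(points),                            # position dimension
        "areas": {n: box_area(b) for n, b in boxes.items()},  # area dimension
    }
    names = sorted(boxes)
    values["overlaps"] = {                                    # overlap dimension
        (a, b): box_iou(boxes[a], boxes[b])
        for i, a in enumerate(names) for b in names[i + 1:]
    }
    return values
```

A detector for, say, a pet's head points and the owner's face frame would feed its outputs into `visual_feature_values`, and the resulting numbers would populate the corresponding slots of the multi-modal feature vector.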
7. The feeder identity feature detection method according to claim 4, wherein the key features contained in the identity sound information acquired for the feeder in the acoustic modality comprise: a feeder voiceprint feature;
accordingly, the feature dimensions of the acoustic modality comprise at least one of: a voiceprint volume dimension to which the feeder voiceprint feature belongs and a voiceprint timbre dimension to which the feeder voiceprint feature belongs.
8. The feeder identity feature detection method according to claim 4, wherein the key features contained in the identity body information acquired for the feeder in the body modality comprise at least one of: a walking posture feature, a feeder palm print feature and a feeder nose print feature;
correspondingly, the feature dimensions of the body modality comprise at least one of: a posture dimension to which the walking posture feature belongs, and a feature definition dimension to which the feeder palm print feature and the feeder nose print feature belong.
9. The feeder identity feature detection method according to claim 1, wherein the classification model comprises at least one of: a binary classification model and a multi-classification model.
10. The feeder identity feature detection method according to claim 9, wherein if the classification model is a multi-classification model, after the multi-modal feature vector is input into the multi-classification model, the multi-classification model performs identity feature qualification detection on the key features contained in the feature information of each modality based on the feature values under the feature dimensions of each modality contained in the multi-modal feature vector;
and if the modality detection result of the feature qualification detection of at least one modality is a detection failure, outputting a multi-modal detection result indicating that the feature qualification detection of the at least one modality is a detection failure as the result of identity feature detection of the feeder by the multi-classification model.
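The per-modality qualification behavior described in claim 10 can be illustrated with a toy stand-in for the multi-classification model. Here the learned classifier is replaced by simple threshold-interval checks per feature dimension, which is an assumption made purely for illustration:

```python
# Illustrative sketch only: a real multi-classification model would be learned,
# not a set of hand-set intervals. The output mirrors the claimed behavior of
# flagging which modalities failed qualification detection.

def multimodal_detect(feature_vector, thresholds):
    """feature_vector: {modality: {dimension: value}};
    thresholds: {modality: {dimension: (lo, hi)}} qualification intervals."""
    failed = [
        modality
        for modality, dims in feature_vector.items()
        if any(not (thresholds[modality][d][0] <= v <= thresholds[modality][d][1])
               for d, v in dims.items())
    ]
    return {"passed": not failed, "failed_modalities": failed}
```

A result with `passed == False` carries the list of detection-failure modalities, which is exactly the hook the follow-up steps of claim 11 consume.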
11. The feeder identity feature detection method according to claim 10, wherein after the step of inputting the multi-modal feature vector obtained by feature construction based on the feature values into the classification model for identity feature qualification detection and outputting the result as the detection result of identity feature detection of the feeder is executed, the method further comprises:
determining, as a detection-failure modality, at least one modality for which the identity feature qualification detection contained in the multi-modal detection result output by the multi-classification model is determined to be a detection failure;
comparing the feature value of the key feature of the detection-failure modality under its feature dimension with a feature qualification threshold interval under the feature dimension of the detection-failure modality, and determining an identity feature disqualification description of the detection-failure modality according to the comparison result;
determining an acquisition prompt corresponding to the identity feature disqualification description according to a preset correspondence between identity feature disqualification descriptions of the detection-failure modality and acquisition prompts;
and executing the acquisition prompt in an acquisition prompt manner corresponding to the detection-failure modality.
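The failure-handling chain of claim 11 (compare against a threshold interval, derive a disqualification description, look up an acquisition prompt) might be sketched as follows; the description strings and prompts are hypothetical placeholders, not wording from the patent:

```python
# Hypothetical preset correspondence between disqualification descriptions
# and acquisition prompts, as required by the claimed follow-up step.
PROMPTS = {
    "value below interval": "Please move the camera closer and re-capture.",
    "value above interval": "Please move the camera farther away and re-capture.",
}

def acquisition_prompt(value, interval):
    """Map a feature value and its qualification interval (lo, hi) to a
    re-acquisition prompt, or None when the value qualifies."""
    lo, hi = interval
    if value < lo:
        description = "value below interval"
    elif value > hi:
        description = "value above interval"
    else:
        return None  # within the qualification interval: no prompt needed
    return PROMPTS[description]
```

In a device implementing the claim, the returned prompt would then be delivered in the acquisition prompt manner tied to the failed modality, for example on-screen text for the visual modality or a spoken message for the acoustic modality.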
12. The feeder identity feature detection method according to claim 4, wherein the key features contained in the identity feature information of each modality are detected based on a key feature detection model corresponding to the respective modality, and the detected key features of the respective modality are extracted based on the key feature detection model;
inputting the identity image information of the visual modality into an image key feature detection model corresponding to the visual modality for image key feature detection and extraction, and outputting the image key features extracted from the identity image information;
inputting the identity sound information of the acoustic modality into a sound key feature detection model corresponding to the acoustic modality for sound key feature detection and extraction, and outputting the sound key features extracted from the identity sound information;
and inputting the identity body information of the body modality into a body key feature detection model corresponding to the body modality for body key feature detection and extraction, and outputting the body key features extracted from the identity body information.
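The routing in claim 12, where each modality's identity information goes to its own key feature detection model, reduces to a dispatch table. In this sketch the models are stubbed as callables purely for illustration; real ones would be, for instance, a point/box detector for images, a voiceprint extractor for sound, and a gait model for body information:

```python
# Sketch of per-modality dispatch: route each piece of identity information to
# the key feature detection model registered for that modality, and collect
# the extracted key features. Modalities without a registered model are skipped.

def extract_key_features(identity_info, models):
    """identity_info: {modality: raw data}; models: {modality: callable}."""
    return {m: models[m](data) for m, data in identity_info.items() if m in models}
```

The extracted per-modality key features would then feed the feature-value determination and feature-construction steps of the earlier claims.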
13. A feeder identity feature detection apparatus, comprising:
a key feature detection module configured to detect key features contained in identity feature information acquired for the feeder by various modalities;
a feature value determination module configured to determine, according to the identity feature information, a feature value of the key feature in a feature dimension of a corresponding modality;
and an identity feature detection module configured to input the multi-modal feature vector obtained by feature construction based on the feature values into a classification model for identity feature qualification detection, and output the result as the detection result of identity feature detection of the feeder.
14. A computing device, comprising:
a memory and a processor;
the memory is configured to store computer-executable instructions, and the processor is configured to execute the computer-executable instructions to:
detect key features contained in identity feature information acquired for the feeder by various modalities;
determining a characteristic numerical value of the key characteristic under the characteristic dimension of the corresponding mode according to the identity characteristic information;
and inputting the multi-modal feature vector obtained by feature construction based on the feature numerical value into a classification model for identity feature qualification detection, and outputting the result as a detection result for identity feature detection of the feeder.
15. A computer-readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the feeder identity feature detection method according to any one of claims 1 to 12.
CN201910985343.0A 2019-10-16 2019-10-16 Method and device for detecting identity characteristics of stored materials Pending CN110705512A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910985343.0A CN110705512A (en) 2019-10-16 2019-10-16 Method and device for detecting identity characteristics of stored materials


Publications (1)

Publication Number Publication Date
CN110705512A true CN110705512A (en) 2020-01-17

Family

ID=69201294

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910985343.0A Pending CN110705512A (en) 2019-10-16 2019-10-16 Method and device for detecting identity characteristics of stored materials

Country Status (1)

Country Link
CN (1) CN110705512A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107766807A (en) * 2017-09-30 2018-03-06 平安科技(深圳)有限公司 Electronic installation, insure livestock recognition methods and computer-readable recording medium
CN108681611A (en) * 2018-06-04 2018-10-19 北京竞时互动科技有限公司 Pet management method and system
CN108734114A (en) * 2018-05-02 2018-11-02 浙江工业大学 A kind of pet recognition methods of combination face harmony line
CN109493873A (en) * 2018-11-13 2019-03-19 平安科技(深圳)有限公司 Livestock method for recognizing sound-groove, device, terminal device and computer storage medium
CN109886145A (en) * 2019-01-29 2019-06-14 浙江泽曦科技有限公司 Pet recognition algorithms and system
CN110083723A (en) * 2019-04-24 2019-08-02 成都大熊猫繁育研究基地 A kind of lesser panda individual discrimination method, equipment and computer readable storage medium


Similar Documents

Publication Publication Date Title
CN110705528B (en) Identity coding method and device and feeding material identity coding method and device
CN110929650B (en) Method and device for identifying livestock and feed identity, computing equipment and readable storage medium
US10319130B2 (en) Anonymization of facial images
US20190228211A1 (en) Au feature recognition method and device, and storage medium
KR102185469B1 (en) Companion Animal Emotion Bots Device using Artificial Intelligence and Communion Method
CN110276067B (en) Text intention determining method and device
CN109784199B (en) Peer-to-peer analysis method and related product
CN110298245B (en) Interest collection method, interest collection device, computer equipment and storage medium
CN107368567B (en) Animal language identification method and user terminal
Hantke et al. What is my dog trying to tell me? The automatic recognition of the context and perceived emotion of dog barks
CN108256500A (en) Recommendation method, apparatus, terminal and the storage medium of information
CN108153169A (en) Guide to visitors mode switching method, system and guide to visitors robot
CN110728244B (en) Method and device for guiding acquisition of stocking material identity information
CN110480656B (en) Accompanying robot, accompanying robot control method and accompanying robot control device
US11127181B2 (en) Avatar facial expression generating system and method of avatar facial expression generation
CN112214748A (en) Identity recognition system, method and device
CN110909683B (en) Guarantee verification method and device based on guarantee project
CN108399375B (en) Identity recognition method based on associative memory
CN107317974A (en) A kind of makeups photographic method and device
WO2015131571A1 (en) Method and terminal for implementing image sequencing
CN110737885A (en) Method and device for authenticating identity of livestock
CN115862120A (en) Separable variation self-encoder decoupled face action unit identification method and equipment
US20190193261A1 (en) Information processing device, information processing method, and non-transitory computer-readable recording medium for acquiring information of target
CN110704646A (en) Method and device for establishing stored material file
KR20170036927A (en) System for building social emotion network and method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination