CN114841955A - Biological species identification method, device, equipment and storage medium - Google Patents
- Publication number
- CN114841955A (publication number); CN202210461485.9A (application number)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
Abstract
The invention discloses a biological species identification method, device, equipment, and storage medium. The method comprises: inputting an image to be recognized, provided by a user, into a trained species recognition model, and determining the target species to which the biological object contained in the image belongs based on the model's output result; when the target species cannot be determined from that output, inputting the image into a trained cause analysis model and determining, from its output result, the type of cause for which the target species cannot be determined; and responding to the user with a guidance operation corresponding to that cause type. The method and device can improve both the accuracy of species identification and the user experience.
Description
Technical Field
One or more embodiments of the present invention relate to the field of artificial intelligence technology, and in particular, to a method, an apparatus, a device, and a storage medium for identifying a biological species.
Background
In the related art, artificial intelligence technology can identify which species a biological object contained in a user-uploaded image belongs to, meeting users' practical needs in work and study or simply serving as a source of everyday interest.
However, because the species features reflected in a user-uploaded image are limited, and some species closely resemble one another, the species finally identified by the algorithm may be inaccurate and mislead the user.
Disclosure of Invention
In view of the above, one or more embodiments of the present invention provide a method, an apparatus, a device and a storage medium for identifying a biological species.
In order to achieve the above object, one or more embodiments of the present invention provide the following technical solutions:
according to a first aspect of one or more embodiments of the present invention, there is provided a method of identifying a biological species, the method comprising:
inputting an image to be recognized provided by a user as an input parameter into a trained species recognition model, and determining a target species to which a biological object contained in the image to be recognized belongs based on an output result of the species recognition model;
when the target species cannot be determined based on the output result of the species recognition model, inputting the image to be recognized into a trained cause analysis model, and determining, based on the output result of the cause analysis model, the type of cause for which the target species cannot be determined;
and responding to the user with a guidance operation corresponding to the cause type.
In one implementation, the method further comprises:
when the target species is successfully determined based on the output result of the species recognition model, responding to the user with the target species to which the biological object belongs.
In one implementation, the inability to determine the target species based on the output of the species recognition model includes:
if none of the confidences output by the species identification model for the biological object belonging to each species meets the preset confidence requirement, determining that the target species cannot be determined based on the output result of the species identification model.
In one implementation, when the biological object is a plant, the type of cause for which the target species cannot be determined includes one or more of:
the biological subject is in a young seedling state;
the biological subject is in a withered yellow state;
the biological subject lacks strong identifying features;
the biological subject lacks a complete identifying feature;
the biological object is not at a proper shooting distance;
the image quality of the image to be recognized is too low;
the background of the image to be recognized is disordered;
other causes for which the target species cannot be determined.
In one implementation, responding to the user with a guidance operation corresponding to the cause type includes:
if the cause type is that the biological object is in a young seedling state, responding to the user that the biological object is in a young seedling state and suggesting guidance information recommending identification after the biological object has grown.
In one implementation, the method further comprises:
responding to the user, at a target time determined by a preset waiting duration, with guidance information for re-identifying the target species to which the biological object belongs.
In one implementation, responding to the user with a guidance operation corresponding to the cause type includes:
if the cause type is that the biological object is in a withered or yellowed state, responding to the user that the biological object is withered and suggesting guidance information recommending re-identification after the biological object recovers, or providing a new image to be identified that contains a healthy biological object.
In one implementation, responding to the user with a guidance operation corresponding to the cause type includes:
if the cause type is that the biological object lacks strong identifying features, determining the superior species (a higher taxonomic rank) to which the biological object belongs, and responding to the user with that superior species.
In one implementation, the method further comprises:
if the cause type is that the biological object lacks strong identifying features, determining the species range to which the biological object belongs and responding to the user with that species range; the species range includes a plurality of species.
In one implementation, the method further comprises:
when responding to the user with the species range, also responding with the strong identifying features of each species in the range and/or the distinguishing features among those species, so that the user can distinguish the target species to which the biological object belongs.
In one implementation, the method further comprises:
guiding the user to photograph a target part of the biological object, and re-determining the target species to which the biological object belongs in combination with the newly shot image.
In one implementation, responding to the user with a guidance operation corresponding to the cause type includes:
if the cause type is that the biological object lacks complete identifying features, guiding the user to photograph a target part of the biological object, and re-determining the target species to which the biological object belongs in combination with the newly shot image.
In one implementation, responding to the user with a guidance operation corresponding to the cause type includes:
if the cause type is that the biological object is not at a proper shooting distance, responding to the user that the biological object is not at a proper shooting distance and suggesting guidance information recommending identification after the focal length is adjusted.
In one implementation, the method further comprises:
after the focal length is automatically adjusted, guiding the user to shoot the biological object again, and re-determining the target species to which the biological object in the newly shot image belongs.
In one implementation, responding to the user with a guidance operation corresponding to the cause type includes:
if the cause type is that the image quality of the image to be recognized is too low, optimizing the image quality of the image, and re-determining the target species to which the biological object in the optimized image belongs.
In one implementation, the method further comprises:
when the target species to which the biological object in the optimized image belongs still cannot be determined, guiding the user, based on the image quality parameters of the pre-optimization image, to adjust shooting parameters and shoot again, and re-determining the target species to which the biological object in the newly shot image belongs.
In one implementation, responding to the user with a guidance operation corresponding to the cause type includes:
if the cause type is that the background of the image to be recognized is cluttered, guiding the user to select, from the plurality of biological objects contained in the image, the target biological object for species recognition, and determining the target species to which that target biological object belongs.
In one implementation, responding to the user with a guidance operation corresponding to the cause type includes:
if the cause type is that the background of the image to be recognized is cluttered, respectively determining the species to which each biological object contained in the image belongs, and responding to the user with those species so that the user can judge the target species to which the intended target biological object belongs.
In one implementation, the cause analysis model is a classification model implemented based on a convolutional neural network or a residual network (e.g., ResNet).
According to a second aspect of one or more embodiments of the present invention, there is provided an identification apparatus of a biological species, the apparatus including an identification unit, an analysis unit, and a guide unit; wherein:
the identification unit is used for inputting an image to be identified, provided by a user, into a trained species identification model, and determining, based on the model's output result, the target species to which the biological object contained in the image belongs;
the analysis unit is used for, when the target species cannot be determined based on the output result of the species recognition model, inputting the image into a trained cause analysis model and determining, based on its output result, the type of cause for which the target species cannot be determined;
and the guiding unit is used for responding to the user with a guidance operation corresponding to the cause type.
According to a third aspect of one or more embodiments of the present invention, there is provided an electronic device, including:
a processor, and a memory for storing processor-executable instructions;
wherein the processor implements the steps of the method of the first aspect by executing the executable instructions.
According to a fourth aspect of one or more embodiments of the present invention, a computer-readable storage medium is proposed, on which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of the method of the first aspect as described above.
As can be seen from the above, when the species recognition model cannot determine the target species to which the biological object in the image belongs, the trained cause analysis model determines the type of cause for that failure, and a guidance operation corresponding to the cause type is then returned to the user.
This scheme avoids feeding back inaccurate identification results to the user. Instead, it responds with guidance targeted at the specific reason the species could not be identified from the image, so that the user can subsequently obtain a more accurate identification result by following the guidance. Both the accuracy of species identification and the user experience are thereby improved.
Drawings
Fig. 1 is a flowchart of a method for identifying a biological species according to an exemplary embodiment.
FIG. 2 is a flowchart illustrating a method of responding to a user with a guidance operation, according to an exemplary embodiment.
FIG. 3 is a flowchart illustrating a method of responding to a user with a guidance operation, according to another exemplary embodiment.
FIG. 4 is a flowchart illustrating a method of responding to a user with a guidance operation, according to yet another exemplary embodiment.
FIG. 5 is a flowchart illustrating a method of responding to a user with a guidance operation, according to yet another exemplary embodiment.
FIG. 6 is a flowchart illustrating a method of responding to a user with a guidance operation, according to yet another exemplary embodiment.
FIG. 7 is a flowchart illustrating a method of responding to a user with a guidance operation, according to yet another exemplary embodiment.
FIG. 8 is a flowchart illustrating a method of responding to a user with a guidance operation, according to yet another exemplary embodiment.
FIG. 9 is a flowchart illustrating a method of responding to a user with a guidance operation, according to yet another exemplary embodiment.
FIG. 10 is a flowchart illustrating a method of responding to a user with a guidance operation, according to yet another exemplary embodiment.
FIG. 11 is a flowchart illustrating a method of responding to a user with a guidance operation, according to yet another exemplary embodiment.
FIG. 12 is a flow chart illustrating training of a cause analysis model in an exemplary embodiment.
FIG. 13 is a flow chart illustrating the training of a cause analysis model in accordance with another exemplary embodiment.
FIG. 14 is a flow chart illustrating the training of a cause analysis model in accordance with yet another exemplary embodiment.
Fig. 15 is a schematic structural diagram of an electronic device in which an apparatus for identifying a biological species is provided according to an exemplary embodiment.
Fig. 16 is a block diagram of an apparatus for identifying a biological species according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with one or more embodiments of the invention. Rather, they are merely examples of apparatus and methods consistent with certain aspects of one or more embodiments of the invention, as detailed in the claims which follow.
It should be noted that: in other embodiments, the steps of the respective methods are not necessarily performed in the order shown and described. In some other embodiments, the method may include more or fewer steps than those described herein. Moreover, a single step described in the present disclosure may be divided into multiple steps for description in other embodiments; multiple steps described in the present invention may be combined into a single step in other embodiments.
In the related art, a trained model can identify the species to which a biological object in an image belongs. After supervised training of a classification model on image samples annotated with species labels, the image to be recognized is fed into the trained model, which outputs the confidence that the biological object belongs to each candidate species; the species with the highest confidence is then fed back as the target species.
However, different species within the same family or genus often share similar features and are difficult to distinguish, and user-uploaded images sometimes lack the necessary features, so the species identified by the related art is not always accurate. How to avoid feeding back wrong conclusions and ensure that users reliably obtain accurate identification results is therefore an urgent technical problem.
In view of the above, the present invention provides a biological species identification method that can run as an APP, an applet, or a web page on various electronic devices such as a smartphone, a tablet, or a personal computer.
For example, when the identification method runs as an APP, applet, or web page, the electronic device executing it may be the smartphone, tablet, or personal computer itself, or a server that exchanges data with such a device; this is not limited here.
Referring to fig. 1, fig. 1 is a flowchart illustrating a method for identifying a biological species according to an exemplary embodiment of the invention.
The biological species identification method may comprise the following steps:
Step 102: inputting an image to be recognized, provided by the user, into a trained species recognition model, and determining, based on the model's output result, the target species to which the biological object contained in the image belongs.
In this embodiment, the user first provides an image to be identified, either by shooting it in real time or by uploading it from an album. Ideally, the image contains only one biological object, and the target species to which that object belongs can be determined by the trained species recognition model.
After the image is acquired, it is input into the trained species recognition model, which outputs a result for the species to which the biological object belongs. As noted above, when the species recognition model is a classification model, the output is a confidence for each candidate species, and the species with the highest confidence can be determined as the target species.
This embodiment does not limit the algorithm used for the species recognition model. It should be understood, however, that models based on different algorithms produce output in different forms, the way the target species is derived from the output differs accordingly, and confidence need not be the criterion. The training process likewise depends on the chosen algorithm and follows the related art, so it is not repeated here.
In an alternative implementation, if the target species is successfully determined based on the output of the species recognition model, the target species to which the biological object belongs may be returned to the user.
For example, among the confidences output by the species recognition model for the biological object belonging to each species, if the maximum confidence exceeds a preset confidence threshold, the identification can be deemed successful, and the species with the maximum confidence is returned to the user as the target species.
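The success/failure decision described here amounts to a thresholded argmax over the model's per-species confidences. A minimal sketch follows; the function name, example species, and the 0.3 default are assumptions for illustration, not values taken from the patent:

```python
def identify_species(confidences, threshold=0.3):
    """Return the target species if the highest confidence meets the
    threshold, otherwise None (identification failed).

    confidences: dict mapping species name -> confidence in [0, 1].
    The threshold default of 0.3 is an illustrative placeholder.
    """
    if not confidences:
        return None
    # Pick the species with the maximum confidence (argmax).
    species, score = max(confidences.items(), key=lambda kv: kv[1])
    return species if score >= threshold else None

# Example: the top confidence clears the threshold, so the species
# is returned to the user; otherwise the cause-analysis step runs.
result = identify_species({"Piper nigrum": 0.82, "Piper aduncum": 0.11})
```

When `identify_species` returns `None`, the flow proceeds to step 104 (cause analysis) instead of forcing out an unreliable result.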
Step 104: when the target species cannot be determined based on the output result of the species recognition model, inputting the image to be recognized into a trained cause analysis model, and determining, based on its output result, the type of cause for which the target species cannot be determined.
In this embodiment, if the target species cannot be reliably determined from the output of the species recognition model, no identification result is forced out as in the related art. Instead, the image to be recognized is input into a trained cause analysis model, which analyzes why the species recognition model failed.
In an alternative implementation, the inability to determine the target species based on the output of the species recognition model includes:
if none of the confidences output by the species identification model for the biological object belonging to each species meets the preset confidence requirement, determining that the target species cannot be determined based on the output result of the species identification model.
As noted above, when the species identification model is a classification model, its output is the confidence that the biological object in the image belongs to each species; if every confidence is below the preset confidence threshold, the identification is deemed to have failed and the target species cannot be determined.
For example, if every confidence output by the species recognition model is below the preset threshold of 0.3, it can be determined that the target species cannot be determined based on the output result, and the subsequent steps are performed.
The cause analysis model may be a supervised classification model: after training on image samples annotated with cause-type labels, it outputs a confidence for each cause type that could explain the identification failure, and the cause type with the highest confidence is taken as the type of cause for which the target species cannot be determined.
Similar to the species identification model above, cause analysis models implemented with different algorithms produce output in different forms, the way the cause type is derived from the output differs accordingly, and confidence need not be the criterion.
When the biological object in the image to be recognized is a plant, the type of cause for which the target species cannot be determined may be any of the following:
the biological subject is in a young seedling state;
the biological subject is in a withered yellow state;
the biological subject lacks strong identifying features;
the biological subject lacks a complete identifying feature;
the biological object is not at a proper shooting distance;
the image quality of the image to be recognized is too low;
the background of the image to be recognized is disordered;
other causes for which the target species cannot be determined.
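The cause-type list above can be represented as a simple enumeration. The enum and member names below are illustrative placeholders, not identifiers from the patent:

```python
from enum import Enum, auto

class CauseType(Enum):
    """Cause types for a failed plant-species identification,
    mirroring the list above (names are illustrative only)."""
    YOUNG_SEEDLING = auto()       # biological object is a young seedling
    WITHERED_YELLOW = auto()      # leaves withered or yellowed
    NO_STRONG_FEATURE = auto()    # lacks strong identifying features
    INCOMPLETE_FEATURE = auto()   # lacks complete identifying features
    BAD_DISTANCE = auto()         # not at a proper shooting distance
    LOW_IMAGE_QUALITY = auto()    # image quality too low
    CLUTTERED_BACKGROUND = auto() # background of the image is cluttered
    OTHER = auto()                # catch-all for uncategorized causes
```

A cause analysis model implemented as a classifier would then output one label per image drawn from this set.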
It is understood that other cause types not shown may also exist, and a technician may add new cause-type labels according to the actual scenario.
For example, during training and application of the cause analysis model, a technician may review cases assigned to the catch-all cause type, count how often each underlying cause occurs, and promote any cause whose count exceeds a preset threshold to a new standalone cause type.
Specifically, suppose causes R1, R2, and R3 of identification failure all fall under the catch-all cause type. During application, the images the cause analysis model assigns to that type can be collected, and the number of failures attributable to R1, R2, and R3 counted. If R1's count exceeds a preset count threshold, or its share exceeds a preset ratio threshold, R1 can be promoted to a new standalone cause type, and a guidance operation corresponding to it can subsequently be configured.
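The counting-and-promotion logic described above can be sketched with a frequency counter. The threshold values and the cause identifiers R1–R3 are illustrative; the patent does not specify concrete numbers:

```python
from collections import Counter

def find_new_cause_types(other_cause_log, count_threshold=100,
                         ratio_threshold=0.5):
    """Scan causes logged under the catch-all 'other' type and return
    those frequent enough to be promoted to standalone cause types.

    other_cause_log: list of cause identifiers (e.g. "R1", "R2", "R3")
    recorded for images the cause-analysis model labelled as 'other'.
    Both thresholds are illustrative placeholders.
    """
    counts = Counter(other_cause_log)
    total = len(other_cause_log)
    promoted = []
    for cause, n in counts.items():
        # Promote when either the absolute count or the share of all
        # 'other' failures exceeds its threshold.
        if n > count_threshold or (total and n / total > ratio_threshold):
            promoted.append(cause)
    return promoted
```

A promoted cause would then receive its own label in the training data and its own guidance operation.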
When the biological object is an animal, the cause types can be inferred by analogy, for example the biological object being in a cub state; details are not repeated here.
Step 106: responding to the user with a guidance operation corresponding to the cause type.
In this embodiment, after the cause analysis model determines the type of cause for which the target species could not be identified, a guidance operation corresponding to that cause type is returned to the user, for example displaying guidance information on a visual interface or invoking the camera to capture a new image. The guidance shows the user how to obtain a more accurate species identification result subsequently.
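Mapping cause types to guidance operations can be sketched as a simple dispatch table. All keys and message strings here are hypothetical examples, not text from the patent:

```python
# Illustrative cause-type -> guidance-message table.
GUIDANCE = {
    "young_seedling": "This plant looks like a young seedling; "
                      "try identification again after it has grown.",
    "withered_yellow": "The plant appears withered or yellowed; retry "
                       "once it recovers, or photograph a healthy one.",
    "bad_distance": "Adjust the shooting distance or focal length "
                    "and take the photo again.",
    "low_image_quality": "The image quality is too low; the image will "
                         "be enhanced before retrying.",
}

def respond_with_guidance(cause_type):
    """Return the guidance message for a cause type, with a generic
    fallback for unrecognized types (messages are illustrative)."""
    return GUIDANCE.get(
        cause_type,
        "Identification failed; please try another photo.")
```

In a real deployment, each entry might instead trigger a UI action (show a prompt, open the camera, schedule a reminder) rather than return a string.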
To make the present invention more clearly understood by those skilled in the art, specific implementations of step 106 are further described below for each of the foregoing cause types.
(1) The reason type is that the biological object is in a seedling state:
If it is determined in step 104 that the cause type for which the target species cannot be identified is that the biological object in the image to be identified is in a seedling state, i.e., the biological object is too young for its species to be identified, it is necessary to wait for the biological object to grow before it can be identified.
Referring to fig. 2, fig. 2 is a flow chart illustrating a method for responding to a user with a guiding operation, according to an exemplary embodiment. In an alternative implementation, in step 106, responding to the user with a guiding operation corresponding to the cause type may include:
Step 106A1: in response to the biological object being in a seedling state, suggest to the user guidance information recommending identification after the biological object has grown.
Specifically, relevant graphic and text information may be displayed on a visualization interface of an APP, applet, or web page for guidance.
Further, step 106 may further include:
Step 106A2: based on a preset waiting duration, respond to the user at the target time with guidance information for re-identifying the target species to which the biological object belongs.
Specifically, after the preset duration has elapsed, the user may be guided, by means such as pop-up pushes or alarm-clock reminders, to re-identify the target species to which the seedling belongs.
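The reminder scheduling can be sketched as a simple time calculation; the function name and the 30-day default waiting period are illustrative assumptions.

```python
from datetime import datetime, timedelta

def reminder_time(identified_at: datetime, wait_days: int = 30) -> datetime:
    """Return the target time at which to prompt the user to re-identify
    a seedling, given a preset waiting period (default is illustrative)."""
    return identified_at + timedelta(days=wait_days)
```

At the returned target time, the pop-up push or alarm-clock reminder described above would be triggered.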
(2) The reason type is that the biological object is in a withered yellow state:
If it is determined in step 104 that the cause type for which the target species cannot be identified is that the biological object in the image to be identified is in a withered yellow state, that is, the leaves of the biological object are withered and yellow, for example due to dormancy, wilting, or pest damage, making the species difficult to identify, then the biological object needs to be identified after it has recovered to health, or a healthy biological object of the same species needs to be found for identification.
Referring to fig. 3, fig. 3 is a flow chart illustrating a method for responding to a user with a guiding operation, according to another exemplary embodiment. In an alternative implementation, in step 106, responding to the user with a guiding operation corresponding to the cause type may include:
Step 106B1: in response to the biological object being in a withered yellow state, suggest to the user guidance information recommending identification after the biological object has recovered to health, or re-provision of an image to be identified containing a healthy biological object.
Specifically, relevant graphic and text information may likewise be displayed on a visualization interface of an APP, applet, or web page for guidance.
(3) The cause type is the lack of strong recognition features of the biological object:
If it is determined in step 104 that the cause type for which the target species cannot be identified is that the biological object in the image to be identified lacks strong identification features, that is, effective features that can narrow the result down to a lower-level species; strong identification features are those that clearly distinguish the biological object from other species.
Referring to fig. 4, fig. 4 is a flow chart illustrating a method for responding to a user with a guiding operation, according to yet another exemplary embodiment. In an alternative implementation, in step 106, responding to the user with a guiding operation corresponding to the cause type may include:
step 106C1, determining the superordinate species to which the biological object belongs, and responding to the user with the superordinate species to which the biological object belongs.
Specifically, when the species to which the biological object belongs cannot be determined, a superordinate taxon such as the family or genus to which it belongs may be determined; or, when the subspecies or variety to which it belongs cannot be determined, a superordinate taxon such as the genus or species may be determined. Relevant graphic and text information may then be displayed through a visualization interface of an APP, applet, or web page for guidance.
Referring to fig. 5, fig. 5 is a flow chart illustrating a method for responding to a user with a guiding operation, according to yet another exemplary embodiment. In another alternative implementation, in step 106, responding to the user with a guiding operation corresponding to the cause type may include:
step 106C2, determining a species range to which the biological object belongs, and responding to the user the species range to which the biological object belongs, wherein the species range comprises a plurality of species.
Specifically, when the target species to which the biological object belongs cannot be determined, several species to which it may belong may be combined into a species range and responded to the user. For example, the several species with the highest confidence levels in the output of the species identification model may be displayed as text information through a visualization interface of an APP, applet, or web page.
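The selection of a species range from the model's confidence scores can be sketched as a top-k fallback; the function name, confidence threshold, and value of k are illustrative assumptions.

```python
def species_range(scores: dict, confidence_threshold: float = 0.8, k: int = 3):
    """Return either a single confident species or a candidate species range.

    scores maps species name -> model confidence. If the best score meets
    the preset confidence requirement, it is returned alone; otherwise the
    top-k species form the species range responded to the user.
    """
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    if ranked and ranked[0][1] >= confidence_threshold:
        return [ranked[0][0]]                     # confident single result
    return [name for name, _ in ranked[:k]]       # candidate species range
```

The returned list would then be rendered as text information on the visualization interface, as described above.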
Further, step 106 may further include:
and 106C3, when the species range is responded to the user, responding the strong identification characteristics of each species in the species range and/or the distinguishing characteristics among the species in the species range to the user so as to enable the user to distinguish the target species to which the biological object belongs.
Specifically, when the species range has been determined and responded to the user, pre-stored pictures of the strong identification features of each species in the species range may be displayed through a visualization interface of an APP, applet, or web page, or the distinguishing features among the species in the range may be displayed as graphic and text information, so that the user can distinguish the target species to which the biological object belongs on their own, guided by this information.
For example, assuming the species range includes several pepper varieties such as pod pepper, magic pepper, lantern pepper, sweet pepper, and two-thorn strip, pictures of the fruit, leaves, and stems of each variety may be displayed to the user through a visualization interface of an APP, applet, or web page, so that the user can identify the target species by comparing the pictures.
Referring to fig. 6, fig. 6 is a flow chart illustrating a method for responding to a user with a guiding operation, according to yet another exemplary embodiment. In yet another alternative implementation, in step 106, responding to the user with a guiding operation corresponding to the cause type may include:
and step 106C4, guiding the user to shoot the target part of the biological object, and determining the target species to which the biological object belongs again by combining the shot image to be identified.
Specifically, when the biological object lacks strong identification features in the image to be identified, the camera may be invoked and the user guided to photograph and upload images of target parts such as the fruit, leaves, stems, or trunk, so as to acquire more strong identification features of the biological object; the target species can then be re-confirmed in combination with the captured images. The target parts may be a plurality of preset parts, or, after the superordinate taxon and/or species range has been determined, a plurality of parts corresponding to that taxon or range.
In addition, steps 106C1 to 106C4 may be combined in various ways, which this embodiment does not limit. For example, steps 106C1 and 106C2 may both be performed, responding to the user with both the superordinate taxon and the species range, or with only one of them. Step 106C4 may be executed alone, or after step 106C1 or 106C2; that is, once it is determined that the biological object lacks strong identification features, the user may be guided directly to photograph a specific part for identification, or the superordinate taxon and/or species range may first be provided for the user to distinguish, with photographing guidance offered only if the user cannot do so. It is understood that any combination of steps 106C1 through 106C4 that can be conceived by those skilled in the art falls within the scope of the present invention.
(4) The cause type is the lack of complete recognition features of the biological object:
If it is determined in step 104 that the cause type for which the target species cannot be identified is that the biological object in the image to be identified lacks complete identification features, that is, the image does not present the complete appearance of the biological object. This is common when the biological object is a tree: the image to be identified often contains only the trunk, without showing the leaves or the whole panorama of the tree.
Referring to fig. 7, fig. 7 is a flow chart illustrating a method for responding to a user with a guiding operation, according to yet another exemplary embodiment. In an alternative implementation, in step 106, responding to the user with a guiding operation corresponding to the cause type may include:
step 106D1, guiding the user to shoot the target part of the biological object, and determining the target species to which the biological object belongs again by combining the shot image to be identified.
Specifically, the camera may be invoked to guide the user to photograph and upload images of target parts such as the trunk, leaves, and whole panorama, so that the target species can be re-confirmed in combination with the captured images. The target parts of the biological object may be a plurality of preset parts, or a plurality of parts corresponding to the specific plant type after the biological object is determined to be a tree, shrub, or herbaceous plant.
For example, when it is determined that the biological object in the image to be identified lacks complete identification features and is a tree, the user may be guided to jump to a multi-shot mode for tree identification and to photograph and upload separate images of the trunk, the leaves, and the whole panorama; the target species to which the biological object belongs is then confirmed in combination with the three captured images.
(5) The reason type is that the biological object is not at a proper shooting distance:
If it is determined in step 104 that the cause type for which the target species cannot be identified is that the biological object is not at a proper shooting distance, that is, the biological object in the image to be identified is too close to or too far from the lens, it cannot occupy a suitable area ratio in the image and is therefore difficult to identify.
Referring to fig. 8, fig. 8 is a flow chart illustrating a method for responding to a user with a guiding operation, according to yet another exemplary embodiment. In an alternative implementation, in step 106, responding to the user with a guiding operation corresponding to the cause type may include:
and step 106E1, if it is determined that the type of the reason is that the biological object is not in the proper shooting distance, suggesting guidance information for performing identification after adjusting the focal length in response to the fact that the biological object is not in the proper shooting distance to the user.
Specifically, relevant graphic and text information may be displayed on a visualization interface of an APP, applet, or web page for guidance.
Further, step 106 may further include:
and step 106E2, after the focal length is automatically adjusted, guiding the user to shoot the biological object again, and determining the target species to which the biological object belongs in the shot image to be identified again.
Specifically, the area ratio of the biological object in the image to be identified may be determined, and automatic focusing may then be performed based on that area ratio, with a preset ideal ratio as the target. After automatic focusing is completed, the user is guided to photograph the biological object again, and the target species to which the biological object in the newly captured image belongs is re-confirmed.
For example, if the area ratio of the biological object in the image before focusing is too small, that is, the biological object is farther from the lens than the proper shooting distance, the focal length of the camera can be automatically increased with the goal of raising the area ratio of the biological object in the image to the preset ideal ratio; after automatic focusing is completed, the image is re-captured and species identification is performed again.
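The relationship between area ratio and the required magnification change can be sketched as follows. Since the area an object occupies in the frame scales with the square of linear magnification, the zoom factor needed to reach the ideal area ratio is the square root of the ratio of ideal to current area ratio; the function name and default ideal ratio are illustrative assumptions.

```python
import math

def zoom_factor(current_ratio: float, ideal_ratio: float = 0.5) -> float:
    """Return the linear magnification change needed so that the biological
    object's area ratio in the frame approaches the preset ideal ratio.

    A result > 1 means zoom in (object too small / too far);
    a result < 1 means zoom out (object too large / too close).
    """
    return math.sqrt(ideal_ratio / current_ratio)
```

After applying the computed zoom and re-capturing the image, species identification would be attempted again as the step above describes.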
(6) The reason type is that the image quality of the image to be identified is too low:
If it is determined in step 104 that the cause type for which the target species cannot be identified is that the image quality of the image to be identified is too low, that is, the brightness, contrast, definition, and the like of the image cannot meet the requirements of species identification, the image quality needs to be improved before identification.
Referring to fig. 9, fig. 9 is a flow chart illustrating a method for responding to a user with a guiding operation, according to yet another exemplary embodiment. In an alternative implementation, in step 106, responding to the user with a guiding operation corresponding to the cause type may include:
step 106F1, the image quality of the image to be recognized is optimized, and the target species to which the biological object belongs in the optimized image to be recognized is determined again.
Specifically, it may be determined whether image quality parameters of the image to be recognized, such as brightness, contrast, and sharpness, meet preset image quality requirements, and the image quality parameters in the image to be recognized, which do not meet the image quality requirements, are optimized, and then the target species to which the biological object belongs in the optimized image to be recognized is determined.
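The image quality check can be sketched on grayscale pixel values, taking brightness as the mean and contrast as the standard deviation; the function name and thresholds are illustrative assumptions, and a definition (sharpness) check, e.g. via a Laplacian variance, is omitted for brevity.

```python
import statistics

def failing_quality_params(pixels, min_brightness: float = 60, min_contrast: float = 20):
    """Return the image quality parameters that fail the preset requirements.

    pixels is a flat sequence of grayscale values (0-255). Brightness is
    measured as the mean, contrast as the population standard deviation;
    both thresholds are illustrative.
    """
    failures = []
    if statistics.mean(pixels) < min_brightness:
        failures.append("brightness")
    if statistics.pstdev(pixels) < min_contrast:
        failures.append("contrast")
    return failures
```

Only the parameters returned by such a check would be optimized, and they are also the parameters for which shooting guidance is later given if optimization fails.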
Further, step 106 may further include:
Step 106F2: when the target species to which the biological object in the optimized image belongs still cannot be determined, guide the user, based on the pre-optimization image quality parameters, to adjust the shooting parameters and photograph again, and re-determine the target species to which the biological object in the newly captured image belongs.
Specifically, if identification of the target species still fails on the image after quality optimization, the user may be guided, for each image quality parameter that failed to meet the requirement before optimization, to adjust the corresponding shooting parameters and then invoke the camera to re-capture and upload an image of the biological object; the target species to which the biological object in the newly captured image belongs is then confirmed.
For example, if the brightness of the pre-optimization image is insufficient, the user may be guided to turn on a fill light or move to a brighter location before photographing the biological object; if the definition is insufficient, the user may be guided to focus correctly before photographing.
(7) The reason type is background clutter of the image to be recognized:
If it is determined in step 104 that the cause type for which the target species cannot be identified is that the background of the image to be identified is cluttered, that is, the image contains too many biological objects to identify a single target, the target object needs to be singled out first.
Referring to fig. 10, fig. 10 is a flow chart illustrating a method for responding to a user with a guiding operation, according to yet another exemplary embodiment. In an alternative implementation, in step 106, responding to the user with a guiding operation corresponding to the cause type may include:
step 106G1, guiding the user to select a target biological object to be species-identified from a plurality of biological objects included in the image to be identified, and determining a target species to which the target biological object belongs.
Specifically, a plurality of biological objects included in the image to be recognized may be determined and displayed through a visual interface of an APP, an applet, or a web page, for example, the plurality of biological objects included in the image to be recognized are framed, a user is guided to select a target biological object to be subjected to species recognition from the plurality of biological objects, and then a target species to which the target biological object belongs is confirmed.
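The tap-based selection among the framed objects can be sketched as a point-in-box test; the function name and the (x1, y1, x2, y2) box format are illustrative assumptions.

```python
def select_target(boxes, tap_x: float, tap_y: float):
    """Return the index of the framed biological object that contains the
    user's tap coordinates, or None if the tap falls outside every box.

    boxes is a list of (x1, y1, x2, y2) bounding boxes, one per detected
    biological object in the image to be identified.
    """
    for i, (x1, y1, x2, y2) in enumerate(boxes):
        if x1 <= tap_x <= x2 and y1 <= tap_y <= y2:
            return i
    return None
```

The index returned would identify the target biological object whose species is then confirmed, as described above.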
Referring to fig. 11, fig. 11 is a flowchart illustrating a method for responding to a booting operation to a user according to yet another exemplary embodiment. In another alternative implementation, in step 106, responding to the guidance operation corresponding to the cause type to the user based on the cause type may include:
step 106G2, determining the species to which each biological object included in the image to be recognized belongs, and responding the species to which each biological object belongs to the user, so that the user can distinguish the target species to which the target biological object belongs.
Specifically, after a plurality of biological objects included in the image to be recognized are determined, the species to which each biological object belongs may be determined, and each biological object and the species to which the biological object belongs are displayed to the user through a visualization interface of an APP, an applet, or a web page, so that the user can identify the target species to which the target biological object belongs.
As can be seen from the above description, in the present invention, when the species identification model cannot determine the target species to which the biological object in the image to be identified belongs, the cause type for which identification failed is determined based on the trained cause analysis model, and a guiding operation corresponding to that cause type is then responded to the user.
This scheme avoids feeding back inaccurate species identification results to the user. By responding with guiding operations targeted at the specific cause for which the species could not be identified from the image, it ensures that the user can subsequently obtain a more accurate identification result by following the guidance, improving both the accuracy of species identification and the user experience.
The following describes the training process of the cause analysis model:
there are many alternative implementation algorithms for the cause analysis model, and this embodiment does not specifically limit this. In an alternative implementation, the cause analysis model may be a classification model implemented based on a convolutional neural network or a residual error network, which has the advantage of high accuracy.
Referring to fig. 12, fig. 12 is a flow chart illustrating training of a cause analysis model according to an exemplary embodiment. In an alternative implementation, the training process of the cause analysis model may include the following specific steps:
step 1202, obtaining a plurality of image samples to be identified; marking the image sample to be identified with a reason type which cannot determine a target species to which a biological object in the image sample to be identified belongs;
step 1204, determining a sample training set and a sample testing set from the plurality of image samples to be identified based on a preset proportion requirement;
step 1206, training the original cause analysis model by using the sample training set, and determining the accuracy of the trained cause analysis model by using the sample testing set;
and 1208, determining the trained reason analysis model as the trained reason analysis model under the condition that the accuracy of the trained reason analysis model meets the preset accuracy requirement.
A certain number of image samples to be identified are prepared and labeled in advance; the label of each sample is the cause type for which the target species to which the biological object in the sample belongs cannot be identified.
Dividing the image samples to be recognized into a sample training set and a sample testing set according to a preset proportion, for example, randomly selecting 90% of the image samples to be recognized as the sample training set, and the remaining 10% as the sample testing set.
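The random split at a preset ratio can be sketched as follows; the function name and fixed seed are illustrative assumptions (the seed merely makes the split reproducible).

```python
import random

def split_samples(samples, train_ratio: float = 0.9, seed: int = 42):
    """Randomly divide labeled image samples into a training set and a
    test set at the preset ratio (default 90% / 10%, as in the example)."""
    shuffled = samples[:]                  # leave the caller's list intact
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]
```

Adjusting `train_ratio` corresponds to the proportion adjustment described later for the case where the trained model fails the accuracy requirement.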
The cause analysis model is trained with the sample training set. It may be trained as an independent model, or trained end-to-end jointly with the species identification model; the training mode is not specifically limited.
After training is completed, the accuracy of the current cause analysis model is measured with the sample test set, and when that accuracy exceeds a preset accuracy threshold, the current model is determined as the trained cause analysis model.
Referring to fig. 13, fig. 13 is a flow chart illustrating training of a cause analysis model according to another exemplary embodiment. In another alternative implementation, the training process of the cause analysis model further includes:
step 1210, adding a plurality of image samples to be recognized under the condition that the accuracy of the trained reason analysis model does not meet the preset accuracy requirement;
and 1212, combining the newly added image samples to be identified, and retraining the current reason analysis model until the reason analysis model meets the accuracy requirement.
If the current cause analysis model cannot meet the accuracy requirement, new image samples to be identified may be added and labeled in the same way; the sample training set and sample test set are expanded with these new samples, and the current model is retrained until it meets the accuracy requirement.
Referring to fig. 14, fig. 14 is a flow chart illustrating the training of a cause analysis model according to yet another exemplary embodiment. In yet another alternative implementation, the training process of the cause analysis model further includes:
step 1214, adjusting the proportion requirement and re-determining a new sample training set and a new sample testing set under the condition that the accuracy of the trained reason analysis model does not meet the preset accuracy requirement;
step 1216, retraining the current cause analysis model with the new sample training set, and determining the accuracy of the retrained cause analysis model with the new sample testing set until the accuracy of the cause analysis model meets the accuracy requirement.
If the current cause analysis model cannot meet the accuracy requirement, then besides expanding the number of samples, the proportion between the sample training set and the sample test set may be adjusted; the two sets are re-divided according to the new proportion, and the current model is retrained until it meets the accuracy requirement.
Referring to fig. 15, fig. 15 is a schematic structural diagram of an electronic device where an apparatus for identifying a biological species according to an exemplary embodiment of the invention is located. At the hardware level, the electronic device includes a processor 1502, an internal bus 1504, a network interface 1506, a memory 1508, and non-volatile storage 1510, although other hardware required for services may also be included. One or more embodiments of the invention may be implemented in software, for example, by the processor 1502 reading corresponding computer programs from the non-volatile storage 1510 into the memory 1508 and then running. Of course, besides software implementation, other implementations are not excluded from one or more embodiments of the present invention, such as logic devices or a combination of software and hardware, and so on, that is, the execution subject of the following processing flow is not limited to each logic unit, and may also be hardware or logic devices.
Referring to fig. 16, fig. 16 shows an identification apparatus for biological species according to an exemplary embodiment of the invention, which can be applied to the electronic device shown in fig. 15 to implement the technical solution of the invention. Wherein the identifying means comprises an identifying unit 1610, an analyzing unit 1620 and a guiding unit 1630; wherein:
the identifying unit 1610 is configured to input an image to be identified provided by a user as an input parameter into a trained species identification model, and determine a target species to which a biological object included in the image to be identified belongs based on an output result of the species identification model;
the analysis unit 1620 is configured to, in a case that the target species cannot be determined based on the output result of the species recognition model, input the image to be recognized as an input parameter into a trained cause analysis model, and determine a cause type of the target species that cannot be determined based on the output result of the cause analysis model;
the guiding unit 1630 is configured to respond to a guiding operation corresponding to the reason type to the user based on the reason type.
Optionally, the identification device further comprises a response unit 1640:
the response unit 1640 is configured to respond to the user the target species to which the biological object belongs if the target species is successfully determined based on the output result of the species recognition model.
Optionally, the case in which the identifying unit 1610 cannot determine the target species based on the output result of the species identification model specifically includes:
and if the output result of the species identification model to the biological object belonging to each species cannot meet the preset confidence requirement, determining that the target species cannot be determined based on the output result of the species identification model.
Optionally, where the biological subject is a plant, the type of cause for which the target species cannot be determined includes one or more of:
the biological subject is in a young seedling state;
the biological subject is in a withered yellow state;
the biological subject lacks strong identifying features;
the biological subject lacks a complete identifying feature;
the image quality of the image to be recognized is too low;
the background of the image to be recognized is disordered;
other causes for which the target species cannot be determined.
Optionally, the guiding unit 1630, when responding to the guiding operation corresponding to the reason type to the user based on the reason type, is specifically configured to:
and if the reason type is determined to be that the biological object is in the seedling state, responding to the fact that the biological object is in the seedling state to a user, and suggesting guide information for recognition after the biological object grows.
Optionally, the guiding unit 1630 is further configured to:
and responding guidance information for re-identifying the target species to which the biological object belongs to the user at the target time based on the preset waiting time.
Optionally, the guiding unit 1630, when responding to the guiding operation corresponding to the reason type to the user based on the reason type, is specifically configured to:
and if the reason type is determined to be that the biological object is in a withered yellow state, responding to the fact that the biological object is in the withered yellow state to the user, and recommending that the biological object is identified or providing guide information of the image to be identified containing the healthy biological object again after the biological object is recovered.
Optionally, the guiding unit 1630, when responding to the guiding operation corresponding to the reason type to the user based on the reason type, is specifically configured to:
and if the reason type is determined that the biological object lacks strong identification characteristics, determining the superior species to which the biological object belongs, and responding the superior species to which the biological object belongs to the user.
Optionally, the guiding unit 1630, when responding to a guiding operation corresponding to the reason type to the user based on the reason type, is specifically configured to:
if the reason type is determined that the biological object lacks strong identification characteristics, determining a species range to which the biological object belongs, and responding the species range to which the biological object belongs to a user; a plurality of species is included within the range of species.
Optionally, the guiding unit 1630 is further configured to:
and when the species range is responded to the user, responding the strong identification characteristics of all species in the species range and/or the distinguishing characteristics among all species in the species range to the user so as to distinguish the target species to which the biological object belongs by the user.
Optionally, the guiding unit 1630 is further configured to:
and guiding a user to shoot a target part of the biological object, and determining a target species to which the biological object belongs again by combining the shot image to be identified.
Optionally, the guiding unit 1630, when responding to a guiding operation corresponding to the reason type to the user based on the reason type, is specifically configured to:
and if the reason type is determined that the biological object lacks complete identification characteristics, guiding a user to shoot a target part of the biological object, and determining a target species to which the biological object belongs again by combining the shot image to be identified.
Optionally, the guiding unit 1630, when responding to the user with a guiding operation corresponding to the reason type, is specifically configured to:
if the reason type is that the image quality of the image to be recognized is too low, optimizing the image quality of the image to be recognized, and re-determining the target species to which the biological object in the optimized image belongs.
Optionally, the guiding unit 1630 is further configured to:
when the target species to which the biological object in the optimized image belongs still cannot be determined, guiding the user, based on the image quality parameters of the image before optimization, to adjust the shooting parameters and photograph the biological object again, and re-determining the target species to which the biological object in the newly captured image belongs.
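The optimize-then-retry flow just described can be sketched as follows. This is a minimal illustration only, not the disclosed implementation: the quality metric, the contrast stretch used as the "optimization", the threshold value, and all function names are assumptions introduced here.

```python
# Hypothetical sketch of the "optimize image quality, retry recognition,
# then guide the user to reshoot" flow. Pixels are a flat list of 0-255 ints.

def estimate_quality(pixels):
    """Crude quality proxy: mean absolute difference between neighboring
    pixels, standing in for a sharpness measure."""
    if len(pixels) < 2:
        return 0.0
    return sum(abs(a - b) for a, b in zip(pixels, pixels[1:])) / (len(pixels) - 1)

def optimize_contrast(pixels):
    """Simple contrast stretch, standing in for image-quality optimization."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:
        return pixels[:]
    return [round((p - lo) * 255 / (hi - lo)) for p in pixels]

def retry_after_optimization(pixels, recognize, quality_threshold=10.0):
    """Optimize the image and retry recognition; if recognition still fails,
    return guidance asking the user to adjust shooting parameters."""
    optimized = optimize_contrast(pixels)
    species = recognize(optimized)  # caller-supplied recognizer stub
    if species is not None:
        return {"species": species}
    return {
        "guidance": "image quality still too low "
                    f"(score {estimate_quality(pixels):.1f} < {quality_threshold}); "
                    "please adjust exposure/focus and photograph the object again"
    }
```

A recognizer that succeeds on the optimized image short-circuits the guidance path; one that returns `None` triggers the reshoot suggestion.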
Optionally, the guiding unit 1630, when responding to the user with a guiding operation corresponding to the reason type, is specifically configured to:
if the reason type is that the background of the image to be recognized is cluttered, guiding the user to select, from among the plurality of biological objects contained in the image to be recognized, the target biological object on which species recognition is to be performed, and determining the target species to which that target biological object belongs.
Optionally, the guiding unit 1630, when responding to the user with a guiding operation corresponding to the reason type, is specifically configured to:
if the reason type is that the background of the image to be recognized is cluttered, separately determining the species to which each biological object contained in the image to be recognized belongs, and returning each of these species to the user so that the user can determine the target species to which the target biological object belongs.
Optionally, the cause analysis model is a classification model implemented based on a convolutional neural network or a residual network.
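A self-contained sketch of the output stage such a classifier might have is given below. The cause labels follow the enumeration in this disclosure, but the plain-Python softmax head stands in for the CNN/ResNet backbone, which is not reproduced here; label names and the logits interface are assumptions.

```python
# Hypothetical output stage of the cause-analysis model: a classification
# head mapping backbone logits (one per cause type) to a cause label.
import math

CAUSE_TYPES = [
    "seedling_state",
    "withered_state",
    "lacks_strong_identifying_features",
    "lacks_complete_identifying_features",
    "improper_shooting_distance",
    "image_quality_too_low",
    "cluttered_background",
    "other",
]

def softmax(logits):
    """Numerically stable softmax over a list of floats."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def classify_cause(logits):
    """Return the most probable cause type and its probability."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return CAUSE_TYPES[best], probs[best]
```

In practice the logits would come from the final fully connected layer of the trained backbone; here they are supplied directly so the mapping is runnable on its own.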
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. A typical implementation device is a computer, which may take the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email messaging device, game console, tablet computer, wearable device, or a combination of any of these devices.
In a typical configuration, a computer includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic disk storage, quantum memory, graphene-based storage media or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a …" does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
The foregoing description of specific embodiments of the present invention has been presented. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The terminology used in the description of the embodiment or embodiments of the invention is for the purpose of describing the particular embodiment only and is not intended to be limiting of the embodiment or embodiments of the invention. As used in one or more embodiments of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information in one or more embodiments of the invention, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of one or more embodiments of the present invention. The word "if" as used herein may be interpreted as "upon" or "when" or "in response to determining," depending on the context.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not intended to limit the scope of the present invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.
Claims (22)
1. A method of identifying a biological species, the method comprising:
inputting an image to be recognized provided by a user as an input parameter into a trained species recognition model, and determining a target species to which a biological object contained in the image to be recognized belongs based on an output result of the species recognition model;
under the condition that the target species cannot be determined based on the output result of the species recognition model, inputting the image to be recognized into a trained cause analysis model as an input parameter, and determining the cause type of the target species which cannot be determined based on the output result of the cause analysis model;
and responding to the user, based on the reason type, with a guiding operation corresponding to the reason type.
2. The method of claim 1, further comprising:
in the event that the target species is successfully determined based on the output result of the species recognition model, returning the target species to which the biological object belongs to the user.
3. The method of claim 1, wherein the inability to determine the target species based on the output result of the species recognition model comprises:
if none of the output results of the species recognition model for the biological object belonging to each species meets a preset confidence requirement, determining that the target species cannot be determined based on the output result of the species recognition model.
4. The method according to claim 1, wherein, in the case that the biological object is a plant, the reason type for which the target species cannot be determined comprises one or more of the following:
the biological object is in a seedling state;
the biological object is in a withered state;
the biological object lacks strong identifying features;
the biological object lacks complete identifying features;
the biological object is not at a proper shooting distance;
the image quality of the image to be recognized is too low;
the background of the image to be recognized is cluttered;
another cause for which the target species cannot be determined.
5. The method of claim 4, wherein responding to the user, based on the reason type, with a guiding operation corresponding to the reason type comprises:
if the reason type is that the biological object is in a seedling state, informing the user that the biological object is in a seedling state and providing guidance suggesting that recognition be performed after the biological object has grown.
6. The method of claim 5, further comprising:
returning to the user, based on a preset waiting duration, guidance information suggesting that the target species to which the biological object belongs be re-identified at a target time.
7. The method of claim 4, wherein responding to the user, based on the reason type, with a guiding operation corresponding to the reason type comprises:
if the reason type is that the biological object is in a withered state, informing the user that the biological object is in a withered state and providing guidance suggesting that recognition be performed after the biological object recovers, or that a new image to be recognized containing a healthy biological object be provided.
8. The method of claim 4, wherein responding to the user, based on the reason type, with a guiding operation corresponding to the reason type comprises:
if the reason type is that the biological object lacks strong identifying features, determining the superior species to which the biological object belongs, and returning that superior species to the user.
9. The method of claim 4, further comprising:
if the reason type is that the biological object lacks strong identifying features, determining a species range to which the biological object belongs, and returning that species range to the user; the species range includes a plurality of species.
10. The method of claim 9, further comprising:
when returning the species range to the user, also returning the strong identifying features of each species in the range and/or the distinguishing features between the species in the range, so that the user can distinguish the target species to which the biological object belongs.
11. The method according to any one of claims 8 to 10, further comprising:
guiding the user to photograph a target part of the biological object, and re-determining the target species to which the biological object belongs in combination with the newly captured image to be recognized.
12. The method of claim 4, wherein responding to the user, based on the reason type, with a guiding operation corresponding to the reason type comprises:
if the reason type is that the biological object lacks complete identifying features, guiding the user to photograph a target part of the biological object, and re-determining the target species to which the biological object belongs in combination with the newly captured image to be recognized.
13. The method of claim 4, wherein responding to the user, based on the reason type, with a guiding operation corresponding to the reason type comprises:
if the reason type is that the biological object is not at a proper shooting distance, informing the user that the biological object is not at a proper shooting distance and providing guidance suggesting that recognition be performed after the focal length is adjusted.
14. The method of claim 13, further comprising:
after the focal length is automatically adjusted, guiding the user to photograph the biological object again, and re-determining the target species to which the biological object in the newly captured image belongs.
15. The method of claim 4, wherein responding to the user, based on the reason type, with a guiding operation corresponding to the reason type comprises:
if the reason type is that the image quality of the image to be recognized is too low, optimizing the image quality of the image to be recognized, and re-determining the target species to which the biological object in the optimized image belongs.
16. The method of claim 15, further comprising:
when the target species to which the biological object in the optimized image belongs still cannot be determined, guiding the user, based on the image quality parameters of the image before optimization, to adjust the shooting parameters and photograph the biological object again, and re-determining the target species to which the biological object in the newly captured image belongs.
17. The method of claim 4, wherein responding to the user, based on the reason type, with a guiding operation corresponding to the reason type comprises:
if the reason type is that the background of the image to be recognized is cluttered, guiding the user to select, from among the plurality of biological objects contained in the image to be recognized, the target biological object on which species recognition is to be performed, and determining the target species to which that target biological object belongs.
18. The method of claim 4, wherein responding to the user, based on the reason type, with a guiding operation corresponding to the reason type comprises:
if the reason type is that the background of the image to be recognized is cluttered, separately determining the species to which each biological object contained in the image to be recognized belongs, and returning each of these species to the user so that the user can determine the target species to which the target biological object belongs.
19. The method of claim 1, wherein the cause analysis model is a classification model implemented based on a convolutional neural network or a residual network.
20. An apparatus for identification of a biological species, the apparatus comprising an identification unit, an analysis unit and a guidance unit; wherein:
the identification unit is used for inputting an image to be identified provided by a user into a trained species identification model as an input parameter, and determining a target species to which a biological object contained in the image to be identified belongs based on an output result of the species identification model;
the analysis unit is used for inputting the image to be recognized into a trained cause analysis model as an input parameter when the target species cannot be determined based on the output result of the species recognition model, and determining, based on the output result of the cause analysis model, the reason type for which the target species cannot be determined;
and the guiding unit is used for responding to the user, based on the reason type, with a guiding operation corresponding to the reason type.
21. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor implements the steps of the method of any one of claims 1-17 by executing the executable instructions.
22. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method of any one of claims 1-19.
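Read together, claims 1 through 3 describe a two-stage pipeline: species recognition gated by a preset confidence requirement, with fallback to cause analysis and a guiding operation when the gate is not met. A minimal sketch follows; both models are stubbed as callables, and the threshold value and guidance wording are assumptions, not part of the claims.

```python
# Hypothetical sketch of the claimed pipeline. `species_model` returns a
# {species_name: confidence} dict; `cause_model` returns a cause-type label.

CONFIDENCE_REQUIREMENT = 0.8  # "preset confidence requirement" (value assumed)

GUIDANCE = {  # guiding operation per cause type (wording paraphrased)
    "seedling_state": "the plant is a seedling; try again after it grows",
    "image_quality_too_low": "image quality is too low; the image will be optimized",
    "cluttered_background": "select the target object among the several detected",
}

def identify(image, species_model, cause_model):
    """Return the target species if any confidence meets the requirement,
    otherwise the cause type and its corresponding guiding operation."""
    scores = species_model(image)
    best_species, best_score = max(scores.items(), key=lambda kv: kv[1])
    if best_score >= CONFIDENCE_REQUIREMENT:
        return {"target_species": best_species}
    cause = cause_model(image)
    return {"cause_type": cause,
            "guidance": GUIDANCE.get(cause, "recognition failed; please retry")}
```

With a high-confidence recognizer the cause model is never consulted; when every per-species confidence falls below the requirement, the result carries the cause type and a user-facing guiding operation instead of a species.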
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210461485.9A CN114841955A (en) | 2022-04-28 | 2022-04-28 | Biological species identification method, device, equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210461485.9A CN114841955A (en) | 2022-04-28 | 2022-04-28 | Biological species identification method, device, equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114841955A true CN114841955A (en) | 2022-08-02 |
Family
ID=82567029
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210461485.9A Pending CN114841955A (en) | 2022-04-28 | 2022-04-28 | Biological species identification method, device, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114841955A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2024082894A1 (en) * | 2022-10-17 | 2024-04-25 | 杭州睿胜软件有限公司 | Plant repotting detection method and apparatus, device, and storage medium |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110192386A (en) * | 2017-01-31 | 2019-08-30 | 株式会社Ntt都科摩 | Information processing equipment and information processing method |
CN110263775A (en) * | 2019-05-29 | 2019-09-20 | 阿里巴巴集团控股有限公司 | Image-recognizing method, device, equipment and authentication method, device, equipment |
CN110363146A (en) * | 2019-07-16 | 2019-10-22 | 杭州睿琪软件有限公司 | A kind of object identification method, device, electronic equipment and storage medium |
US20190392819A1 (en) * | 2019-07-29 | 2019-12-26 | Lg Electronics Inc. | Artificial intelligence device for providing voice recognition service and method of operating the same |
CN111738284A (en) * | 2019-11-29 | 2020-10-02 | 北京沃东天骏信息技术有限公司 | Object identification method, device, equipment and storage medium |
CN113239804A (en) * | 2021-05-13 | 2021-08-10 | 杭州睿胜软件有限公司 | Image recognition method, readable storage medium, and image recognition system |
CN113313193A (en) * | 2021-06-15 | 2021-08-27 | 杭州睿胜软件有限公司 | Plant picture identification method, readable storage medium and electronic device |
CN113869364A (en) * | 2021-08-26 | 2021-12-31 | 北京旷视科技有限公司 | Image processing method, image processing apparatus, electronic device, and medium |
CN114021480A (en) * | 2021-11-18 | 2022-02-08 | 共达地创新技术(深圳)有限公司 | Model optimization method, device and storage medium |
CN114170509A (en) * | 2021-12-03 | 2022-03-11 | 杭州睿胜软件有限公司 | Plant identification method, plant identification device and plant identification system |
2022-04-28: application CN202210461485.9A published as CN114841955A (en); status: Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20210165817A1 (en) | User interface for context labeling of multimedia items | |
CN110267119B (en) | Video precision and chroma evaluation method and related equipment | |
CN110555416B (en) | Plant identification method and device | |
WO2022262586A1 (en) | Method for plant identification, computer system and computer-readable storage medium | |
CN112966758B (en) | Crop disease, insect and weed identification method, device and system and storage medium | |
CN109117857A (en) | A kind of recognition methods of biological attribute, device and equipment | |
CN114170509A (en) | Plant identification method, plant identification device and plant identification system | |
CN111767424B (en) | Image processing method, image processing device, electronic equipment and computer storage medium | |
US20160171407A1 (en) | Method and system for classifying plant disease through crowdsourcing using a mobile communication device | |
CN112347997A (en) | Test question detection and identification method and device, electronic equipment and medium | |
CN114841955A (en) | Biological species identification method, device, equipment and storage medium | |
US20170235451A1 (en) | Minimally invasive user metadata | |
US12080063B2 (en) | Display method and display system for plant disease diagnosis information, and readable storage medium | |
CN110275820A (en) | Page compatibility test method, system and equipment | |
CN111860122A (en) | Method and system for recognizing reading comprehensive behaviors in real scene | |
CN115578591A (en) | Plant pot changing detection method, device, equipment and storage medium | |
CN117290481A (en) | Question and answer method and device based on deep learning, storage medium and electronic equipment | |
CN114463816A (en) | Satisfaction determining method and device, processor and electronic equipment | |
CN112906698B (en) | Alfalfa plant identification method and device | |
US11157777B2 (en) | Quality control systems and methods for annotated content | |
CN110826324B (en) | Language model training and word segmentation prediction method and device and language model | |
CN112819078B (en) | Iteration method and device for picture identification model | |
WO2023175931A1 (en) | Image classification device, image classification method, and recording medium | |
CN116843942A (en) | Tape information identification method, device, storage medium and electronic equipment | |
CN114120005A (en) | Image processing method, neural network model training method, device and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||