CN112784925A - Method, computer system and electronic equipment for object recognition - Google Patents

Method, computer system and electronic equipment for object recognition

Info

Publication number
CN112784925A
CN112784925A
Authority
CN
China
Prior art keywords
classification
group
object recognition
recognition model
screen
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110171761.3A
Other languages
Chinese (zh)
Other versions
CN112784925B (en)
Inventor
徐青松
李青
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Ruisheng Software Co Ltd
Original Assignee
Hangzhou Ruisheng Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Ruisheng Software Co Ltd filed Critical Hangzhou Ruisheng Software Co Ltd
Priority to CN202110171761.3A
Publication of CN112784925A
Priority to PCT/CN2022/073987 (published as WO2022166706A1)
Application granted
Publication of CN112784925B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40 Information retrieval of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/45 Clustering; Classification
    • G06F16/48 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/483 Retrieval using metadata automatically derived from the content
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G06F18/24 Classification techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Library & Information Science (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure relates to a method for object recognition, comprising: receiving a first classification of an identified object from a pre-established object recognition model; in response to the first classification belonging to a first group, displaying a first screen, wherein the first screen includes the first classification; and in response to the first classification belonging to a second group, displaying a second screen, wherein the second screen does not include the first classification and includes a prompt requesting the user to input additional information about the identified object, wherein the first condition of the first group is that the recognition accuracy of the individual classification at the species level is a first level, the second condition of the second group is that the recognition accuracy of the individual classification at the genus level is a second level, and the first level is higher than the second level. The disclosure also relates to a computer system and an electronic device for object recognition.

Description

Method, computer system and electronic equipment for object recognition
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method, a computer system, and an electronic device for object recognition.
Background
In the field of computer technology, there are a number of applications (apps) for identifying objects to be identified, for example for identifying plants. These applications typically receive imagery (including still images, moving images, videos, and the like) from a user, and recognize an object to be recognized in the imagery based on an object recognition model built with artificial intelligence technology to obtain a recognition result. For example, when the object is a living organism, the recognition result may be the biological classification of the object to be recognized as identified by the object recognition model; the classification unit may be, for example, Family, Genus, or Species. The recognition result output by the object recognition model may include one or more classifications, typically ordered from highest to lowest confidence, and the classification with the highest confidence can be regarded as the classification that best matches the features of the object to be recognized present in the imagery. In addition, the recognition result output by the object recognition model may also include classifications similar to the one with the highest confidence. The imagery from the user typically includes at least a portion of the object to be identified; for example, the user takes an image including the stem, leaves, and flowers of the plant to be identified.
Disclosure of Invention
An object of the present disclosure is to provide a method, a computer system and an electronic device for object recognition.
According to a first aspect of the present disclosure, there is provided a method for object recognition, comprising: receiving a first classification of an identified object from a pre-established object recognition model, the object recognition model recognizing the classification of the identified object based on a first image representing at least a portion of the identified object; in response to the first classification belonging to a first group, displaying a first screen, wherein the first screen comprises the first classification; and in response to the first classification belonging to a second group, displaying a second screen, wherein the second screen does not include the first classification and includes a prompt requesting the user to enter additional information about the identified object, wherein the first and second groups are established based on the statistical recognition accuracy of the object recognition model for the individual classifications in the targeted object population, wherein the first group includes individual classifications whose recognition accuracy satisfies a first condition, and the second group includes individual classifications whose recognition accuracy satisfies a second condition, wherein the first condition is that the recognition accuracy of the individual classification at the species level is a first level, the second condition is that the recognition accuracy of the individual classification at the genus level is a second level, and the first level is higher than the second level.
According to a second aspect of the present disclosure, there is provided a method for object recognition, comprising: receiving a first classification of an identified object from a pre-established object recognition model, the object recognition model recognizing the classification of the identified object based on a first image representing at least a portion of the identified object; in response to the first classification belonging to a first group, displaying a first screen, wherein the first screen comprises the first classification; and in response to the first classification belonging to a second group, displaying a second screen, wherein the second screen does not include the first classification and includes a prompt requesting the user to input additional information about the identified object, wherein the first and second groups are established based on the statistical recognition accuracy of the object recognition model for the individual classifications in the targeted object population, wherein the first group includes individual classifications whose recognition accuracy satisfies a first condition, and the second group includes individual classifications whose recognition accuracy satisfies a second condition, wherein the first condition is that the recognition accuracy of the individual classification in terms of its classification unit is higher than a first threshold, the second condition is that the recognition accuracy of the individual classification in terms of its classification unit is lower than a second threshold, and the first threshold is higher than the second threshold.
According to a third aspect of the present disclosure, there is provided a method for object recognition, comprising: receiving a first classification of an identified object from a pre-established object recognition model, the object recognition model recognizing the classification of the identified object based on a first image representing at least a portion of the identified object; and, in response to the first classification belonging to a pre-established group, displaying information on the classification at a higher classification unit to which the first classification belongs, wherein the group is established based on the statistical recognition accuracy of the object recognition model for the individual classifications in the targeted object population, and wherein the group includes individual classifications whose recognition accuracy at the species level is lower than a first threshold and whose recognition accuracy at the genus level is higher than a second threshold.
According to a fourth aspect of the present disclosure, there is provided a method for object recognition, comprising: receiving a first classification of an identified object from a pre-established object recognition model, the object recognition model recognizing the classification of the identified object based on a first image representing at least a portion of the identified object; and, in response to the first classification belonging to a pre-established group, not displaying the first classification and displaying a prompt requesting the user to enter additional information about the identified object, wherein the group is established based on the statistical recognition accuracy of the object recognition model for the individual classifications in the targeted object population, and wherein the group includes individual classifications whose recognition accuracy is below a threshold.
According to a fifth aspect of the present disclosure, there is provided an electronic device comprising: one or more processors configured to cause the electronic device to perform any of the methods described above.
According to a sixth aspect of the present disclosure, there is provided an apparatus for operating an electronic device, comprising: one or more processors configured to cause the electronic device to perform any of the methods described above.
According to a seventh aspect of the present disclosure, there is provided a computer system for object recognition, comprising: one or more processors; and one or more memories configured to store a series of computer-executable instructions and computer-accessible data associated with the series of computer-executable instructions, wherein the series of computer-executable instructions, when executed by the one or more processors, cause the computer system to perform any of the methods described above.
According to an eighth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon a series of computer executable instructions which, when executed by one or more computing devices, cause the one or more computing devices to perform any of the methods described above.
Other features of the present disclosure and advantages thereof will become apparent from the following detailed description of exemplary embodiments thereof, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description, serve to explain the principles of the disclosure.
The present disclosure may be more clearly understood from the following detailed description, taken with reference to the accompanying drawings, in which:
fig. 1 is a flow diagram schematically illustrating at least a portion of a method for object recognition, in accordance with an embodiment of the present disclosure.
Fig. 2 to 8 are schematic views schematically illustrating screens displayed by a method according to an embodiment of the present disclosure.
Fig. 9 is a block diagram that schematically illustrates at least a portion of a computer system for object recognition, in accordance with an embodiment of the present disclosure.
FIG. 10 is a block diagram that schematically illustrates at least a portion of a computer system for object recognition, in accordance with an embodiment of the present disclosure.
Note that in the embodiments described below, the same reference numerals are used in common between different drawings to denote the same portions or portions having the same functions, and a repetitive description thereof will be omitted. In this specification, like reference numerals and letters are used to designate like items, and therefore, once an item is defined in one drawing, further discussion thereof is not required in subsequent drawings.
Detailed Description
Various exemplary embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless specifically stated otherwise. In the following description, numerous details are set forth in order to better explain the present disclosure, however it is understood that the present disclosure may be practiced without these details.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses. In all examples shown and discussed herein, any particular value should be construed as merely illustrative, and not limiting.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
Fig. 1 is a flow diagram schematically illustrating at least a portion of a method 100 for object recognition, in accordance with an embodiment of the present disclosure. The method 100 comprises: receiving a classification of the identified object from the object recognition model (step S110); determining to which group the classification of the identified object belongs (step S120); displaying a screen including the classification in response to the classification belonging to the first group (step S130); and displaying a screen that does not include the classification and includes a prompt requesting the user to input additional information about the identified object, in response to the classification belonging to the second group (step S140).
In some cases, a user inputs an image of all or a portion of an identified object (also referred to herein as a "first image") to an application where object identification may be performed in an attempt to obtain information about the identified object. For example, when the identified object is a plant, the image may include any one or a combination of a root, a stem, a leaf, a flower, a fruit, a seed, and the like of the plant to be identified, wherein each included item may be the whole or a part of the item. The images may be previously stored by the user, taken in real-time, or downloaded from a network. The imagery may include any form of visual presentation, such as still images, moving images, and video. The image can be captured by using a device including a camera, such as a mobile phone, a tablet computer, and the like.
An application capable of implementing the method 100 may receive the imagery from the user and perform object recognition based on the imagery. The identification may use any known image-based method of object identification. For example, the identified objects in the imagery may be identified by the computing device using a pre-trained (or "trained") object identification model to arrive at an identification result (e.g., including one or more identified classifications). The recognition model may be built on a neural network, such as a deep convolutional neural network (CNN) or a deep residual network (ResNet). For example, for each plant classification, a certain number of image samples labeled with that classification name, i.e., a training sample set, are obtained, and the neural network is trained on these samples until its output accuracy meets the requirement. The imagery may also be preprocessed before object recognition is performed on it. The preprocessing may include normalization, brightness adjustment, or noise reduction, among others. Noise reduction can make the characteristic parts of the image stand out, so that the features are more distinct.
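As a rough illustration of the preprocessing step, the following sketch normalizes a grayscale image to [0, 1] and applies a naive 3x3 mean filter as a stand-in for noise reduction. The exact preprocessing pipeline is not specified by this disclosure; this is only a minimal example under those assumptions.

```python
def preprocess(image):
    """Minimal preprocessing sketch (assumption: `image` is a 2-D list
    of 0-255 grayscale pixel values).
    Step 1: normalize pixels to [0, 1].
    Step 2: naive 3x3 mean filter as a stand-in for noise reduction."""
    h, w = len(image), len(image[0])
    norm = [[p / 255.0 for p in row] for row in image]

    def at(i, j):
        # clamp coordinates at the border (edge padding)
        return norm[min(max(i, 0), h - 1)][min(max(j, 0), w - 1)]

    return [[sum(at(i + di, j + dj) for di in (-1, 0, 1)
                                    for dj in (-1, 0, 1)) / 9.0
             for j in range(w)] for i in range(h)]
```

In a real application the filter and normalization would be replaced by whatever the trained model expects as input.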
As described above, the recognition result provided by the object recognition model typically includes one or more classifications of the recognized object, ranked from high to low confidence (how closely the classification approaches the true classification). The first-ranked, highest-confidence classification is also referred to herein as the "Top 1 recognition result" and may be described in at least some claims as the "first classification". The second-ranked classification, with confidence next to Top 1, is referred to herein as the "Top 2 recognition result"; the third-ranked is the "Top 3 recognition result", and so on. In one embodiment, the classification unit of the one or more classifications included in the recognition result provided by the object recognition model is species; the genus-level classification of each recognition result can then be obtained from the correspondence between species and genera. In another embodiment, the recognition result provided by the object recognition model includes classifications at both the species and genus levels. For simplicity, a classification whose classification unit is species is hereinafter referred to as a "species classification", and a classification whose classification unit is genus is referred to as a "genus classification".
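The confidence ordering and the species-to-genus correspondence described above can be sketched as follows. The plant names, confidence values, and the lookup table are illustrative assumptions, not data from this disclosure:

```python
# Hypothetical recognition output: (species classification, confidence) pairs.
results = [
    ("Peperomia clusiifolia", 0.06),
    ("Peperomia obtusifolia", 0.91),
    ("Ficus elastica", 0.02),
]

# Assumed species-to-genus correspondence table.
SPECIES_TO_GENUS = {
    "Peperomia obtusifolia": "Peperomia",
    "Peperomia clusiifolia": "Peperomia",
    "Ficus elastica": "Ficus",
}

def ranked(results):
    """Order classifications from highest to lowest confidence;
    index 0 is the Top 1 recognition result, index 1 is Top 2, and so on."""
    return sorted(results, key=lambda r: r[1], reverse=True)

top1_species, top1_conf = ranked(results)[0]
# Genus classification obtained via the species-genus correspondence.
top1_genus = SPECIES_TO_GENUS[top1_species]
```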
In reality, many objects have similar morphologies, whether locally or overall. Objects that are similar to each other may have the same classification or different classifications. For example, if a first plant and a second plant are similar to each other, they may have the same genus classification but different species classifications, or they may have different genus classifications. In some embodiments, a classification of individuals having a morphology similar to that of the individuals indicated by at least one of the one or more classifications described above, also referred to herein as a "similar result", may be derived from the respective recognition results. For example, similar results for each recognition result may be obtained from a pre-established rule database. The similar result may be provided by the object recognition model itself, or may be derived from a recognition result obtained from the object recognition model.
The following describes the terms "object population for which the object recognition model is directed", "individuals" in the object population, "individual classification", and "group" as used herein. In one example, if the object recognition model is used to identify plants, the object population for which it is directed is plants, the individuals in the object population are the various species of plants, and the individual classifications are the classifications (e.g., species classifications) of those species. In this context, unless otherwise indicated, the "classification" used to define an individual generally refers to a classification whose classification unit is species. The same holds when the object recognition model is used to identify animals. The object recognition model may also be used to recognize specific kinds of plants (or animals). In one example, if the object recognition model is used to recognize ferns, the object population for which it is directed is ferns, and the individuals in the object population are the various species of ferns. A group is a set of individual classifications established based on the statistical recognition accuracy of the object recognition model for the individual classifications in the targeted object population; it contains the individual classifications whose recognition accuracy satisfies a certain condition.
The group referred to herein is illustrated below with a specific example. In this example, the population of objects for which the object recognition model is directed is plants. The recognition accuracy of the object recognition model for each kind of plants was counted using a large amount of test data (e.g., 10000 sets of data), and the statistical results are shown in table 1. The identification accuracy for a certain type of plant is the ratio of the number of samples for which the object identification model correctly identified the classification to the total number of samples in the test data set for the plant of that type.
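The per-species accuracy statistic defined above is a simple ratio of correctly classified samples to total samples; a minimal sketch:

```python
def recognition_accuracy(predictions, labels):
    """Ratio of the number of samples whose predicted classification
    matches the labeled classification to the total number of samples,
    as described above for a single kind of plant."""
    correct = sum(1 for p, y in zip(predictions, labels) if p == y)
    return correct / len(labels)
```

In practice this would be computed once per species over its slice of the test data set (e.g., of the 10000 sets of data mentioned above).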
According to the statistical results, for some kinds of plants, the accuracy of the species classification corresponding to the Top 1 recognition result provided by the object recognition model is higher than 85%, which means that the model's recognition of these plants at the species level is almost always correct. These kinds of plants may be grouped into group one, which may take the form of, for example, a collection of the species classifications of these plants. For some kinds of plants, the accuracy of the species classification corresponding to the Top 1 recognition result is about 51%, but the accuracy of the corresponding genus classification is about 93%, which means that the model's recognition of these plants is almost always correct at the genus level but may be incorrect at the species level. This is generally because there are two or more similar species under the genus, and the object recognition model cannot accurately distinguish between them. These kinds of plants may be grouped into group two, which may likewise take the form of a collection of the species classifications of these plants. For some kinds of plants, the accuracy of the species classification corresponding to the Top 1 recognition result is about 51%, indicating that the species-level result may not be correct, while the accuracy of the genus classification is about 66%, indicating that the genus-level result may be acceptable but not ideal. These kinds of plants may be grouped into group three, again in the form of a collection of their species classifications.
For the remaining kinds of plants, the accuracy of the species classification corresponding to the Top 1 recognition result is about 22% and the accuracy of the genus classification is about 29%, which means that the recognition result is almost always wrong. These kinds of plants may be grouped into group four, in the form of a collection of their species classifications. Note that groups established in this way do not intersect.
TABLE 1: Groups and recognition accuracy (Top 1 recognition result)

Group        Species-level accuracy    Genus-level accuracy
Group one    > 85%                     (not stated)
Group two    ~ 51%                     ~ 93%
Group three  ~ 51%                     ~ 66%
Group four   ~ 22%                     ~ 29%
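The grouping rule implied by these statistics can be sketched as follows. The threshold values here are assumptions chosen only so that the four groups described above come out as described; the disclosure does not state exact cut-offs:

```python
def assign_group(species_acc, genus_acc):
    """Assign a species to one of the four groups based on its statistical
    recognition accuracy. Thresholds are illustrative assumptions."""
    if species_acc > 0.85:
        return "group one"    # species-level result almost always correct
    if genus_acc > 0.90:
        return "group two"    # genus correct, species unreliable
    if genus_acc > 0.50:
        return "group three"  # genus acceptable but not ideal
    return "group four"       # result almost always wrong
```

Because each species is assigned exactly one label, the resulting groups do not intersect, matching the observation above.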
In the above method 100, a classification of the identified object, i.e. the Top 1 recognition result, is received from the object recognition model in step S110. In step S120 it is determined to which group the Top 1 recognition result belongs. For example, if it is determined in step S120 that the classification of the Top 1 recognition result is included in group one, the recognition result can be considered accurate, and a screen including the Top 1 recognition result may be displayed in step S130. If it is determined in step S120 that the classification of the Top 1 recognition result is included in group four, the recognition result may be considered unreliable and may not be displayed to the user; for example, a screen that does not include the Top 1 recognition result and includes a prompt requesting the user to input additional information about the recognized object is displayed in step S140.
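The dispatch in steps S120 to S140 can be sketched as a small function. The group names and the screen descriptors are hypothetical; they mirror the behavior described in this section and in the case discussions below:

```python
def choose_screen(group):
    """Hypothetical dispatch mirroring steps S120-S140 of method 100:
    which screen to show for the group containing the Top 1 result."""
    if group == "group one":
        # S130: the result is trusted, display the Top 1 classification
        return {"show_top1": True, "request_more_info": False}
    if group == "group four":
        # S140: the result is unreliable; hide it and prompt for more input
        return {"show_top1": False, "request_more_info": True}
    # groups two and three: intermediate screens; group three additionally
    # asks the user for more information about the identified object
    return {"show_top1": True, "request_more_info": group == "group three"}
```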
The additional information about the recognized object may include morphological information, growth environment information, recognition environment information, and the like. The screen displayed in step S140 may include a prompt requesting the user to input such information, which may be entered in various forms, such as text, voice, or video. In one embodiment, the prompt requesting the user to enter additional information about the identified object may include: a prompt area requesting the user to input one or more additional images; and/or shooting instructions informing the user to take pictures from different angles and/or distances. The user may input additional information about the identified object in the form of images (referred to herein as "additional images"). The method according to this embodiment may drive the object recognition model to re-identify the classification of the identified object based on the aforementioned first image together with the additional images, or based only on the additional images, and retrieve the re-identification result from the object recognition model. The re-recognition result may be displayed to the user without grouping (i.e., without performing step S120 or the like), or the above-described steps S130 to S140 may be performed on the re-recognition result.
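A sketch of the re-recognition step, under the assumption that the model maps an image to (classification, confidence) pairs and that per-image results are merged by keeping the highest confidence per classification (the merging rule is an assumption; the disclosure does not specify one):

```python
def reidentify(model, first_image, additional_images, use_first=True):
    """Re-recognition with additional images: run the model on either all
    images or only the additional ones, merge per-classification
    confidences by taking the maximum, and return the new Top 1.
    `model` is assumed to map an image to (classification, confidence) pairs."""
    images = ([first_image] if use_first else []) + list(additional_images)
    merged = {}
    for img in images:
        for cls, conf in model(img):
            merged[cls] = max(merged.get(cls, 0.0), conf)
    return max(merged.items(), key=lambda kv: kv[1])
```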
Those skilled in the art will appreciate that the division into the groups shown in Table 1 is merely illustrative; in other embodiments, the individual classifications in the object population may be divided into fewer or more groups according to other division conditions. It will likewise be understood that the interaction with the user (including the displayed screens) performed in fig. 1 for each group is only illustrative, and in other embodiments an appropriate interaction manner may be designed for each group according to other division conditions.
The interaction of the method according to embodiments of the present disclosure for different groups is explained below with reference to the specific examples of fig. 2 to 9.
Case one: the Top 1 recognition result belongs to group one
In case one, the Top 1 recognition result received from the object recognition model belongs to group one; that is, the species classification recognized by the object recognition model can be considered correct, and the screen 10 shown in fig. 2 may be displayed. The screen 10 may include the Top 1 recognition result. In this specific example, the Top 1 recognition result is the species classification "Baby rubber plant", the highest-confidence classification of the identified object recognized by the object recognition model.
When the screen 10 is displayed, a user operation such as clicking, sliding, or the like may be received. In response to a specific operation (e.g., a rightward slide) by the user while the screen 10 is displayed, an additional page for case one may be displayed. The additional pages may include one or more of the following: shooting guidance; a prompt for changing the recognition result output by the method (i.e., the recognition result of Top1 displayed on the screen 10); and a classification of individuals having a similar morphology to the individual indicated by the recognition result of Top1 (i.e., a similar result).
Fig. 3 shows an additional page 20. The additional page 20 includes shooting instructions for the user (shown as "Tips for tagging pictures" on screen 20; these may also be referred to as shooting tips or shooting methods), for example, "focus the plants in the middle of the frame and avoid dark or contaminated images". Below the shooting guidance, the additional page 20 further includes a prompt for changing the Top 1 recognition result (displayed as "Change the result" on screen 20), so that the user can correct the result when he or she considers the Top 1 recognition result provided by the object recognition model to be wrong.
In one embodiment, although not shown in the figures, the additional page may also include similar results to the recognition result of Top 1. For example, the screen 10 displays the recognition result of Top1 provided by the object recognition model as winter jasmine, and the additional page may display the classification of individuals having a similar form to winter jasmine, such as forsythia flower, peach flower, and cherry flower.
Case two: the Top 1 recognition result belongs to group two
In case two, the Top 1 recognition result received from the object recognition model belongs to group two; that is, the species classification recognized by the object recognition model may not be accurate, but the corresponding genus classification can be considered correct, and the screen 30 shown in fig. 4 may be displayed. The screen 30 includes information on the genus classification corresponding to the Top 1 recognition result, displayed in area 31 of screen 30. After the information on the genus classification, the screen 30 may further include the Top 1 recognition result itself, displayed in area 32.
After the Top1 recognition result, the screen 30 may further include, for example in the area 33: one or more classifications received from the object recognition model with a confidence lower than that of Top1, such as the Top2 and Top3 recognition results (in one embodiment, such a result is displayed in the area 33 only if its genus classification is the same as that of the Top1 recognition result); and/or similar results for the Top1 recognition result. Note that when both of the foregoing are displayed, a classification appearing both among the Top2 and Top3 recognition results and among the similar results is not displayed twice. For example, if the Top2 recognition result is the same as one of the similar results, the screen 30 may include, in sequence after the Top1 recognition result, the Top2 recognition result, the Top3 recognition result, and the similar results excluding the Top2 recognition result.
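The rule that a classification appearing both among the lower-confidence results and among the similar results is shown only once could be sketched as follows (the function name and data shapes are illustrative, not from the source):

```python
def merge_without_repeats(lower_confidence_results, similar_results):
    """Build the display list for the area 33: lower-confidence
    recognition results (e.g. Top2, Top3) followed by similar results,
    with any classification already shown not displayed again."""
    shown = []
    for classification in lower_confidence_results + similar_results:
        if classification not in shown:
            shown.append(classification)
    return shown
```

For instance, if the Top2 result equals one of the similar results, it appears exactly once, in its Top2 position, and the remaining similar results follow.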
Case three: the recognition result of Top1 belongs to group three
In case three, the Top1 recognition result received from the object recognition model belongs to group three; that is, both the species-level and the genus-level recognition results of the object recognition model may be considered inaccurate, and the screen 40 shown in fig. 5 may be displayed. The screen 40 may include the Top1 recognition result (e.g., displayed in the area 41) and one or more classifications received from the object recognition model with a confidence lower than that of Top1 (e.g., displayed in the area 42), such as the Top2 and Top3 recognition results (in one embodiment, such a result is displayed in the area 42 only if its genus classification is the same as that of the Top1 recognition result). After this information, the screen 40 may also include similar results for the Top1 recognition result (e.g., displayed in the area 43). Note that a classification appearing both among the Top2 and Top3 recognition results and among the similar results is not displayed twice. The screen 40 may also include a prompt requesting the user to input additional information about the recognized object (e.g., displayed in the area 44). This prompt may include: a prompt area requesting the user to input one or more additional images; and/or shooting guidance informing the user to take images from different angles and/or distances. Further details of the prompt requesting additional information are described below with reference to figs. 6-8.
Case four: the recognition result of Top1 belongs to group four
In case four, the Top1 recognition result received from the object recognition model belongs to group four; that is, the recognition result of the object recognition model may be considered incorrect, and the screen 50 shown in fig. 6 may be displayed. The screen 50 does not display the Top1 recognition result but instead displays a prompt requesting the user to input additional information about the recognized object, which may include, for example, a prompt area requesting the user to input one or more additional images, and/or shooting guidance informing the user to take images from different angles and/or distances. In the example shown in fig. 6, the screen 50 displays the prompt "Could you please try 'Multi-image' identification?" to request the user to input one or more additional images of the recognized object. Since the Top1 recognition result is considered incorrect in this case, it is not displayed to the user in the screen 50 and may not be stored in the history of successful recognitions.
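Taken together, cases one through four select a screen based on the group to which the Top1 recognition result belongs. A minimal dispatch sketch (the group and screen labels follow the narrative above; the function itself is hypothetical):

```python
def screen_for_top1_group(group):
    """Map the group of the Top1 recognition result to the screen to
    display, per cases one through four described above."""
    dispatch = {
        "group one": "screen 10",    # case one: result shown directly
        "group two": "screen 30",    # case two: genus information first
        "group three": "screen 40",  # case three: result plus alternatives
        "group four": "screen 50",   # case four: result withheld; user
    }                                # asked for additional images
    return dispatch[group]
```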
The user may respond to the prompt, for example by clicking the button "Multi-image identification" in the screen 50, to input additional information about the recognized object in the form of one or more additional images. In one example, the screen 61 is displayed when the button "Multi-image identification" in the screen 50 is clicked. In the illustrated example, the screen 61 includes shooting guidance telling the user to take images from different angles and/or distances. The area 63 of the screen 61 is located below the shooting viewfinder and includes three small boxes requesting the user to input three additional images of the recognized object so that the object can be re-recognized. The user can operate the button 64 to take one or more of the three requested additional images. After an image is captured, its thumbnail is displayed in a small box of the area 63; during this process, an animation effect of the image shrinking into the small box may be presented. For example, after the first additional image is captured, the screen 62 may be displayed. The user may also operate the button 65 to select one or more of the three requested additional images from the album. As with captured images, a thumbnail of each selected image is displayed in a small box of the area 63. The additional images may be displayed from left to right in the order of input.
When the number of additional images input reaches a predetermined number (e.g., the requested number, three in this example), re-recognition of the recognized object may start automatically, or it may be started manually by the user operating the button 66 (i.e., an operation instructing the start of re-recognition). If the number of additional images input is less than the predetermined number, the user can start re-recognition manually by operating the button 66. The re-recognition may be based only on the one or more additional images input this time, or on the previously input first image together with the one or more additional images. The thumbnail of each additional image includes a deletion operation area (e.g., an "x" symbol in its upper right corner), and the user can delete any of the additional images before re-recognition starts. The re-recognition may be performed by the aforementioned object recognition model, which re-recognizes the recognized object based on the one or more additional images, or based on the one or more additional images and the first image, and provides a re-recognition result (which may include, for example, only the classification with the highest confidence). After the re-recognition result is obtained, a screen may be displayed to present the result to the user. One or more of the additional images may be saved in the history of successful recognitions.
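The start condition for re-recognition described above can be sketched as follows. This is a sketch under one assumption not stated in the source: that a manual start requires at least one additional image to have been input.

```python
PREDETERMINED_NUMBER = 3  # the requested number of additional images

def should_start_rerecognition(images_input, start_button_pressed):
    """Re-recognition starts automatically once the predetermined
    number of additional images has been input; with fewer images it
    starts only when the user operates the start button (button 66).
    Requiring at least one image for a manual start is an assumption."""
    if images_input >= PREDETERMINED_NUMBER:
        return True  # automatic start
    return images_input >= 1 and start_button_pressed  # manual start
```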
Fig. 9 is a block diagram that schematically illustrates at least a portion of a computer system 700 for object recognition, in accordance with an embodiment of the present disclosure. Those skilled in the art will appreciate that the system 700 is merely an example and should not be considered as limiting the scope of the present disclosure or the features described herein. In this example, the system 700 may include one or more storage devices 710, one or more electronic devices 720, and one or more computing devices 730, which may be communicatively connected to each other via a network or bus 740. The one or more storage devices 710 provide storage services for the one or more electronic devices 720 and the one or more computing devices 730. Although the one or more storage devices 710 are shown in the system 700 as blocks separate from the one or more electronic devices 720 and the one or more computing devices 730, it should be understood that the one or more storage devices 710 may actually be implemented on any of the other entities 720, 730 included in the system 700. Each of the one or more electronic devices 720 and the one or more computing devices 730 may be located at a different node of the network or bus 740 and may be capable of communicating directly or indirectly with other nodes of the network or bus 740. Those skilled in the art will appreciate that the system 700 may also include devices not shown in fig. 9, with each different device located at a different node of the network or bus 740.
The one or more storage devices 710 may be configured to store any of the data described above, including but not limited to: the first image, the additional images, the object recognition model, the sample sets and test data sets, the recognition results, the groups, the program files of the application, and the like. The one or more computing devices 730 may be configured to perform one or more of the methods according to the embodiments described above, and/or one or more steps of those methods. The one or more electronic devices 720 may be configured to provide services to a user, and may display the screens 10 to 50 and 61, 62 as described above. The one or more electronic devices 720 may also be configured to perform one or more steps of a method according to an embodiment.
The network or bus 740 may be any wired or wireless network and may include cables. The network or bus 740 may be part of the Internet, the World Wide Web, a specific intranet, a wide area network, or a local area network. The network or bus 740 may utilize standard communication protocols such as Ethernet, WiFi, and HTTP, protocols that are proprietary to one or more companies, and various combinations of the foregoing. The network or bus 740 may also include, but is not limited to, an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an Enhanced ISA (EISA) bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnect (PCI) bus.
Each of the one or more electronic devices 720 and the one or more computing devices 730 may be configured similarly to the system 800 shown in fig. 10, i.e., with one or more processors 810, one or more memories 820, and instructions and data. Each of the one or more electronic devices 720 and the one or more computing devices 730 may be a personal computing device intended for use by a user or a commercial computer device for use by an enterprise, and have all of the components typically used in connection with a personal computing device or a commercial computer device, such as a Central Processing Unit (CPU), memory (e.g., RAM and internal hard drives) that stores data and instructions, one or more I/O devices such as a display (e.g., a monitor having a screen, a touch screen, a projector, a television, or other device operable to display information), a mouse, a keyboard, a touch screen, a microphone, speakers, and/or a network interface device, among others.
One or more electronic devices 720 may also include one or more cameras for capturing still images or recording video streams, as well as all components for connecting these elements to each other. While one or more of the electronic devices 720 may each comprise a full-sized personal computing device, they may alternatively comprise a mobile computing device capable of wirelessly exchanging data with a server over a network such as the internet. For example, the one or more electronic devices 720 may be mobile phones, or devices such as PDAs with wireless support, tablet PCs, or netbooks capable of obtaining information via the internet. In another example, one or more electronic devices 720 may be wearable computing systems.
Fig. 10 is a block diagram that schematically illustrates at least a portion of a computer system 800 for object recognition, in accordance with an embodiment of the present disclosure. The system 800 includes one or more processors 810, one or more memories 820, and other components (not shown) typically present in a computer or like device. Each of the one or more memories 820 may store content accessible by the one or more processors 810, including instructions 821 executable by the one or more processors 810, and data 822 retrievable, manipulable, or stored by the one or more processors 810.
The instructions 821 may be any set of instructions to be executed directly, such as machine code, or indirectly, such as scripts, by one or more processors 810. The terms "instructions," "applications," "processes," "steps," and "programs" herein may be used interchangeably. The instructions 821 may be stored in an object code format for direct processing by the one or more processors 810, or in any other computer language, including scripts or collections of independent source code modules that are interpreted or compiled in advance, as needed. Instructions 821 may include instructions that cause, for example, one or more processors 810 to function as neural networks herein. The functions, methods, and routines of the instructions 821 are explained in more detail elsewhere herein.
The one or more memories 820 may be any temporary or non-temporary computer-readable storage medium capable of storing content accessible by the one or more processors 810, such as a hard drive, memory card, ROM, RAM, DVD, CD, USB memory, writable memory, read-only memory, etc. One or more of the one or more memories 820 may comprise a distributed storage system, where the instructions 821 and/or data 822 may be stored on a plurality of different storage devices, which may be physically located at the same or different geographic locations. One or more of the one or more memories 820 may be connected to the one or more processors 810 via a network and/or may be directly connected to or incorporated into any of the one or more processors 810.
The one or more processors 810 may retrieve, store, or modify the data 822 according to the instructions 821. The data 822 stored in the one or more memories 820 may include at least portions of one or more of the items described above as stored in the one or more storage devices 710. For example, although the subject matter described herein is not limited by any particular data structure, the data 822 may be stored in computer registers (not shown), in a relational database as a table having many different fields and records, or in an XML document. The data 822 may be formatted in any computing device-readable format, such as, but not limited to, binary values, ASCII, or Unicode. Further, the data 822 may include any information sufficient to identify the relevant information, such as numbers, descriptive text, proprietary codes, pointers, references to data stored in other memories (such as at other network locations), or information used by a function to calculate the relevant data.
The one or more processors 810 may be any conventional processor, such as a commercially available Central Processing Unit (CPU), Graphics Processing Unit (GPU), or the like. Alternatively, one or more processors 810 may also be special-purpose components, such as an Application Specific Integrated Circuit (ASIC) or other hardware-based processor. Although not required, one or more of the processors 810 may include specialized hardware components to perform particular computational processes faster or more efficiently, such as image processing of imagery.
Although one or more processors 810 and one or more memories 820 are schematically illustrated in fig. 10 within the same block, system 800 may actually comprise multiple processors or memories that may reside within the same physical housing or within different physical housings. For example, one of the one or more memories 820 may be a hard disk drive or other storage medium located in a different housing than the housing of each of the one or more computing devices (not shown) described above. Thus, references to a processor, computer, computing device, or memory are to be understood as including references to a collection of processors, computers, computing devices, or memories that may or may not operate in parallel.
In the specification and claims, the word "a or B" includes "a and B" and "a or B" rather than exclusively including only "a" or only "B" unless specifically stated otherwise.
Reference in the present disclosure to "one embodiment," "some embodiments," means that a feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment, at least some embodiments, of the present disclosure. Thus, the appearances of the phrases "in one embodiment," "in some embodiments" in various places throughout this disclosure are not necessarily referring to the same or like embodiments. Furthermore, the features, structures, or characteristics may be combined in any suitable combination and/or sub-combination in one or more embodiments.
As used herein, the word "exemplary" means "serving as an example, instance, or illustration," and not as a "model" that is to be replicated accurately. Any implementation exemplarily described herein is not necessarily to be construed as preferred or advantageous over other implementations. Furthermore, the disclosure is not limited by any expressed or implied theory presented in the preceding technical field, background, brief summary or the detailed description.
In addition, certain terminology may also be used in the following description for the purpose of reference only, and thus is not intended to be limiting. For example, the terms "first," "second," and other such numerical terms referring to structures or elements do not imply a sequence or order unless clearly indicated by the context. It will be further understood that the terms "comprises/comprising," "includes" and/or "including," when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
In this disclosure, the terms "component" and "system" are intended to refer to a computer-related entity, either hardware, a combination of hardware and software, or software in execution. For example, a component can be, but is not limited to being, a process running on a processor, an object, an executable, a thread of execution, a program, and/or the like. By way of illustration, both an application running on a server and the server can be a component. One or more components can reside within a process and/or thread of execution and a component can be localized on one computer and/or distributed between two or more computers.
Those skilled in the art will appreciate that the boundaries between the operations described above are merely illustrative. Multiple operations may be combined into a single operation, a single operation may be distributed across additional operations, and operations may be performed at least partially overlapping in time. Moreover, alternative embodiments may include multiple instances of a particular operation, and the order of operations may be altered in various other embodiments. However, other modifications, variations, and alternatives are also possible. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
In addition, embodiments of the present disclosure may also include the following examples:
1. a method for object recognition, comprising:
receiving a first classification of an identified object from a pre-established object recognition model, the object recognition model recognizing the classification of the identified object based on a first image representing at least a portion of the identified object;
in response to the first classification belonging to a first group, displaying a first screen, wherein the first screen comprises the first classification; and
in response to the first classification belonging to a second group, displaying a second screen, wherein the second screen does not include the first classification and includes a prompt requesting a user to enter additional information about the identified object, wherein,
the first group and the second group are established based on statistics of the recognition accuracy rates of the object recognition model for individual classifications in the object population to which the object recognition model is directed, wherein the first group includes individual classifications whose recognition accuracy rates satisfy a first condition, and the second group includes individual classifications whose recognition accuracy rates satisfy a second condition, wherein,
the first condition is that the recognition accuracy rate of the individual classification in its classification unit is of a first grade, the second condition is that the recognition accuracy rate of the individual classification in its classification unit is of a second grade, and the first grade is higher than the second grade.
2. The method of 1, wherein the classification unit of the first classification is species.
3. The method of 1, wherein the object recognition model provides one or more classifications of the recognized object, and wherein the first classification is the classification with the highest confidence among the one or more classifications.
4. The method of 1, further comprising:
in response to a first operation by a user while the first screen is displayed, displaying a first additional page including one or more of:
shooting guidance;
a prompt for changing the first classification; and
a classification of individuals having a morphology similar to that of the individual indicated by the first classification.
5. The method of 1, wherein the prompt requesting the user to enter additional information about the identified object comprises:
a prompt area requesting the user to input one or more additional images; and/or
shooting guidance informing the user to take images from different angles and/or distances.
6. The method of 1, further comprising:
in response to an input of the additional information while the second screen is displayed, driving the object recognition model to re-recognize the classification of the recognized object based on the first image and the additional information, or based on the additional information alone;
receiving a second classification of the re-identified object from the object identification model; and
displaying the second classification.
7. The method of 1, further comprising:
displaying a third screen in response to the first classification belonging to a third group, wherein,
the third group is established based on statistics of the recognition accuracy rates of the object recognition model for individual classifications in the targeted object population, the third group including individual classifications whose recognition accuracy rates satisfy a third condition, the third condition being that the recognition accuracy rate of the individual classification at the species level is of a third grade and its recognition accuracy rate at the genus level is of a first grade, the third grade being lower than the first grade and higher than the second grade, wherein,
the third screen includes information on the genus classification to which the first classification belongs.
8. The method of 7, wherein the third screen further includes the first classification after the information on the genus classification.
9. The method of 8, wherein, after the first classification, the third screen further comprises:
one or more classifications received from the object recognition model with a confidence lower than that of the first classification, wherein the genus classification of the one or more classifications is the same as that of the first classification; and/or
a classification of individuals having a morphology similar to that of the individual indicated by the first classification.
10. The method of 1, further comprising:
displaying a fourth screen in response to the first classification belonging to a fourth group, wherein,
the fourth group is established based on statistics of the recognition accuracy rates of the object recognition model for individual classifications in the targeted object population, the fourth group including individual classifications whose recognition accuracy rates satisfy a fourth condition, the fourth condition being that the recognition accuracy rate of the individual classification at the species level is of a third grade and its recognition accuracy rate at the genus level is also of a third grade, the third grade being lower than the first grade and higher than the second grade, wherein,
the fourth screen includes the first classification and at least one of:
one or more classifications received from the object recognition model with a confidence lower than that of the first classification, wherein the genus classification of the one or more classifications is the same as that of the first classification; and
a classification of individuals having a morphology similar to that of the individual indicated by the first classification.
11. The method of 10, wherein the fourth screen further comprises a prompt requesting a user to enter additional information about the identified object.
12. A method for object recognition, comprising:
receiving a first classification of an identified object from a pre-established object recognition model, the object recognition model recognizing the classification of the identified object based on a first image representing at least a portion of the identified object;
in response to the first classification belonging to a first group, displaying a first screen, wherein the first screen comprises the first classification; and
in response to the first classification belonging to a second group, displaying a second screen, wherein the second screen does not include the first classification and includes a prompt requesting a user to enter additional information about the identified object, wherein,
the first group and the second group are established based on statistics of the recognition accuracy rates of the object recognition model for individual classifications in the object population to which the object recognition model is directed, wherein the first group includes individual classifications whose recognition accuracy rates satisfy a first condition, and the second group includes individual classifications whose recognition accuracy rates satisfy a second condition, wherein,
the first condition is that the recognition accuracy rate of the individual classification at the species level is higher than a first threshold, the second condition is that the recognition accuracy rate of the individual classification at the species level is lower than a second threshold, and the first threshold is higher than the second threshold.
13. The method of 12, further comprising:
displaying a third screen in response to the first classification belonging to a third group, wherein,
the third group is established based on statistics of the recognition accuracy rates of the object recognition model for individual classifications in the targeted object population, the third group including individual classifications whose recognition accuracy rates satisfy a third condition, wherein,
the third condition is that the recognition accuracy rate of the individual classification at the species level falls within a first range, an upper limit of the first range being lower than the first threshold and a lower limit of the first range being higher than the second threshold,
the third screen includes the first classification, and:
a classification of individuals having a morphology similar to that of the individual indicated by the first classification; and/or
one or more classifications received from the object recognition model with a confidence lower than that of the first classification, wherein the classification unit of the one or more classifications is the same as that of the first classification.
14. The method of 13, the third group comprising a first sub-group and a second sub-group, the method further comprising:
displaying a first sub-screen in response to the first classification belonging to the first sub-group, and displaying a second sub-screen in response to the first classification belonging to the second sub-group, wherein,
the first sub-group comprises individual classifications whose recognition accuracy rates also satisfy a first sub-condition, and the second sub-group comprises individual classifications whose recognition accuracy rates also satisfy a second sub-condition,
the first sub-condition is that the recognition accuracy rate of the individual classification at the genus level is higher than the first threshold, and the second sub-condition is that the recognition accuracy rate of the individual classification at the genus level falls within the first range,
the first sub-screen includes information on the genus classification to which the first classification belongs, and the second sub-screen does not include information on the genus classification to which the first classification belongs.
15. The method of 12, wherein the first threshold is greater than or equal to 80%.
16. The method of 12, wherein the second threshold is less than or equal to 35%.
17. The method of 13, wherein the first range is 45% to 65%.
18. A method for object recognition, comprising:
receiving a first classification of an identified object from a pre-established object recognition model, the object recognition model recognizing the classification of the identified object based on a first image representing at least a portion of the identified object;
displaying information on the genus classification to which the first classification belongs, in response to the first classification belonging to a pre-established group, wherein,
the group is established based on statistics of the recognition accuracy rates of the object recognition model for individual classifications in the targeted object population, wherein the group comprises individual classifications whose recognition accuracy rate at the species level is lower than a first threshold and whose recognition accuracy rate at the genus level is higher than a second threshold.
19. The method of 18, further comprising: displaying the first classification after the information on the genus classification.
20. The method of 19, further comprising displaying, after the first classification:
one or more classifications received from the object recognition model with a confidence lower than that of the first classification, wherein the genus classification of the one or more classifications is the same as that of the first classification; and/or
a classification of individuals having a morphology similar to that of the individual indicated by the first classification.
21. A method for object recognition, comprising:
receiving a first classification of an identified object from a pre-established object recognition model, the object recognition model recognizing the classification of the identified object based on a first image representing at least a portion of the identified object;
in response to the first classification belonging to a pre-established group, not displaying the first classification and displaying a prompt requesting the user to enter additional information about the identified object, wherein,
the group is established based on statistics of the recognition accuracy rates of the object recognition model for individual classifications in the targeted object population, wherein the group includes individual classifications whose recognition accuracy rates are below a threshold.
22. The method of 21, wherein the prompt requesting the user to enter additional information about the identified object comprises:
an input area requesting the user to provide one or more additional images; and/or
shooting guidance informing the user to capture images of the identified object from different angles and/or distances.
23. The method of 22, further comprising:
in response to the input of a predetermined number of additional images, or in response to the input of fewer than the predetermined number of additional images together with an operation indicating the start of re-recognition, driving the object recognition model to re-recognize the classification of the identified object based on the first image and the additional images, or based on the additional images alone;
receiving a second classification of the identified object from the object recognition model; and
displaying the second classification.
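The flow of items 21 to 23 can be sketched as follows. This is a minimal illustration under assumed interfaces — the `model.classify`, `collect_additional_images`, and `display` callables are hypothetical placeholders, not the claimed implementation:

```python
def recognize_and_display(model, first_image, low_accuracy_group,
                          collect_additional_images, display):
    """If the model's first classification falls in the pre-established
    low-accuracy group, withhold it and prompt for additional images
    before re-recognition; otherwise display it directly.
    """
    first = model.classify([first_image])
    if first not in low_accuracy_group:
        display(first)
        return first
    # Withhold the unreliable classification and gather more views of
    # the object (e.g. from different angles and distances), then re-run
    # the model on the first image together with the additional images.
    additional = collect_additional_images()
    second = model.classify([first_image, *additional])
    display(second)
    return second
```

A coarse species-level guess on a single image can thus be refined into a confident classification once the model sees several views of the same object.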
24. An electronic device, comprising:
one or more processors configured to cause the electronic device to perform the method of any of claims 1-23.
25. An apparatus for operating an electronic device, comprising:
one or more processors configured to cause the electronic device to perform the method of any of claims 1-23.
26. A computer system for object recognition, comprising:
one or more processors; and
one or more memories configured to store a series of computer-executable instructions and computer-accessible data associated with the series of computer-executable instructions,
wherein the series of computer-executable instructions, when executed by the one or more processors, cause the computer system to perform the method of any of claims 1-23.
27. A non-transitory computer-readable storage medium having stored thereon a series of computer-executable instructions that, when executed by one or more computer systems, cause the one or more computer systems to perform the method of any one of claims 1-23.
Although some specific embodiments of the present disclosure have been described in detail by way of example, it should be understood by those skilled in the art that the foregoing examples are for purposes of illustration only and are not intended to limit the scope of the present disclosure. The various embodiments disclosed herein may be combined in any combination without departing from the spirit and scope of the present disclosure. It will also be appreciated by those skilled in the art that various modifications may be made to the embodiments without departing from the scope and spirit of the disclosure. The scope of the present disclosure is defined by the appended claims.

Claims (10)

1. A method for object recognition, comprising:
receiving a first classification of an identified object from a pre-established object recognition model, the object recognition model recognizing the classification of the identified object based on a first image representing at least a portion of the identified object;
in response to the first classification belonging to a first group, displaying a first screen, wherein the first screen comprises the first classification; and
in response to the first classification belonging to a second group, displaying a second screen, wherein the second screen does not include the first classification and includes a prompt requesting a user to enter additional information about the identified object, wherein,
the first group and the second group are established based on statistics of the recognition accuracy of the object recognition model for individual classifications in the object population targeted by the object recognition model, wherein the first group includes individual classifications whose recognition accuracy satisfies a first condition, and the second group includes individual classifications whose recognition accuracy satisfies a second condition, wherein,
the first condition is that the recognition accuracy of the individual classification is of a first grade, the second condition is that the recognition accuracy of the individual classification is of a second grade, and the first grade is higher than the second grade.
2. The method of claim 1, wherein the classification unit of the first classification is species.
3. The method of claim 1, wherein the object recognition model provides one or more classifications of the recognized object, wherein the first classification is a most confident classification of the one or more classifications.
4. A method for object recognition, comprising:
receiving a first classification of an identified object from a pre-established object recognition model, the object recognition model recognizing the classification of the identified object based on a first image representing at least a portion of the identified object;
in response to the first classification belonging to a first group, displaying a first screen, wherein the first screen comprises the first classification; and
in response to the first classification belonging to a second group, displaying a second screen, wherein the second screen does not include the first classification and includes a prompt requesting a user to enter additional information about the identified object, wherein,
the first group and the second group are established based on statistics of the recognition accuracy of the object recognition model for individual classifications in the object population targeted by the object recognition model, wherein the first group includes individual classifications whose recognition accuracy satisfies a first condition, and the second group includes individual classifications whose recognition accuracy satisfies a second condition, wherein,
the first condition is that the species-level recognition accuracy of the individual classification is higher than a first threshold, the second condition is that the species-level recognition accuracy of the individual classification is lower than a second threshold, and wherein the first threshold is higher than the second threshold.
5. A method for object recognition, comprising:
receiving a first classification of an identified object from a pre-established object recognition model, the object recognition model recognizing the classification of the identified object based on a first image representing at least a portion of the identified object;
in response to the first classification belonging to a pre-established group, displaying information on the genus to which the species indicated by the first classification belongs, wherein,
the group is established based on statistics of the recognition accuracy of the object recognition model for individual classifications in the object population targeted by the model, wherein the group comprises individual classifications whose species-level recognition accuracy is lower than a first threshold and whose genus-level recognition accuracy is higher than a second threshold.
6. A method for object recognition, comprising:
receiving a first classification of an identified object from a pre-established object recognition model, the object recognition model recognizing the classification of the identified object based on a first image representing at least a portion of the identified object;
in response to the first classification belonging to a pre-established group, not displaying the first classification and displaying a prompt requesting the user to enter additional information about the identified object, wherein,
the group is established based on statistics of the recognition accuracy of the object recognition model for individual classifications in the object population targeted by the model, wherein the group includes individual classifications whose recognition accuracy is below a threshold.
7. An electronic device, comprising:
one or more processors configured to cause the electronic device to perform the method of any of claims 1-6.
8. An apparatus for operating an electronic device, comprising:
one or more processors configured to cause the electronic device to perform the method of any of claims 1-6.
9. A computer system for object recognition, comprising:
one or more processors; and
one or more memories configured to store a series of computer-executable instructions and computer-accessible data associated with the series of computer-executable instructions,
wherein the series of computer-executable instructions, when executed by the one or more processors, cause the computer system to perform the method of any of claims 1-6.
10. A non-transitory computer-readable storage medium having stored thereon a series of computer-executable instructions that, when executed by one or more computer systems, cause the one or more computer systems to perform the method of any one of claims 1-6.
CN202110171761.3A 2021-02-08 2021-02-08 Method for object recognition, computer system and electronic equipment Active CN112784925B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110171761.3A CN112784925B (en) 2021-02-08 2021-02-08 Method for object recognition, computer system and electronic equipment
PCT/CN2022/073987 WO2022166706A1 (en) 2021-02-08 2022-01-26 Object recognition method, computer system, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110171761.3A CN112784925B (en) 2021-02-08 2021-02-08 Method for object recognition, computer system and electronic equipment

Publications (2)

Publication Number Publication Date
CN112784925A true CN112784925A (en) 2021-05-11
CN112784925B CN112784925B (en) 2024-05-31

Family

ID=75761265

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110171761.3A Active CN112784925B (en) 2021-02-08 2021-02-08 Method for object recognition, computer system and electronic equipment

Country Status (2)

Country Link
CN (1) CN112784925B (en)
WO (1) WO2022166706A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113298180A (en) * 2021-06-15 2021-08-24 杭州睿胜软件有限公司 Method and computer system for plant identification
CN113313193A (en) * 2021-06-15 2021-08-27 杭州睿胜软件有限公司 Plant picture identification method, readable storage medium and electronic device
WO2022166706A1 (en) * 2021-02-08 2022-08-11 杭州睿胜软件有限公司 Object recognition method, computer system, and electronic device
WO2024027476A1 (en) * 2022-08-03 2024-02-08 杭州睿胜软件有限公司 Identification processing method and system for plant image, and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190114333A1 (en) * 2017-10-13 2019-04-18 International Business Machines Corporation System and method for species and object recognition
CN110490086A (en) * 2019-07-25 2019-11-22 杭州睿琪软件有限公司 A kind of method and system for Object identifying result secondary-confirmation
CN110852376A (en) * 2019-11-11 2020-02-28 杭州睿琪软件有限公司 Method and system for identifying biological species
CN112270297A (en) * 2020-11-13 2021-01-26 杭州睿琪软件有限公司 Method and computer system for displaying recognition result

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112784925B (en) * 2021-02-08 2024-05-31 杭州睿胜软件有限公司 Method for object recognition, computer system and electronic equipment

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022166706A1 (en) * 2021-02-08 2022-08-11 杭州睿胜软件有限公司 Object recognition method, computer system, and electronic device
CN113298180A (en) * 2021-06-15 2021-08-24 杭州睿胜软件有限公司 Method and computer system for plant identification
CN113313193A (en) * 2021-06-15 2021-08-27 杭州睿胜软件有限公司 Plant picture identification method, readable storage medium and electronic device
WO2022262586A1 (en) * 2021-06-15 2022-12-22 杭州睿胜软件有限公司 Method for plant identification, computer system and computer-readable storage medium
WO2024027476A1 (en) * 2022-08-03 2024-02-08 杭州睿胜软件有限公司 Identification processing method and system for plant image, and storage medium

Also Published As

Publication number Publication date
CN112784925B (en) 2024-05-31
WO2022166706A1 (en) 2022-08-11

Similar Documents

Publication Publication Date Title
CN112784925B (en) Method for object recognition, computer system and electronic equipment
EP3779774B1 (en) Training method for image semantic segmentation model and server
US10242250B2 (en) Picture ranking method, and terminal
CN112270297B (en) Method and computer system for displaying recognition results
JP5214760B2 (en) Learning apparatus, method and program
JP2017168057A (en) Device, system, and method for sorting images
WO2019119396A1 (en) Facial expression recognition method and device
CN111582342A (en) Image identification method, device, equipment and readable storage medium
CN115294150A (en) Image processing method and terminal equipment
US20240203097A1 (en) Method and apparatus for training image processing model, and image classifying method and apparatus
CN112101300A (en) Medicinal material identification method and device and electronic equipment
CN110728194A (en) Intelligent training method and device based on micro-expression and action recognition and storage medium
CN112417970A (en) Target object identification method, device and electronic system
CN112347997A (en) Test question detection and identification method and device, electronic equipment and medium
CN114357206A (en) Education video color subtitle generation method and system based on semantic analysis
CN114419133A (en) Method and device for judging whether container of plant is suitable for maintaining plant
CN112801266B (en) Neural network construction method, device, equipment and medium
JP6314071B2 (en) Information processing apparatus, information processing method, and program
CN110852376B (en) Method and system for identifying biological species
CN113298180A (en) Method and computer system for plant identification
CN113705310A (en) Feature learning method, target object identification method and corresponding device
CN113255828B (en) Feature retrieval method, device, equipment and computer storage medium
CN113435942A (en) Method and computer system for estimating mineral prices
CN110490027B (en) Face feature extraction training method and system
CN113704623A (en) Data recommendation method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant