CN111950344B - Biological category identification method and device, storage medium and electronic equipment - Google Patents

Biological category identification method and device, storage medium and electronic equipment

Info

Publication number
CN111950344B
CN111950344B
Authority
CN
China
Prior art keywords: category, image, living, environment, environmental
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010594653.2A
Other languages
Chinese (zh)
Other versions
CN111950344A (en)
Inventor
杨敏
崔程
魏凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202010594653.2A
Publication of CN111950344A
Application granted
Publication of CN111950344B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a biological category identification method and device, a storage medium and electronic equipment, and relates to the technical fields of computer vision and deep learning. The specific implementation scheme is as follows: an environment image of a living being is acquired, and a main category of the living being is acquired; the environment image is input into an environment recognition model to generate an environment category; and a sub-category of the living being is determined according to the main category of the living being combined with the environment category. In this way the environment category of the environment where the living being is located can be combined with the main category of the living being to identify its sub-category, realizing fine-grained identification of the living being and improving the identification effect for biological categories.

Description

Biological category identification method and device, storage medium and electronic equipment
Technical Field
The present application relates to the field of computer technologies, in particular to the fields of computer vision and deep learning, and more particularly to a biological category identification method and device, a storage medium, and an electronic device.
Background
Fine-grained recognition means accurately and finely distinguishing and recognizing the sub-categories of a given kind of object. These sub-categories, such as different species of birds, dogs or flowers, or different models of automobiles, are visually very similar and are generally difficult to tell apart without the corresponding expertise, which makes the task challenging for both people and algorithms. Fine-grained recognition is more complex and difficult than general object recognition and analysis, and has greater practical significance for guiding life and practice. At present there are many fine-grained recognition applications, such as identifying the sub-categories of living beings, which play an important assisting role in people's lives: fine-grained recognition of living beings can help people better understand them and can also contribute to the protection of rare species, so it has considerable practical value.
Disclosure of Invention
Provided are a biological category identification method and device, a storage medium, an electronic device, and a computer program product. By combining the environment category of the environment where the living being is located with the main category of the living being, the sub-category of the living being is identified, fine-grained identification of the living being is realized, and the identification effect for biological categories is improved.
According to a first aspect, there is provided a method of identifying a biological class, comprising: acquiring an environment image of a living being, and acquiring a main category of the living being; inputting the environmental image into an environmental recognition model to generate an environmental category; determining a sub-category of the living being based on the main category of the living being in combination with the environmental category.
According to the biological category identification method of the present application, an environment image of a living being is acquired and a main category of the living being is acquired; the environment image is input into an environment recognition model to generate an environment category; and a sub-category of the living being is determined according to the main category combined with the environment category. The environment category of the environment where the living being is located can thus be combined with its main category to identify the sub-category, realizing fine-grained identification of the living being and improving the identification effect for biological categories.
According to a second aspect, there is provided an identification device of a biological class, comprising: the first acquisition module is used for acquiring an environment image of the living being and acquiring a main category of the living being; a generation module for inputting the environmental image to an environmental recognition model to generate an environmental category; and the first determining module is used for determining the subcategory of the living beings according to the main category of the living beings and the environment category.
The biological category identification device acquires an environment image of a living being and acquires a main category of the living being; inputs the environment image into an environment recognition model to generate an environment category; and determines a sub-category of the living being according to the main category combined with the environment category. The environment category of the environment where the living being is located can thus be combined with its main category to identify the sub-category, realizing fine-grained identification of the living being and improving the identification effect for biological categories.
According to a third aspect, there is provided an electronic device comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of identifying a biological class according to embodiments of the present application.
According to the electronic device, an environment image of a living being is acquired and a main category of the living being is acquired; the environment image is input into an environment recognition model to generate an environment category; and a sub-category of the living being is determined according to the main category combined with the environment category. The environment category of the environment where the living being is located can thus be combined with its main category to identify the sub-category, realizing fine-grained identification of the living being and improving the identification effect for biological categories.
According to a fourth aspect, a non-transitory computer-readable storage medium storing computer instructions for causing the computer to execute the method of identifying a biological class disclosed in an embodiment of the present application is presented.
According to a fifth aspect, a computer program product is presented, comprising a computer program, which, when executed by a processor, implements a method of identifying a biological class as disclosed in an embodiment of the present application.
The technology of the present application solves the technical problems that biological category identification performs poorly when it is limited to the features of the biological subject itself and that fine-grained identification is consequently inaccurate. By combining the environment category of the environment where the living being is located with its main category, the sub-category of the living being is identified, fine-grained identification of the living being is realized, and the identification effect for biological categories is improved.
It should be understood that the description of this section is not intended to identify key or critical features of the embodiments of the application or to delineate the scope of the application. Other features of the present application will become apparent from the description that follows.
Drawings
The drawings are for better understanding of the present solution and do not constitute a limitation of the present application. Wherein:
FIG. 1 is a schematic diagram according to a first embodiment of the present application;
FIG. 2 is a schematic diagram of a photographed image according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an attention thermal response map according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a biological image according to an embodiment of the present application;
FIG. 5 is a schematic diagram according to a second embodiment of the present application;
FIG. 6 is a schematic diagram according to a third embodiment of the present application;
FIG. 7 is a schematic diagram according to a fourth embodiment of the present application;
FIG. 8 is a block diagram of an electronic device for implementing a method of identifying a biological class according to an embodiment of the application.
Detailed Description
Exemplary embodiments of the present application are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present application to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Fig. 1 is a schematic diagram according to a first embodiment of the present application. It should be noted that, the execution body of the biological class identification method in this embodiment is a biological class identification device, and the device may be implemented in a software and/or hardware manner, and the device may be configured in an electronic device, where the electronic device may include, but is not limited to, a terminal, a server, and the like.
The embodiments of the present application relate to the technical fields of computer vision and deep learning. Computer vision mainly refers to machine vision in which a camera and electronic equipment are used in place of human eyes to recognize, track and measure targets, with further graphic processing so that the result becomes an image better suited for human observation or for transmission to an instrument for detection. Deep learning learns the inherent regularities and representation hierarchies of sample data, and the information obtained during such learning helps in interpreting data such as text, images and sounds. The ultimate goal of deep learning is to enable a machine to analyze and learn like a person, and to recognize text, image and sound data.
As shown in fig. 1, the method for identifying a biological class may include:
s101: an environmental image of the living being is acquired, and a main category of the living being is acquired.
The environment image may be an image, captured in advance, of the living environment of the living being, and may show, for example, the vegetation or the sea area in that environment.
It will be appreciated that living beings of the same main category may be further divided into sub-categories; for example, the main category birds can be subdivided into finer sub-categories such as seabirds, sparrows, seagulls and magpies, and the living environments of living beings of different sub-categories may be the same or different.
In the present application, the environment category of the living environment of the living being is identified by combining computer vision with deep learning, so that the sub-category of the living being can be further determined according to its main category together with the environment category, dividing living beings into fine-grained classes efficiently and accurately.
In some embodiments, the environment image and the main category may be captured and calibrated in advance. For example, if the main category of the living being is already known to be bird and its environment image has already been captured, the environment image and the main category of the living being can be obtained directly.
In other embodiments, to acquire the main category of the living being, a biological image of the living being is acquired and identified by a classification model to obtain the main category. This effectively enriches the functional dimensions of the biological category identification method: both the sub-category and the main category of the living being are identified, improving the comprehensiveness of category identification.
The biological image may be obtained by photographing the biological subject and may describe features of the subject such as its posture and shape. The biological image can therefore be input into the classification model, which analyzes the image to extract features such as the posture and shape of the subject and, according to its classification algorithm, determines the category corresponding to those features as the main category of the living being, which is not limited here.
The classification model may be, for example, a ResNet classification network, a convolutional-neural-network-based feature extraction network. In the embodiment of the present application, multiple data augmentation methods may be added in advance, and a recognition model of the biological main category is trained on the basis of an ImageNet pre-trained model; the supervision information is the main category of the living being, quantized to 0, 1, 2, 3 and 4 during training, and this model serves as the coarse-grained classification model.
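The patent describes this coarse-grained classifier only at a high level. The following is a minimal sketch, assuming PyTorch and torchvision (not named in the patent), of how such a model could be set up: an ImageNet pre-trained ResNet-50 whose final layer is replaced by a five-way head matching the quantized main-category labels 0-4, together with a few standard augmentations (the exact augmentations used in the patent are not specified).

```python
import torch.nn as nn
from torchvision import models, transforms

NUM_MAIN_CATEGORIES = 5  # main categories quantized to 0, 1, 2, 3, 4 as described above

# Illustrative augmentations; the patent only says "multiple data augmentation methods".
train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def build_coarse_classifier() -> nn.Module:
    """ImageNet pre-trained ResNet-50 with a five-way main-category head."""
    backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    backbone.fc = nn.Linear(backbone.fc.in_features, NUM_MAIN_CATEGORIES)
    return backbone
```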
In the embodiment of the present application, the feature map corresponding to the biological image can be identified according to the classification model; the attention response value of each feature point in the feature map is determined; an attention thermal response map corresponding to the feature map is formed from the attention response values; the feature map is enhanced according to the attention thermal response map to obtain a target feature map; and the target feature map is processed by the classification model to obtain the main category. Identifying the main category from the biological image in this way improves the recognition effect and the efficiency of main-category identification, and guarantees the accuracy of the subsequent fine-grained classification.
As an example, the feature map corresponding to the biological image is identified according to the classification model and its attention response values are calculated. For instance, the last feature map of the ResNet classification model has 2048 channels and a spatial size of 14×14; averaging the feature map along the channel direction yields a 1×14×14 attention thermal response map. The normalized attention thermal response map is then mapped back to the size of the original image and multiplied element-wise with the original image, giving the enhanced region in the original photographed image, which can be regarded as the biological image of the foreground in the photographed image.
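As a concrete illustration of the channel-averaging step just described, the sketch below (assuming PyTorch; the function name and the min-max normalization are assumptions) averages a 2048×14×14 feature map into a 1×14×14 attention thermal response map, maps it back to the original image size, and multiplies it with the original image to obtain the enhanced foreground region.

```python
import torch
import torch.nn.functional as F

def attention_enhance(feature_map: torch.Tensor, image: torch.Tensor) -> torch.Tensor:
    """
    feature_map: (N, 2048, 14, 14) activations of the last ResNet stage.
    image:       (N, 3, H, W) original photographed image.
    Returns the attention-enhanced image of shape (N, 3, H, W).
    """
    attn = feature_map.mean(dim=1, keepdim=True)               # (N, 1, 14, 14) channel average
    flat = attn.flatten(1)
    lo = flat.min(dim=1).values.view(-1, 1, 1, 1)
    hi = flat.max(dim=1).values.view(-1, 1, 1, 1)
    attn = (attn - lo) / (hi - lo + 1e-6)                       # normalize each map to [0, 1]
    attn = F.interpolate(attn, size=image.shape[-2:],           # map 14x14 back to the image size
                         mode="bilinear", align_corners=False)
    return image * attn                                         # enhanced (foreground) region
```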
Referring to fig. 2, fig. 3 and fig. 4: fig. 2 is a schematic diagram of a photographed image, fig. 3 is a schematic diagram of an attention thermal response map, and fig. 4 is a schematic diagram of a biological image according to embodiments of the present application. It can be seen that the biological subject area in fig. 4 has been enhanced, giving a more complete expression of the biological characteristics, and the background in fig. 2 can be regarded as the environment image of the living being.
When acquiring the biological image of the living being, a photographed image of the living being may be obtained and segmented into foreground and background, with the foreground image taken as the biological image and the background image taken as the environment image. In other words, the method supports immediate fine-grained identification: when there is a need for fine-grained identification of a living being, the living being can be photographed directly to obtain a photographed image, which is then segmented into foreground and background, the foreground image serving as the biological image and the background image serving as the environment image.
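The patent does not prescribe a particular segmentation technique for splitting the photographed image. One simple sketch, reusing the upsampled attention map from the previous example as a mask (the 0.5 threshold is an assumption), would be:

```python
import torch

def split_foreground_background(image: torch.Tensor, attn: torch.Tensor, threshold: float = 0.5):
    """Split a photographed image into a biological (foreground) image and an
    environment (background) image using a normalized attention map as the mask."""
    mask = (attn > threshold).float()
    biological_image = image * mask            # foreground: the biological subject
    environment_image = image * (1.0 - mask)   # background: the living environment
    return biological_image, environment_image
```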
S102: the environmental image is input to an environmental recognition model to generate an environmental category.
Before the environment image is input into the environment recognition model to generate the environment category, the environment recognition model corresponding to the main category is determined according to the main category of the living being. That is, each main category has a corresponding environment recognition model, and an environment recognition model can be established in advance for each of a plurality of main categories, so that the environment recognition model corresponding to the main category is determined first and the environment image is then input into that model to generate the environment category. This guarantees the relevance between the identified environment category and the main category and improves the recognition effect of the method.
After the environment image is acquired (for example, the background image in fig. 2 may be regarded as the environment image of the living being), it may be input into the environment recognition model to generate the environment category.
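A minimal sketch of selecting the environment recognition model according to the main category, as described above (assuming PyTorch; the dictionary keyed by main category and the softmax scoring are assumptions, since the patent only states that each main category has a corresponding environment recognition model):

```python
import torch
import torch.nn as nn
from typing import Dict, Tuple

def recognize_environment(env_models: Dict[str, nn.Module],
                          main_category: str,
                          environment_image: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
    """Select the environment recognition model corresponding to the main category
    and run it on the environment image to generate the environment category."""
    env_model = env_models[main_category]      # one model per main category, built in advance
    env_model.eval()
    with torch.no_grad():
        scores = torch.softmax(env_model(environment_image), dim=1)
    return scores.argmax(dim=1), scores        # predicted environment category and its scores
```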
Referring to fig. 5, fig. 5 is a schematic diagram according to a second embodiment of the present application, before acquiring an environmental image of a living being, the method may further include:
s501: a sample environment image is acquired and a sample environment category corresponding to the sample image is determined.
S502: the sample environmental image and the corresponding sample environmental category are input to an initial environmental recognition model to generate a predicted environmental category.
S503: training the initial environment recognition model according to the predicted environment category and the sample environment category corresponding to the sample image to obtain an environment recognition model.
In the embodiment of the present application, an initial environment recognition model (for example, a deep-learning neural network model) may be trained in advance using sample environment images and sample environment categories. Compared with other machine learning methods, deep learning performs better on large data sets; by inputting the sample environment images and the corresponding sample environment categories into the initial environment recognition model to generate predicted environment categories, and training on the difference between the predicted and the sample environment categories, the trained environment recognition model can efficiently recognize the environment category of the living being.
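The patent does not publish a training procedure beyond steps S501-S503. The following is a minimal supervised training loop, assuming PyTorch, cross-entropy loss and the Adam optimizer (all assumptions), that fits the initial environment recognition model to the sample environment images and their sample environment categories.

```python
import torch
import torch.nn as nn

def train_environment_model(model: nn.Module, loader, epochs: int = 10, lr: float = 1e-3) -> nn.Module:
    """Train an initial environment recognition model on (sample environment image,
    sample environment category) pairs, as in steps S501-S503."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, env_categories in loader:            # sample environment images and labels
            predicted = model(images)                     # predicted environment category logits
            loss = criterion(predicted, env_categories)   # compare with the sample environment category
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```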
S103: the subcategories of the creature are determined based on the main category of the creature in combination with the environmental category.
In some embodiments, a correspondence table may be preconfigured that records each main category, the plurality of environment categories corresponding to that main category, and the sub-category corresponding to each combination of main category and environment category, so that, once the main category and the environment category of the living being have been determined, the sub-category of the living being can be read directly from the correspondence table.
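A minimal sketch of such a correspondence table and lookup; the entries are purely illustrative (the patent does not list concrete mappings) and reuse the bird example mentioned earlier.

```python
from typing import Dict, Tuple

# Hypothetical correspondence table: (main category, environment category) -> sub-category.
SUBCATEGORY_TABLE: Dict[Tuple[str, str], str] = {
    ("bird", "sea area"): "seabird",
    ("bird", "city"):     "sparrow",
    ("bird", "forest"):   "magpie",
}

def lookup_subcategory(main_category: str, environment_category: str) -> str:
    """Determine the sub-category directly from the preconfigured correspondence table."""
    return SUBCATEGORY_TABLE[(main_category, environment_category)]
```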
In the embodiment of the present application, there are at least two environment categories and at least two main categories. To determine the sub-category of the living being according to its main category combined with the environment category, a first score value of each environment category is obtained on the basis of the environment recognition model; a second score value corresponding to each main category is obtained, having been produced in advance by recognizing the main category of the living being with the classification model; and the sub-category of the living being is determined according to the first score values combined with the second score values.
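The patent states only that the sub-category is determined from the first score values (from the environment recognition model) together with the second score values (from the coarse-grained classification model); the additive fusion and the candidate table below are assumptions used to make the idea concrete.

```python
from typing import Dict, Tuple

def determine_subcategory(env_scores: Dict[str, float],
                          main_scores: Dict[str, float],
                          candidates: Dict[str, Tuple[str, str]]) -> str:
    """Score each candidate sub-category by adding the second score value of its main
    category to the first score value of its environment category, and return the best."""
    def fused(sub: str) -> float:
        main_cat, env_cat = candidates[sub]
        return main_scores.get(main_cat, 0.0) + env_scores.get(env_cat, 0.0)
    return max(candidates, key=fused)
```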
In this embodiment, the environment image of the living being is acquired and the main category of the living being is acquired; the environment image is input into the environment recognition model to generate the environment category; and the sub-category of the living being is determined according to the main category combined with the environment category. The environment category of the environment where the living being is located can thus be combined with its main category to identify the sub-category, realizing fine-grained identification of the living being and improving the identification effect for biological categories.
Fig. 6 is a schematic diagram according to a third embodiment of the present application.
As shown in fig. 6, the biological class identification apparatus 600 includes:
a first acquiring module 601, configured to acquire an environmental image of a living being, and acquire a main category of the living being;
a generation module 602 for inputting an environmental image into the environmental recognition model to generate an environmental category;
a first determining module 603 is configured to determine a sub-category of the living being according to the main category of the living being in combination with the environmental category.
In one embodiment of the present application, referring to fig. 7, fig. 7 is a schematic diagram according to a fourth embodiment of the present application, further comprising:
a training module 604, configured to acquire a sample environment image, determine a sample environment category corresponding to the sample image, and input the sample environment image and the corresponding sample environment category into an initial environment recognition model to generate a predicted environment category; and to train the initial environment recognition model according to the predicted environment category and the sample environment category corresponding to the sample image to obtain the environment recognition model.
In one embodiment of the present application, the first obtaining module 601 is further configured to:
acquiring a biological image of a living being;
the biological image is identified by the classification model to obtain the main category of the living being.
In one embodiment of the present application, referring to fig. 7, further includes:
a second determining module 605 is configured to determine an environment recognition model corresponding to the main category according to the main category of the living being.
In one embodiment of the present application, referring to fig. 7, further includes:
a second acquisition module 606 for acquiring a photographed image of the living being;
the image processing module 607 is configured to segment the captured image of the living being into a foreground and a background, and take the foreground image as a living being image and the background image as an environment image.
In one embodiment of the present application, the first obtaining module 601 is further configured to:
identifying a feature map corresponding to the biological image according to the classification model;
determining the attention response value of each feature point in the feature map;
forming an attention thermal response diagram corresponding to the feature diagram according to the attention response value;
performing enhancement processing on the feature map according to the attention thermal response map to obtain a target feature map;
and processing the target feature map according to the classification model to obtain a main category.
In one embodiment of the present application, the number of environmental categories is at least two, and the number of main categories is at least two, where the first determining module 603 is specifically configured to:
acquiring a first score value of each environment category based on the environment recognition model;
acquiring a second score value corresponding to each main category, wherein the second score value is obtained in advance by recognizing the main category of the living being based on the classification model;
and determining the sub-category of the living being according to the first score values combined with the second score values.
It should be noted that the foregoing explanation of the method for identifying the biological category is also applicable to the apparatus for identifying the biological category in the present embodiment, and will not be repeated here.
In this embodiment, the environment image of the living being is acquired and the main category of the living being is acquired; the environment image is input into the environment recognition model to generate the environment category; and the sub-category of the living being is determined according to the main category combined with the environment category. The environment category of the environment where the living being is located can thus be combined with its main category to identify the sub-category, realizing fine-grained identification of the living being and improving the identification effect for biological categories.
According to embodiments of the present application, there is also provided an electronic device, a readable storage medium and a computer program product.
Fig. 8 is a block diagram of an electronic device for implementing a method of identifying a biological class according to an embodiment of the application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the application described and/or claimed herein.
As shown in fig. 8, the apparatus 800 includes a computing unit 801 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 802 or a computer program loaded from a storage unit 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data required for the operation of the device 800 can also be stored. The computing unit 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
Various components in device 800 are connected to I/O interface 805, including: an input unit 806 such as a keyboard, mouse, etc.; an output unit 807 such as various types of displays, speakers, and the like; a storage unit 808, such as a magnetic disk, optical disk, etc.; and a communication unit 809, such as a network card, modem, wireless communication transceiver, or the like. The communication unit 809 allows the device 800 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 801 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 801 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The calculation unit 801 performs the respective methods and processes described above, for example, the identification method of the biological class.
For example, in some embodiments, the method of identifying a biological class may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 808. In some embodiments, part or all of the computer program may be loaded and/or installed onto device 800 via ROM 802 and/or communication unit 809. When the computer program is loaded into the RAM 803 and executed by the computing unit 801, one or more steps of the above-described method of identifying a biological class may be performed. Alternatively, in other embodiments, the computing unit 801 may be configured to perform the method of identifying the biological category in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuit systems, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor, that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out the methods of identifying biological categories of the present application may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this application, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local area networks (LANs), wide area networks (WANs), the internet, and blockchain networks.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or cloud host, which is a host product in the cloud computing service system and overcomes the defects of high management difficulty and weak service expansibility found in traditional physical hosts and VPS ("Virtual Private Server") services. The server may also be a server of a distributed system or a server that incorporates a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present application may be performed in parallel, sequentially, or in a different order, provided that the desired results of the technical solutions disclosed in the present application can be achieved, and are not limited herein.
The above embodiments do not limit the scope of the application. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present application are intended to be included within the scope of the present application.

Claims (16)

1. A method of identifying a biological class, comprising:
acquiring an environment image of a living being, and acquiring a main category of the living being;
inputting the environmental image into an environmental recognition model to generate an environmental category;
determining a sub-category of the living being based on the main category of the living being in combination with the environmental category.
2. The method of claim 1, further comprising, prior to the acquiring the environmental image of the living being:
acquiring a sample environment image and determining a sample environment category corresponding to the sample image;
inputting the sample environmental image and the corresponding sample environmental category to an initial environmental recognition model to generate a predicted environmental category; and
and training the initial environment recognition model according to the predicted environment category and the sample environment category corresponding to the sample image to obtain the environment recognition model.
3. The method of claim 1, wherein the obtaining the primary category of the living being comprises:
acquiring a biological image of the living being;
the biological image is identified by a classification model to obtain a primary class of the living being.
4. A method according to claim 1 or 3, further comprising, prior to said inputting the environmental image into an environmental recognition model to generate an environmental category:
and determining an environment recognition model corresponding to the main category according to the main category of the living being.
5. A method according to claim 3, further comprising:
acquiring a photographed image of the living being;
and cutting the foreground and the background of the shot image of the living being, taking the foreground image as the biological image and taking the background image as the environment image.
6. A method according to claim 3, wherein said identifying the biological image by a classification model to obtain the main category of the living being comprises:
identifying a feature map corresponding to the biological image according to the classification model;
determining the attention response value of each feature point in the feature map;
forming an attention thermal response diagram corresponding to the characteristic diagram according to the attention response value;
performing enhancement processing on the feature map according to the attention thermal response map to obtain a target feature map;
and processing the target feature map according to the classification model to obtain the main category.
7. A method according to claim 3, wherein the number of environment categories is at least two and the number of main categories is at least two, and wherein the determining the sub-category of the living being according to the main category of the living being in combination with the environment category comprises:
acquiring a first score value of each environment category based on the environment recognition model;
acquiring a second score value corresponding to each main category, wherein the second score value is obtained in advance by recognizing the main category of the living being based on the classification model; and
determining the sub-category of the living being according to the first score values combined with the second score values.
8. An apparatus for identifying a biological category, comprising:
the first acquisition module is used for acquiring an environment image of the living being and acquiring a main category of the living being;
a generation module for inputting the environmental image to an environmental recognition model to generate an environmental category;
and the first determining module is used for determining the subcategory of the living beings according to the main category of the living beings and the environment category.
9. The apparatus of claim 8, further comprising:
the training module is used for acquiring a sample environment image, determining a sample environment category corresponding to the sample image, and inputting the sample environment image and the corresponding sample environment category into an initial environment recognition model to generate a predicted environment category; and training the initial environment recognition model according to the predicted environment category and the sample environment category corresponding to the sample image to obtain the environment recognition model.
10. The apparatus of claim 8, wherein the first acquisition module is further configured to:
acquiring a biological image of the living being;
the biological image is identified by a classification model to obtain a primary class of the living being.
11. The apparatus of claim 8 or 10, further comprising:
and the second determining module is used for determining an environment recognition model corresponding to the main category according to the main category of the living being.
12. The apparatus of claim 10, further comprising:
the second acquisition module is used for acquiring a shooting image of the living being;
the image processing module is used for segmenting the foreground and the background of the shot image of the living being, taking the foreground image as the biological image and taking the background image as the environment image.
13. The apparatus of claim 10, wherein the first acquisition module is further to:
identifying a feature map corresponding to the biological image according to the classification model;
determining the attention response value of each feature point in the feature map;
forming an attention thermal response diagram corresponding to the characteristic diagram according to the attention response value;
performing enhancement processing on the feature map according to the attention thermal response map to obtain a target feature map;
and processing the target feature map according to the classification model to obtain the main category.
14. The apparatus of claim 10, wherein the number of environment categories is at least two and the number of main categories is at least two, and the first determining module is specifically configured to:
acquiring a first score value of each environment category based on the environment recognition model;
acquiring a second score value corresponding to each main category, wherein the second score value is obtained in advance by recognizing the main category of the living being based on the classification model; and
determining the sub-category of the living being according to the first score values combined with the second score values.
15. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.
16. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1-7.
CN202010594653.2A 2020-06-28 2020-06-28 Biological category identification method and device, storage medium and electronic equipment Active CN111950344B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010594653.2A CN111950344B (en) 2020-06-28 2020-06-28 Biological category identification method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010594653.2A CN111950344B (en) 2020-06-28 2020-06-28 Biological category identification method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN111950344A (en) 2020-11-17
CN111950344B true (en) 2023-06-27

Family

ID=73337265

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010594653.2A Active CN111950344B (en) 2020-06-28 2020-06-28 Biological category identification method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN111950344B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108959304A (en) * 2017-05-22 2018-12-07 阿里巴巴集团控股有限公司 A kind of Tag Estimation method and device
CN110084285A (en) * 2019-04-08 2019-08-02 安徽艾睿思智能科技有限公司 Fish fine grit classification method based on deep learning
CN110458233A (en) * 2019-08-13 2019-11-15 腾讯云计算(北京)有限责任公司 Combination grain object identification model training and recognition methods, device and storage medium
WO2019242222A1 (en) * 2018-06-21 2019-12-26 北京字节跳动网络技术有限公司 Method and device for use in generating information
CN110929774A (en) * 2019-11-18 2020-03-27 腾讯科技(深圳)有限公司 Method for classifying target objects in image, method and device for training model

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10692220B2 (en) * 2017-10-18 2020-06-23 International Business Machines Corporation Object classification based on decoupling a background from a foreground of an image

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108959304A (en) * 2017-05-22 2018-12-07 阿里巴巴集团控股有限公司 A kind of Tag Estimation method and device
WO2019242222A1 (en) * 2018-06-21 2019-12-26 北京字节跳动网络技术有限公司 Method and device for use in generating information
CN110084285A (en) * 2019-04-08 2019-08-02 安徽艾睿思智能科技有限公司 Fish fine grit classification method based on deep learning
CN110458233A (en) * 2019-08-13 2019-11-15 腾讯云计算(北京)有限责任公司 Combination grain object identification model training and recognition methods, device and storage medium
CN110929774A (en) * 2019-11-18 2020-03-27 腾讯科技(深圳)有限公司 Method for classifying target objects in image, method and device for training model

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Probability Fusion Decision Framework of Multiple Deep Neural Networks for Fine-Grained Visual Classification; Yang-Yang Zheng et al.; IEEE Access; pp. 122740-122757 *
Fine-grained image classification based on Xception (基于Xception的细粒度图像分类); 张潜, 桑军, 吴伟群, 吴中元, 向宏, 蔡斌; Journal of Chongqing University (重庆大学学报), No. 05; full text *

Also Published As

Publication number Publication date
CN111950344A (en) 2020-11-17

Similar Documents

Publication Publication Date Title
CN113191256B (en) Training method and device of lane line detection model, electronic equipment and storage medium
CN113379718B (en) Target detection method, target detection device, electronic equipment and readable storage medium
WO2020238054A1 (en) Method and apparatus for positioning chart in pdf document, and computer device
EP3859605A2 (en) Image recognition method, apparatus, device, and computer storage medium
CN111598164B (en) Method, device, electronic equipment and storage medium for identifying attribute of target object
CN112560874B (en) Training method, device, equipment and medium for image recognition model
CN112633276B (en) Training method, recognition method, device, equipment and medium
CN112861885B (en) Image recognition method, device, electronic equipment and storage medium
CN113780098B (en) Character recognition method, character recognition device, electronic equipment and storage medium
CN113537192B (en) Image detection method, device, electronic equipment and storage medium
CN113205041B (en) Structured information extraction method, device, equipment and storage medium
CN113361572B (en) Training method and device for image processing model, electronic equipment and storage medium
CN113947188A (en) Training method of target detection network and vehicle detection method
CN113191261B (en) Image category identification method and device and electronic equipment
CN113705362B (en) Training method and device of image detection model, electronic equipment and storage medium
CN111640123A (en) Background-free image generation method, device, equipment and medium
CN113344862A (en) Defect detection method, defect detection device, electronic equipment and storage medium
CN113792876B (en) Backbone network generation method, device, equipment and storage medium
CN113011155B (en) Method, apparatus, device and storage medium for text matching
JP2023531759A (en) Lane boundary detection model training method, lane boundary detection model training device, electronic device, storage medium and computer program
CN115457329B (en) Training method of image classification model, image classification method and device
CN111950344B (en) Biological category identification method and device, storage medium and electronic equipment
CN114881227B (en) Model compression method, image processing device and electronic equipment
CN114926322B (en) Image generation method, device, electronic equipment and storage medium
CN116052288A (en) Living body detection model training method, living body detection device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant