CN116343076A - Image detection method, device and equipment for harmful organisms

Info

Publication number
CN116343076A
Authority
CN
China
Prior art keywords
image
processed
information
pest
determining
Prior art date
Legal status
Pending
Application number
CN202211698966.8A
Other languages
Chinese (zh)
Inventor
薛松
冯原
辛颖
李超
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202211698966.8A
Publication of CN116343076A


Classifications

    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G06N3/04 Neural networks; Architecture, e.g. interconnection topology
    • G06N3/08 Neural networks; Learning methods
    • G06V10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G06V10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761 Proximity, similarity or dissimilarity measures
    • G06V10/764 Recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
    • G06V10/82 Recognition or understanding using pattern recognition or machine learning, using neural networks
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Catching Or Destruction (AREA)

Abstract

The disclosure provides a pest image detection method, apparatus and device, relating to the technical field of artificial intelligence, in particular to the fields of computer vision, video processing and deep learning, and applicable to scenes such as smart industry and smart city. The method comprises the following steps: acquiring an image to be processed and inputting it into a biological detection model for image processing to obtain an image processing result, wherein the image processing result includes position information of a pest and the biological detection model is used for identifying pests in an image; if the image processing result indicates that a pest exists, acquiring an adjacent frame image adjacent to the image to be processed; and determining the validity of the image processing result according to the adjacent frame image, the position information of the pest and the image to be processed, wherein the validity indicates whether a pest exists at the position information in the image to be processed, so as to improve the accuracy of pest detection.

Description

Image detection method, device and equipment for harmful organisms
Technical Field
The present disclosure relates to the technical field of artificial intelligence, in particular to the technical fields of computer vision, video processing, deep learning and the like, and is applicable to scenes such as smart industry and smart city; more particularly, it relates to a method, apparatus and device for image detection of pests.
Background
At present, to detect pests that harm human production and life, an artificial intelligence technique can be adopted to train a detection model, and pests in an input image are then detected based on the trained detection model.
However, owing to the limited accuracy of the detection model and the influence of factors such as the ambient light around the pest in the input image, the presence or absence of a pest determined by the detection model is prone to false detection. How to improve the detection accuracy for pests is therefore a problem to be solved.
Disclosure of Invention
The present disclosure provides a pest image detection method, apparatus and device for improving pest detection accuracy.
According to a first aspect of the present disclosure, there is provided an image detection method of a pest, including:
acquiring an image to be processed, and inputting the image to be processed into a biological detection model for image processing to obtain an image processing result; wherein the image processing result includes location information of the pest; the biological detection model is used for identifying harmful organisms in the image;
if the image processing result represents that the harmful organisms exist, acquiring an adjacent frame image adjacent to the image to be processed;
and determining the validity of the image processing result according to the adjacent frame image, the position information of the pest and the image to be processed, wherein the validity is used for indicating whether the pest exists at the position information in the image to be processed.
According to a second aspect of the present disclosure, there is provided an image detection device of a pest, including:
the first acquisition unit is used for acquiring an image to be processed;
the processing unit is used for inputting the image to be processed into the biological detection model for image processing to obtain an image processing result; wherein the image processing result includes location information of the pest; the biological detection model is used for identifying harmful organisms in the image;
the second acquisition unit is used for acquiring an adjacent frame image adjacent to the image to be processed if the image processing result represents that the harmful organism exists;
and the determining unit is used for determining the validity of the image processing result according to the adjacent frame image, the position information of the pest and the image to be processed, wherein the validity is used for indicating whether the pest exists at the position information in the image to be processed.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first aspect.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of the first aspect.
According to a fifth aspect of the present disclosure, there is provided a computer program product comprising: a computer program stored in a readable storage medium, from which it can be read by at least one processor of an electronic device, the at least one processor executing the computer program causing the electronic device to perform the method of the first aspect.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a schematic diagram according to a first embodiment of the present disclosure;
FIG. 2 is a schematic diagram according to a second embodiment of the present disclosure;
FIG. 3 is a schematic diagram according to a third embodiment of the present disclosure;
FIG. 4 is a schematic diagram according to a fourth embodiment of the present disclosure;
FIG. 5 is a schematic diagram according to a fifth embodiment of the present disclosure;
FIG. 6 is a schematic diagram according to a sixth embodiment of the present disclosure;
fig. 7 is a block diagram of an electronic device that may implement the method of image detection of pests of embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
At present, with the development of artificial intelligence technology, its fields of application are becoming ever wider. For example, in the field of video surveillance, artificial intelligence can be used to process monitoring images to ensure the safety of a monitored location, avoiding the hazards that arise when security personnel must watch the monitoring video for long periods. In particular, in application scenarios such as smart industry and smart city, in order to detect pests in real time, a model training method provided in the related art is typically adopted to obtain a detection model that can identify the pests present in an environment. For example, a practical application scenario may be canteen safety detection, i.e. detecting whether there are pests in a canteen that may damage food, tools, components and the like.
However, when a trained model is used to detect pests, the model inevitably carries detection errors; moreover, when the pests input into the detection model appear in different environments (for example, under different illumination conditions or against different reference objects), the recognition accuracy of the detection model is affected.
To avoid at least one of the above technical problems, the inventors of the present disclosure arrived at the following inventive concept through creative effort: input the acquired image to be processed into a biological detection model for image processing to obtain an image processing result, wherein the image processing result includes position information of a pest and the biological detection model is used for identifying pests in an image; if the image processing result indicates that a pest exists, acquire an adjacent frame image adjacent to the image to be processed; and determine the validity of the image processing result according to the adjacent frame image, the position information of the pest and the image to be processed, wherein the validity indicates whether a pest exists at the position information in the image to be processed.
Based on the above inventive concept, the disclosure provides a method, a device and equipment for detecting images of pests, which are applied to the technical fields of computer vision, video processing, deep learning and the like in artificial intelligence, and can be applied to scenes such as intelligent industry, intelligent city and the like so as to achieve the effect of improving the accuracy of the detection results of the images of the pests.
In the technical scheme of the present disclosure, the collection, storage, use, processing, transmission, provision and disclosure of the personal information of users comply with the provisions of relevant laws and regulations and do not violate public order and good morals.
Fig. 1 is a schematic diagram according to a first embodiment of the present disclosure. As shown in fig. 1, a pest image detection method according to an embodiment of the present disclosure includes:
s101, acquiring an image to be processed.
The execution body of the present embodiment may be an image detection device of a pest, and the image detection device may be a server (such as a cloud server or a local server), may also be a computer, may also be a terminal device, may also be a processor, may also be a chip, or the like, which is not limited in this embodiment.
The image to be processed in the present embodiment can be understood as an image captured of an environment in which pests are to be detected. The image to be processed may be sent to the execution body of the present embodiment by another device.
S102, inputting an image to be processed into a biological detection model for image processing to obtain an image processing result; wherein the image processing result includes location information of the pest; the biological detection model is used to identify pests in the image.
Illustratively, the biological detection model in the present embodiment can recognize whether or not a pest is contained in an image input into the model. In this embodiment, the model structure of the biological detection model is not particularly limited, and may be a common convolutional neural network, a cyclic neural network, or the like.
After the image to be processed is acquired, the image to be processed may be input into the biological detection model so that the biological detection model may output an image processing result of the image to be processed.
When the biological detection model determines that the image to be processed contains a pest, the position information corresponding to the pest is directly output as the image processing result.
In one example, if the biological detection model determines that no pest exists in the image to be processed, preset text information may be output directly, for example "the image XXX does not include a pest", where "XXX" represents the name of the image. Alternatively, a position identifier whose corresponding position value is null may be output; that is, when the value corresponding to the output position identifier (i.e. the position information) is null, no pest exists, and when the value corresponding to the position identifier is not null, a pest exists.
S103, if the image processing result indicates that the harmful organisms exist, acquiring an adjacent frame image adjacent to the image to be processed.
For example, when it is determined that a pest is included in an image to be processed based on a biological detection model, an adjacent frame image adjacent to the image to be processed may be acquired at this time.
The adjacent frame image in this embodiment may be understood as an image captured by an image capturing device corresponding to the image to be processed for the same place at an adjacent time adjacent to the capturing time of the image to be processed. The adjacent frame image may be an image captured before the capturing time of the image to be processed, or may be an image captured after the capturing time of the image to be processed. In addition, the number of adjacent frame images is not particularly limited in the present embodiment.
S104, determining the validity of the image processing result according to the adjacent frame image, the position information of the pest and the image to be processed, wherein the validity is used for indicating whether the pest exists at the position information in the image to be processed.
For example, after the adjacent frame image is acquired, it may be further determined whether there is a pest in the image to be processed, that is, the validity of the image processing result is detected, in combination with the adjacent frame image, the image to be processed, and the location information of the pest identified based on the image to be processed.
In one example, a local image at the position information of the pest in the image to be processed can be matched against the adjacent frame image. If a region image with high similarity to the local image exists in the adjacent frame image, and the distance between the position of that region image in the adjacent frame image and the position of the pest in the image to be processed is greater than a preset value, a pest exists in the image to be processed. By exploiting the movement characteristic of pests in this way, setting the preset value avoids misidentifying a static object in the image as a pest. The preset value can be determined according to the moving speed of the pest and the time interval between the image to be processed and the adjacent frame image.
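The patent gives no code for this matching strategy; the following Python sketch (OpenCV and NumPy assumed; the helper name is_detection_valid, the box format and the speed and interval parameters are all hypothetical) illustrates one way the check could be realised, using template matching as the similarity search:

```python
import cv2
import numpy as np

def is_detection_valid(image, adjacent, box, speed_px_per_s, interval_s,
                       match_thresh=0.8):
    """Hedged sketch: validate a detection by checking whether a region
    similar to the pest patch appears in the adjacent frame and, if so,
    whether it has moved farther than the preset value."""
    x, y, w, h = box                    # pest position reported by the model
    patch = image[y:y + h, x:x + w]     # local image at the pest position
    # Search the adjacent frame for the region most similar to the patch.
    result = cv2.matchTemplate(adjacent, patch, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val < match_thresh:
        # No similar region at all: assumed here to mean the object is gone,
        # i.e. it moved, so the detection is kept (the patent leaves this open).
        return True
    # Preset value: how far a real pest could move between the two frames.
    preset = speed_px_per_s * interval_s
    displacement = np.hypot(max_loc[0] - x, max_loc[1] - y)
    # A similar region that moved farther than the preset value indicates a
    # mobile object, i.e. a pest rather than a static look-alike.
    return displacement > preset
```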
It can be appreciated that in this embodiment, when the biological detection model determines that a pest exists in the image to be processed, the validity of the image recognition result can be checked against the determined position information of the pest, the image to be processed and its adjacent frame images, effectively avoiding the inaccurate determinations caused in the related art by the limited accuracy of the biological detection model.
In order for the reader to more fully understand the principles of implementation of the present disclosure, the embodiment shown in fig. 1 will now be further refined in conjunction with fig. 2 and 3 below.
Fig. 2 is a schematic diagram according to a second embodiment of the present disclosure. As shown in fig. 2, the pest image detection method according to the embodiment of the present disclosure includes:
s201, acquiring an image to be processed.
The execution body of the present embodiment may be an image detection device of a pest, and the image detection device may be a server (such as a cloud server or a local server), may also be a computer, may also be a terminal device, may also be a processor, may also be a chip, or the like, which is not limited in this embodiment.
For example, the specific principle of step S201 may be referred to step S101, which is not described herein.
S202, inputting the image to be processed into a biological detection model for image processing, and obtaining a plurality of candidate detection frames and confidence information corresponding to the candidate detection frames one by one.
In this embodiment, when the image to be processed is input into the biological detection model, the model generates a plurality of candidate detection frames for the same pest to mark its position. Each candidate detection frame corresponds to the position information of one pest and carries one piece of confidence information, where the confidence information represents the credibility with which the position information of the candidate detection frame accurately marks the position of the pest.
For example, when building the biological detection model, an image to be trained may first be acquired, where the image to be trained carries marking information including a marking frame of the pest and position information of the marking frame. The image to be trained is then input into an initial model to obtain an initial operation result output by the initial model, comprising a plurality of detection frames, predicted position information corresponding to the detection frames, and confidence information of the detection frames. A loss function corresponding to the initial model is then determined according to the initial operation result and the marking information, and the parameters of the initial model are adjusted based on the obtained loss function to obtain the biological detection model.
In one example, before step S202, the following steps may be further included: performing format conversion processing on the image to be processed to obtain the image to be processed after format conversion, wherein the format conversion processing comprises the following steps: at least one of normalization processing and image resizing processing.
Illustratively, in the present embodiment, the image to be processed also needs to be subjected to the format conversion process before being input to the biological detection model.
In one example, after the image to be processed is acquired, the pixel values in the image to be processed may be normalized, and the normalized result is then input into the biological detection model, which reduces the processing resources required by the biological detection model when processing the image. This embodiment places no specific requirements on how the normalization is implemented.
In one example, after the image to be processed is acquired, its image size may also be adjusted. It can be understood that the biological detection model places a requirement on the size of the input image in practical application; therefore, to ensure that the model can accurately acquire the information carried by the image to be processed, the image must be resized before being input into the biological detection model, which improves the detection accuracy for pests.
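A minimal preprocessing sketch (Python with OpenCV/NumPy), assuming a hypothetical 416x416 model input size and simple division-by-255 normalization; the patent fixes neither value:

```python
import cv2
import numpy as np

def preprocess(image, input_size=(416, 416)):
    """Format conversion sketch: image resizing followed by normalization."""
    resized = cv2.resize(image, input_size)          # image resizing processing
    normalized = resized.astype(np.float32) / 255.0  # normalization processing
    return normalized
```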
S203, determining an image processing result according to the plurality of candidate detection frames and confidence information corresponding to the candidate detection frames one by one; wherein the image processing result includes location information of the pest; the biological detection model is used to identify pests in the image.
In this embodiment, after the biological detection model outputs the plurality of candidate detection frames and the confidence information corresponding to each candidate detection frame, the detection frame corresponding to the pest may be selected from the plurality of candidate detection frames based on the confidence information corresponding to each candidate detection frame, and the position information corresponding to the detection frame may be used as the position information corresponding to the pest (i.e., the image processing result).
In one example, according to the relation between the confidence information and the preset confidence, the detection frame corresponding to the confidence information with the value larger than the preset confidence is used as the finally selected detection frame.
It can be understood that when the biological detection model in this embodiment detects a pest, multiple candidate detection frames with confidence information are generated for the same pest, and then the detection frames capable of accurately identifying the pest can be screened out of the multiple candidate detection frames based on the confidence information, so as to improve the detection accuracy of the pest.
In one example, step S203 includes the steps of:
a first step of step S203: determining an initial set; the initial set includes a plurality of candidate detection boxes.
In this embodiment, when determining the image processing result according to the plurality of candidate detection frames and the confidence information corresponding to each candidate detection frame, the plurality of candidate detection frames may be placed in the initial set first.
A second step of step S203: determining candidate detection frames corresponding to confidence information with the maximum value in the initial set as detection frames corresponding to pests; and deleting the detection frame corresponding to the pest in the initial set.
Third step of step S203: determining overlapping area information of candidate detection frames in the initial set and detection frames corresponding to pests; and deleting candidate detection frames corresponding to the overlapping area information with the value larger than the preset value from the initial set.
After the initial set is obtained, the candidate detection frame with the largest confidence value in the initial set may be selected as the detection frame corresponding to the pest, and the selected detection frame is removed from the initial set. Then, for each candidate detection frame remaining in the initial set, the overlapping area information between that candidate detection frame and the selected detection frame is determined. When the overlapping area information is greater than a preset value, the candidate detection frame marks the same pest as the previously determined detection frame and can be deleted from the initial set; when the overlapping area information is less than or equal to the preset value, the object in the candidate detection frame and the determined pest are not the same object. The second and third steps of step S203 are then repeated until the number of candidate detection frames in the initial set is zero, i.e. the preset condition is reached.
Fourth step of step S203: when the preset condition is reached, determining the position information of the detection frame corresponding to the pest as an image processing result.
As an example, it can be understood that, each time the second step of step S203 and the third step of step S203 are performed, a detection frame corresponding to a pest may be determined, and when a preset condition is reached, a position corresponding to the obtained detection frame corresponding to the pest is used as an image processing result.
It can be appreciated that in this embodiment, in the initial set, the detection frames marked with the same pest can be screened out of the plurality of candidate detection frames by the area of the overlapping portion of the detection frames, so that the detection frames capable of accurately calibrating the pest can be screened out of the plurality of candidate detection frames, and the detection accuracy of the pest is improved.
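The screening procedure in these steps is essentially the well-known non-maximum suppression technique. The sketch below is one plain-Python rendering of it, assuming intersection-over-union as the "overlapping area information" and an illustrative threshold value, neither of which the patent fixes:

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes, used here as
    the overlapping area information."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def select_boxes(boxes, scores, iou_thresh=0.5):
    """Sketch of steps two to four of S203: repeatedly take the candidate
    with the largest confidence, then delete candidates that overlap it."""
    keep = []
    idxs = list(np.argsort(scores)[::-1])  # initial set, sorted by confidence
    while idxs:                            # preset condition: set becomes empty
        best = idxs.pop(0)                 # largest remaining confidence
        keep.append(best)
        # Delete candidates whose overlap with the chosen frame exceeds the
        # preset value; they mark the same pest.
        idxs = [i for i in idxs if iou(boxes[best], boxes[i]) <= iou_thresh]
    return keep  # the positions of these frames form the image processing result
```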
S204, if the image processing result represents that the harmful organisms exist, acquiring an adjacent frame image adjacent to the image to be processed.
For example, the specific principle of step S204 may be referred to step S103, which is not described herein.
S205, determining the image part at the position information of the pest in the adjacent frame images as a first image.
Illustratively, in the present embodiment, when determining the validity of the image detection result output by the biological detection model, the image portion at the position information in the adjacent frame image may first be taken as the first image, based on the position information of the pest in the image detection result.
S206, determining the image part at the position information of the pest in the image to be processed as a second image.
For example, the image portion at the position information in the image to be processed is likewise taken as the second image, based on the position information of the pest in the image detection result.
S207, determining the similarity between the first image and the second image.
For example, after the first image and the second image are determined (i.e. the partial images of the adjacent frame image and the image to be processed at the same position information, where the position information is the position of the pest determined by the biological detection model), the validity of the image detection result can be determined according to the similarity between the first image and the second image.
In one example, step S207 includes the steps of:
a first step of step S207: extracting first histogram information of the first image;
a second step of step S207: extracting second histogram information of the second image;
third step of step S207: and determining the similarity according to the first histogram information and the second histogram information.
In this embodiment, when determining the similarity between the first image and the second image, the first histogram information corresponding to the first image and the second histogram information corresponding to the second image may first be acquired, and the similarity between the two images is determined by comparing the first histogram information with the second histogram information. The similarity between the histograms may be determined with a correlation comparison method, a chi-square comparison method, a Bhattacharyya distance method, or the like as provided in the related art, which this embodiment does not specifically limit.
It can be appreciated that determining image similarity by comparing histograms is simple to implement, consumes little time and few computing resources, and is therefore beneficial to improving pest detection efficiency.
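A possible realisation of this comparison with OpenCV's calcHist and compareHist, using the correlation method (one of the options the text names); the bin count and the normalization step are assumptions:

```python
import cv2

def patch_similarity(first_image, second_image, bins=32):
    """Sketch: similarity of two patches via their colour histograms."""
    hists = []
    for patch in (first_image, second_image):
        h = cv2.calcHist([patch], [0, 1, 2], None, [bins] * 3,
                         [0, 256, 0, 256, 0, 256])  # first/second histogram info
        cv2.normalize(h, h)
        hists.append(h)
    # HISTCMP_CORREL returns 1.0 for identical histograms.
    return cv2.compareHist(hists[0], hists[1], cv2.HISTCMP_CORREL)
```

In line with step S208 below, a pest would then be deemed present when this similarity falls below the preset threshold.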
S208, if the similarity is smaller than a preset threshold, determining that a pest exists at the position information in the image to be processed; wherein the validity is used to indicate whether a pest exists at the position information in the image to be processed.
In this embodiment, after the similarity between the first image and the second image is obtained, it may be compared with a preset threshold. If the similarity is smaller than the preset threshold, the adjacent frame image and the image to be processed differ at the same position information (i.e. the position information identified by the biological detection model), which indicates that the object identified by the biological detection model is mobile; it is therefore determined that a pest exists at the position information in the image to be processed.
It can be understood that this comparison avoids misidentifying a static object in the image to be processed as a pest, for example identifying a large black spot on the ground as a pest. In this embodiment, whether the object at the position information of the pest in the image to be processed is static is judged by comparing the similarity of the image portions indicated at that position in the image to be processed and in the adjacent frame image: if the similarity is smaller than the preset threshold, the object is mobile and a pest exists at the position information in the image to be processed; otherwise, no pest exists. The accuracy of pest detection is thereby improved.
In this embodiment, when the image detection result of the biological detection model indicates that a pest exists, whether the object at the position information of the pest in the image to be processed is static can be judged by comparing the similarity between the image portions indicated at that position in the image to be processed and in the adjacent frame image, avoiding static objects being reported to the user as pests. In addition, when detecting pests, the biological detection model generates a plurality of candidate detection frames with confidence information for the same pest, and the detection frames that accurately identify the pest can then be screened out based on the confidence information, improving detection accuracy. Moreover, when selecting the detection frame corresponding to the pest from the candidate detection frames, detection frames marking the same pest can be screened out through the area of the overlapping portion of the frames, so that a detection frame that accurately calibrates the pest is obtained, further improving the detection accuracy for pests.
Fig. 3 is a schematic diagram according to a third embodiment of the present disclosure. As shown in fig. 3, the pest image detection method according to the embodiment of the present disclosure includes:
s301, obtaining video stream information to be detected.
The execution body of the present embodiment may be an image detection device of a pest, and the image detection device may be a server (such as a cloud server or a local server), may also be a computer, may also be a terminal device, may also be a processor, may also be a chip, or the like, which is not limited in this embodiment.
In this embodiment, when monitoring a certain scene, the video stream information to be detected that is shot by the image acquisition device may be obtained in real time, and in order to improve the detection efficiency of the pest, frame extraction processing may be performed on the video stream information to be detected.
S302, determining extraction frequency information according to the moving speed of the living things to be monitored; the living things to be monitored are pests which frequently appear in the monitoring area corresponding to the video stream information.
For example, in this embodiment, the frame extraction frequency information used when performing frame extraction processing on the video stream information to be detected may be determined from the moving speed of the pests that frequently appear in the area to be monitored. In one example, the frequently occurring pests may be pests previously detected in the area, pests that the user wants to monitor, or pests determined from the objects in the monitored area that need to be protected.
It can be understood that the greater the moving speed of the living being to be monitored, the greater the value of the corresponding extraction frequency information.
Alternatively, in practical application, the frame extraction frequency may also be determined in combination with the processing resources available to the execution body.
S303, performing frame extraction processing on the video stream information to be detected according to the frame extraction frequency information to obtain an image to be processed.
For example, after the frame extraction frequency information is obtained, frame extraction processing may be performed on the obtained video stream information to be detected according to the frame extraction frequency information, so as to obtain a plurality of images to be processed.
It can be appreciated that in this embodiment, determining the image to be processed by performing frame extraction processing on the video stream information to be detected improves the detection efficiency for pests. Moreover, the extraction frequency can be determined according to the moving speed of the living beings to be monitored, ensuring the accuracy of pest detection while improving detection efficiency.
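A possible frame-extraction sketch (Python/OpenCV). The mapping from moving speed to extraction frequency is an assumption; the patent states only that a faster organism warrants a higher frequency:

```python
import cv2

def extract_frames(video_path, speed_px_per_s, base_rate=1.0):
    """Sketch: sample the video stream at a frequency that grows with the
    moving speed of the organism to be monitored."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0       # fall back if FPS is unknown
    # Assumed mapping: faster pests -> higher rate, capped at the native FPS.
    target_rate = min(fps, base_rate * max(1.0, speed_px_per_s / 50.0))
    stride = max(1, int(round(fps / target_rate)))
    frames, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % stride == 0:
            frames.append(frame)                  # one image to be processed
        index += 1
    cap.release()
    return frames
```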
S304, performing color space mode conversion on the image to be processed to obtain a converted image to be processed.
In this embodiment, after the image to be processed is acquired, its color space mode is converted. Because the color space modes of images obtained by different image reading methods differ when frames are extracted from the video stream information to be detected, the image to be processed is converted into a color space mode that the biological detection model can identify, ensuring that the model can accurately identify the image colors of the input image.
For example, when an image is read from a video stream in practical application, the color space mode of the obtained image may be the BGR mode, whereas the biological detection model is trained on images in the color space mode perceived by human eyes, i.e. the RGB mode. Therefore, to ensure that the model output image remains readable for subsequent users, the original BGR-mode image to be processed may be converted into an RGB-mode image before being input into the biological detection model.
It can be appreciated that in this embodiment, the color space mode of the acquired image to be processed is converted, so as to ensure that the biological detection model can accurately identify the input image, thereby improving the accuracy of detecting the harmful organism.
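As a one-line illustration (assuming the stream is decoded with OpenCV, whose frames arrive in BGR mode):

```python
import cv2

def to_model_color_space(frame):
    """Sketch: convert a BGR frame from the video stream into the RGB mode
    the biological detection model is assumed to have been trained on."""
    return cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
```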
S305, inputting the image to be processed into a biological detection model for image processing to obtain an image processing result; wherein the image processing result includes location information of the pest; the biological detection model is used to identify pests in the image.
In one example, the biological detection model is also used to identify categories of pests; the image processing result also comprises category information of the harmful organisms.
Illustratively, the biological detection model provided in this embodiment is also used to determine the category of an identified pest. For example, when generating the images to be trained, different category identifiers can be set for the different pests in each image; during subsequent training, the category identifier of the image to be trained is compared with the category identifier in the training result produced by the model, and the model parameters are adjusted based on the loss function obtained from this comparison, yielding the trained biological detection model.
In one example, when acquiring the image to be trained, a simulated pest can be placed in the real environment, and the image to be trained carrying the pest is then captured by an image acquisition device such as a camera.
In one example, different deep learning frameworks may be employed when training the biological detection model, and the model may also adopt YOLO_v3 combined with the MobileNet_v3 lightweight network to detect pests in the input image. The specific model architecture of the biological detection model can be found in the related art and is not described here.
It can be appreciated that in this embodiment, the biological detection model may also be used to detect the types of pests, so that a user may learn the types of pests that easily occur in the environment corresponding to the image to be processed, so that a subsequent user may perform pest control based on the obtained types of pests.
S306, if the image processing result indicates that the harmful organisms exist, acquiring an adjacent frame image adjacent to the image to be processed.
S307, determining the validity of the image processing result according to the adjacent frame image, the position information of the pest and the image to be processed, wherein the validity is used for indicating whether the pest exists at the position information in the image to be processed.
For example, the specific principles of step S306 and step S307 can be referred to step S103 and step S104, which are not described herein.
S308, if the validity indicates that a pest exists in the image at the position information in the image to be processed, pushing prompt information to the user, where the prompt information is at least one of text information and the image to be processed marked with the pest position.
In this embodiment, when the determined validity indicates that a pest exists in the image at the position information in the image to be processed, prompt information may be pushed to the user to warn that a pest exists in the currently monitored environment. In practical application, text information may be pushed directly, indicating for example the capture time of the image to be processed and the position information of the pests it contains. Alternatively, the image to be processed may be pushed directly, with the position of the pest identified by a detection frame and the category information of the pest marked in the image. The two kinds of prompt information may also be pushed in combination.
It can be appreciated that in this embodiment, when it is determined that the harmful organisms exist in the image to be processed, prompt information may be timely sent to the user, so that the user may timely prevent and control the harmful organisms in the monitoring environment, so as to effectively protect objects (such as food, industrial parts, equipment, etc.) in the monitoring environment.
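A sketch of how such a prompt could be assembled (the helper name annotate_and_prompt, the drawing style and the message wording are illustrative assumptions, not the patent's):

```python
import cv2

def annotate_and_prompt(image, box, category, capture_time):
    """Sketch: mark the pest position and category on the image to be
    processed and build the accompanying text information."""
    x, y, w, h = box
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 0, 255), 2)
    cv2.putText(image, category, (x, max(0, y - 5)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2)
    text = (f"Pest detected at {capture_time}: category '{category}' "
            f"at position {box}.")
    return image, text
```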
In this embodiment, the image to be processed is determined by performing frame extraction processing on the video stream information to be detected, which improves the detection efficiency for pests. In addition, the extraction frequency can be determined according to the moving speed of the living beings to be monitored, ensuring the accuracy of pest detection while improving detection efficiency. After the image to be processed is acquired, its color space mode is converted, so that the biological detection model can accurately identify the input image, improving the accuracy of pest detection. Further, the biological detection model in this embodiment can also be used to detect the category of pests, so that the user can learn which categories of pests tend to appear in the environment corresponding to the image to be processed, and can subsequently perform pest control based on the obtained category.
Fig. 4 is a schematic diagram according to a fourth embodiment of the present disclosure. As shown in fig. 4, an image detection apparatus 400 of a pest of an embodiment of the present disclosure includes:
a first acquiring unit 401 for acquiring an image to be processed;
a processing unit 402 for inputting the image to be processed into the biological detection model for image processing to obtain an image processing result; wherein the image processing result includes location information of the pest; the biological detection model is used for identifying pests in the image;
a second obtaining unit 403 for obtaining an adjacent frame image adjacent to the image to be processed if the image processing result indicates that a pest exists;
a determining unit 404 for determining the validity of the image processing result according to the adjacent frame image, the position information of the pest and the image to be processed, wherein the validity is used for indicating whether the pest exists at the position information in the image to be processed.
The device of the present embodiment may execute the technical solution in the above method, and the specific implementation process and the technical principle of the technical solution are the same and are not described herein again.
Fig. 5 is a schematic diagram according to a fifth embodiment of the present disclosure. As shown in fig. 5, an image detection device 500 of a pest of an embodiment of the present disclosure includes:
a first acquiring unit 501 configured to acquire an image to be processed;
a processing unit 502, configured to input the image to be processed into the biological detection model for image processing to obtain an image processing result; wherein the image processing result includes location information of the pest; the biological detection model is used for identifying pests in the image;
a second obtaining unit 503, configured to obtain an adjacent frame image adjacent to the image to be processed if the image processing result indicates that there is a pest;
a determining unit 504, configured to determine validity of the image processing result according to the neighboring frame image, the location information of the pest, and the image to be processed, where the validity is used to indicate whether the pest exists in the image to be processed at the location information.
In one example, the determining unit 504 includes:
a first determining module 5041, configured to determine the image portion at the position information of the pest in the adjacent frame image as a first image;
a second determining module 5042, configured to determine the image portion at the position information of the pest in the image to be processed as a second image;
a third determining module 5043, configured to determine a similarity between the first image and the second image;
a fourth determining module 5044 is configured to determine that there is a pest at the location information in the image to be processed if the similarity is determined to be less than the preset threshold.
In one example, the third determination module 5043 includes:
a first extraction submodule 50431 for extracting first histogram information of the first image;
a second extraction sub-module 50432 for extracting second histogram information of the second image;
the first determining submodule 50433 is configured to determine a similarity according to the first histogram information and the second histogram information.
In one example, processing unit 502 includes:
the processing module 5021 is used for inputting an image to be processed into the biological detection model for image processing to obtain a plurality of candidate detection frames and confidence information corresponding to the candidate detection frames one by one;
and a fifth determining module 5022, configured to determine an image processing result according to the plurality of candidate detection frames and confidence information corresponding to the candidate detection frames one by one.
In one example, the fifth determining module 5022 includes:
a second determining submodule 50221 for determining an initial set; the initial set includes a plurality of candidate detection boxes;
a third determining submodule 50222, configured to determine the candidate detection frame corresponding to the confidence information with the largest value in the initial set as the detection frame corresponding to the pest;
a first deleting submodule 50223, configured to delete a detection frame corresponding to the pest in the initial set;
a fourth determining submodule 50224, configured to determine overlapping area information of the candidate detection frames in the initial set and the detection frames corresponding to the pests;
the second deleting submodule 50225 is used for deleting candidate detection frames corresponding to the overlapping area information with the value larger than the preset value in the initial set;
and a fifth determining submodule 50226, configured to determine, as an image processing result, position information of the detection frame corresponding to the pest when the preset condition is reached.
In one example, the first obtaining unit 501 includes:
an obtaining module 5011, configured to obtain video stream information to be detected;
a sixth determining module 5012, configured to determine the frame extraction frequency information according to the moving speed of the living being to be monitored; the living being to be monitored is a pest that frequently appears in the monitoring area corresponding to the video stream information;
and the extraction module 5013 is configured to perform frame extraction processing on the video stream information to be detected according to the frame extraction frequency information, so as to obtain an image to be processed.
In one example, the apparatus further comprises:
the first conversion unit 505 is configured to perform color space mode conversion on the image to be processed to obtain a converted image to be processed before the processing unit 502 inputs the image to be processed into the biological detection model to perform image processing to obtain an image processing result.
In one example, the apparatus further comprises:
the second converting unit 506 is configured to perform format conversion processing on the image to be processed to obtain a format-converted image to be processed before the processing unit 502 inputs the image to be processed into the biological detection model to perform image processing, where the format conversion processing includes: at least one of normalization processing and image resizing processing.
In one example, the apparatus further comprises:
and the prompting unit 507 is configured to, if it is determined that the validity indicates that the pest exists in the image at the position information in the image to be processed, push prompting information to the user, where the prompting information is at least one of text information and the image to be processed marked with the pest position.
In one example, the biological detection model is also used to identify categories of pests; the image processing result also comprises category information of the harmful organisms.
The device of the present embodiment may execute the technical solution in the above method, and the specific implementation process and the technical principle of the technical solution are the same and are not described herein again.
Fig. 6 is a schematic diagram according to a sixth embodiment of the present disclosure, as shown in fig. 6, an electronic device 600 in the present disclosure may include: a processor 601 and a memory 602.
A memory 602 for storing a program. The memory 602 may include volatile memory, such as random-access memory (RAM), e.g. static random-access memory (SRAM) or double data rate synchronous dynamic random-access memory (DDR SDRAM); the memory may also include non-volatile memory, such as flash memory. The memory 602 is used to store computer programs (e.g. application programs, functional modules, etc. that implement the methods described above), computer instructions, etc., which may be stored in one or more of the memories 602 in a partitioned manner and may be called upon by the processor 601.
A processor 601 for executing a computer program stored in a memory 602 to implement the steps of the method according to the above embodiment.
Reference may be made in particular to the description of the method embodiments above.
The processor 601 and the memory 602 may be separate structures or may be integrated structures integrated together. When the processor 601 and the memory 602 are separate structures, the memory 602 and the processor 601 may be coupled by a bus 603.
The electronic device of this embodiment may execute the technical solution of the method above; the specific implementation process and technical principle are the same and are not repeated here.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
According to an embodiment of the present disclosure, there is also provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the solution provided by any one of the above embodiments.
According to an embodiment of the present disclosure, the present disclosure also provides a computer program product comprising: a computer program stored in a readable storage medium, from which at least one processor of an electronic device can read, the at least one processor executing the computer program causing the electronic device to perform the solution provided by any one of the embodiments described above.
Fig. 7 illustrates a schematic block diagram of an example electronic device 700 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 7, the apparatus 700 includes a computing unit 701 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 702 or a computer program loaded from a storage unit 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data required for the operation of the device 700 may also be stored. The computing unit 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
Various components in device 700 are connected to I/O interface 705, including: an input unit 706 such as a keyboard, a mouse, etc.; an output unit 707 such as various types of displays, speakers, and the like; a storage unit 708 such as a magnetic disk, an optical disk, or the like; and a communication unit 709 such as a network card, modem, wireless communication transceiver, etc. The communication unit 709 allows the device 700 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The computing unit 701 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 701 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 701 performs the methods and processes described above, for example, the pest image detection method. In some embodiments, the pest image detection method may be implemented as a computer software program, which is tangibly embodied on a machine-readable medium, such as the storage unit 708. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 700 via the ROM 702 and/or the communication unit 709. When the computer program is loaded into the RAM 703 and executed by the computing unit 701, one or more steps of the pest image detection method described above may be performed. Alternatively, in other embodiments, the computing unit 701 may be configured to perform the pest image detection method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general purpose programmable processor, and that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. Such program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system that overcomes the defects of difficult management and weak service scalability found in traditional physical hosts and VPS services ("Virtual Private Server", or "VPS" for short). The server may also be a server of a distributed system, or a server combined with a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved; no limitation is imposed herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (23)

1. A method of image detection of pests, comprising:
acquiring an image to be processed, and inputting the image to be processed into a biological detection model for image processing to obtain an image processing result; wherein the image processing result includes location information of the pest; the biological detection model is used for identifying harmful organisms in the image;
if the image processing result indicates that a pest exists, acquiring an adjacent frame image adjacent to the image to be processed;
and determining the validity of the image processing result according to the adjacent frame image, the position information of the pest, and the image to be processed, wherein the validity is used for indicating whether the pest exists at the position information in the image to be processed.
2. The method of claim 1, wherein determining the validity of the image processing result from the neighboring frame image, the location information of the pest, and the image to be processed comprises:
determining an image part at the position information of the pest in the adjacent frame images as a first image; determining an image part at the position information of the pest in the image to be processed as a second image;
determining a similarity between the first image and the second image;
and if the similarity is smaller than a preset threshold, determining that the pest exists at the position information in the image to be processed.
3. The method of claim 2, wherein determining a similarity between the first image and the second image comprises:
extracting first histogram information of the first image and extracting second histogram information of the second image;
and determining the similarity according to the first histogram information and the second histogram information.
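By way of illustration, the histogram-based determination of claims 2 and 3 might be sketched in Python as follows; grayscale histograms and OpenCV's correlation metric are assumptions, as the claims fix neither the histogram type nor the similarity measure. Under claim 2, a pest would be determined present when this similarity falls below the preset threshold.

```python
import cv2

def histogram_similarity(first_image, second_image, bins=64):
    hists = []
    for image in (first_image, second_image):
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        # First/second histogram information of the image part.
        hist = cv2.calcHist([gray], [0], None, [bins], [0, 256])
        hists.append(cv2.normalize(hist, hist))
    # Correlation in [-1, 1]; a low value means the image part changed
    # between adjacent frames, i.e. a pest is likely present there.
    return cv2.compareHist(hists[0], hists[1], cv2.HISTCMP_CORREL)
```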
4. A method according to any one of claims 1-3, wherein inputting the image to be processed into a biological detection model for image processing, to obtain an image processing result, comprises:
inputting the image to be processed into the biological detection model for image processing to obtain a plurality of candidate detection frames and confidence information corresponding to the candidate detection frames one by one;
and determining the image processing result according to the plurality of candidate detection frames and confidence information corresponding to the candidate detection frames one by one.
5. The method of claim 4, wherein determining the image processing result according to the plurality of candidate detection frames and confidence information for the one-to-one correspondence of the candidate detection frames comprises:
determining an initial set; the initial set includes a plurality of candidate detection boxes;
determining the candidate detection frame corresponding to the confidence information with the maximum value in the initial set as a detection frame corresponding to the pest; deleting the detection frame corresponding to the pest from the initial set; determining overlapping area information between the candidate detection frames remaining in the initial set and the detection frame corresponding to the pest; deleting, from the initial set, candidate detection frames whose overlapping area information is larger than a preset value;
and when the preset condition is met, determining the position information of the detection frame corresponding to the pest as the image processing result.
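For illustration, the selection procedure of claim 5 resembles non-maximum suppression. The sketch below assumes the overlapping area information is intersection-over-union (IoU) and the preset condition is that the initial set becomes empty; neither is fixed by the claim.

```python
import numpy as np

def select_pest_boxes(boxes, scores, overlap_threshold=0.5):
    """Select pest detection frames from candidate frames by confidence.

    boxes: (N, 4) array of (x1, y1, x2, y2); scores: (N,) confidences.
    Returns the indices of the kept detection frames.
    """
    boxes = np.asarray(boxes, dtype=float)
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    order = np.argsort(scores)[::-1]   # initial set, best first
    keep = []
    while order.size > 0:              # preset condition: set not empty
        best, rest = order[0], order[1:]
        keep.append(int(best))         # detection frame for the pest
        # Overlapping area information (IoU) with remaining candidates.
        x1 = np.maximum(boxes[best, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[best, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[best, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[best, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        iou = inter / (areas[best] + areas[rest] - inter)
        # Delete candidates whose overlap exceeds the preset value.
        order = rest[iou <= overlap_threshold]
    return keep
```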
6. The method of any of claims 1-5, wherein acquiring the image to be processed comprises:
acquiring video stream information to be detected;
determining frame extraction frequency information according to the movement speed of an organism to be monitored, wherein the organism to be monitored is a pest that frequently appears in a monitoring area corresponding to the video stream information;
and performing frame extraction processing on the video stream information to be detected according to the frame extraction frequency information to obtain an image to be processed.
7. The method according to any one of claims 1-6, further comprising, before the inputting of the image to be processed into a biological detection model for image processing to obtain an image processing result:
and performing color space mode conversion on the image to be processed to obtain a converted image to be processed.
8. The method according to any one of claims 1-7, further comprising, before the inputting of the image to be processed into a biological detection model for image processing to obtain an image processing result:
performing format conversion processing on the image to be processed to obtain the image to be processed after format conversion, wherein the format conversion processing comprises: at least one of normalization processing and image resizing processing.
9. The method of any of claims 1-8, further comprising:
if the validity indicates that a pest exists at the position information in the image to be processed, pushing prompt information to a user, wherein the prompt information is at least one of text information and the image to be processed marked with the position of the pest.
10. The method of any one of claims 1-9, wherein the biological detection model is further used to identify the category of the pest; the image processing result further comprises category information of the pest.
11. An image detection device for pests, comprising:
the first acquisition unit is used for acquiring an image to be processed;
the processing unit is used for inputting the image to be processed into the biological detection model for image processing to obtain an image processing result; wherein the image processing result includes location information of the pest; the biological detection model is used for identifying harmful organisms in the image;
the second acquisition unit is used for acquiring an adjacent frame image adjacent to the image to be processed if the image processing result indicates that a pest exists;
and the determining unit is used for determining the validity of the image processing result according to the adjacent frame image, the position information of the pest, and the image to be processed, wherein the validity is used for indicating whether the pest exists at the position information in the image to be processed.
12. The apparatus of claim 11, wherein the determining unit comprises:
a first determining module, configured to determine an image part at the position information of the pest in the adjacent frame image as a first image;
the second determining module is used for determining an image part at the position information of the pest in the image to be processed as a second image;
a third determining module, configured to determine a similarity between the first image and the second image;
and the fourth determining module is used for determining that the pest exists at the position information in the image to be processed if the similarity is determined to be smaller than a preset threshold.
13. The apparatus of claim 12, wherein the third determination module comprises:
a first extraction sub-module for extracting first histogram information of the first image;
a second extraction sub-module for extracting second histogram information of the second image;
and the first determining submodule is used for determining the similarity according to the first histogram information and the second histogram information.
14. The apparatus of any of claims 11-13, wherein the processing unit comprises:
the processing module is used for inputting the image to be processed into the biological detection model for image processing to obtain a plurality of candidate detection frames and confidence information corresponding to the candidate detection frames one by one;
and a fifth determining module, configured to determine the image processing result according to the plurality of candidate detection frames and confidence information corresponding to the candidate detection frames one to one.
15. The apparatus of claim 14, wherein the fifth determination module comprises:
a second determination submodule for determining an initial set; the initial set includes a plurality of candidate detection boxes;
a third determining submodule, configured to determine that a candidate detection frame corresponding to the confidence information with the largest value in the initial set is a detection frame corresponding to the pest;
the first deleting submodule is used for deleting the detection frame corresponding to the pest in the initial set;
a fourth determining submodule, configured to determine overlapping area information of a candidate detection frame in the initial set and a detection frame corresponding to the pest;
the second deleting submodule is used for deleting, from the initial set, candidate detection frames whose overlapping area information is larger than a preset value;
and a fifth determining submodule, configured to determine, when a preset condition is met, the position information of the detection frame corresponding to the pest as the image processing result.
16. The apparatus according to any one of claims 11-15, wherein the first acquisition unit comprises:
the acquisition module is used for acquiring video stream information to be detected;
a sixth determining module, configured to determine frame extraction frequency information according to the movement speed of an organism to be monitored, wherein the organism to be monitored is a pest that frequently appears in a monitoring area corresponding to the video stream information;
and the extraction module is used for carrying out frame extraction processing on the video stream information to be detected according to the frame extraction frequency information to obtain an image to be processed.
17. The apparatus of any of claims 11-16, further comprising:
the first conversion unit is used for performing color space mode conversion on the image to be processed, to obtain a converted image to be processed, before the processing unit inputs the image to be processed into the biological detection model for image processing to obtain an image processing result.
18. The apparatus of any of claims 11-17, further comprising:
the second conversion unit is configured to perform format conversion processing on the image to be processed, to obtain a format-converted image to be processed, before the processing unit inputs the image to be processed into the biological detection model for image processing to obtain an image processing result, wherein the format conversion processing includes at least one of normalization processing and image resizing processing.
19. The apparatus of any of claims 11-18, further comprising:
and the prompt unit is used for pushing prompt information to a user if the validity indicates that a pest exists at the position information in the image to be processed, wherein the prompt information is at least one of text information and the image to be processed marked with the position of the pest.
20. The apparatus of any one of claims 11-19, wherein the biological detection model is further used to identify the category of the pest; the image processing result further comprises category information of the pest.
21. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-10.
22. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-10.
23. A computer program product comprising a computer program which, when executed by a processor, implements the steps of the method of any of claims 1-10.
CN202211698966.8A 2022-12-28 2022-12-28 Image detection method, device and equipment for harmful organisms Pending CN116343076A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211698966.8A CN116343076A (en) 2022-12-28 2022-12-28 Image detection method, device and equipment for harmful organisms

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211698966.8A CN116343076A (en) 2022-12-28 2022-12-28 Image detection method, device and equipment for harmful organisms

Publications (1)

Publication Number Publication Date
CN116343076A true CN116343076A (en) 2023-06-27

Family

ID=86888281

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211698966.8A Pending CN116343076A (en) 2022-12-28 2022-12-28 Image detection method, device and equipment for harmful organisms

Country Status (1)

Country Link
CN (1) CN116343076A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination