CN116416232A - Target detection method, target detection device, electronic equipment and computer readable storage medium

Info

Publication number
CN116416232A
Authority
CN
China
Prior art keywords: target, image, detection frame, detection, determining
Legal status
Pending
Application number
CN202310359927.3A
Other languages
Chinese (zh)
Inventor
胡天昊
尹东富
于非
Current Assignee
Guangdong Provincial Laboratory Of Artificial Intelligence And Digital Economy Shenzhen
Original Assignee
Guangdong Provincial Laboratory Of Artificial Intelligence And Digital Economy Shenzhen
Application filed by Guangdong Provincial Laboratory Of Artificial Intelligence And Digital Economy Shenzhen
Priority to CN202310359927.3A
Publication of CN116416232A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • G06T7/001 Industrial image inspection using an image reference approach
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The application is applicable to the technical field of visual algorithms, and provides a target detection method, a target detection device, an electronic device and a computer-readable storage medium. The target detection method includes: obtaining a first image and a second image; detecting a target area containing the same target in the first image and the second image; determining a rate of change of the pixels in the target area; and determining a target detection result according to the rate of change. By processing the captured product images, the method determines the rate of change of the detected target area across the two images and derives the detection result for the products in the images from that rate of change. Product performance can thus be tested in batches from captured images alone, which, compared with the existing approach of running software tests on spot-checked products, improves the efficiency of product performance testing.

Description

Target detection method, target detection device, electronic equipment and computer readable storage medium
Technical Field
The application belongs to the technical field of visual algorithms, and particularly relates to a target detection method, a target detection device, electronic equipment and a computer readable storage medium.
Background
With the continuous development of the manufacturing industry, a manual monitoring step is generally arranged in the production process to ensure the quality of an automated production line: workers watch equipment or instruments for changes and identify potentially abnormal products in time.
At present, to address the high cost and low accuracy of manually monitoring a production line, artificial intelligence techniques have been introduced in the field of visual algorithms in the form of automated test systems. Such a system performs image recognition on monitored images to identify products with abnormal appearance; for functional testing, products on the production line are spot-checked and the sampled products are then put through functional tests, thereby detecting product performance.
However, this existing approach to functional testing requires one-by-one software tests on the spot-checked products and therefore suffers from low detection efficiency.
Disclosure of Invention
The embodiments of the present application provide a target detection method, a target detection device, an electronic device and a computer-readable storage medium, with the aim of improving product detection efficiency.
In a first aspect, an embodiment of the present application provides a target detection method, including:
obtaining a first image and a second image;
detecting a target area containing the same target in the first image and the second image;
determining a rate of change of pixels in the target area;
and determining a target detection result according to the change rate.
By way of example, a plurality of image acquisition devices are arranged on an automated production line to photograph a plurality of products simultaneously, and the two successively captured images serve as the first image and the second image. The products on the line are set to play a test video that continuously switches its display picture. It should be understood that the capture interval between the first image and the second image is set to be longer than the switching interval of the product's display picture; the target areas belonging to the same product can then be identified in the two images, and whether that product has a fault in switching its display picture can be confirmed from the calculated rate of change of the pixels in its target area.
In this embodiment, the captured images of products on the production line are processed, the rate of change of each detected target area across the two images is determined, and products with display problems are identified from that rate of change. The display performance of products can thus be tested in batches from captured images, improving the efficiency of product performance testing.
In a possible implementation manner of the first aspect, the detecting a target area in the first image and the second image, where the target area includes the same target, includes:
identifying at least one first detection frame contained in the first image and identifying at least one second detection frame contained in the second image;
obtaining at least one detection frame group, wherein each detection frame group comprises one first detection frame and one second detection frame;
determining the matching degree of each detection frame group;
determining a target detection frame group according to the matching degree;
and determining a target area corresponding to the target detection frame group.
It should be appreciated that the first image and the second image each capture a plurality of products. For example, a detection frame for each product contained in the two images may be detected with a target detection model. Because the two images photograph the same products at the same position on the production line, the detection frames in the two images all belong to the same batch of products. The target detection frame groups belonging to the same product can then be identified from the matching degree between detection frames of the two images, thereby determining the target area belonging to the same product in both images.
In this embodiment, the detection frames of all products contained in the first image and the second image are identified, the matching degree of each detection frame group across the two images is calculated, and the target area belonging to the same product is determined from that matching degree. Since a target area is detected for every product in the two images, the display performance of the products on the line can be tested in batches, improving detection efficiency.
In a possible implementation manner of the first aspect, the determining the matching degree of each detection frame group includes:
determining, for each detection frame group, the overlapping region of the first detection frame and the second detection frame;
taking the ratio of the overlapping region to the first detection frame as a first overlap ratio, and the ratio of the overlapping region to the second detection frame as a second overlap ratio;
and determining the matching degree of the detection frame group according to the first overlap ratio and the second overlap ratio.
It should be understood that after the detection frames of all products in the two images are identified, each first detection frame from the first image is paired with a second detection frame from the second image, and the target areas belonging to the same product are identified by comparing the matching degrees of the resulting detection frame groups.
In this embodiment, the matching degree of the first and second detection frames in a group is calculated from the overlapping region of the two frames. Compared with the intersection-over-union (IoU) calculation conventionally used in target recognition, this can still correctly decide whether two detection frames belong to the same target when one of them is occluded or missing, improving the accuracy of detecting target areas that belong to the same product.
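In symbols (notation introduced here for illustration, not from the original): for a detection frame group with first frame $B_1$ and second frame $B_2$, the two ratios are

$$r_1 = \frac{|B_1 \cap B_2|}{|B_1|}, \qquad r_2 = \frac{|B_1 \cap B_2|}{|B_2|},$$

whereas the conventional criterion is $\mathrm{IoU} = |B_1 \cap B_2| / |B_1 \cup B_2|$. If $B_2$ captures only a sliver of an occluded target, $|B_1 \cup B_2|$ stays large and IoU collapses, while $r_2$ remains close to 1.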
In a possible implementation manner of the first aspect, the determining the target area corresponding to the target detection box group includes:
determining, for the target detection frame group, a first target overlapping region contained in the first detection frame and a second target overlapping region contained in the second detection frame, wherein the first target overlapping region and the second target overlapping region coincide in position;
and taking the first target overlapping area and the second target overlapping area as target areas corresponding to the target detection frame group.
It should be understood that once the first detection frame and the second detection frame in a target detection frame group are confirmed to belong to the same detected target, the overlapping portions of the two frames are taken as the target area on which the pixel change rate is computed; that is, the target area consists of the first target overlapping region of the first detection frame and the second target overlapping region of the second detection frame.
In a possible implementation manner of the first aspect, the determining a rate of change of pixels in the target area includes:
obtaining a first pixel matrix corresponding to the first target overlapping region;
obtaining a second pixel matrix corresponding to the second target overlapping region;
and determining the change rate of the pixels in the target area according to the difference value between the first pixel matrix and the second pixel matrix.
It should be understood that the products under test play a video that continuously switches its display picture. Therefore, after the target area belonging to the same product is identified, whether the display picture is switching can be judged from the rate of change of the pixels in that target area across the two images.
In this embodiment, the pixel difference between the first target overlapping region of the first detection frame and the second target overlapping region of the second detection frame is calculated to confirm whether the product corresponding to the current target area fails to switch its display picture, thereby testing display performance directly from the captured images.
In a possible implementation manner of the first aspect, the determining a rate of change of pixels in the target area according to a difference between the first pixel matrix and the second pixel matrix includes:
obtaining, from the difference between the first pixel matrix and the second pixel matrix, the change value of each pixel point contained in the target area and the number of pixel points contained in the target area;
taking the pixel point with the change value larger than a preset pixel threshold value as a target pixel point;
and taking the ratio of the number of the target pixel points to the number of the pixel points contained in the target area as the change rate of the pixels in the target area.
In this embodiment, pixel points whose change value is greater than a preset pixel threshold are taken as target pixel points, and the rate of change of the target area is determined from the proportion of target pixel points within the target area. This suppresses the interference of ambient noise on the displayed image and improves the accuracy of the calculated pixel change rate.
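Written out (with symbols introduced here for clarity), if $M_1$ and $M_2$ are the two pixel matrices over the target area $\Omega$ and $\tau$ is the preset pixel threshold, the rate of change is

$$\text{rate} = \frac{\bigl|\{\, p \in \Omega : |M_1(p) - M_2(p)| > \tau \,\}\bigr|}{|\Omega|}.$$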
In a possible implementation manner of the first aspect, the identifying at least one first detection box included in the first image includes:
and identifying at least one first detection frame contained in the first image according to a target detection model, wherein the target detection model is trained on a labeled training set, the labeled training set contains at least one labeled sample image, and each rectangular labeling frame in a labeled sample image is no smaller than a preset detection frame.
In this embodiment, the closer a product is to the camera, the larger its detection frame in the image, and the larger the detection frame, the higher the accuracy of performance recognition by the present method. Therefore, when training the target detection model, the labeling frames of the products in the sample images can be restricted to those no smaller than the preset detection frame. In other words, the training set is filtered so as to guide the model to detect only targets whose detection frame exceeds the preset size, i.e. the products photographed at closer range, which safeguards the accuracy of the detected performance results.
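As an illustrative sketch of this training-set filtering (the label format and the minimum width/height values are assumptions, not part of the disclosure):

```python
def keep_large_boxes(labels, min_w=64, min_h=64):
    """Drop annotation boxes smaller than the preset detection frame,
    so the model is guided to detect only near, large targets."""
    return [
        (cls, x0, y0, x1, y1)
        for (cls, x0, y0, x1, y1) in labels
        if (x1 - x0) >= min_w and (y1 - y0) >= min_h
    ]

# Example: a 32x32 box is filtered out, a 120x90 box is kept.
labels = [("screen", 10, 10, 42, 42), ("screen", 100, 50, 220, 140)]
print(keep_large_boxes(labels))  # [('screen', 100, 50, 220, 140)]
```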
In a second aspect, an embodiment of the present application provides a target detection apparatus, including:
the acquisition module is used for acquiring a first image and a second image;
the detection module is used for detecting a target area containing the same target in the first image and the second image;
a first determining module for determining a rate of change of pixels in the target area;
and the second determining module is used for determining a target detection result according to the change rate.
In a third aspect, an embodiment of the present application provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the target detection method according to any one of the first aspects when the processor executes the computer program.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium storing a computer program which, when executed by a processor, implements the object detection method according to any one of the first aspects.
In a fifth aspect, embodiments of the present application provide a computer program product, which, when run on an electronic device, causes the electronic device to perform the object detection method of any one of the first aspects above.
It will be appreciated that the advantages of the second to fifth aspects may be found in the relevant description of the first aspect, and are not described here again.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required for the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a target detection method according to an embodiment of the present application;
FIG. 2 is a schematic illustration of an image provided in an embodiment of the present application;
FIG. 3 is a second image diagram according to an embodiment of the present disclosure;
FIG. 4 is a third image schematic provided in an embodiment of the present application;
fig. 5 is a second flow chart of the target detection method according to the embodiment of the present application;
fig. 6 is a schematic structural diagram of an object detection device according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system configurations, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in this specification and the appended claims, the term "if" may be interpreted as "when", "once", "in response to determining" or "in response to detecting", depending on the context. Similarly, the phrases "if it is determined" or "if [a described condition or event] is detected" may be interpreted, depending on the context, to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
In addition, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used merely to distinguish between descriptions and are not to be construed as indicating or implying relative importance.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise.
With the continuous development of the manufacturing industry, and to address the high cost and low accuracy of manually monitoring production lines, artificial intelligence techniques have been introduced in the field of visual algorithms in the form of automated test systems. Such a system performs image recognition on monitored images and identifies products with abnormal appearance. For performance testing of products on the line, products are generally spot-checked, the sampled products are put through one-by-one software tests, and the performance of the products currently produced is inferred from the test results of the sampled products. However, this approach of detecting product performance with test software suffers from low detection efficiency.
To solve this technical problem, the present application processes the captured product images, determines the rate of change of each detected target area across the two captured images, and determines the detection result of the products in the images from that rate of change, so that product performance can be tested in batches from captured images. Compared with the existing approach of running software tests on spot-checked products, this improves the efficiency of product performance testing.
Referring to fig. 1, a flow chart of a target detection method according to an embodiment of the present application is shown. By way of example, and not limitation, the method may include the steps of:
s101: a first image and a second image are obtained.
In this embodiment, an automated monitoring system composed of a plurality of image acquisition devices is arranged on the product's production line, and these devices photograph a plurality of products at a preset time interval.
In this embodiment, from the images of the same production area acquired by the same image acquisition device, two successively captured images are taken as the first image and the second image. It should be understood that the first image and the second image cover the same production-line area, i.e. they are images of the same batch of products at different times.
In this embodiment, the product is set to play the test video on the production line. Specifically, the test video is a video in which display frames are continuously switched. It will be appreciated that the time interval for acquiring the first image and the second image is set to be greater than the switching time interval of the product display.
As an example, the automated monitoring system of this embodiment is used to test the display function of devices on the production line. Fig. 2 is a schematic diagram of an image according to an embodiment of the present application. As shown in fig. 2, an image acquisition device of the automated monitoring system photographs a plurality of devices on the line. Specifically, after the image acquisition device is started, it captures an image every second. The image taken at the 1st second is taken as the first image and the image taken at the 2nd second as the second image; the two images contain the display interfaces of the devices at different moments.
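A minimal capture sketch of this setup might look as follows (OpenCV-based; the camera index and the 1-second interval follow the example above, everything else is an assumption):

```python
import time
import cv2

cap = cv2.VideoCapture(0)        # assumed camera index
ok1, first_image = cap.read()    # image at the 1st second
time.sleep(1.0)                  # capture interval > display switch interval
ok2, second_image = cap.read()   # image at the 2nd second
cap.release()
assert ok1 and ok2, "capture failed"
```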
S102: target areas containing the same target in the first image and the second image are detected.
In this embodiment, preset images contained in the acquired first and second images are identified; the preset image is identified with a target detection model implemented on the basis of the YOLOv5 algorithm. Specifically, in this embodiment the preset image is the device display interface.
In this embodiment, the target detection model is obtained as follows: first, a number of product images photographed on the production line are collected as a training set; second, the images in the training set are annotated, and a single-stage model is trained on the annotated training set, so that the trained target detection model can identify and mark the labeling frame of any device display interface present in an image.
Fig. 3 is a second schematic image diagram according to an embodiment of the present application. As shown in fig. 3, the trained target detection model identifies the device display interfaces in the first image and the second image, yielding at least one detection frame in each image. It should be understood that each detection frame in the first and second images corresponds to one device display interface.
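A hedged sketch of this detection step, using the public YOLOv5 hub interface (the custom weights file display_interface.pt is a hypothetical name):

```python
import torch

# Load a YOLOv5 model fine-tuned to detect device display interfaces.
model = torch.hub.load('ultralytics/yolov5', 'custom', path='display_interface.pt')

results1 = model(first_image)   # first image from the capture step
results2 = model(second_image)  # second image from the capture step

# Each row of .xyxy: [x0, y0, x1, y1, confidence, class]
boxes1 = results1.xyxy[0].cpu().numpy()[:, :4]  # first detection frames
boxes2 = results2.xyxy[0].cpu().numpy()[:, :4]  # second detection frames
```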
In one possible implementation, after the detection frames in the two images are identified, the detection frames in the first image are taken as first detection frames and those in the second image as second detection frames, and an intersection-over-union (IoU) algorithm is applied to all first and second detection frames to identify the frames that belong to the same target in the two images. It should be understood that using IoU to associate detection frames in a target detection task is prior art and is not described here.
As an example, after the detection frames in fig. 3 are identified, the first image contains first detection frames A1, B1 and C1 and the second image contains second detection frames A2, B2 and C2. Applying the IoU algorithm to all first detection frames and all second detection frames identifies that first detection frame A1 and second detection frame A2 contain the same target A, first detection frame B1 and second detection frame B2 contain the same target B, and first detection frame C1 and second detection frame C2 contain the same target C.
In this embodiment, for a first detection frame and a second detection frame that belong to the same target, the portion where the region of the first detection frame and the region of the second detection frame overlap is taken as the target area of the corresponding target. It should be appreciated that this step yields a target area for every target identified in the first and second images.
As an example, as shown in fig. 3, after first detection frame A1 and second detection frame A2 are recognized to contain the same target A, the portion where the region of A1 and the region of A2 overlap is taken as the target area of target A. The IoU algorithm likewise yields the target areas corresponding to targets B and C.
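For reference, a standard intersection-over-union implementation of the kind this baseline relies on (a generic sketch, not code from the disclosure):

```python
def iou(box_a, box_b):
    """Intersection over union of two [x0, y0, x1, y1] boxes."""
    x0, y0 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x1, y1 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    if x1 <= x0 or y1 <= y0:
        return 0.0
    inter = (x1 - x0) * (y1 - y0)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```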
S103: the rate of change of pixels in the target area is determined.
In this embodiment, it should be understood that after S102 identifies the target areas belonging to the same product in the two images, and since the devices under test are set to display a switching picture, the rate of change of the pixels in the area corresponding to the same target across the two images can be calculated to identify whether the corresponding device has a picture-switching fault.
In one possible implementation manner, for the target area corresponding to each target, the overlapping area in the first detection frame is taken as a first target overlapping area, and the overlapping area in the second detection frame is taken as a second target overlapping area. In this embodiment, a first pixel matrix corresponding to a first target overlapping region is obtained; obtaining a second pixel matrix corresponding to a second target overlapping region; and determining the change rate of the pixels in the target area according to the difference value between the first pixel matrix and the second pixel matrix.
For example, for target A, the portion of first detection frame A1 overlapping second detection frame A2 is denoted first target overlapping region a1, and the portion of A2 overlapping A1 is denoted second target overlapping region a2. A first pixel matrix is then obtained from the gray values of all pixel points in a1, and a second pixel matrix from the gray values of all pixel points in a2. The pixel change rate of target A across the two images can thus be judged by computing on the first and second pixel matrices.
In one possible embodiment, the specific steps for determining the pixel change rate of target A across the two images from the first and second pixel matrices are as follows: obtain, from the difference between the first pixel matrix and the second pixel matrix, the change value of each pixel point contained in the target area and the number of pixel points contained in the target area; take the pixel points whose change value is greater than a preset pixel threshold as target pixel points; and take the ratio of the number of target pixel points to the number of pixel points contained in the target area as the rate of change of the pixels in the target area.
In this embodiment, in order to eliminate the interference of ambient noise on the displayed image, the preset pixel threshold is set according to the ambient noise; pixel points whose difference between the first and second pixel matrices exceeds the preset pixel threshold are taken as target pixel points with pixel variation, and the ratio of the number of target pixel points to the number of pixel points in the overlapping region is taken as the rate of change of the pixels in the target area.
As an example, after the first pixel matrix of first target overlapping region a1 and the second pixel matrix of second target overlapping region a2 are obtained, their difference is computed to obtain a pixel difference matrix. The number of target pixel points in the difference matrix whose values exceed the preset pixel threshold is found to be 80, and the number of pixel points contained in a1 (and a2) is 100, so the rate of change of the pixels in the target area is finally calculated to be 80%.
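A sketch of this computation in NumPy (the crops are the grayscale pixel matrices of regions a1 and a2; the threshold value 30 is an assumption):

```python
import numpy as np

def pixel_change_rate(crop1: np.ndarray, crop2: np.ndarray,
                      pixel_threshold: int = 30) -> float:
    """Fraction of pixels in the target area whose gray value changed
    by more than the preset pixel threshold between the two images."""
    diff = np.abs(crop1.astype(np.int16) - crop2.astype(np.int16))
    return np.count_nonzero(diff > pixel_threshold) / diff.size

# Worked check mirroring the example above: 80 changed pixels out of
# 100 in the overlapping regions give a change rate of 0.8, i.e. 80%.
```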
S104: and determining a target detection result according to the change rate.
In this embodiment, whether the picture displayed in the target area has changed is judged from the rate of change of its pixels. In one possible implementation, when the rate of change of the pixels in the target area is smaller than a preset minimum change rate, the target detection result for the current target area is that the corresponding device has a display-picture-switching fault. The preset minimum change rate is set according to the switched pictures; a rate of change below it indicates that the current target area is changing too little, i.e. the picture displayed by the corresponding target has a switching fault.
As an example, the preset minimum change rate is set to 20%. After the pixel change rate of the target area of target A is calculated to be 80%, it exceeds the preset minimum change rate of 20%, so the target detection result for target A is that the test succeeds.
Likewise, after the pixel change rate of the target area corresponding to target B is calculated to be 15%, which is smaller than the preset minimum change rate of 20%, the target detection result for target B is that a fault exists.
In one possible implementation, when the above steps confirm that a faulty target detection result appears in the first and second images, the first and second detection frames of the faulty target area are rendered as highlighted labeling frames, and the highlighted first and second images are transmitted to the backend so that the device with the picture-switching fault can be inspected in time.
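A minimal sketch of the highlighting step (OpenCV rectangle drawing; the red color and line thickness are assumptions):

```python
import cv2

def highlight_fault(image, box, color=(0, 0, 255), thickness=3):
    """Draw a highlighted labeling frame around a faulty target area
    before the image is sent to the backend for inspection."""
    x0, y0, x1, y1 = map(int, box)
    cv2.rectangle(image, (x0, y0), (x1, y1), color, thickness)
    return image
```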
As can be seen from the above embodiments, the target area containing the same target is detected in the obtained first and second images, the rate of change of the pixels in the target area is determined, and the target detection result is finally determined from that rate of change. By processing the captured product images in this way, product performance can be tested in batches from captured images.
It should be appreciated that the display areas of the devices under test in the first and second images obey the near-large, far-small rule. Fig. 4 is a third schematic image diagram provided in an embodiment of the present application. As shown in fig. 4, when targets A, B and C lie at increasing distances from the image acquisition device, their labeling frames in the two images shrink from near to far, and a nearer target may occlude a farther one. In that case the target areas belonging to the same target can still be identified with the IoU algorithm of the embodiment of fig. 1.
However, when a target photographed in the first or second image is occluded or missing, the IoU algorithm of the embodiment of fig. 1 may wrongly group regions that do not belong to the same target. As shown in fig. 4, the first image contains first detection frames A1, B1 and C1, while the second image contains only second detection frames A2 and C2; that is, no detection frame of target B is captured in the second image. In that case first detection frame B1 has no true counterpart, and its overlap with second detection frame A2 may exceed its overlap with any other frame, so the IoU criterion identifies B1 and A2 as belonging to the same target. The identified target area is then wrong, and the subsequently determined target detection result is inaccurate.
In order to ensure that the identified target areas belong to the same target, in one possible implementation manner, after the first image and the second image are obtained, the implementation process for determining the target area provided in this embodiment may specifically include:
s501: at least one first detection frame contained in the first image is identified, and at least one second detection frame contained in the second image is identified.
S502: at least one detection frame group is obtained, wherein each detection frame group comprises a first detection frame and a second detection frame.
The method and effect achieved in S501 to S502 are the same as the method and effect achieved in S102 in the embodiment of fig. 1, and are not described here again.
S503: and determining the matching degree of each detection frame group.
In one possible implementation, the two detection frames belonging to the same target are determined by calculating the matching degree of each detection frame group. In this embodiment, for each detection frame group, the overlapping region of the first detection frame and the second detection frame is determined; the ratio of the overlapping region to the first detection frame is taken as the first overlap ratio and the ratio of the overlapping region to the second detection frame as the second overlap ratio; and the matching degree of the detection frame group is determined from the first overlap ratio and the second overlap ratio.
Illustratively, the IOEA (intersection over each area) method is used to calculate the matching degree of each detection frame group. Taking the matching degree as, for example, the larger of the two overlap ratios, the IOEA can be written as in (1):

$$\mathrm{IOEA} = \max\left(\frac{\text{intersection}}{\text{area1}},\ \frac{\text{intersection}}{\text{area2}}\right) \tag{1}$$

where intersection is the area of the overlapping region of the first detection frame in the first image and the second detection frame in the second image, area1 is the area of the first detection frame, and area2 is the area of the second detection frame.

Specifically, let $(x_0^{b_1}, y_0^{b_1})$ be the upper-left corner and $(x_1^{b_1}, y_1^{b_1})$ the lower-right corner of the first detection frame in the first image, and let $(x_0^{b_2}, y_0^{b_2})$ and $(x_1^{b_2}, y_1^{b_2})$ be the upper-left and lower-right corners of the second detection frame in the second image. With $(x_0, y_0)$ the upper-left corner and $(x_1, y_1)$ the lower-right corner of the overlapping region, intersection, area1 and area2 are given by (2), (3) and (4), respectively:

$$\text{intersection} = (x_1 - x_0)(y_1 - y_0) \tag{2}$$

$$\text{area1} = (x_1^{b_1} - x_0^{b_1})(y_1^{b_1} - y_0^{b_1}) \tag{3}$$

$$\text{area2} = (x_1^{b_2} - x_0^{b_2})(y_1^{b_2} - y_0^{b_2}) \tag{4}$$
In this embodiment, even when the detected target is occluded or missing in the first image or the second image, the IOEA method provided in the present application can still accurately detect the target area belonging to the same target.
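A sketch of the IOEA computation following equations (1) to (4) (combining the two ratios via max follows the reconstruction above and is an assumption):

```python
def ioea(box_a, box_b):
    """Intersection over each area for two [x0, y0, x1, y1] boxes:
    the overlap divided by each box's own area, combined via max."""
    x0, y0 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x1, y1 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    if x1 <= x0 or y1 <= y0:
        return 0.0
    inter = (x1 - x0) * (y1 - y0)                          # equation (2)
    area1 = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])  # equation (3)
    area2 = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])  # equation (4)
    return max(inter / area1, inter / area2)               # equation (1)
```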
As an example, as shown in fig. 4, first detection frame A1 and second detection frame A2 overlap in position, and first detection frame B1 and second detection frame A2 also overlap in position; the matching degree between A1 and A2 and the matching degree between B1 and A2 are each calculated with the IOEA method.
S504: and determining the target detection frame group according to the matching degree.
In this embodiment, for each first detection frame, after the matching degrees of all detection frame groups formed with the current first detection frame are obtained, the group with the highest matching degree is taken as the target detection frame group of that first detection frame; the first and second detection frames in the target detection frame group therefore contain the same target.
As an example, according to the positional relationship illustrated in fig. 4, IOEA(A1, A2) is determined to be larger than IOEA(B1, A2); therefore, first detection frame A1 and second detection frame A2 are taken as the target detection frame group.
In one possible implementation, after the matching value of each detection frame group is determined and before the target detection frame group is chosen, the matching values are screened against a preset detection threshold: only detection frame groups whose matching value exceeds the preset detection threshold are retained, and the target detection frame group is then determined from the matching degree among them.
As an example, the preset detection threshold is set to 20%, and both IOEA(A1, A2) and IOEA(B1, A2) exceed it. Since IOEA(A1, A2) is larger than IOEA(B1, A2), first detection frame A1 and second detection frame A2 are taken as the target detection frame group.
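Combining the screening and the selection, a sketch of the grouping step (function names are illustrative; ioea is the sketch above):

```python
def match_detection_frames(boxes1, boxes2, detect_threshold=0.2):
    """For each first detection frame, keep candidate groups whose IOEA
    exceeds the preset detection threshold, then take the group with
    the highest matching degree as the target detection frame group."""
    groups = []
    for b1 in boxes1:
        candidates = [(ioea(b1, b2), tuple(b2)) for b2 in boxes2]
        candidates = [c for c in candidates if c[0] > detect_threshold]
        if candidates:
            score, b2 = max(candidates, key=lambda c: c[0])
            groups.append((tuple(b1), b2, score))
    return groups
```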
S505: and determining a target area corresponding to the target detection frame group.
In this embodiment, for the target detection frame group, a first target overlapping area included in the first detection frame and a second target overlapping area included in the second detection frame are determined, where the first target overlapping area overlaps with the second target overlapping area in position; and taking the first target overlapping area and the second target overlapping area as target areas corresponding to the target detection frame group.
As an example, as shown in fig. 4, when first detection frame A1 and second detection frame A2 form the target detection frame group, the region of A1 that overlaps A2 is taken as the first target overlapping region, the region of A2 that overlaps A1 as the second target overlapping region, and the two together as the target area corresponding to A1 and A2.
In the target detection method of this embodiment, the matching degree of the first and second detection frames in a group is calculated from their overlapping region. Compared with the conventional intersection-over-union (IoU) calculation used in target recognition, this can still correctly decide whether two detection frames belong to the same target when one of them is occluded or missing, improving the accuracy of detecting target areas that belong to the same product.
In one possible implementation, at least one first detection frame contained in the first image is identified according to a target detection model, where the target detection model is trained on a labeled training set, the labeled training set contains at least one labeled sample image, and each rectangular labeling frame in a labeled sample image is no smaller than a preset detection frame.
In this embodiment, a target detection model implemented on the basis of a single-stage target detection algorithm may be used to detect the detection frames of the products contained in a photographed production-line image. It should be understood that the closer a product is in the photographed image, the larger its detection frame, and the larger the detection frame, the higher the accuracy of performance recognition by the present method.
In one possible implementation, when acquiring sample images, the shooting angle of the image acquisition device may be set to simulate the scene of actually photographing products on the line. The slightly inclined sample images collected in this way form the training set, which is then annotated. YOLOv5s is chosen as the target detection algorithm for transfer learning on the annotated training set, trained with an adaptive learning rate; during training the batch size is set to 2, the initial learning rate to 0.01, the momentum to 0.937 and the weight decay to 0.0005, and the resulting model serves as the target detection model of this embodiment.
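A hedged sketch of such a fine-tuning run via the cloned YOLOv5 repository's train.run() entry point (the dataset config and epoch count are assumptions; in YOLOv5 the learning rate, momentum and weight decay normally live in a hyperparameter YAML rather than being passed as arguments):

```python
import train  # train.py from a cloned ultralytics/yolov5 repository

# hyp.yaml (assumed excerpt): lr0: 0.01, momentum: 0.937, weight_decay: 0.0005
train.run(
    data='display_interface.yaml',  # assumed dataset config
    weights='yolov5s.pt',           # transfer learning from YOLOv5s
    hyp='hyp.yaml',                 # hyperparameters listed above
    batch_size=2,
    epochs=100,                     # assumed
)
```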
In this embodiment, when training the target detection model, the labeling frames of the products in the sample images can be restricted to those no smaller than the preset detection frame. In other words, the training set is filtered so as to guide the model to detect only targets whose detection frame exceeds the preset size, i.e. the products photographed at closer range, which safeguards the accuracy of the detected performance results.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and does not limit the implementation of the embodiments of the present application in any way.
Fig. 6 is a block diagram of the object detection apparatus according to the embodiment of the present application, corresponding to the object detection method described in the above embodiment, and only the portion related to the embodiment of the present application is shown for convenience of explanation.
Referring to fig. 6, the object detection apparatus includes: the acquisition module 601, the detection module 602, the first determination module 603 and the second determination module 604.
An obtaining module 601 is configured to obtain a first image and a second image.
The detection module 602 is configured to detect a target area including the same target in the first image and the second image.
A first determining module 603 is configured to determine a rate of change of pixels in the target area.
A second determining module 604, configured to determine a target detection result according to the rate of change.
It should be noted that, because the information interaction and execution between the above devices/units rest on the same concept as the method embodiments of the present application, their specific functions and technical effects may be found in the method embodiment section and are not repeated here.
The object detection device shown in fig. 6 may be a software unit, a hardware unit, or a combined software/hardware unit built into an existing electronic device, may be integrated into the electronic device as an independent add-on, or may exist as an independent electronic device.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 7, the electronic apparatus of this embodiment includes: at least one processor 70 (only one shown in fig. 7), a memory 71, and a computer program 72 stored in the memory 71 and executable on the at least one processor 70, the processor 70 implementing the steps in any of the various target detection method embodiments described above when executing the computer program 72.
The electronic equipment can be a desktop computer, a notebook computer, a palm computer, a cloud server and other computing equipment. The electronic device may include, but is not limited to, a processor, a memory. It will be appreciated by those skilled in the art that fig. 7 is merely an example of an electronic device and is not meant to be limiting, and may include more or fewer components than shown, or may combine certain components, or different components, such as may also include input-output devices, network access devices, etc.
The processor 70 may be a central processing unit (Central Processing Unit, CPU); the processor 70 may also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or any conventional processor.
The memory 71 may, in some embodiments, be an internal storage unit of the electronic device, such as a hard disk or memory of the electronic device. In other embodiments the memory 71 may also be an external storage device of the electronic device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the electronic device. Further, the memory 71 may include both an internal storage unit and an external storage device of the electronic device. The memory 71 is used to store an operating system, application programs, a boot loader (Boot Loader), data, and other programs, such as the program code of the computer program; it may also be used to temporarily store data that has been output or is to be output.
Embodiments of the present application also provide a computer readable storage medium storing a computer program, which when executed by a processor, may implement the steps in the above-described method embodiments.
Embodiments of the present application provide a computer program product which, when run on an electronic device, causes the electronic device to perform steps that may be performed in the various method embodiments described above.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present application implements all or part of the flow of the method of the above embodiments, and may be implemented by a computer program to instruct related hardware, where the computer program may be stored in a computer readable storage medium, where the computer program, when executed by a processor, may implement the steps of each of the method embodiments described above. Wherein the computer program comprises computer program code which may be in source code form, object code form, executable file or some intermediate form etc. The computer readable medium may include at least: any entity or device capable of carrying computer program code to an apparatus/electronic device, a recording medium, a computer Memory, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), an electrical carrier signal, a telecommunications signal, and a software distribution medium. Such as a U-disk, removable hard disk, magnetic or optical disk, etc. In some jurisdictions, computer readable media may not be electrical carrier signals and telecommunications signals in accordance with legislation and patent practice.
The foregoing embodiments each emphasize different aspects; for parts of one embodiment that are not described or detailed, reference may be made to the related descriptions of the other embodiments.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether such functions are performed in hardware or software depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementation decisions should not be considered to go beyond the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/electronic device and method may be implemented in other ways. For example, the apparatus/electronic device embodiments described above are merely illustrative: the division into modules or units is only a division by logical function, and other divisions are possible in actual implementation; for instance, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The above embodiments are intended only to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included within the scope of protection of the present application.

Claims (10)

1. A target detection method, comprising:
obtaining a first image and a second image;
detecting a target area containing the same target in the first image and the second image;
determining a rate of change of pixels in the target area;
and determining a target detection result according to the change rate.
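By way of illustration only, the four steps of claim 1 can be sketched in Python as follows. The helper names match_boxes, target_regions, and pixel_change_rate are hypothetical and are fleshed out in the sketches after claims 2, 4, and 6 below; the 0.2 decision threshold is likewise an invented placeholder, not a value taken from this application.

def detect_targets(first_image, second_image, first_boxes, second_boxes):
    # Step 2 of claim 1: target areas containing the same target.
    groups = match_boxes(first_boxes, second_boxes)
    results = []
    for box_a, box_b, _score in groups:
        region_a, region_b = target_regions(first_image, second_image,
                                            box_a, box_b)
        # Step 3: rate of change of pixels in the target area.
        rate = pixel_change_rate(region_a, region_b)
        # Step 4: derive a detection result from the change rate; here a
        # rate above 0.2 is arbitrarily flagged as a changed/abnormal target.
        results.append((box_a, box_b, rate, rate > 0.2))
    return results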
2. The method of claim 1, wherein the detecting a target area containing the same target in the first image and the second image comprises:
identifying at least one first detection frame contained in the first image and identifying at least one second detection frame contained in the second image;
obtaining at least one detection frame group, wherein each detection frame group comprises one first detection frame and one second detection frame;
determining the matching degree of each detection frame group;
determining a target detection frame group according to the matching degree;
and determining a target area corresponding to the target detection frame group.
3. The method of claim 2, wherein the determining the matching degree of each detection frame group comprises:
determining, for each detection frame group, the overlap region of the first detection frame and the second detection frame;
taking the ratio of the area of the overlap region to the area of the first detection frame as a first overlap ratio, and the ratio of the area of the overlap region to the area of the second detection frame as a second overlap ratio;
and determining the matching degree of the detection frame group according to the first overlap ratio and the second overlap ratio.
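These two per-frame ratios can be computed from the boxes' intersection rectangle, as in the sketch below. Combining them with min() is an assumption: the claim only states that the matching degree is determined from both ratios.

def matching_degree(box_a, box_b):
    # Intersection rectangle of the two detection frames.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    if inter == 0 or area_a == 0 or area_b == 0:
        return 0.0
    first_ratio = inter / area_a    # overlap relative to the first frame
    second_ratio = inter / area_b   # overlap relative to the second frame
    return min(first_ratio, second_ratio)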
4. The method of claim 2, wherein determining the target area corresponding to the target detection frame group comprises:
determining, for the target detection frame group, a first target overlapping region contained in the first detection frame and a second target overlapping region contained in the second detection frame, wherein the first target overlapping region coincides in position with the second target overlapping region;
and taking the first target overlapping area and the second target overlapping area as target areas corresponding to the target detection frame group.
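Assuming both images share one pixel coordinate system (an assumption the claim does not spell out), the two positionally coincident regions can be obtained by cropping each image to the intersection of the matched frames; the crops then serve directly as the pixel matrices of claim 5.

import numpy as np

def target_regions(image_a: np.ndarray, image_b: np.ndarray, box_a, box_b):
    # Intersection of the two matched detection frames, with integer
    # pixel coordinates assumed.
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    # The crops are the first and second target overlapping regions;
    # they have identical shape, which the pixel difference relies on.
    return image_a[y1:y2, x1:x2], image_b[y1:y2, x1:x2]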
5. The method of claim 4, wherein the determining the rate of change of pixels in the target area comprises:
obtaining a first pixel matrix corresponding to the first target overlapping region;
obtaining a second pixel matrix corresponding to the second target overlapping region;
and determining the change rate of the pixels in the target area according to the difference value between the first pixel matrix and the second pixel matrix.
6. The method of claim 5, wherein determining the rate of change of pixels in the target area based on the difference between the first pixel matrix and the second pixel matrix comprises:
obtaining a change value for each pixel point contained in the target area, and the number of pixel points contained in the target area, according to the difference between the first pixel matrix and the second pixel matrix;
taking each pixel point whose change value is larger than a preset pixel threshold as a target pixel point;
and taking the ratio of the number of the target pixel points to the number of the pixel points contained in the target area as the change rate of the pixels in the target area.
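Claims 5 and 6 together amount to differencing the two equally sized pixel matrices and counting the pixels whose change exceeds the preset threshold. A sketch follows; the default threshold of 25 is an arbitrary stand-in, since the application does not state a value.

import numpy as np

def pixel_change_rate(region_a: np.ndarray, region_b: np.ndarray,
                      pixel_threshold: int = 25) -> float:
    # Per-pixel change values from the difference of the two matrices
    # (cast to a signed type so uint8 subtraction cannot wrap around).
    diff = np.abs(region_a.astype(np.int32) - region_b.astype(np.int32))
    if diff.ndim == 3:
        diff = diff.max(axis=2)   # colour input: largest per-channel change
    changed = diff > pixel_threshold   # the target pixel points
    if changed.size == 0:
        return 0.0
    # Ratio of target pixel points to all pixel points in the target area.
    return float(np.count_nonzero(changed)) / changed.size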
7. The method of claim 2, wherein the identifying at least one first detection frame contained in the first image comprises:
identifying at least one first detection frame contained in the first image according to a target detection model, wherein the target detection model is obtained by training on a labeled training set, the labeled training set contains at least one labeled sample image, and the rectangular labeling frame of each labeled sample image is larger than or equal to a preset detection frame.
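One way to enforce the size constraint of claim 7 when assembling the labeled training set is to drop annotations whose rectangles fall below the preset detection frame. The 32x32 minimum and the annotation layout below are assumptions made for illustration.

def filter_annotations(annotations, min_width=32, min_height=32):
    # Each annotation is assumed to be {"image": ..., "box": (x1, y1, x2, y2)}.
    kept = []
    for ann in annotations:
        x1, y1, x2, y2 = ann["box"]
        if (x2 - x1) >= min_width and (y2 - y1) >= min_height:
            kept.append(ann)
    return kept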
8. A target detection apparatus, comprising:
the acquisition module is used for acquiring a first image and a second image;
the detection module is used for detecting a target area containing the same target in the first image and the second image;
the first determining module is used for determining a rate of change of pixels in the target area;
and the second determining module is used for determining a target detection result according to the change rate.
9. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the target detection method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the target detection method according to any one of claims 1 to 7.
CN202310359927.3A 2023-03-28 2023-03-28 Target detection method, target detection device, electronic equipment and computer readable storage medium Pending CN116416232A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310359927.3A CN116416232A (en) 2023-03-28 2023-03-28 Target detection method, target detection device, electronic equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310359927.3A CN116416232A (en) 2023-03-28 2023-03-28 Target detection method, target detection device, electronic equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN116416232A true CN116416232A (en) 2023-07-11

Family

ID=87057656

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310359927.3A Pending CN116416232A (en) 2023-03-28 2023-03-28 Target detection method, target detection device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN116416232A (en)

Similar Documents

Publication Publication Date Title
CN109670383B (en) Video shielding area selection method and device, electronic equipment and system
CN112883819A (en) Multi-target tracking method, device, system and computer readable storage medium
CN110705405A (en) Target labeling method and device
CN110533654A (en) The method for detecting abnormality and device of components
CN110723432A (en) Garbage classification method and augmented reality equipment
US20230214989A1 (en) Defect detection method, electronic device and readable storage medium
CN111325717B (en) Mobile phone defect position identification method and equipment
CN112328822B (en) Picture pre-marking method and device and terminal equipment
CN111444555B (en) Temperature measurement information display method and device and terminal equipment
CN110826646A (en) Robot vision testing method and device, storage medium and terminal equipment
CN112465871A (en) Method and system for evaluating accuracy of visual tracking algorithm
CN112559341A (en) Picture testing method, device, equipment and storage medium
CN114821274A (en) Method and device for identifying state of split and combined indicator
CN113158773B (en) Training method and training device for living body detection model
CN113553992A (en) Escalator-oriented complex scene target tracking method and system
CN115690747B (en) Vehicle blind area detection model test method and device, electronic equipment and storage medium
CN113052019A (en) Target tracking method and device, intelligent equipment and computer storage medium
JP7396076B2 (en) Number recognition device, method and electronic equipment
CN116993654B (en) Camera module defect detection method, device, equipment, storage medium and product
CN111967529A (en) Identification method, device, equipment and system
CN116416232A (en) Target detection method, target detection device, electronic equipment and computer readable storage medium
CN111325731A (en) Installation detection method and device of remote control device
CN111935480B (en) Detection method for image acquisition device and related device
CN114140751B (en) Examination room monitoring method and system
CN112629828B (en) Optical information detection method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination