CN114764773A - Article detection method and device and electronic equipment - Google Patents

Article detection method and device and electronic equipment

Info

Publication number
CN114764773A
CN114764773A (application CN202110037266.3A)
Authority
CN
China
Prior art keywords
detection
detection target
image
positioning
sub
Prior art date
Legal status
Pending
Application number
CN202110037266.3A
Other languages
Chinese (zh)
Inventor
蒋剑俊
郭贤捷
关鹏
Current Assignee
Omron Shanghai Co ltd
Original Assignee
Omron Shanghai Co ltd
Priority date
Filing date
Publication date
Application filed by Omron Shanghai Co ltd filed Critical Omron Shanghai Co ltd
Priority to CN202110037266.3A priority Critical patent/CN114764773A/en
Priority to PCT/JP2021/047883 priority patent/WO2022153827A1/en
Publication of CN114764773A publication Critical patent/CN114764773A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS; G06 COMPUTING; CALCULATING OR COUNTING
    • G06T 7/0004: Industrial image inspection (under G06T 7/00 Image analysis; G06T 7/0002 Inspection of images, e.g. flaw detection)
    • G06N 3/04: Neural networks; architecture, e.g. interconnection topology (under G06N 3/00 Computing arrangements based on biological models)
    • G06N 3/08: Neural networks; learning methods
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods (under G06T 7/70)
    • G06T 2207/10004: Still image; photographic image (under G06T 2207/10 Image acquisition modality)
    • G06T 2207/20081: Training; learning (under G06T 2207/20 Special algorithmic details)
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/30108: Industrial image inspection (under G06T 2207/30 Subject of image; context of image processing)

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the present application provide an article detection method and apparatus, and an electronic device. The method includes the following steps: locating a detection target in an article image through a deep learning positioning network model to obtain positioning data of the detection target; calculating a characteristic value of the detection target from the positioning data; and comparing the calculated characteristic value with a preset detection threshold to determine, based on the comparison result, whether the detection target has a defect. Thus, production-takt requirements can be met in practical applications, and whether the detection target has defects can be determined quickly.

Description

Article detection method and device and electronic equipment
Technical Field
The present disclosure relates to the field of article detection technologies, and in particular, to an article detection method and apparatus, and an electronic device.
Background
During production and manufacturing, articles need to be inspected; article detection can determine whether defects exist, thereby improving yield. Traditional machine-vision techniques have difficulty detecting an object to be detected in a complex environment (for example, when the object is inside a packing box or a transparent plastic bag), and therefore cannot inspect the object's appearance. At present, schemes based on deep learning segmentation networks can detect the appearance and form of an article.
It should be noted that the above background is provided only to clearly and completely describe the technical solutions of the present application and to facilitate understanding by those skilled in the art. These solutions are not to be considered known to those skilled in the art merely because they are set forth in the background section of the present application.
Disclosure of Invention
However, the inventors found that the article detection scheme based on a deep learning segmentation network requires a large amount of computation and consumes considerable time, resulting in poor real-time performance; it is therefore difficult to meet production-takt requirements in practical applications, and whether a detection target has defects cannot be determined quickly.
In order to solve at least one of the above problems, embodiments of the present application provide an article detection method and apparatus, and an electronic device.
According to an aspect of an embodiment of the present application, there is provided an article detection method, including:
positioning a detection target in an article image through a deep learning positioning network model to obtain positioning data of the detection target;
calculating a characteristic value of the detection target according to the positioning data; and
comparing the calculated characteristic value with a preset detection threshold to determine, based on the comparison result, whether the detection target has a defect.
According to another aspect of embodiments of the present application, there is provided an article detection apparatus including:
a positioning unit that positions a detection target in an article image through a deep learning positioning network model to obtain positioning data of the detection target;
a calculation unit that calculates a characteristic value of the detection target from the positioning data; and
a comparison unit that compares the calculated characteristic value with a preset detection threshold to determine, based on the comparison result, whether the detection target has a defect.
According to another aspect of embodiments of the present application, there is provided an electronic device comprising a memory and a processor, the memory storing a computer program, the processor being configured to perform the following operations:
positioning a detection target in an article image through a deep learning positioning network model to obtain positioning data of the detection target;
calculating a characteristic value of the detection target according to the positioning data; and
comparing the calculated characteristic value with a preset detection threshold to determine, based on the comparison result, whether the detection target has a defect.
One of the beneficial effects of the embodiments of the present application is that: a detection target in an article image is located through a deep learning positioning network model to obtain positioning data of the detection target; a characteristic value of the detection target is calculated from the positioning data; and the calculated characteristic value is compared with a preset detection threshold to determine, based on the comparison result, whether the detection target has a defect. Thus, production-takt requirements can be met in practical applications, and whether the detection target has defects can be determined quickly.
Specific embodiments of the present application are disclosed in detail with reference to the following description and drawings, indicating the manner in which the principles of the application may be employed. It should be understood that the embodiments of the present application are not so limited in scope. The embodiments of the application include many modifications, variations and equivalents within the spirit and scope of the appended claims.
Drawings
The accompanying drawings, which are included to provide a further understanding of the embodiments of the application, are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the principles of the application. It is obvious that the drawings in the following description are only some embodiments of the application, and that for a person skilled in the art, other drawings can be derived from them without inventive effort. In the drawings:
FIG. 1 is a schematic view of an article inspection method according to an embodiment of the present application;
FIG. 2 is a diagram of an exemplary positioning data according to an embodiment of the present application;
FIG. 3 is a diagram illustrating an exemplary data statistics of an embodiment of the present application;
FIG. 4 is a schematic diagram of distributed detection in accordance with embodiments of the present application;
FIG. 5 is a schematic view of an article detection apparatus according to an embodiment of the present application;
fig. 6 is a schematic diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The foregoing and other features of the present application will become apparent from the following description, taken in conjunction with the accompanying drawings. In the description and drawings, particular embodiments of the application are disclosed in detail as being indicative of some of the embodiments in which the principles of the application may be employed, it being understood that the application is not limited to the embodiments described, but, on the contrary, is intended to cover all modifications, variations, and equivalents falling within the scope of the appended claims.
In the embodiments of the present application, the terms "first", "second", and the like are used for distinguishing different elements by reference, but do not indicate a spatial arrangement or a temporal order of the elements, and the elements should not be limited by the terms. The term "and/or" includes any and all combinations of one or more of the associated listed terms. The terms "comprising," "having," and the like, refer to the presence of stated features, elements, components, and do not preclude the presence or addition of one or more other features, elements, components, and/or groups thereof.
In the embodiments of the present application, the singular forms "a", "an", and the like may include the plural forms and should be broadly understood as "one kind" rather than limited to "only one"; further, the term "comprising" should be understood to cover both the singular and the plural, unless the context clearly indicates otherwise. In addition, the term "according to" should be understood as "at least partially according to", unless the context clearly indicates otherwise.
Embodiments of the present application will be described below with reference to the drawings.
Embodiments of the first aspect
The embodiment of the application provides an article detection method. Fig. 1 is a schematic diagram of an article inspection method according to an embodiment of the present application, and as shown in fig. 1, the method includes:
101, positioning a detection target in an article image through a deep learning positioning network model to obtain positioning data of the detection target;
102, calculating a characteristic value of the detection target according to the positioning data; and
103, comparing the calculated characteristic value with a preset detection threshold to determine, based on the comparison result, whether the detection target has a defect.
It should be noted that fig. 1 above is only a schematic illustration of the embodiment of the present application, but the present application is not limited thereto. For example, the execution sequence of the steps may be adjusted as appropriate, and other steps may be added or some of the steps may be reduced. Those skilled in the art can appropriately modify the above description without being limited to the description of fig. 1.
In some embodiments, articles on the production line may be photographed to form article images, and each article image may include multiple detection targets. For example, the detection targets may be parts in the manufacturing process, with one article image captured for every 48 parts, and so on.
In some embodiments, a deep learning positioning network model may be provided in an AI (Artificial Intelligence) device, into which an item image is input. The AI device with the deep learning positioning network model may be an actual hardware device, such as a server or a PC (Personal Computer), etc.; or may be a virtual software device such as a process or thread.
In some embodiments, the deep learning positioning network model may be implemented based on a Convolutional Neural Network (CNN); for example, it may contain one or more convolutional layers, pooling layers, fully connected layers, and the like; for another example, a YOLO algorithm or the like may be used. For the deep learning positioning network model, reference may also be made to related technologies, such as the GoogLeNet model.
In some embodiments, the detection target in the article image can be located through the deep learning positioning network model to obtain the positioning data of the detection target. For example, the positioning data of the detection target includes: the category of the detection target, the coordinates of the center point of the rectangular frame in which the detection target is located, and the width and height of that rectangular frame. In addition, the positioning data of the detection target may further include a confidence, representing the confidence probability of the output positioning data.
Fig. 2 is a diagram of an example of positioning data according to an embodiment of the present application. As shown in fig. 2, for one data sample (article image) of the N collected data samples, one or more positioning data [ class, confidence, x, y, w, h ] may be obtained through the deep learning positioning network model, where class represents the category of the detected object, confidence represents confidence, x and y represent coordinates of a center point of a rectangular frame in which the detected object is located, w represents the width of the rectangular frame in which the detected object is located, and h represents the height of the rectangular frame in which the detected object is located.
For example, for the article image 201, the data [part 1, 0.9, 20, 30, 15, 16] corresponding to the detection target 202 may be obtained; that is, the category is "part 1", the confidence is 0.9, the coordinates of the center point of the rectangular frame are (20, 30), the width of the rectangular frame is 15, and the height of the rectangular frame is 16. For the article image 201, the data [part 1, 0.95, 100, 40, 15, 19] corresponding to the detection target 203 may also be obtained; that is, the category is "part 1", the confidence is 0.95, the coordinates of the center point of the rectangular frame are (100, 40), the width of the rectangular frame is 15, and the height of the rectangular frame is 19.
However, the application is not limited to this, and specific data can be determined according to actual needs; for example, the positioning data may not include a category and confidence; for another example, the positioning data may be three-dimensional data, including the length l of the rectangular frame in which the detection target is located, the width w of the rectangular frame in which the detection target is located, and the height h of the rectangular frame in which the detection target is located; and so on.
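As an illustrative sketch (not part of the patent), the [class, confidence, x, y, w, h] record described above could be modeled as follows; all names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class PositioningData:
    """One record output by the localization model for a detected target.

    Field layout mirrors the [class, confidence, x, y, w, h] vector
    described in the text; names are illustrative, not from the patent.
    """
    category: str      # class of the detection target, e.g. "part 1"
    confidence: float  # confidence probability of the output
    x: float           # center-point x of the bounding rectangle
    y: float           # center-point y of the bounding rectangle
    w: float           # width of the bounding rectangle
    h: float           # height of the bounding rectangle

    @classmethod
    def from_vector(cls, vec):
        """Build a record from a raw [class, confidence, x, y, w, h] list."""
        category, confidence, x, y, w, h = vec
        return cls(category, float(confidence), float(x), float(y), float(w), float(h))

# The two example records for detection targets 202 and 203 from the text:
d202 = PositioningData.from_vector(["part 1", 0.9, 20, 30, 15, 16])
d203 = PositioningData.from_vector(["part 1", 0.95, 100, 40, 15, 19])
```

A three-dimensional variant would simply carry l, w, h fields instead, as noted above.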
In some embodiments, a characteristic value of the detected object may be calculated from the positioning data. For example, the feature values of the detection target include: the ratio of the width to the height of the rectangular frame where the detection target is located, or the deviation of the central point of the rectangular frame where the detection target is located, or the deviation of the area of the rectangular frame where the detection target is located. However, the present application is not limited thereto, and specific data may be determined according to actual needs.
The present application will be further described below by taking the ratio of the width to the height of the rectangular frame in which the detection target is located as an example.
For example, if the positioning data corresponding to the detection target 202 in the article image 201 is [part 1, 0.9, 20, 30, 15, 16], the characteristic value of the detection target 202 can be calculated as 15/16 = 0.9375; if the positioning data corresponding to the detection target 203 in the article image 201 is [part 1, 0.95, 100, 40, 15, 19], the characteristic value of the detection target 203 can be calculated as 15/19 ≈ 0.7895.
In some embodiments, the calculated feature value may be compared with a preset detection threshold value to determine whether the detection target has a defect according to the comparison result. Wherein the detection threshold may be predetermined based on empirical values or product requirements.
For example, assuming the detection threshold is set to 0.9: for the detection target 202 in the article image 201, its characteristic value 0.9375 > 0.9, so it can be determined that the detection target 202 has no defect; for the detection target 203 in the article image 201, its characteristic value 0.7895 < 0.9, so it can be determined that the detection target 203 has a defect.
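A minimal sketch of the characteristic-value computation and threshold comparison described above, using the example numbers from the text (function names are illustrative, not from the patent):

```python
def aspect_ratio(w, h):
    """Characteristic value: ratio of the width to the height of the rectangular frame."""
    return w / h

def has_defect(feature_value, threshold):
    """Judge a detection target as defective when its characteristic value falls below the preset threshold."""
    return feature_value < threshold

THRESHOLD = 0.9  # preset detection threshold from the example

fv_202 = aspect_ratio(15, 16)  # 0.9375 -> no defect
fv_203 = aspect_ratio(15, 19)  # ~0.7895 -> defect
```

Note that a one-sided comparison suffices for this example; a two-sided normal interval (cf. the 3σ discussion below) would flag values on either side of the expected ratio.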
Thus, the detection target in the article image is located through the deep learning positioning network model to obtain its positioning data; a characteristic value of the detection target is calculated from the positioning data; and the calculated characteristic value is compared with a preset detection threshold to determine, based on the comparison result, whether the detection target has a defect. No heavy computation (such as computing the edge shape of the target) is needed, so real-time performance is good, production-takt requirements can be met in practical applications, and whether the detection target has defects can be determined quickly.
In some embodiments, statistics can also be performed on the characteristic values of a plurality of detection targets; and determining or updating a detection threshold based on the statistical result. For example, a data distribution map may be calculated, and the detection threshold may be determined by dividing the normal interval and the abnormal interval according to the data distribution in the distribution map.
In some embodiments, performing statistics on the characteristic values of the plurality of detection targets includes: calculating an average of the plurality of characteristic values, and calculating the standard deviation of the plurality of characteristic values based on the average. The detection threshold may then be determined or updated based on the standard deviation, thereby enabling more accurate detection.
For example, suppose X_i denotes the i-th characteristic value, where i is an integer from 1 to n. The average of the n characteristic values can be calculated as

$$\bar{X} = \frac{1}{n}\sum_{i=1}^{n} X_i$$

and the standard deviation of the n characteristic values can be calculated based on the average as

$$\sigma = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(X_i - \bar{X}\right)^2}$$
FIG. 3 is a diagram of an example of data statistics according to an embodiment of the present application. As shown in fig. 3, 68.3% of the values fall within the [-1σ, 1σ] interval, 95.5% within the [-2σ, 2σ] interval, and 99.73% within the [-3σ, 3σ] interval. For example, if the production process requires that the [-3σ, 3σ] interval be the normal interval and other intervals be abnormal intervals, the detection threshold may be determined with 3σ as the reference. For another example, if the production process requires that the [-6σ, 6σ] interval be the normal interval and other intervals be abnormal intervals, the detection threshold may be determined with 6σ as the reference.
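The mean, standard deviation, and a σ-based normal interval above can be sketched in plain Python as follows (a sketch assuming the interval is centered on the mean; names are illustrative):

```python
import math

def mean(values):
    """Average of the n characteristic values X_1..X_n."""
    return sum(values) / len(values)

def std_dev(values):
    """Population standard deviation of the characteristic values, based on the mean."""
    m = mean(values)
    return math.sqrt(sum((x - m) ** 2 for x in values) / len(values))

def normal_interval(values, k=3):
    """Normal interval [mean - k*sigma, mean + k*sigma]; k = 3 or 6 per process requirements."""
    m, s = mean(values), std_dev(values)
    return m - k * s, m + k * s

# Example: characteristic values collected from many detection targets
samples = [0.93, 0.95, 0.94, 0.92, 0.96, 0.94, 0.93, 0.95]
low, high = normal_interval(samples, k=3)  # values outside [low, high] are abnormal
```

A characteristic value falling outside the interval would mark the target as defective, which is how the detection threshold can be determined or updated from the statistics.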
Taking 6 σ as an example, the scheme for determining whether the detected target has defects can be shown in table 1:
TABLE 1
[Table 1: provided as an image in the original publication]
The above schematically illustrates how article detection is performed, and further acceleration of article detection is described below.
In some embodiments, the item image may also be cropped into a plurality of sub-images; and respectively inputting the plurality of sub-images into a plurality of deep learning positioning network models so as to detect the detection target in the sub-images.
Fig. 4 is a schematic diagram of distributed detection according to an embodiment of the present application, and as shown in fig. 4, for example, an article image may be cut into 4 sub-images (sub-images 1, 2, 3, and 4) and then input into 4 AI devices, each AI device having a deep learning positioning network model and being capable of performing the article detection method shown in fig. 1.
For example, an article image including 48 detection targets may be cropped into 4 sub-images, each including 12 detection targets, and each sub-image is input into one AI device, so that article detection can be performed in parallel.
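The cropping step can be sketched as a simple grid split (pure-Python nested lists stand in for image arrays; the 2×2 grid matches the four-sub-image example):

```python
def crop_into_sub_images(image, rows=2, cols=2):
    """Split an H x W image (nested lists) into rows * cols equally sized sub-images."""
    h, w = len(image), len(image[0])
    sh, sw = h // rows, w // cols  # sub-image height and width
    subs = []
    for r in range(rows):
        for c in range(cols):
            # slice out the block of rows and columns for this sub-image
            subs.append([row[c * sw:(c + 1) * sw] for row in image[r * sh:(r + 1) * sh]])
    return subs

# An 8 x 8 "image" whose pixels record their own coordinates
image = [[(r, c) for c in range(8)] for r in range(8)]
sub_images = crop_into_sub_images(image)  # four 4 x 4 sub-images
```

Each element of `sub_images` would then be dispatched to one AI device for detection.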
In some embodiments, in a case where the detection of each of the plurality of sub-images is completed within a predetermined time, the detection results of the plurality of sub-images are synchronized. For example, the predetermined time may be started after the first sub-image among the plurality of sub-images is detected, but the application is not limited thereto.
For example, a list of four empty entries [NaN, NaN, NaN, NaN] may be established, with one entry corresponding to each of AI devices #1 to #4; each time an AI device completes detection, a completion timestamp is recorded at the corresponding position. When no entry in the list remains empty (i.e., the last AI device has completed detection), the 4 results are synchronized.
For example, as shown in fig. 4, AI device #2 completes detection of sub-image 2 first, then AI device #3 completes detection of sub-image 3, then AI device #1 completes detection of sub-image 1, and then AI device #4 completes detection of sub-image 4, at this time, data synchronization may be performed on 4 detection results, that is, the 4 detection results may be merged together to serve as the detection result of the article image.
In some embodiments, in the case where the detection of at least one sub-image is not completed within a predetermined time, it is determined that the detection of the image of the article is abnormal. For example, the predetermined time may be started after the first sub-image among the plurality of sub-images is detected, but the application is not limited thereto.
For example, a timeout timer may be started when the first AI device completes detection (as shown at 401 in fig. 4), with the timer duration T being the predetermined time. When the timer expires, if the data has not yet been synchronized, an error is reported and the detection of the article image is determined to be abnormal.
Thus, article detection can be performed by a plurality of AI devices partially in parallel, and the detection speed is further increased to improve the real-time performance.
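A simplified sketch of the distributed flow above: submit each sub-image to a worker, wait up to the predetermined time, and report an abnormality if any worker misses the deadline. (In the text the timer starts when the first AI device finishes; here, for simplicity, it starts at submission. All names are illustrative.)

```python
from concurrent.futures import ThreadPoolExecutor, wait

def detect_sub_image(sub_image_id):
    """Stand-in for one AI device running the detection method of Fig. 1 on one sub-image."""
    return {"sub_image": sub_image_id, "defective_targets": []}

def detect_distributed(sub_image_ids, timeout_s=5.0):
    """Run detection on all sub-images in parallel and synchronize the results."""
    with ThreadPoolExecutor(max_workers=len(sub_image_ids)) as pool:
        futures = [pool.submit(detect_sub_image, sid) for sid in sub_image_ids]
        done, not_done = wait(futures, timeout=timeout_s)
        if not_done:  # at least one sub-image was not detected in time
            raise TimeoutError("detection of the article image is abnormal")
        # all sub-images finished: merge the results as the article-image result
        return [f.result() for f in futures]

merged = detect_distributed([1, 2, 3, 4])
```

Merging the per-device results plays the role of the data synchronization step, and the `TimeoutError` corresponds to the reported abnormality.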
In some embodiments, before the article image or sub-image is input into the deep learning positioning network model, it may be converted to grayscale to reduce the image size. For example, an 800 × 600 × 3 RGB image may be converted into a 400 × 300 × 1 grayscale image (reducing both the channel count and the spatial resolution).
Therefore, the image size is reduced by preprocessing before entering the model, so that the calculation amount can be further reduced, the real-time performance is improved, and the detection speed is increased.
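A sketch of this grayscale preprocessing, using standard luminance weights plus a 2x downscale to match the 800 × 600 → 400 × 300 example (nested lists stand in for image arrays; the exact conversion used by the patent is not specified):

```python
def to_grayscale(rgb_image):
    """Convert an H x W x 3 RGB image (nested lists) to a single-channel H x W image."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row] for row in rgb_image]

def downscale_2x(gray_image):
    """Halve each spatial dimension by averaging 2 x 2 blocks."""
    h, w = len(gray_image), len(gray_image[0])
    return [[(gray_image[2 * i][2 * j] + gray_image[2 * i][2 * j + 1]
              + gray_image[2 * i + 1][2 * j] + gray_image[2 * i + 1][2 * j + 1]) / 4.0
             for j in range(w // 2)] for i in range(h // 2)]

# e.g. a 2 x 2 RGB image becomes a 1 x 1 grayscale image
rgb = [[(255, 0, 0), (0, 255, 0)], [(0, 0, 255), (255, 255, 255)]]
small = downscale_2x(to_grayscale(rgb))
```

An 800 × 600 × 3 input processed this way yields the 400 × 300 × 1 image mentioned in the text.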
In some embodiments, before an article image or sub-image is input into the deep learning positioning network model, it may be cropped to reduce the detection range. For example, a cropped image mainly containing the detection target may be obtained by cropping an article image with a wide field of view.
Therefore, the detection range is reduced by preprocessing before entering the model, the calculation amount can be further reduced, the real-time performance is improved, and the detection speed is accelerated.
The above embodiments are merely illustrative of the embodiments of the present application, but the present application is not limited thereto, and appropriate modifications may be made on the basis of the above embodiments. For example, the above-described embodiments may be used alone, or one or more of the above-described embodiments may be combined.
According to the above embodiments, the detection target in the article image is located through the deep learning positioning network model to obtain positioning data of the detection target; a characteristic value of the detection target is calculated from the positioning data; and the calculated characteristic value is compared with a preset detection threshold to determine, based on the comparison result, whether the detection target has a defect. Thus, production-takt requirements can be met in practical applications, and whether the detection target has defects can be determined quickly.
Embodiments of the second aspect
An embodiment of the present application provides an article detection apparatus, which may be, for example, an electronic device, or may be a component or an assembly configured on one or some components of the electronic device, and details of the same contents as those in the embodiment of the first aspect are not repeated.
Fig. 5 is a schematic diagram of an article detection apparatus according to an embodiment of the present application, and as shown in fig. 5, the article detection apparatus 500 includes:
a positioning unit 501, which positions a detection target in an article image through a deep learning positioning network model to obtain positioning data of the detection target;
a calculating unit 502 that calculates a feature value of the detection target from the positioning data; and
A comparing unit 503, which compares the calculated characteristic value with a preset detection threshold value to determine whether the detection target has a defect according to the comparison result.
In some embodiments, the positioning data of the detection target includes: the category of the detection target, the coordinates of the center point of the rectangular frame in which the detection target is located, and the width and height of that rectangular frame.
In some embodiments, the detecting the characteristic value of the target includes: the ratio of the width to the height of the rectangular frame where the detection target is located, or the deviation of the central point of the rectangular frame where the detection target is located, or the deviation of the area of the rectangular frame where the detection target is located.
In some embodiments, as shown in fig. 5, the article detection apparatus 500 may further include:
a counting unit 504 that counts feature values of a plurality of detection targets; and determining or updating the detection threshold based on the statistical result.
In some embodiments, the statistical unit 504 performs statistics on the feature values of the plurality of detection targets, including: calculating an average value of the plurality of characteristic values; calculating a standard deviation of the plurality of feature values based on the mean.
In some embodiments, as shown in fig. 5, the article detection apparatus 500 may further include:
a preprocessing unit 505, which crops the article image into a plurality of sub-images; and respectively inputting the plurality of sub-images into a plurality of deep learning positioning network models so as to detect the detection target in the sub-images.
In some embodiments, the preprocessing unit 505 synchronizes the detection results of the plurality of sub-images when the plurality of sub-images are all detected within a predetermined time; and determining that the detection of the article image is abnormal under the condition that at least one sub-image does not finish detection within a preset time.
In some embodiments, before inputting the article image into the deep learning positioning network model, the preprocessing unit 505 may also convert the article image or sub-image to grayscale to reduce the image size and/or crop it to reduce the detection range.
The above embodiments are merely illustrative of the embodiments of the present application, but the present application is not limited thereto, and appropriate modifications may be made on the basis of the above embodiments. For example, the above embodiments may be used alone, or one or more of the above embodiments may be combined.
It should be noted that the above description only describes the components or modules related to the present application, but the present application is not limited thereto. The article detection apparatus 500 may also include other components or modules, and reference may be made to the related art regarding the details of the components or modules.
In addition, for the sake of simplicity, fig. 5 only illustrates the connection relationships or signal directions between the components or modules, but it should be clear to those skilled in the art that various related techniques, such as bus connections, may be adopted. The above components or modules may be implemented by hardware such as processors, memories, transmitters, and receivers; the present application is not limited thereto.
According to the above embodiments, a detection target in an article image is positioned through a deep learning positioning network model to obtain positioning data of the detection target; a feature value of the detection target is calculated from the positioning data; and the calculated feature value is compared with a preset detection threshold to judge, according to the comparison result, whether the detection target has a defect. The requirement of the production takt time can thus be met in practical applications, and whether the detection target has a defect can be judged quickly.
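The three-step flow summarized above (locate, compute a feature value, compare against a threshold) can be sketched as follows; the model interface returning `(category, cx, cy, w, h)` and the choice of aspect ratio as the feature are placeholders for illustration:

```python
def inspect(image, model, low, high):
    """Minimal sketch of the detection flow: the positioning model returns
    (category, center x, center y, width, height); the aspect ratio w/h
    serves as the feature value; values outside [low, high] are judged
    defective. Model interface and feature choice are assumptions."""
    category, cx, cy, w, h = model(image)
    feature = w / h                       # one candidate feature value
    defective = not (low <= feature <= high)
    return category, feature, defective
```

Because only one bounding box and one scalar comparison are involved per target, the per-image cost is dominated by the model's forward pass, which is what makes the fast takt-time claim plausible.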
Examples of the third aspect
Embodiments of the present application provide an electronic device, including an article detection apparatus as described in embodiments of the second aspect, the contents of which are incorporated herein. The electronic device may be, for example, a computer, a server, a workstation, a laptop, a smartphone, or the like; the embodiments of the present application are not limited thereto.
Fig. 6 is a schematic diagram of an electronic device according to an embodiment of the application. As shown in fig. 6, the electronic device 600 may include: a processor (e.g., Central Processing Unit (CPU)) 610 and a memory 620; a memory 620 is coupled to the processor 610. Wherein the memory 620 may store various data; further, a program 621 of information processing is stored, and the program 621 is executed under the control of the processor 610.
In some embodiments, the functionality of the article detection apparatus 500 is integrated into the processor 610, and the processor 610 is configured to implement the article detection method described in the embodiments of the first aspect.
In some embodiments, the article detection apparatus 500 is configured separately from the processor 610; for example, the article detection apparatus 500 may be configured as a chip connected to the processor 610, with its functions realized under the control of the processor 610.
In some embodiments, the processor 610 is configured to control: positioning a detection target in an article image through a deep learning positioning network model to obtain positioning data of the detection target; calculating a characteristic value of the detection target according to the positioning data; and comparing the calculated characteristic value with a preset detection threshold value to judge whether the detection target has defects according to the comparison result.
In some embodiments, the positioning data of the detection target includes: the category of the detection target, the coordinates of the center point of the rectangular frame in which the detection target is located, the width of that rectangular frame, and the height of that rectangular frame.
In some embodiments, the feature value of the detection target includes: the ratio of the width to the height of the rectangular frame in which the detection target is located, the deviation of the center point of that rectangular frame, or the deviation of the area of that rectangular frame.
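These three candidate feature values can be computed directly from the rectangular-frame positioning data; the reference center and reference area below are assumed inputs (e.g., taken from known-good samples), not quantities defined in the text:

```python
def feature_values(cx, cy, w, h, ref_cx, ref_cy, ref_area):
    """Compute the three candidate feature values listed above from the
    bounding-box positioning data (center cx, cy; width w; height h).
    The reference center/area are illustrative assumptions."""
    aspect_ratio = w / h                                    # width-to-height ratio
    center_dev = ((cx - ref_cx) ** 2 + (cy - ref_cy) ** 2) ** 0.5  # center-point deviation
    area_dev = w * h - ref_area                             # area deviation
    return aspect_ratio, center_dev, area_dev
```

Any one of the three can then be compared against its own detection threshold.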
In some embodiments, the processor 610 is configured to control: performing statistics on the feature values of a plurality of detection targets; and determining or updating the detection threshold based on the statistical result.
In some embodiments, the processor 610 is configured to control: calculating an average value of the plurality of feature values; and calculating a standard deviation of the plurality of feature values based on the average value.
In some embodiments, the processor 610 is configured to control: cropping the article image into a plurality of sub-images; and inputting the plurality of sub-images into a plurality of deep learning positioning network models, respectively, to detect the detection targets in the sub-images.
In some embodiments, the processor 610 is configured to control: synchronizing the detection results of the plurality of sub-images when all of the sub-images finish detection within a predetermined time; and determining that detection of the article image is abnormal when at least one sub-image does not finish detection within the predetermined time.
In some embodiments, the processor 610 is configured to control: converting the article image or the sub-image to grayscale to reduce the image size, and/or cropping to reduce the detection range, before inputting the article image or the sub-image into the deep learning positioning network model.
The apparatus and method of the present application may be implemented by hardware, or may be implemented by hardware in combination with software. The present application relates to a computer-readable program which, when executed by a logic component, enables the logic component to implement the above-described apparatus or constituent components, or to implement various methods or steps described above. The present application also relates to a storage medium such as a hard disk, a magnetic disk, an optical disk, a DVD, a flash memory, or the like, for storing the above program.
The methods/apparatus described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. For example, one or more of the functional block diagrams and/or one or more combinations of the functional block diagrams illustrated in the figures may correspond to individual software modules of a computer program flow, or to individual hardware modules. These software modules may correspond to the respective steps shown in the figures. These hardware modules may be implemented, for example, by solidifying the software modules using a Field Programmable Gate Array (FPGA).
A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. A storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium; or the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The software module may be stored in the memory of the device or in a memory card that is insertable into the device. For example, if a larger capacity MEGA-SIM card or a larger capacity flash memory device is used, the software module may be stored in the MEGA-SIM card or the larger capacity flash memory device.
One or more of the functional blocks and/or one or more combinations of the functional blocks described in the figures can be implemented as a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any suitable combination thereof designed to perform the functions described herein. One or more of the functional blocks and/or one or more combinations of the functional blocks described in connection with the figures may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
The present application has been described in conjunction with specific embodiments, but it should be understood by those skilled in the art that these descriptions are intended to be illustrative, and not limiting. Various modifications and adaptations of the present application may occur to those skilled in the art based on the spirit and principles of the application and are within the scope of the application.
Preferred embodiments of the present application are described above with reference to the accompanying drawings. The many features and advantages of the embodiments are apparent from the detailed specification, and thus, it is intended by the appended claims to cover all such features and advantages of the embodiments that fall within the true spirit and scope thereof. Further, since numerous modifications and variations will readily occur to those skilled in the art, it is not desired to limit the embodiments of the present application to the exact construction and operation illustrated and described, and accordingly, all suitable modifications, variations and equivalents may be resorted to, falling within the scope thereof.

Claims (10)

1. An item detection method, comprising:
positioning a detection target in an article image through a deep learning positioning network model to obtain positioning data of the detection target;
calculating a characteristic value of the detection target according to the positioning data; and
comparing the calculated characteristic value with a preset detection threshold to judge, according to the comparison result, whether the detection target has a defect.
2. The method of claim 1, wherein the positioning data of the detection target comprises: the category of the detection target, the coordinates of the center point of the rectangular frame in which the detection target is located, the width of the rectangular frame in which the detection target is located, and the height of the rectangular frame in which the detection target is located.
3. The method of claim 1, wherein the feature value of the detection target comprises: the ratio of the width to the height of the rectangular frame in which the detection target is located, the deviation of the center point of the rectangular frame in which the detection target is located, or the deviation of the area of the rectangular frame in which the detection target is located.
4. The method of claim 1, further comprising:
performing statistics on the feature values of a plurality of detection targets; and
determining or updating the detection threshold based on the statistical result.
5. The method of claim 4, wherein performing statistics on the feature values of the plurality of detection targets comprises:
calculating an average value of the plurality of feature values; and
calculating a standard deviation of the plurality of feature values based on the average value.
6. The method of claim 1, further comprising:
cropping the item image into a plurality of sub-images; and
inputting the plurality of sub-images into a plurality of deep learning positioning network models, respectively, so as to detect the detection target in the sub-images.
7. The method of claim 6, further comprising:
synchronizing the detection results of the plurality of sub-images when all of the plurality of sub-images finish detection within a predetermined time; and determining that detection of the article image is abnormal when at least one sub-image does not finish detection within the predetermined time.
8. The method according to claim 6 or 7, characterized in that the method further comprises:
converting the article image or the sub-image to grayscale to reduce the image size and/or cropping to reduce the detection range, before inputting the article image or the sub-image into the deep learning positioning network model.
9. An article detection device, the device comprising:
a positioning unit that positions a detection target in an article image through a deep learning positioning network model to obtain positioning data of the detection target;
a calculation unit that calculates a feature value of the detection target from the positioning data; and
a comparison unit that compares the calculated feature value with a preset detection threshold to judge, according to the comparison result, whether the detection target has a defect.
10. An electronic device, comprising a memory storing a computer program and a processor configured to:
positioning a detection target in an article image through a deep learning positioning network model to obtain positioning data of the detection target;
calculating a characteristic value of the detection target according to the positioning data; and
comparing the calculated characteristic value with a preset detection threshold to judge, according to the comparison result, whether the detection target has a defect.
CN202110037266.3A 2021-01-12 2021-01-12 Article detection method and device and electronic equipment Pending CN114764773A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110037266.3A CN114764773A (en) 2021-01-12 2021-01-12 Article detection method and device and electronic equipment
PCT/JP2021/047883 WO2022153827A1 (en) 2021-01-12 2021-12-23 Article detection method, device, and electronic apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110037266.3A CN114764773A (en) 2021-01-12 2021-01-12 Article detection method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN114764773A true CN114764773A (en) 2022-07-19

Family

ID=82364481

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110037266.3A Pending CN114764773A (en) 2021-01-12 2021-01-12 Article detection method and device and electronic equipment

Country Status (2)

Country Link
CN (1) CN114764773A (en)
WO (1) WO2022153827A1 (en)

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5678595B2 (en) * 2010-11-15 2015-03-04 株式会社リコー INSPECTION DEVICE, INSPECTION METHOD, INSPECTION PROGRAM, AND RECORDING MEDIUM CONTAINING THE PROGRAM
JP6333871B2 (en) * 2016-02-25 2018-05-30 ファナック株式会社 Image processing apparatus for displaying an object detected from an input image
JP6972757B2 (en) * 2017-08-10 2021-11-24 富士通株式会社 Control programs, control methods, and information processing equipment
JP7208480B2 (en) * 2018-10-12 2023-01-19 富士通株式会社 Learning program, detection program, learning device, detection device, learning method and detection method
JP7212247B2 (en) * 2018-11-02 2023-01-25 富士通株式会社 Target detection program, target detection device, and target detection method
JP6869490B2 (en) * 2018-12-28 2021-05-12 オムロン株式会社 Defect inspection equipment, defect inspection methods, and their programs
JP2020187657A (en) * 2019-05-16 2020-11-19 株式会社キーエンス Image inspection device

Also Published As

Publication number Publication date
WO2022153827A1 (en) 2022-07-21

Similar Documents

Publication Publication Date Title
CN111241947B (en) Training method and device for target detection model, storage medium and computer equipment
CN107358149B (en) Human body posture detection method and device
CN104573614B (en) Apparatus and method for tracking human face
CN109726658B (en) Crowd counting and positioning method and system, electronic terminal and storage medium
CN112801050B (en) Intelligent luggage tracking and monitoring method and system
CN105809651B (en) Image significance detection method based on the comparison of edge non-similarity
CN108875542B (en) Face recognition method, device and system and computer storage medium
Srivatsa et al. Salient object detection via objectness measure
CN114667540A (en) Article identification and tracking system
CN111738344A (en) Rapid target detection method based on multi-scale fusion
CN111652085A (en) Object identification method based on combination of 2D and 3D features
CN111612841A (en) Target positioning method and device, mobile robot and readable storage medium
CN112509011B (en) Static commodity statistical method, terminal equipment and storage medium thereof
CN109816634B (en) Detection method, model training method, device and equipment
CN115115825B (en) Method, device, computer equipment and storage medium for detecting object in image
CN113706579A (en) Prawn multi-target tracking system and method based on industrial culture
CN113947770B (en) Method for identifying object placed in different areas of intelligent cabinet
CN111160450A (en) Fruit and vegetable weighing method based on neural network, storage medium and device
WO2022142416A1 (en) Target tracking method and related device
CN113780145A (en) Sperm morphology detection method, sperm morphology detection device, computer equipment and storage medium
CN114764773A (en) Article detection method and device and electronic equipment
CN112199984B (en) Target rapid detection method for large-scale remote sensing image
CN110147755B (en) Context cascade CNN-based human head detection method
CN109598793B (en) Manufacturing method and device for quickly modifying vegetation and water body based on oblique photogrammetry
CN110717406A (en) Face detection method and device and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination