CN116579978A - Product detection method and device - Google Patents


Info

Publication number
CN116579978A
Authority
CN
China
Prior art keywords
image
edge
product
template
feature point
Prior art date
Legal status
Pending
Application number
CN202310269427.0A
Other languages
Chinese (zh)
Inventor
张文文
杨英豪
姚毅
包振健
Current Assignee
Luster LightTech Co Ltd
Original Assignee
Luster LightTech Co Ltd
Application filed by Luster LightTech Co Ltd
Priority to CN202310269427.0A
Publication of CN116579978A


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • G06T3/18
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing

Abstract

The application discloses a product detection method and device, belonging to the technical field of industrial detection. The product detection method comprises the following steps: extracting image edge features from at least partial regions of a template image and of a first image corresponding to an acquired target product, and obtaining first edge feature points corresponding to the template image and second edge feature points corresponding to the first image; correcting the first image based on the first edge feature points and the second edge feature points to obtain a second image, in which the entire product region overlaps at least part of the first detection region; and detecting the product region in the second image based on the first detection region. The product detection method can detect the product region normally even when the position of the first image changes, realizes reuse of the template, reduces the workload and the burden of storing templates, improves product detection efficiency, is less prone to missed detections, and improves product detection accuracy.

Description

Product detection method and device
Technical Field
The application belongs to the technical field of industrial detection, and particularly relates to a product detection method and device.
Background
In the process of industrial detection of products using templates, switching product varieties causes the image to be detected to be offset because of mechanical positioning, and a conventional product detection method must then redetermine the position of the detection region and rebuild the template before detection can continue. While the template is being rebuilt, normal detection cannot be performed and missed detections easily occur; after templates have been built many times, they become complex to use and manage, the workload increases, the burden of storing templates grows, and product detection efficiency is low.
Disclosure of Invention
The present application aims to solve at least one of the technical problems existing in the prior art. The application therefore provides a product detection method and device: in the actual detection process, when the product variety changes, only the image position is adjusted accordingly, without changing the template. This realizes reuse of the template, reduces the workload and the burden of storing templates, improves product detection efficiency, makes missed detections less likely, and improves product detection accuracy.
In a first aspect, the present application provides a method of detecting a product, the method comprising:
Extracting image edge characteristics of at least partial areas of a template image and a first image corresponding to an obtained target product respectively, and obtaining first edge characteristic points corresponding to the template image and second edge characteristic points corresponding to the first image, wherein the template image comprises a first detection area; the first image comprises a product area corresponding to the target product;
correcting the first image based on the first edge characteristic points and the second edge characteristic points to obtain a second image, wherein all areas of the product area in the second image are overlapped with at least part of the first detection area;
the product area in the second image is detected based on the first detection area.
According to the product detection method provided by the embodiments of the application, the position of the first image is adjusted according to the difference between the partial edge features extracted from the template image and from the first image, until the entire product region overlaps at least part of the first detection region, so that detection is performed on the aligned image. In the actual detection process, when the product variety changes, the image position is adjusted accordingly without changing the template, thereby realizing reuse of the template, reducing the workload and the burden of storing templates, improving product detection efficiency, making missed detections less likely, and improving product detection accuracy.
According to the product detection method of one embodiment of the present application, the first image is corrected based on the first edge feature point and the second edge feature point, and a second image is obtained, including:
acquiring correction parameters between the first image and the template image based on the first edge feature points and the second edge feature points;
and correcting the first image based on the correction parameters to obtain a second image.
According to the product detection method of one embodiment of the present application, the obtaining correction parameters between the first image and the template image based on the first edge feature point and the second edge feature point includes:
determining a first edge slope of the template image based on the first edge feature points; determining a second edge slope of the first image based on the second edge feature points;
determining a lateral and/or longitudinal offset corresponding to an image edge of the at least partial region between the first image and the template image based on the first edge feature point and the second edge feature point;
the correction parameter is determined based on the first edge inclination, the second edge inclination and the lateral and/or longitudinal offset.
In the product detection method according to an embodiment of the present application, when an image edge of the at least partial region of the template image and an image edge of the at least partial region of the first image are parallel to each other, the acquiring correction parameters between the first image and the template image based on the first edge feature point and the second edge feature point includes:
acquiring a first average pixel position of the first edge feature point and a second average pixel position of the second edge feature point;
the correction parameter is acquired based on a deviation between the first average pixel position and the second average pixel position.
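As a minimal sketch of this parallel-edge case, assuming the edge feature points are (row, column) pixel coordinates and NumPy is available (the helper name is illustrative, not the patent's own code), the correction reduces to the deviation between the two mean pixel positions:

```python
import numpy as np

def parallel_edge_offset(first_edge_pts, second_edge_pts):
    """When the template edge and the first-image edge are parallel, the
    correction parameter is simply the deviation between the first average
    pixel position and the second average pixel position."""
    first_mean = np.mean(np.asarray(first_edge_pts, dtype=float), axis=0)
    second_mean = np.mean(np.asarray(second_edge_pts, dtype=float), axis=0)
    return first_mean - second_mean

tpl_edge = [(10, 5), (11, 5), (12, 5)]   # first edge feature points (template)
img_edge = [(10, 9), (11, 9), (12, 9)]   # second edge feature points (first image)
offset = parallel_edge_offset(tpl_edge, img_edge)  # template minus image
```

Here the edge in the first image sits 4 columns to the right of the template edge, so the first image must be shifted 4 columns left to align.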
The product detection method according to an embodiment of the present application performs image edge feature extraction of at least a partial region on a template image and a first image corresponding to an acquired target product, and acquires a first edge feature point corresponding to the template image and a second edge feature point corresponding to the first image, including:
and respectively performing at least one of Sobel processing and threshold processing on the image edges of the at least partial areas of the template image and the first image, and acquiring a first edge characteristic point corresponding to the template image and a second edge characteristic point corresponding to the first image.
In the product detection method according to an embodiment of the present application, at least one of Sobel processing and threshold processing is performed on image edges of the at least partial areas of the template image and the first image, respectively, to obtain a first edge feature point corresponding to the template image and a second edge feature point corresponding to the first image, including:
respectively carrying out Sobel processing on the image edges of at least partial areas of the template image and the first image to obtain a first edge image corresponding to the template image and a second edge image corresponding to the first image;
threshold processing is carried out on the first edge image and the second edge image respectively, and a plurality of third edge characteristic points corresponding to the template image and a plurality of fourth edge characteristic points corresponding to the first image are obtained;
taking a target third edge feature point in the plurality of third edge feature points as a center, and replacing the target third edge feature point with a pixel point corresponding to the maximum gray value in the target range to obtain the first edge feature point;
and taking a target fourth edge characteristic point in the fourth edge characteristic points as a center, and replacing the target fourth edge characteristic point with a pixel point corresponding to the maximum gray value in the target range to obtain the second edge characteristic point.
In a second aspect, the present application provides a product testing device comprising:
the first processing module is used for extracting image edge characteristics of at least partial areas of a template image and a first image corresponding to the obtained target product respectively, and obtaining first edge characteristic points corresponding to the template image and second edge characteristic points corresponding to the first image, wherein the template image comprises a first detection area; the first image comprises a product area corresponding to the target product;
the second processing module is used for correcting the first image based on the first edge characteristic points and the second edge characteristic points to obtain a second image, and all areas of the product area in the second image are overlapped with at least part of the areas of the first detection area;
and the third processing module is used for detecting the product area in the second image based on the first detection area.
According to the product detection device, the position of the first image is adjusted according to the difference between the partial edge features extracted from the template image and from the first image, so that the entire product region overlaps at least part of the first detection region and detection is performed on the aligned image. In the actual detection process, when the product variety changes, the image position is adjusted accordingly without changing the template, thereby realizing reuse of the template, reducing the workload and the burden of storing templates, improving product detection efficiency, making missed detections less likely, and improving product detection accuracy.
In a third aspect, the present application provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the product detection method according to the first aspect when executing the computer program.
In a fourth aspect, the present application provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a product detection method as described in the first aspect above.
In a fifth aspect, the present application provides a computer program product comprising a computer program which, when executed by a processor, implements a product detection method as described in the first aspect above.
The above technical solutions in the embodiments of the present application have at least one of the following technical effects:
the position of the first image is adjusted according to the difference between the partial edge features extracted from the template image and from the first image, so that the entire product region overlaps at least part of the first detection region and detection is performed on the aligned image. In the actual detection process, when the product variety changes, the image position is adjusted accordingly without changing the template, thereby realizing reuse of the template, reducing the workload and the burden of storing templates, improving product detection efficiency, making missed detections less likely, and improving product detection accuracy.
Further, correction parameters between the first image and the template image are obtained based on the first edge feature points and the second edge feature points, then the first image is corrected based on the correction parameters to obtain a second image, in actual application, when the product variety is switched, a new template image is not needed, and when the obtained position change between each image to be detected is smaller, correction can be directly carried out based on the correction parameters between the first image and the template image; under the condition that the position change between each image to be detected is large, correction parameters between each image to be detected and the template image can be calculated respectively so as to carry out position correction on each image to be detected, multiplexing of the template is realized, the workload is reduced, and further the efficiency of product detection is improved.
Furthermore, by acquiring the first average pixel position of the first edge feature point and the second average pixel position of the second edge feature point and then acquiring the correction parameters based on the deviation between the first average pixel position and the second average pixel position, the correction parameters can be directly acquired based on the first average pixel position and the second average pixel position under the condition that the image edges of at least part of the template image and the image edges of at least part of the first image are parallel to each other, the calculation process is simpler, the workload is reduced, and the product detection efficiency is further improved.
Still further, at least one of Sobel processing and threshold processing is performed on the image edges of at least partial areas of the template image and the first image, so that the first edge feature points corresponding to the template image and the second edge feature points corresponding to the first image are obtained, the image edge information of the template image and the first image can be accurately obtained, the first image can be detected based on the edge information, the data quantity involved in calculation is reduced, the processing time is greatly saved, and the product detection efficiency and the product detection accuracy are improved.
Additional aspects and advantages of the application will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the application.
Drawings
The foregoing and/or additional aspects and advantages of the application will become apparent and may be better understood from the following description of embodiments taken in conjunction with the accompanying drawings in which:
FIG. 1 is a schematic flow chart of a product detection method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a product detection method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a product detection method according to a second embodiment of the present application;
FIG. 4 is a schematic diagram of a third embodiment of a product detection method;
FIG. 5 is a schematic diagram of a product detection method according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a product detection device according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions of the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which are obtained by a person skilled in the art based on the embodiments of the present application, fall within the scope of protection of the present application.
The terms "first", "second" and the like in the description and in the claims are used to distinguish between similar elements and do not necessarily describe a particular sequence or chronological order. It is to be understood that the data so used may be interchanged, where appropriate, so that embodiments of the present application may be implemented in sequences other than those illustrated or described herein. Objects identified by "first", "second", etc. are generally of one type, and the number of objects is not limited; for example, the first object may be one or more. Furthermore, in the description and claims, "and/or" denotes at least one of the connected objects, and the character "/" generally means that the associated objects are in an "or" relationship.
The following describes a product detection method according to an embodiment of the present application with reference to fig. 1 to 5.
It should be noted that the execution body of the product detection method may be a server, or may be a product detection device, or may also be a terminal of a user, including, but not limited to, a mobile terminal and a non-mobile terminal.
For example, mobile terminals include, but are not limited to, cell phones, PDA smart terminals, tablet computers, vehicle-mounted smart terminals, and the like; non-mobile terminals include, but are not limited to, personal computers (PCs) and the like.
As shown in fig. 1, the product detection method includes: step 110, step 120 and step 130.
Step 110, extracting image edge characteristics of at least partial areas of a template image and a first image corresponding to the obtained target product respectively, and obtaining first edge characteristic points corresponding to the template image and second edge characteristic points corresponding to the first image, wherein the template image comprises a first detection area; the first image includes a product region corresponding to the target product.
In this step, the template image is a pre-trained image, as shown in fig. 2.
The partial region is a user-defined region; for example, it may be an ROI region, as shown by the rectangular frame labeled ROI in fig. 2. In actual execution, the ROI region may be obtained using Python, MATLAB, or the like, which is not limited by the present application.
The partial region may alternatively be another feature region; for example, it may be a corner point in the image, or a rectangle, circle, or other shape with obvious features.
It will be appreciated that the partial regions in the template image and in the first image correspond to each other.
The first detection area is a detection area set in advance in the template image, as shown by a dotted rectangular frame area in fig. 2.
The target product is the product to be measured.
The first image corresponding to the target product is an image acquired by the image sensor, as shown in fig. 3.
In the related art, during acquisition of the first image, when the product variety to be detected is switched, the first image may be offset relative to the template image because of mechanical positioning, so the same template image often cannot be used to detect the product.
The first image may include: the area corresponding to the product itself and the background image area.
The product area is the area corresponding to the product itself, as shown by the oval area in fig. 3.
The first edge feature points are used to characterize image edge information of the template image.
The second edge feature points are used to characterize image edge information of the first image.
The image edge feature extraction may be performed based on MATLAB or on a pre-trained neural network model, or may be performed in any realizable manner, which is not limited herein.
In some embodiments, step 110 may include:
and respectively performing at least one of Sobel processing and threshold processing on the image edges of at least partial areas of the template image and the first image, and acquiring a first edge characteristic point corresponding to the template image and a second edge characteristic point corresponding to the first image.
In this embodiment, Sobel processing is used to acquire edge images of the template image and the first image, respectively.
Thresholding is the process of uniformly processing the pixels that are greater than or less than a user-defined target threshold.
In the actual execution process, sobel processing can be performed on the image edges of at least partial areas of the template image and the first image so as to respectively acquire edge images of the template image and the first image, and then threshold processing is performed on the edge images of the template image and the first image respectively based on a target threshold value so as to acquire a first edge feature point corresponding to the template image and a second edge feature point corresponding to the first image.
According to the product detection method provided by the embodiment of the application, at least one of Sobel processing and threshold processing is carried out on the image edges of at least partial areas of the template image and the first image, and the first edge characteristic points corresponding to the template image and the second edge characteristic points corresponding to the first image are obtained, so that the image edge information of the template image and the first image can be accurately obtained, the first image can be detected based on the edge information, the data quantity involved in calculation is reduced, the processing time is greatly saved, and the product detection efficiency and the product detection accuracy are further improved.
In some embodiments, at least one of Sobel processing and thresholding is performed on image edges of at least partial areas of the template image and the first image, respectively, to obtain a first edge feature point corresponding to the template image and a second edge feature point corresponding to the first image may include:
respectively carrying out Sobel processing on the image edges of at least partial areas of the template image and the first image to obtain a first edge image corresponding to the template image and a second edge image corresponding to the first image;
threshold processing is carried out on the first edge image and the second edge image respectively, and a plurality of third edge characteristic points corresponding to the template image and a plurality of fourth edge characteristic points corresponding to the first image are obtained;
taking a target third edge feature point in the plurality of third edge feature points as a center, and replacing the target third edge feature point with a pixel point corresponding to the maximum gray value in the target range to obtain a first edge feature point;
and replacing the target fourth edge characteristic point by a pixel point corresponding to the maximum gray value in the target range by taking the target fourth edge characteristic point in the plurality of fourth edge characteristic points as the center, and acquiring a second edge characteristic point.
In this embodiment, the first edge image is image edge information for characterizing the template image, which is obtained after Sobel processing is performed on the template image.
The second edge image is obtained after Sobel processing is carried out on the first image and is used for representing image edge information of the first image.
The plurality of third edge feature points are obtained by thresholding the first edge image.
The plurality of fourth edge feature points are obtained by thresholding the second edge image.
The target third edge feature point may be user-defined; for example, a target pixel value of 50, 60 or 70 may be set, and the third edge feature point corresponding to that target pixel value is then selected as the target third edge feature point, which is not limited in the present application.
The target fourth edge feature point may likewise be user-defined, in the same manner as the target third edge feature point, which is not repeated here.
The pixel points in the target range are all pixel points obtained by moving a target number of pixels to the left and to the right, centered on the target third edge feature point and/or the target fourth edge feature point;
the target number may be 5, 6, 7, etc., and may be user-defined, which is not limited by the present application.
The first edge feature points are pixel points corresponding to the maximum gray value in the target range in the plurality of third edge feature points.
The second edge feature points are pixel points corresponding to the maximum gray values in the target range in the fourth edge feature points.
In the actual implementation process, a region with obvious image edges in the template image and the first image may be selected using an ROI, or other features in the template image and the first image may be used as positioning information, for example corner points, or user-defined rectangles, circles and other shapes with obvious features, which is not limited by the application; in this embodiment, a region with obvious image edges in the template image and the first image is selected using an ROI;
performing Sobel processing on the image edge of the ROI to obtain a first edge image corresponding to the template image and a second edge image corresponding to the first image;
then, threshold processing is carried out on the first edge image and the second edge image respectively, and a plurality of third edge characteristic points corresponding to the template image and a plurality of fourth edge characteristic points corresponding to the first image are obtained;
taking the acquisition mode of the first edge feature point as an example, the target pixel may be set to 50, the target number is set to 5, the edge feature point with the pixel of 50 is selected as the target third edge feature point, such as the 7 th pixel in fig. 5, then the 7 th pixel is taken as the center, the 5 pixels are moved leftwards and rightwards, the pixel point corresponding to the maximum gray value in the target range is replaced with the target third edge feature point, and the first edge feature point, that is, the 9 th pixel in fig. 5 is acquired.
The second edge feature point is obtained in the same manner as the first edge feature point, and will not be described in detail herein.
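The refinement just described, recentering a candidate point on the strongest gray value within the target range, can be sketched as follows; the 1-D row of gray values and the helper name are illustrative, with the target point at pixel 7 and reach 5 mirroring the fig. 5 example:

```python
import numpy as np

def refine_edge_point(gray_row, target_idx, reach=5):
    """Shift an edge feature point to the pixel with the largest gray value
    within `reach` pixels on either side (the 'target range' in the text),
    clamping the window at the ends of the row."""
    lo = max(0, target_idx - reach)
    hi = min(len(gray_row), target_idx + reach + 1)
    window = np.asarray(gray_row[lo:hi], dtype=float)
    return lo + int(np.argmax(window))

# Candidate point is pixel 7, but the strongest gray value within +/-5
# pixels sits at pixel 9, so the refined feature point becomes pixel 9.
row = [10, 12, 11, 13, 15, 20, 30, 50, 60, 90, 40, 25, 12, 10, 9]
refined = refine_edge_point(row, target_idx=7, reach=5)
```

The same window-and-argmax step is applied to the target fourth edge feature points to obtain the second edge feature points.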
According to the product detection method provided by the embodiment of the application, the Sobel processing and the threshold processing are carried out on the image edges of at least partial areas of the template image and the first image to obtain a plurality of edge feature points, then the pixel point corresponding to the maximum gray value in the target range is used as the first edge feature point and/or the second edge feature point, so that more real template image and image edge information corresponding to the first image can be obtained, the first image can be detected based on the edge information later, the processing area of the image is reduced, the processing time is saved, and the product detection efficiency is improved.
And 120, correcting the first image based on the first edge characteristic points and the second edge characteristic points to obtain a second image, wherein all areas of the product area in the second image are overlapped with at least part of the first detection area.
In this step, the second image is an image obtained after the correction of the first image.
The second image comprises a product area corresponding to the target product, and all areas of the product area in the second image are overlapped with at least part of the first detection area.
As shown in fig. 4, the entire area of the product area in the second image overlaps with at least a partial area of the first detection area.
In the actual execution process, the first image may be corrected based on the effective area of the template image and the positioning information of the effective area in the first image, or the first image may be corrected based on the correction parameter between the first image and the template image, or may be customized based on a user, which is not limited in the present application.
For example, in some embodiments, step 120 may include:
acquiring correction parameters between the first image and the template image based on the first edge feature points and the second edge feature points;
and correcting the first image based on the correction parameters to obtain a second image.
In this embodiment, the first edge feature points are used to characterize image edge information of the template image.
The second edge feature points are used to characterize image edge information of the first image.
The correction parameter is used for correcting the first image so that all areas of the product area in the corrected first image overlap at least part of the first detection area.
The correction parameter is determined based on the first edge feature point and the second edge feature point.
The second image is an image obtained after the first image is corrected, and the whole area of the product area in the second image is overlapped with at least part of the area of the first detection area.
According to the product detection method provided by the embodiments of the application, a correction parameter between the first image and the template image is obtained based on the first edge feature points and the second edge feature points, and the first image is then corrected based on this correction parameter to obtain the second image. In practical application, no new template image is needed when switching product varieties. When the position change between the acquired images to be detected is small, correction can be performed directly based on the correction parameter between the first image and the template image; when the position change between the images to be detected is large, a correction parameter between each image to be detected and the template image can be calculated separately so as to correct the position of each image. Template multiplexing is thus realized, the workload is reduced, and product detection efficiency is improved.
In some embodiments, acquiring correction parameters between the first image and the template image based on the first edge feature point and the second edge feature point may include:
Determining a first edge slope of the template image based on the first edge feature points; determining a second edge slope of the first image based on the second edge feature points;
determining a lateral and/or longitudinal offset corresponding to an image edge of at least a partial region between the first image and the template image based on the first edge feature point and the second edge feature point;
the correction parameters are determined based on the first edge inclination, the second edge inclination, and the lateral and/or longitudinal offset.
In this embodiment, the first edge inclination is an inclination of an image edge of the template image, and may be determined based on the first edge feature point.
The second edge inclination is an inclination of an image edge of the first image, and may be determined based on the second edge feature point.
The offset corresponding to the image edge of at least a partial region between the first image and the template image may be a lateral offset or may be a longitudinal offset.
In actual implementation, the first edge feature point may include a first starting point P_st = (x_st, y_st) and a first termination point P_ed = (x_ed, y_ed).

The first edge inclination θ_1 of the template image may be determined based on the first starting point P_st and the first termination point P_ed, which may be implemented by the following formula:

θ_1 = arctan((y_ed − y_st) / (x_ed − x_st))

wherein θ_1 is the first edge inclination, y_ed is the ordinate of the first termination point, y_st is the ordinate of the first starting point, x_ed is the abscissa of the first termination point, and x_st is the abscissa of the first starting point.
The second edge feature point may include a second starting point P_Mst = (x_Mst, y_Mst) and a second termination point P_Med = (x_Med, y_Med).

The second edge inclination θ_2 of the first image may be determined based on the second starting point P_Mst and the second termination point P_Med, which may be implemented by the following formula:

θ_2 = arctan((y_Med − y_Mst) / (x_Med − x_Mst))

wherein θ_2 is the second edge inclination, y_Med is the ordinate of the second termination point, y_Mst is the ordinate of the second starting point, x_Med is the abscissa of the second termination point, and x_Mst is the abscissa of the second starting point.
Determining, based on the first edge feature point and the second edge feature point, the lateral and/or longitudinal offset θ corresponding to the image edge of at least a partial region between the first image and the template image may be implemented by the following formula:

θ = θ_2 − θ_1 = arctan((y_Med − y_Mst) / (x_Med − x_Mst)) − arctan((y_ed − y_st) / (x_ed − x_st))

wherein θ is the lateral and/or longitudinal offset, θ_2 is the second edge inclination, θ_1 is the first edge inclination, y_Med is the ordinate of the second termination point, y_Mst is the ordinate of the second starting point, x_Med is the abscissa of the second termination point, x_Mst is the abscissa of the second starting point, y_ed is the ordinate of the first termination point, y_st is the ordinate of the first starting point, x_ed is the abscissa of the first termination point, and x_st is the abscissa of the first starting point.
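Assuming the inclinations are the arctangents of the edge slopes as reconstructed above, the two inclinations and their offset can be computed as follows; the function name and the sample coordinates are illustrative only.

```python
import math

def edge_inclination(p_start, p_end):
    """Inclination of the line through two edge feature points, in radians."""
    (x0, y0), (x1, y1) = p_start, p_end
    return math.atan2(y1 - y0, x1 - x0)

# theta_1 from the template image's starting/termination points,
# theta_2 from the first image's points, and the offset between them.
theta_1 = edge_inclination((0.0, 0.0), (100.0, 0.0))    # horizontal template edge
theta_2 = edge_inclination((0.0, 0.0), (100.0, 10.0))   # slightly tilted edge
theta = theta_2 - theta_1
```

Using `atan2` rather than a raw slope ratio avoids division by zero for vertical edges.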
The correction parameter M is then determined based on the first edge inclination, the second edge inclination, and the lateral and/or longitudinal offset.

The correction parameter M is a function of the offset θ and of the coordinates of the starting and termination points, wherein θ is the lateral and/or longitudinal offset, y_Med is the ordinate of the second termination point, y_Mst is the ordinate of the second starting point, x_Med is the abscissa of the second termination point, x_Mst is the abscissa of the second starting point, y_ed is the ordinate of the first termination point, y_st is the ordinate of the first starting point, x_ed is the abscissa of the first termination point, and x_st is the abscissa of the first starting point.
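The exact form of M is not reproduced here; a common choice for such a correction parameter is a 2×3 rotation-plus-translation affine matrix derived from the offset θ. The sketch below is purely an assumption to illustrate how such a parameter could be applied, not the application's own formula.

```python
import math

def correction_matrix(theta, tx, ty):
    """2x3 affine matrix: rotate by -theta (undo the measured tilt),
    then translate by (tx, ty)."""
    c, s = math.cos(-theta), math.sin(-theta)
    return [[c, -s, tx],
            [s,  c, ty]]

def apply_correction(M, point):
    """Apply the affine correction to a single (x, y) point."""
    x, y = point
    return (M[0][0] * x + M[0][1] * y + M[0][2],
            M[1][0] * x + M[1][1] * y + M[1][2])
```

Applying `apply_correction` to every pixel coordinate of the first image (or using a library warp routine) yields the corrected second image.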
During research and development, the inventors found that in the related art the offset is obtained from the center of a mark point on the image to be detected and a standard position, and the error of the image to be detected is then determined from that offset.
In the present application, the image edge can be used directly as a positioning feature, so that the correction parameter between the first image and the template image is calculated without any mark points on the image, and the application scenarios are therefore broader.
According to the product detection method provided by the embodiments of the application, the first edge inclination of the template image, the second edge inclination of the first image, and the lateral and/or longitudinal offset corresponding to the image edge of at least a partial region between the first image and the template image are determined based on the first edge feature points and the second edge feature points, and the correction parameter is determined based on the first edge inclination, the second edge inclination, and the offset. The image edge can thus be used as a positioning feature, the correction parameter between the first image and the template image can be calculated without mark points on the image, the application scenarios are broader, the pose information between the template image and the first image can be obtained more comprehensively, and the final detection accuracy and effect are improved.
In some embodiments, in a case where an image edge of at least a partial region of the template image and an image edge of at least a partial region of the first image are parallel to each other, acquiring a correction parameter between the first image and the template image based on the first edge feature point and the second edge feature point includes:
acquiring a first average pixel position of a first edge feature point and a second average pixel position of a second edge feature point;
A correction parameter is obtained based on a deviation between the first average pixel position and the second average pixel position.
In this embodiment, the first edge feature point may include a plurality of first feature points for characterizing edge information of the template image.
The first average pixel position is an average pixel value between the plurality of first feature points.
The second edge feature point may include a plurality of second feature points for characterizing edge information of the first image.
The second average pixel position is an average pixel value between the plurality of second feature points.
The deviation between the first average pixel position and the second average pixel position includes at least one of a lateral deviation and a longitudinal deviation.
The correction parameter between the first image and the template image may be determined based on a deviation between the first average pixel position and the second average pixel position.
In actual implementation, the first average pixel position may be denoted P_p = (x_p, y_p), and the second average pixel position may be denoted P_m = (x_m, y_m).
In the case where the image edge of at least a partial area of the template image and the image edge of at least a partial area of the first image are parallel to each other, the lateral deviation between the first average pixel position and the second average pixel position may be expressed by the following formula:

d_V = |x_m − x_p|

wherein d_V is the lateral deviation, x_m is the abscissa of the second average pixel position, and x_p is the abscissa of the first average pixel position.
The longitudinal deviation between the first average pixel position and the second average pixel position may be expressed by the following formula:

d_H = |y_m − y_p|

wherein d_H is the longitudinal deviation, y_m is the ordinate of the second average pixel position, and y_p is the ordinate of the first average pixel position.
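The parallel-edge case above can be sketched as follows, assuming the average pixel position is the plain mean of the feature-point coordinates; the function names are illustrative.

```python
import numpy as np

def average_pixel_position(points):
    """Mean (x, y) over a set of edge feature points."""
    pts = np.asarray(points, dtype=float)
    return pts.mean(axis=0)

def parallel_edge_deviation(template_points, image_points):
    """Lateral and longitudinal deviations d_V = |x_m - x_p| and
    d_H = |y_m - y_p| between the two average pixel positions."""
    x_p, y_p = average_pixel_position(template_points)
    x_m, y_m = average_pixel_position(image_points)
    return abs(x_m - x_p), abs(y_m - y_p)
```

The pair (d_V, d_H) then serves directly as the translation part of the correction parameter when the edges are parallel and no rotation is needed.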
According to the product detection method provided by the embodiments of the application, the first average pixel position of the first edge feature points and the second average pixel position of the second edge feature points are obtained, and the correction parameter is then obtained based on the deviation between these two average pixel positions. When the image edge of at least a partial area of the template image and the image edge of at least a partial area of the first image are parallel to each other, the correction parameter can thus be obtained directly from the two average pixel positions; the calculation is simpler, the workload is reduced, and product detection efficiency is further improved.
Step 130, detecting the product area in the second image based on the first detection area.
In this step, the first detection region is a detection region set in advance in the template image.
The product area is the area to be detected in the target product.
The entire area of the product area in the second image overlaps at least a portion of the area of the first detection area.
In the actual implementation process, the first detection area in the template image can be directly used for detecting the product area in the second image.
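In practice, reusing the first detection area on the corrected second image can be as simple as cropping the same ROI rectangle. The sketch below is illustrative; the defect criterion (any pixel darker than a threshold) and all coordinates are placeholder assumptions, not the application's actual detection logic.

```python
import numpy as np

def detect_in_roi(second_image, roi, dark_thresh=50):
    """Crop the first detection area (x, y, w, h) from the corrected image
    and run a placeholder defect check: any pixel darker than dark_thresh."""
    x, y, w, h = roi
    patch = second_image[y:y + h, x:x + w]
    return bool((patch < dark_thresh).any()), patch

img = np.full((100, 100), 200, dtype=np.uint8)
img[40, 40] = 10  # a dark "defect" inside the ROI
has_defect, patch = detect_in_roi(img, (30, 30, 40, 40))
```

Because the second image is already aligned with the template, the same ROI can be reused across all images to be detected without recomputing its position.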
During research and development, the inventors found that in the related art, when switching product varieties, the positions of all detection areas must be determined anew in order to re-establish the template.
In the present application, no new template image is needed when switching product varieties. When the position change between the acquired images to be detected is small, correction can be performed directly based on the correction parameter between the first image and the template image; when the position change between the images to be detected is large, a correction parameter between each image to be detected and the template image can be calculated separately so as to correct the position of each image. The product area can thus be detected normally despite position changes of the first image, template multiplexing is realized, the workload and the burden of storing templates are reduced, product detection efficiency is improved, missed detection is less likely, and product detection accuracy is improved.
According to the product detection method provided by the embodiments of the application, the position of the first image is adjusted according to the difference between edge features extracted from the template image and from part of the first image, until the entire product area overlaps at least a partial area of the first detection area, and detection is then performed on the overlapped images. In the actual detection process, when the product variety changes, the position of the image is adjusted accordingly without changing the template. Template multiplexing is thus realized, the workload and the burden of storing templates are reduced, product detection efficiency is improved, missed detection is less likely, and product detection accuracy is improved.
The product detection device provided by the application is described below, and the product detection device described below and the product detection method described above can be referred to correspondingly.
The product detection method provided by the embodiments of the application may be executed by a product detection device. In the embodiments of the application, the product detection device provided by the embodiments of the application is described by taking a product detection device executing the product detection method as an example.
The embodiment of the application also provides a product detection device.
As shown in fig. 6, the product detection apparatus includes: a first processing module 610, a second processing module 620, and a third processing module 630.
The first processing module 610 is configured to extract image edge features of at least a part of areas of a template image and a first image corresponding to the obtained target product, obtain a first edge feature point corresponding to the template image and a second edge feature point corresponding to the first image, where the template image includes a first detection area; the first image comprises a product area corresponding to the target product;
a second processing module 620, configured to correct the first image based on the first edge feature points and the second edge feature points to obtain a second image, where the entire product area in the second image overlaps at least a partial area of the first detection area;
the third processing module 630 is configured to detect a product area in the second image based on the first detection area.
According to the product detection device provided by the embodiments of the application, the position of the first image is adjusted according to the difference between edge features extracted from the template image and from part of the first image, until the entire product area overlaps at least a partial area of the first detection area, and detection is then performed on the overlapped images. In the actual detection process, when the product variety changes, the position of the image is adjusted accordingly without changing the template. Template multiplexing is thus realized, the workload and the burden of storing templates are reduced, product detection efficiency is improved, missed detection is less likely, and product detection accuracy is improved.
In some embodiments, the second processing module 620 may be further configured to obtain a correction parameter between the first image and the template image based on the first edge feature point and the second edge feature point;
and correcting the first image based on the correction parameters to obtain a second image.
According to the product detection device provided by the embodiments of the application, a correction parameter between the first image and the template image is obtained based on the first edge feature points and the second edge feature points, and the first image is then corrected based on this correction parameter to obtain the second image. In practical application, no new template image is needed when switching product varieties. When the position change between the acquired images to be detected is small, correction can be performed directly based on the correction parameter between the first image and the template image; when the position change between the images to be detected is large, a correction parameter between each image to be detected and the template image can be calculated separately so as to correct the position of each image. Template multiplexing is thus realized, the workload is reduced, and product detection efficiency is improved.
In some embodiments, the apparatus may further include a fourth processing module for determining a first edge slope of the template image based on the first edge feature points; determining a second edge slope of the first image based on the second edge feature points;
Determining a lateral and/or longitudinal offset corresponding to an image edge of at least a partial region between the first image and the template image based on the first edge feature point and the second edge feature point;
the correction parameters are determined based on the first edge inclination, the second edge inclination, and the lateral and/or longitudinal offset.
According to the product detection device provided by the embodiments of the application, the first edge inclination of the template image, the second edge inclination of the first image, and the lateral and/or longitudinal offset corresponding to the image edge of at least a partial region between the first image and the template image are determined based on the first edge feature points and the second edge feature points, and the correction parameter is determined based on the first edge inclination, the second edge inclination, and the offset. The image edge can thus be used as a positioning feature, the correction parameter between the first image and the template image can be calculated without mark points on the image, the application scenarios are broader, the pose information between the template image and the first image can be obtained more comprehensively, and the final detection accuracy and effect are improved.
In some embodiments, the apparatus may further include a fifth processing module configured to obtain a first average pixel position of the first edge feature point and a second average pixel position of the second edge feature point;
A correction parameter is obtained based on a deviation between the first average pixel position and the second average pixel position.
According to the product detection device provided by the embodiments of the application, the first average pixel position of the first edge feature points and the second average pixel position of the second edge feature points are obtained, and the correction parameter is then obtained based on the deviation between these two average pixel positions. When the image edge of at least a partial area of the template image and the image edge of at least a partial area of the first image are parallel to each other, the correction parameter can thus be obtained directly from the two average pixel positions; the calculation is simpler, the workload is reduced, and product detection efficiency is further improved.
In some embodiments, the first processing module 610 may be further configured to perform at least one of Sobel processing and thresholding on the image edges of at least a partial area of the template image and the first image, to obtain a first edge feature point corresponding to the template image and a second edge feature point corresponding to the first image.
According to the product detection device provided by the embodiments of the application, at least one of Sobel processing and threshold processing is performed on the image edges of at least partial areas of the template image and the first image to obtain the first edge feature points corresponding to the template image and the second edge feature points corresponding to the first image. The edge information of the template image and the first image can thus be obtained accurately, the first image can subsequently be detected based on that edge information, the amount of data involved in the calculation is reduced, processing time is greatly saved, and product detection efficiency and accuracy are further improved.
In some embodiments, the apparatus may further include a sixth processing module, configured to perform Sobel processing on image edges of at least partial areas of the template image and the first image, respectively, to obtain a first edge image corresponding to the template image, and a second edge image corresponding to the first image;
threshold processing is carried out on the first edge image and the second edge image respectively, and a plurality of third edge characteristic points corresponding to the template image and a plurality of fourth edge characteristic points corresponding to the first image are obtained;
taking a target third edge feature point in the plurality of third edge feature points as a center, and replacing the target third edge feature point with a pixel point corresponding to the maximum gray value in the target range to obtain a first edge feature point;
and replacing the target fourth edge characteristic point by a pixel point corresponding to the maximum gray value in the target range by taking the target fourth edge characteristic point in the plurality of fourth edge characteristic points as the center, and acquiring a second edge characteristic point.
According to the product detection device provided by the embodiments of the application, Sobel processing and threshold processing are performed on the image edges of at least partial areas of the template image and the first image to obtain a plurality of edge feature points, and the pixel point with the maximum gray value within the target range is then taken as the first edge feature point and/or the second edge feature point. More faithful edge information for the template image and the first image can thus be obtained, and the first image can subsequently be detected based on that edge information. Meanwhile, a local image obtained from the ROI area replaces the edge information of the whole image, so the processed image area is reduced, processing time is saved, and product detection efficiency is improved.
The product detection device in the embodiments of the application may be an electronic device, or a component in an electronic device such as an integrated circuit or a chip. The electronic device may be a terminal or another device. By way of example, the electronic device may be a mobile phone, tablet computer, notebook computer, palm computer, vehicle-mounted electronic device, mobile internet device (Mobile Internet Device, MID), augmented reality (augmented reality, AR)/virtual reality (virtual reality, VR) device, robot, wearable device, ultra-mobile personal computer (ultra-mobile personal computer, UMPC), netbook, or personal digital assistant (personal digital assistant, PDA), or alternatively a server, network attached storage (Network Attached Storage, NAS), personal computer (personal computer, PC), television (television, TV), teller machine, or self-service machine; the embodiments of the present application are not specifically limited in this respect.
The product detection device in the embodiment of the application can be a device with an operating system. The operating system may be an Android operating system, an IOS operating system, or other possible operating systems, and the embodiment of the present application is not limited specifically.
The product detection device provided by the embodiment of the present application can implement each process implemented by the embodiments of the methods of fig. 1 to 5, and in order to avoid repetition, a detailed description is omitted here.
In some embodiments, as shown in fig. 7, an electronic device 700 is further provided in the embodiments of the present application, which includes a processor 701, a memory 702, and a computer program stored in the memory 702 and capable of running on the processor 701, where the program when executed by the processor 701 implements the respective processes of the above product detection method embodiments, and the same technical effects are achieved, and for avoiding repetition, a detailed description is omitted herein.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device.
In another aspect, the present application further provides a computer program product. The computer program product includes a computer program stored on a non-transitory computer-readable storage medium, and the computer program includes program instructions which, when executed by a computer, carry out each process of the above product detection method embodiments and achieve the same technical effects; to avoid repetition, details are not repeated here.
In yet another aspect, the present application further provides a non-transitory computer readable storage medium, on which a computer program is stored, where the computer program is implemented when executed by a processor to perform the processes of the above product detection method embodiment, and the same technical effects can be achieved, and for avoiding repetition, a description is omitted herein.
In still another aspect, an embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction, to implement each process of the above product detection method embodiment, and achieve the same technical effect, and to avoid repetition, details are not repeated herein.
It should be understood that the chips referred to in the embodiments of the present application may also be referred to as system-on-chip chips, chip systems, or system-on-chip chips, etc.
The apparatus embodiments described above are merely illustrative. The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement this without creative effort.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus necessary general hardware platforms, or of course may be implemented by means of hardware. Based on this understanding, the foregoing technical solution may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the respective embodiments or some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present application, and are not limiting; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. A method of detecting a product, comprising:
extracting image edge characteristics of at least partial areas of a template image and a first image corresponding to an obtained target product respectively, and obtaining first edge characteristic points corresponding to the template image and second edge characteristic points corresponding to the first image, wherein the template image comprises a first detection area; the first image comprises a product area corresponding to the target product;
correcting the first image based on the first edge characteristic points and the second edge characteristic points to obtain a second image, wherein all areas of the product area in the second image are overlapped with at least part of the first detection area;
the product area in the second image is detected based on the first detection area.
2. The product detection method of claim 1, wherein the correcting the first image based on the first edge feature point and the second edge feature point, obtaining a second image, comprises:
acquiring correction parameters between the first image and the template image based on the first edge feature points and the second edge feature points;
And correcting the first image based on the correction parameters to obtain a second image.
3. The product detection method as claimed in claim 2, wherein the acquiring correction parameters between the first image and the template image based on the first edge feature point and the second edge feature point includes:
determining a first edge slope of the template image based on the first edge feature points; determining a second edge slope of the first image based on the second edge feature points;
determining a lateral and/or longitudinal offset corresponding to an image edge of the at least partial region between the first image and the template image based on the first edge feature point and the second edge feature point;
the correction parameter is determined based on the first edge inclination, the second edge inclination and the lateral and/or longitudinal offset.
4. The product detection method according to claim 2, wherein, in a case where the image edge of the at least partial region of the template image and the image edge of the at least partial region of the first image are parallel to each other, the acquiring correction parameters between the first image and the template image based on the first edge feature point and the second edge feature point includes:
Acquiring a first average pixel position of the first edge feature point and a second average pixel position of the second edge feature point;
the correction parameter is acquired based on a deviation between the first average pixel position and the second average pixel position.
5. The method for detecting a product according to any one of claims 1 to 4, wherein the extracting the image edge feature of at least a part of the region of the template image and the first image corresponding to the obtained target product respectively, and obtaining the first edge feature point corresponding to the template image and the second edge feature point corresponding to the first image, includes:
and respectively performing at least one of Sobel processing and threshold processing on the image edges of the at least partial areas of the template image and the first image, and acquiring a first edge characteristic point corresponding to the template image and a second edge characteristic point corresponding to the first image.
6. The product detection method according to claim 5, wherein the performing at least one of Sobel processing and threshold processing on the image edges of the at least partial regions of the template image and the first image, respectively, to obtain the first edge feature point corresponding to the template image and the second edge feature point corresponding to the first image, comprises:
performing Sobel processing on the image edges of the at least partial regions of the template image and the first image, respectively, to obtain a first edge image corresponding to the template image and a second edge image corresponding to the first image;
performing threshold processing on the first edge image and the second edge image, respectively, to obtain a plurality of third edge feature points corresponding to the template image and a plurality of fourth edge feature points corresponding to the first image;
taking a target third edge feature point among the plurality of third edge feature points as a center, and replacing the target third edge feature point with the pixel point having the maximum gray value within a target range, to obtain the first edge feature point; and
taking a target fourth edge feature point among the plurality of fourth edge feature points as a center, and replacing the target fourth edge feature point with the pixel point having the maximum gray value within the target range, to obtain the second edge feature point.
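Claims 5 and 6 describe a Sobel-plus-threshold extraction followed by a local refinement that snaps each candidate point to the strongest pixel in its neighborhood (the "target range"). A plain-NumPy sketch under those assumptions; the kernel, threshold, and neighborhood radius are illustrative, not values from the patent:

```python
import numpy as np

def sobel_magnitude(img):
    """Sobel processing: gradient magnitude of a grayscale image
    (borders are left at zero for simplicity)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    out = np.zeros((h, w), dtype=float)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            out[y, x] = np.hypot(np.sum(patch * kx), np.sum(patch * ky))
    return out

def edge_feature_points(img, thresh, radius=1):
    """Threshold the edge image to get candidate (third/fourth) points, then
    replace each candidate with the pixel of maximum gray value within a
    (2*radius+1)^2 target range centered on it."""
    edge = sobel_magnitude(img)
    ys, xs = np.nonzero(edge > thresh)
    refined = set()
    for y, x in zip(ys, xs):
        y0, x0 = max(0, y - radius), max(0, x - radius)
        win = edge[y0:y + radius + 1, x0:x + radius + 1]
        dy, dx = np.unravel_index(np.argmax(win), win.shape)
        refined.add((y0 + dy, x0 + dx))
    return sorted(refined)
```

The refinement de-duplicates neighboring candidates onto local gradient maxima, which thins the thresholded edge response.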
7. A product detection device, comprising:
a first processing module, configured to extract image edge features of at least partial regions of a template image and of a first image corresponding to an obtained target product, respectively, and obtain first edge feature points corresponding to the template image and second edge feature points corresponding to the first image, wherein the template image comprises a first detection region, and the first image comprises a product region corresponding to the target product;
a second processing module, configured to correct the first image based on the first edge feature points and the second edge feature points to obtain a second image, wherein the entire product region in the second image overlaps at least part of the first detection region; and
a third processing module, configured to detect the product region in the second image based on the first detection region.
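The three modules of claim 7 might compose along the lines below. Everything here (the class name, the gradient-based edge extractor, the pure-translation correction, and the mean-difference detection) is a hypothetical stand-in chosen to show how the modules chain together, not the claimed implementation:

```python
import numpy as np

class ProductDetectionDevice:
    """Sketch of the three-module device: extract edge points, correct the
    first image against the template, then detect within the region."""

    def __init__(self, template, detection_region):
        self.template = template                  # template image (2D array)
        self.detection_region = detection_region  # (y0, y1, x0, x1) bounds

    def extract_edge_points(self, img, thresh=50):
        """First module: edge feature points via gradient magnitude."""
        gy, gx = np.gradient(img.astype(float))
        return np.argwhere(np.hypot(gx, gy) > thresh)   # (row, col) pairs

    def correct(self, first_image):
        """Second module: shift the first image so its edge points line up
        with the template's (pure-translation correction)."""
        p1 = self.extract_edge_points(self.template)
        p2 = self.extract_edge_points(first_image)
        dy, dx = np.round(p1.mean(axis=0) - p2.mean(axis=0)).astype(int)
        return np.roll(np.roll(first_image, dy, axis=0), dx, axis=1)

    def detect(self, first_image):
        """Third module: compare the corrected product region against the
        template's first detection region."""
        second_image = self.correct(first_image)
        y0, y1, x0, x1 = self.detection_region
        diff = np.abs(second_image[y0:y1, x0:x1].astype(float)
                      - self.template[y0:y1, x0:x1])
        return diff.mean()  # small value -> region matches the template
```

A shifted copy of the template should come back aligned, giving a near-zero detection score.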
8. An electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the product detection method according to any one of claims 1 to 6.
9. A non-transitory computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the product detection method according to any one of claims 1 to 6.
10. A computer program product, comprising a computer program which, when executed by a processor, implements the product detection method according to any one of claims 1 to 6.
CN202310269427.0A 2023-03-15 2023-03-15 Product detection method and device Pending CN116579978A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310269427.0A CN116579978A (en) 2023-03-15 2023-03-15 Product detection method and device


Publications (1)

Publication Number Publication Date
CN116579978A 2023-08-11

Family

ID=87538393

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310269427.0A Pending CN116579978A (en) 2023-03-15 2023-03-15 Product detection method and device

Country Status (1)

Country Link
CN (1) CN116579978A (en)

Similar Documents

Publication Publication Date Title
CN108122208B (en) Image processing apparatus and method for foreground mask correction for object segmentation
US20110211233A1 (en) Image processing device, image processing method and computer program
US10853927B2 (en) Image fusion architecture
US11138709B2 (en) Image fusion processing module
CN111738321A (en) Data processing method, device, terminal equipment and storage medium
CN111325798B (en) Camera model correction method, device, AR implementation equipment and readable storage medium
CN111083456A (en) Projection correction method, projection correction device, projector and readable storage medium
CN110969046B (en) Face recognition method, face recognition device and computer-readable storage medium
CN111368587A (en) Scene detection method and device, terminal equipment and computer readable storage medium
CN111028276A (en) Image alignment method and device, storage medium and electronic equipment
CN116168082A (en) Positioning method and positioning device for tab
CN107357422B (en) Camera-projection interactive touch control method, device and computer readable storage medium
CN105049706A (en) Image processing method and terminal
CN113838151B (en) Camera calibration method, device, equipment and medium
CN114494058A (en) Image processing method, image processing device, electronic equipment and storage medium
CN116579978A (en) Product detection method and device
CN111080683A (en) Image processing method, image processing device, storage medium and electronic equipment
CN111127529B (en) Image registration method and device, storage medium and electronic device
CN110874814A (en) Image processing method, image processing device and terminal equipment
CN107103321B (en) The generation method and generation system of road binary image
CN111598943B (en) Book in-place detection method, device and equipment based on book auxiliary reading equipment
CN114418848A (en) Video processing method and device, storage medium and electronic equipment
CN114596210A (en) Noise estimation method, device, terminal equipment and computer readable storage medium
CN106934812A (en) Image-signal processor and its image-signal processing method
CN112733667A (en) Face alignment method and device based on face recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination