CN114331951A - Image detection method, image detection device, computer, readable storage medium, and program product - Google Patents


Info

Publication number
CN114331951A
CN114331951A
Authority
CN
China
Prior art keywords
pixel
image
edge
matching
detected
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111166764.4A
Other languages
Chinese (zh)
Inventor
孙婷
刘永
吴凯
汪铖杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202111166764.4A
Publication of CN114331951A
Legal status: Pending

Abstract

The embodiment of the application discloses an image detection method, an image detection device, a computer, a readable storage medium, and a program product. The method includes: acquiring image gradient information of an image to be detected, and performing mask filtering processing on the image to be detected according to the image gradient information to obtain a mask image; performing pixel matching identification processing on the mask image to obtain a first matching pixel and a second matching pixel, acquiring first position information of the first matching pixel in the mask image and second position information of the second matching pixel in the mask image, and determining a position difference value between the first matching pixel and the second matching pixel according to the first position information and the second position information; and identifying the object edge line of the target object from the mask image, and determining the target object as a standard object if the position difference value and the object edge line satisfy the standard object condition. By means of the method and the device, image detection efficiency and accuracy can be improved.

Description

Image detection method, image detection device, computer, readable storage medium, and program product
Technical Field
The present application relates to the field of computer technologies, and in particular, to an image detection method, an image detection apparatus, a computer, a readable storage medium, and a program product.
Background
Metal Injection Molding (MIM) is a relatively new method for manufacturing metal parts; it is a composite technology combining conventional plastic injection molding with metal powder metallurgy. Parts that are difficult to machine, such as fine, precision, complex-shaped, and three-dimensional parts, can be manufactured by injection molding using a mold, so the use of MIM is increasing; MIM products cover many industries, including the transportation industry, the medical industry, and 3C electronics (Computer, Communication, and Consumer electronics). Detection and identification of deformation defects in MIM products is therefore extremely important. At present, a Convolutional Neural Network (CNN) is generally used to detect an MIM product and obtain a deformation defect detection result. However, when the amount of deformation defect data is small, the CNN is prone to over-fitting and lacks good generalization and recognition capability, resulting in low image detection efficiency and accuracy.
Disclosure of Invention
The embodiment of the application provides an image detection method, an image detection device, a computer, a readable storage medium and a program product, which can improve the accuracy and the detection efficiency of image detection.
An embodiment of the present application provides an image detection method, which includes:
performing edge difference processing on an image to be detected to obtain image gradient information of the image to be detected, and performing mask filtering processing on the image to be detected according to the image gradient information to obtain a mask image;
performing pixel matching identification processing on the mask image to obtain a first matching pixel and a second matching pixel; the pixel value of the first matching pixel point and the pixel value of the second matching pixel point both belong to the edge pixel range, and the distance between the first matching pixel point and the second matching pixel point is greater than the object scale threshold;
pixel point position extraction is carried out on the mask image to obtain first position information of a first matching pixel point in the mask image and second position information of a second matching pixel point in the mask image, pixel position matching is carried out according to the first position information and the second position information, and a position difference value between the first matching pixel point and the second matching pixel point is determined;
and performing edge recognition on the target object from the mask image to obtain an object edge line of the target object, and determining the target object as a standard object if the position difference value and the object edge line meet the standard object condition.
An aspect of an embodiment of the present application provides an image detection apparatus, including:
the gradient acquisition module is used for carrying out edge difference processing on the image to be detected to obtain image gradient information of the image to be detected;
the image filtering module is used for performing mask filtering processing on the image to be detected according to the image gradient information to obtain a mask image;
the position acquisition module is used for carrying out pixel point matching identification processing on the mask image to obtain a first matching pixel point and a second matching pixel point; the pixel value of the first matching pixel point and the pixel value of the second matching pixel point both belong to the edge pixel range, and the distance between the first matching pixel point and the second matching pixel point is greater than the object scale threshold;
the position acquisition module is also used for extracting the position of a pixel point of the mask image to obtain first position information of a first matching pixel point in the mask image and second position information of a second matching pixel point in the mask image;
the difference acquisition module is used for carrying out pixel position matching according to the first position information and the second position information and determining a position difference value between the first matching pixel point and the second matching pixel point;
the edge line acquisition module is used for carrying out edge recognition on the target object from the mask image to obtain an object edge line of the target object;
and the standard detection module is used for determining the target object as the standard object if the position difference value and the object edge line meet the standard object condition.
Wherein, the gradient acquisition module includes:
the image denoising unit is used for carrying out filtering denoising processing on the image to be detected to obtain a denoised image;
the amplitude acquisition unit is used for acquiring an edge difference operator, and performing edge difference processing on the de-noised image by adopting the edge difference operator to obtain a pixel gradient amplitude of the de-noised image;
and the gradient generating unit is used for forming the image gradient information of the image to be detected by the pixel gradient amplitude.
Wherein, the image denoising unit includes:
the pixel convolution subunit is used for acquiring a convolution kernel template, acquiring an image pixel value of the image to be detected, and performing pixel convolution processing on the convolution kernel template and the image pixel value to obtain a de-noised pixel value;
and the denoising generating subunit is used for forming the denoised pixel values into a denoised image.
The edge difference operator comprises a longitudinal difference operator and a transverse difference operator;
the amplitude acquisition unit includes:
the operator acquiring subunit is used for acquiring a longitudinal difference operator and a transverse difference operator;
the difference processing subunit is used for carrying out edge difference processing on the denoised image by adopting a longitudinal difference operator to obtain a longitudinal gradient amplitude value, and carrying out edge difference processing on the denoised image by adopting a transverse difference operator to obtain a transverse gradient amplitude value;
and the gradient fusion subunit is used for performing gradient fusion processing on the longitudinal gradient amplitude and the transverse gradient amplitude to obtain a pixel gradient amplitude of the de-noised image.
Wherein, the image filtering module includes:
the boundary filtering unit is used for carrying out boundary filtering processing on the image to be detected according to the image gradient information to obtain a filtered image;
the binary conversion unit is used for carrying out edge binary conversion processing on the filtered image to obtain a binary image;
the region identification unit is used for identifying an object region where a target object is located in the binary image;
and the pixel filtering unit is used for carrying out pixel filtering on the binary image based on the object region to obtain a mask image.
Wherein, the boundary filtering unit includes:
the amplitude obtaining subunit is used for obtaining pixel gradient amplitudes of N to-be-detected pixel points forming the to-be-detected image from the image gradient information; n is a positive integer;
the pixel updating subunit is used for acquiring the pixel gradient amplitude of a neighborhood pixel point of the ith pixel point to be detected, and updating the image pixel value of the ith pixel point to be detected into an invalid pixel value if the pixel gradient amplitude of the ith pixel point to be detected is smaller than the pixel gradient amplitude of the neighborhood pixel point; i is a positive integer, i is less than or equal to N;
and the filtering determining subunit is used for determining the updated image to be detected as a filtering image.
Wherein, the binary conversion unit includes:
the pixel acquisition subunit is used for acquiring filtering pixel values respectively corresponding to N filtering pixel points included in the filtering image; n is a positive integer;
the first updating subunit is used for updating the filtering pixel value of the filtering pixel point with the filtering pixel value smaller than the first pixel threshold value to a first default pixel value; updating the filtering pixel value of the filtering pixel point with the filtering pixel value larger than the second pixel threshold value to be a second default pixel value;
the second updating subunit is used for marking the filtering pixel point with the filtering pixel value greater than or equal to the first pixel threshold value and less than or equal to the second pixel threshold value as an intermediate pixel point, acquiring a connected neighborhood pixel point of the intermediate pixel point, and updating the pixel value of the intermediate pixel point according to the connected neighborhood pixel point; the connected neighborhood pixel points refer to pixel points adjacent to the intermediate pixel points;
and the binary determining subunit is used for determining the updated filtering image as a binary image.
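For illustration, the double-threshold conversion with connected-neighborhood rescue described above resembles the hysteresis step of Canny edge detection. A minimal Python sketch, assuming 0 and 255 as the first and second default pixel values; all names and the 3 × 3 connectivity are illustrative, not fixed by the patent:

```python
import numpy as np

def edge_binarize(filtered: np.ndarray, t_low: float, t_high: float) -> np.ndarray:
    out = np.zeros(filtered.shape, dtype=np.uint8)     # first default pixel value: 0
    strong = filtered > t_high
    out[strong] = 255                                  # second default pixel value: 255
    weak = (filtered >= t_low) & (filtered <= t_high)  # intermediate pixel points
    h, w = filtered.shape
    for y, x in zip(*np.nonzero(weak)):
        y0, y1 = max(y - 1, 0), min(y + 2, h)
        x0, x1 = max(x - 1, 0), min(x + 2, w)
        # keep an intermediate pixel only if a connected neighbour is a strong edge
        if strong[y0:y1, x0:x1].any():
            out[y, x] = 255
    return out
```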
Wherein, the region identification unit includes:
the transverse projection subunit is used for performing row-by-row pixel projection on the binary image to obtain a row projection value corresponding to each image row, determining the image row with the maximum row projection value as a reference row, and performing row-number expansion on the reference row to obtain a transverse area;
the longitudinal projection subunit is used for performing column-by-column pixel projection on the binary image to obtain a column projection value corresponding to each image column, determining the image column with the maximum column projection value as a reference column, and performing column-number expansion on the reference column to obtain a longitudinal area;
an image dividing sub-unit for dividing the binary image into M sub-regions based on the horizontal region and the vertical region;
the area determining subunit is used for acquiring the distribution density of area pixel points corresponding to the second default pixel value in each sub-area, and determining a sub-area whose distribution density is greater than or equal to the object distribution threshold as the object area where the target object is located; M is a positive integer, and the second default pixel value refers to a pixel value greater than the second pixel threshold.
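For illustration, a minimal Python sketch of the projection-based localization, simplified to a single candidate region rather than M sub-regions; the foreground value 255, the expansion margin, and the density threshold are assumptions:

```python
import numpy as np

def locate_object_region(binary: np.ndarray, expand: int = 20,
                         density_threshold: float = 0.3):
    fg = (binary == 255)
    ref_row = int(fg.sum(axis=1).argmax())   # image row with maximum row projection
    ref_col = int(fg.sum(axis=0).argmax())   # image column with maximum column projection
    h, w = binary.shape
    rows = slice(max(ref_row - expand, 0), min(ref_row + expand + 1, h))  # transverse area
    cols = slice(max(ref_col - expand, 0), min(ref_col + expand + 1, w))  # longitudinal area
    # keep the candidate region only if its foreground density clears the threshold
    return (rows, cols) if fg[rows, cols].mean() >= density_threshold else None
```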
Wherein, the region identification unit includes:
and the candidate detection subunit is used for identifying the target object in the image to be detected by adopting the object detection model, obtaining an object candidate frame corresponding to the target object, and determining the object area where the target object is located in the binary image based on the object candidate frame.
The pixel filtering unit is specifically configured to:
updating pixel values of pixel points located in a background area in the binary image to be first default pixel values, and determining the updated binary image as a mask image; the background region refers to a region other than the object region in the binary image.
Wherein, the position acquisition module includes:
the edge point acquisition unit is used for acquiring at least two edge pixel points of which the pixel values belong to an edge pixel range in the mask image;
the matching point obtaining unit is used for carrying out coordinate screening processing on at least two edge pixel points and determining a first matching pixel point and a second matching pixel point in the at least two edge pixel points; the first matching pixel points refer to the pixel points with the minimum longitudinal coordinate corresponding to the mask image in the at least two edge pixel points, and the second matching pixel points refer to the pixel points with the maximum longitudinal coordinate corresponding to the mask image in the at least two edge pixel points;
the first position determining unit is used for sequentially acquiring K first extended pixel points along a first direction by taking the first matched pixel points as starting points, respectively performing transverse coordinate matching processing on first pixel columns where the K first extended pixel points are located, and determining first position information of the first matched pixel points in the mask image;
the second position determining unit is used for sequentially acquiring K second extended pixel points along a second direction by taking the second matched pixel points as starting points, respectively performing transverse coordinate matching processing on second pixel columns where the K second extended pixel points are located, and determining second position information of the second matched pixel points in the mask image; the first direction and the second direction are opposite directions, and K is a positive integer.
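For illustration, a minimal Python sketch of selecting the two matching pixel points as the edge pixels with minimum and maximum longitudinal coordinate; the K-extended-pixel refinement along the first and second directions is omitted, and the edge pixel value 255 is an assumption:

```python
import numpy as np

def matching_pixel_points(mask: np.ndarray):
    ys, xs = np.nonzero(mask == 255)                 # edge pixel coordinates
    if ys.size == 0:
        return None, None
    first = (int(xs[ys.argmin()]), int(ys.min()))    # minimum longitudinal coordinate
    second = (int(xs[ys.argmax()]), int(ys.max()))   # maximum longitudinal coordinate
    return first, second
```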
Wherein, the edge line acquisition module includes:
the edge identification unit is used for acquiring at least two edge pixel points of which the pixel values belong to an edge pixel range in the mask image;
the target acquisition unit is used for performing pixel distribution analysis on at least two edge pixel points to obtain third pixel rows corresponding to the at least two edge pixel points respectively, and performing edge detection on each third pixel row to obtain a target edge point corresponding to each third pixel row; the target edge point is an edge pixel point with the minimum transverse coordinate in the third pixel row;
the detection determining unit is used for performing straight line detection on the target edge points, determining detected edge points from the target edge points, and forming the detected edge points into the object edge line of the target object; the slope of the object edge line is less than or equal to an edge slope threshold.
Wherein, the detection determining unit includes:
the curve determining subunit is used for acquiring a polar coordinate curve corresponding to each target edge point and determining a target polar coordinate curve from the polar coordinate curves; the target polar coordinate curve has intersection points with d other polar coordinate curves, where d is a positive integer greater than or equal to the straight line determination threshold;
the reference determining subunit is used for determining an edge reference line based on the target polar coordinate curve and the target edge point corresponding to the target polar coordinate curve;
and the edge connecting subunit is used for determining the target edge points on the edge reference line as the detected edge points, and connecting the detected edge points to obtain the object edge lines of the target object.
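The polar-coordinate curve voting described above corresponds to the classical Hough transform, so for illustration a sketch can delegate to OpenCV's implementation; treating the vote threshold as the straight line determination threshold d is an assumption of this sketch, and the mask is assumed to be a binary 8-bit image:

```python
import cv2
import numpy as np

def edge_reference_lines(mask: np.ndarray, vote_threshold: int = 50):
    # each edge point contributes one polar-coordinate curve; an accumulator cell
    # reaching vote_threshold marks curves intersecting at one (rho, theta)
    lines = cv2.HoughLines(mask, 1, np.pi / 180, vote_threshold)
    return [] if lines is None else [tuple(l[0]) for l in lines]  # (rho, theta) pairs
```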
Wherein, the detection determining unit includes:
a slope obtaining subunit, configured to perform edge point combination on the target edge points, obtain adjacent edge point pairs among the target edge points, calculate the edge slope corresponding to each adjacent edge point pair, and determine the detected edge points based on the edge slopes; the edge slopes corresponding to adjacent edge point pairs formed by detected edge points are smaller than the edge slope threshold;
and the line generation subunit is used for forming the detected edge points into object edge lines of the target object.
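For illustration, a minimal Python sketch of this slope-based variant; sorting by transverse coordinate and the threshold value are assumptions:

```python
def detected_edge_points(points, slope_threshold: float = 0.05):
    """points: (x, y) target edge points; keep points whose adjacent-pair slope
    stays under the edge slope threshold."""
    pts = sorted(points)                      # sort by transverse coordinate
    kept = set()
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        slope = abs(y1 - y0) / (x1 - x0) if x1 != x0 else float("inf")
        if slope < slope_threshold:
            kept.update([(x0, y0), (x1, y1)])
    return sorted(kept)
```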
Wherein the standard object condition comprises a position difference threshold and a standard object length threshold; the device also includes:
a standard determination module, configured to determine that the position difference value and the object edge line satisfy a standard object condition if the position difference value is less than or equal to a position difference threshold and the length of the object edge line is greater than or equal to a standard object length threshold;
and the abnormity determining module is used for determining that the position difference value and the object edge line do not meet the standard object condition if the position difference value is greater than the position difference threshold value or the length of the object edge line is less than the standard object length threshold value.
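For illustration, the two-threshold decision above reduces to a single boolean check; a minimal sketch with assumed parameter names:

```python
def is_standard_object(position_diff: float, edge_line_length: float,
                       diff_threshold: float, length_threshold: float) -> bool:
    # standard object: reference points level within tolerance AND edge line long enough
    return position_diff <= diff_threshold and edge_line_length >= length_threshold
```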
Wherein the standard object condition comprises an object anomaly threshold; the device also includes:
the quality acquisition module is used for generating a detection function according to a first detection parameter and a second detection parameter, determining the value of the first detection parameter in the detection function as the position difference value, and determining the value of the second detection parameter in the detection function as the length of the object edge line, to obtain the object detection quality corresponding to the detection function;
the anomaly determination module is used for determining that the position difference value and the object edge line do not meet the standard object condition if the object detection quality is greater than the object anomaly threshold;
the standard determination module is used for determining that the position difference value and the object edge line meet the standard object condition if the object detection quality is less than or equal to the object abnormal threshold.
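For illustration, the detection-function variant can be sketched as a scalar score combining the two indexes; the weighted linear form below is purely an assumption, since the patent does not fix the functional form:

```python
def object_detection_quality(position_diff: float, edge_line_length: float,
                             w_diff: float = 1.0, w_len: float = 0.1) -> float:
    # a larger position difference raises the score and a longer edge line lowers it,
    # so scores above the object anomaly threshold flag a non-standard object
    return w_diff * position_diff - w_len * edge_line_length
```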
One aspect of the embodiments of the present application provides a computer device, including a processor, a memory, and an input/output interface;
the processor is respectively connected with the memory and the input/output interface, wherein the input/output interface is used for receiving data and outputting data, the memory is used for storing a computer program, and the processor is used for calling the computer program so as to enable the computer device comprising the processor to execute the image detection method in one aspect of the embodiment of the application.
An aspect of the embodiments of the present application provides a computer-readable storage medium, which stores a computer program, where the computer program is adapted to be loaded and executed by a processor, so that a computer device having the processor executes an image detection method in an aspect of the embodiments of the present application.
An aspect of an embodiment of the present application provides a computer program product or a computer program, which includes computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the method provided in the various alternatives in one aspect of the embodiments of the application. In other words, the computer instructions, when executed by a processor, implement the methods provided in the various alternatives in one aspect of the embodiments of the present application.
The embodiment of the application has the following beneficial effects:
in the embodiment of the application, the image to be detected of the target object is detected: the image gradient information of the image to be detected is obtained and the image is filtered, so that edge points in the image to be detected can be preliminarily filtered and screened. Because the filtering operates on pixel points, the filtering process of the image to be detected is quantitative and objective and is not subject to interference from the image's semantic information and the like, which can improve the accuracy and efficiency of image detection to a certain extent. Further, based on the characteristics of the target object, the first matching pixel point and the second matching pixel point are identified from the obtained mask image and used as reference points for detecting the target object; the relative positions of the first and second matching pixel points are detected, and the edge line of the target object is detected, thereby realizing detection of the flatness and contour line of the target object. The target object is thus detected based on its own object characteristics, and the added objectivity of the detection process improves the accuracy and efficiency of image detection.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a diagram of a network interaction architecture for image detection according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an image detection scene provided in an embodiment of the present application;
FIG. 3 is a flowchart of a method for image detection according to an embodiment of the present disclosure;
FIG. 4a is a schematic diagram illustrating an image position direction according to an embodiment of the present application;
FIG. 4b is a schematic diagram illustrating an orientation of another image position provided by an embodiment of the present application;
FIG. 5 is a flowchart of an alternative image detection method provided by an embodiment of the present application;
fig. 6 is a schematic view of a scene determined based on a projected object region according to an embodiment of the present disclosure;
fig. 7 is a schematic diagram of a region adjustment scenario provided in an embodiment of the present application;
fig. 8 is a schematic diagram of a closed operation processing scenario provided in an embodiment of the present application;
fig. 9 is a schematic diagram of a position acquisition scenario provided in an embodiment of the present application;
fig. 10 is a schematic view of a target edge point obtaining scene according to an embodiment of the present application;
FIG. 11 is a schematic diagram of an image detection apparatus according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the embodiment of the present application, please refer to fig. 1; fig. 1 is a network interaction architecture diagram of image detection provided in the embodiment of the present application, and the embodiment may be implemented by a computer device. The computer device 103 may obtain an image to be detected 102 of a target object, where the image to be detected 102 may be an image acquired by the acquisition device 101 and used for representing the target object. The computer device 103 may perform image detection on the image to be detected 102 to obtain a first matching pixel point and a second matching pixel point representing reference points of the target object, and determine a position difference value between the reference points based on the relative position relationship between the first matching pixel point and the second matching pixel point. A reference point is a pixel point with landmark meaning in the target object; for example, if the target object is a straight bar product, a reference point may be an end point of the straight bar product, that is, an end point of the target object. Further, the computer device 103 may obtain an object edge line of the target object from the mask image and detect the contour and the like of the target object based on the object edge line. The first matching pixel point, the second matching pixel point, and the object edge line are identified based on the pixel points in the image to be detected 102 and have little association with the semantic information and the like contained in the image to be detected 102, so the detection of the image to be detected 102 is more objective. The computer device 103 can perform standardized detection on the target object through the position difference value and the object edge line, and determine an object detection result of the target object. Optionally, the computer device 103 may send the object detection result to the terminal device 104, so that the terminal device 104 determines an object processing result for the target object indicated by the image to be detected 102, where the object processing result includes an object retention result, an object discarding result, an object repairing result, and the like. Any two of the acquisition device 101, the computer device 103, and the terminal device 104 may be the same device or different devices; for example, the acquisition device 101 and the computer device 103 may be the same device or different devices, and likewise for the computer device 103 and the terminal device 104, and for the acquisition device 101 and the terminal device 104. In other words, the acquisition device 101, the computer device 103, and the terminal device 104 may all be the same device.
Specifically, please refer to fig. 2, and fig. 2 is a schematic view of an image detection scene according to an embodiment of the present disclosure. As shown in fig. 2, the computer device obtains an image 201 to be detected, obtains image gradient information of the image 201 to be detected, performs filtering processing on the image 201 to be detected based on the image gradient information to obtain a mask image 202, and identifies a first matching pixel 2021 and a second matching pixel 2022 from the mask image 202. First position information of the first matching pixel 2021 in the mask image 202 is obtained, second position information of the second matching pixel 2022 in the mask image 202 is obtained, and a position difference value between the first matching pixel 2021 and the second matching pixel 2022 is determined according to the first position information and the second position information. The object edge line 2023 of the target object is acquired from the mask image 202. The order of acquiring the position difference value and the object edge line 2023 is not limited, and the position difference value and the object edge line 2023 may be acquired in parallel; or the position difference value may be obtained first, and then the object edge line 2023 is obtained; or the object edge line 2023 is acquired first, and then the position difference value is acquired. Further, the computer device may detect whether the position difference value and the object edge line 2023 satisfy a standard object condition, determine that the target object is a standard object if the standard object condition is satisfied, and determine that the target object is an abnormal object if the standard object condition is not satisfied. Because the difference between the target object and the surrounding environment of the target object is generally large, the change of the pixel points in the image 201 to be detected is determined by acquiring the image gradient information of the image 201 to be detected, so as to filter the image 201 to be detected, to primarily filter part of the edge points of the target object, reduce the identification interference information of the target object, and further improve the accuracy of image detection.
It is understood that the computer device mentioned in the embodiments of the present application includes, but is not limited to, a terminal device or a server; the acquisition device can be a terminal device or a server with an image acquisition function, or a system consisting of the terminal device and the server. In other words, the computer device may be a server or a terminal device, or may be a system of a server and a terminal device. The above-mentioned terminal device may be an electronic device, including but not limited to a mobile phone, a tablet computer, a desktop computer, a notebook computer, a palm computer, a vehicle-mounted device, an Augmented Reality/Virtual Reality (AR/VR) device, a helmet display, a smart television, a wearable device, a smart speaker, a digital camera, a camera, and other Mobile Internet Devices (MID) with network access capability, or a terminal device in a scene such as a train, a ship, or a flight, and the like. The above-mentioned server may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud service, a cloud database, cloud computing, a cloud function, cloud storage, Network service, cloud communication, middleware service, domain name service, security service, vehicle-road cooperation, a Content Delivery Network (CDN), a big data and artificial intelligence platform, and the like.
Optionally, the data related to the embodiment of the present application may be stored in a computer device, or the data may be stored based on a cloud storage technology or a blockchain network, which is not limited herein.
Further, please refer to fig. 3, fig. 3 is a flowchart of an image detection method according to an embodiment of the present disclosure. As shown in fig. 3, an image to be detected is taken as an example for description, in other words, in the embodiment of the method described in fig. 3, the image detection process includes the following steps:
step S301, performing edge difference processing on the image to be detected to obtain image gradient information of the image to be detected, and performing mask filtering processing on the image to be detected according to the image gradient information to obtain a mask image.
In the embodiment of the application, the computer device may obtain image gradient information of an image to be detected, filter the image to be detected according to the image gradient information to obtain a mask image, specifically, perform edge difference processing on the image to be detected to obtain image gradient information of the image to be detected, and perform mask filtering on the image to be detected according to the image gradient information to obtain the mask image. Or, the computer device may perform edge difference processing on the image to be detected to obtain image gradient information of the image to be detected, and perform boundary filtering processing on the image to be detected according to the image gradient information to obtain a filtered image; and performing edge binary conversion processing on the filtered image to obtain a binary image, identifying an object region where a target object in the binary image is located, and performing pixel filtering on the binary image based on the object region to obtain a mask image. Or, the computer device may perform filtering and denoising processing on the image to be detected to obtain a denoised image, perform edge difference processing on the denoised image to obtain image gradient information of the denoised image, and perform mask filtering processing on the denoised image according to the image gradient information to obtain a mask image.
The computer device can perform filtering and denoising processing on the image to be detected to obtain a denoised image; acquire an edge difference operator, perform edge difference processing on the denoised image with the edge difference operator to obtain the pixel gradient amplitude of the denoised image, and form the pixel gradient amplitude into the image gradient information of the image to be detected. An operator is a mapping from one function space to another, and performing a certain operation on any function can be regarded as an operator; an edge difference operator is an operator for detecting object edges in an image, including but not limited to the Roberts (Robert) operator, the Sobel operator, the Prewitt operator, and the Canny operator. The Robert operator searches for edges using a local difference operator, detecting an edge by approximating the gradient amplitude with the difference between two adjacent pixels in the diagonal direction. The Sobel operator detects edges by weighting the gray-value differences of the upper, lower, left, and right neighborhoods of each pixel, which reach an extreme value at an edge; it computes an approximation of the gradient of the image brightness function and comprises a transverse difference operator for detecting horizontal edges, a longitudinal difference operator for detecting vertical edges, and the like. The Prewitt operator is a first-order differential edge detection operator that detects edges using the gray differences between a pixel and its upper, lower, left, and right neighbors, which reach an extreme value at an edge; it is obtained by neighborhood convolution of the image with two direction templates in image space, the two templates being used for detecting horizontal edges and vertical edges respectively. The Canny operator is a multi-stage edge detection algorithm that produces thin edges with high accuracy.
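For illustration, the classical kernels of the operators named above can be written out directly; a minimal Python sketch using OpenCV, where the kernel values are the standard textbook definitions and the helper name is illustrative (the Canny operator is a full algorithm rather than a single kernel, so it is not shown here):

```python
import cv2
import numpy as np

SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=np.float64)   # transverse (x-direction) difference
SOBEL_Y = SOBEL_X.T                                   # longitudinal (y-direction) difference
PREWITT_X = np.array([[-1, 0, 1],
                      [-1, 0, 1],
                      [-1, 0, 1]], dtype=np.float64)
ROBERTS_1 = np.array([[1, 0],
                      [0, -1]], dtype=np.float64)     # one diagonal difference
ROBERTS_2 = np.array([[0, 1],
                      [-1, 0]], dtype=np.float64)     # the other diagonal

def apply_operator(gray: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Filter the image with one difference operator (cv2.filter2D correlates,
    which only flips the sign of these antisymmetric kernels; magnitudes match)."""
    return cv2.filter2D(gray.astype(np.float64), -1, kernel)
```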
Optionally, the computer device may obtain a target object type to which a target object corresponding to the image to be detected belongs, obtain an operator type corresponding to the target object type, obtain an edge difference operator based on the operator type, perform edge difference processing on the denoised image by using the edge difference operator to obtain a pixel gradient amplitude of the denoised image, and form the pixel gradient amplitude into image gradient information of the image to be detected. Optionally, one target object type may correspond to one or at least two operator types, if the target object type corresponds to one operator type, the computer device may determine the operator type as the target operator type, obtain an edge difference operator based on the target operator type, perform edge difference processing on the denoised image by using the edge difference operator, obtain a pixel gradient amplitude of the denoised image, and form the pixel gradient amplitude into image gradient information of the image to be detected; if the target object type corresponds to at least two operator types, the computer equipment can obtain any one or more of the at least two operator types as the target operator type corresponding to the target object type, obtain an edge difference operator based on the target operator type, perform edge difference processing on the denoised image by using the edge difference operator to obtain a pixel gradient amplitude of the denoised image, and form the pixel gradient amplitude into image gradient information of the image to be detected. The operator types include, but are not limited to, the Robert operator type, the Sobel operator type, the Prewitt operator type, and the Canny operator type. The object type is used to represent edge characteristics of a corresponding object, that is, the target object type is used to represent edge characteristics of a target object, for example, when the edge characteristics of the target object are gray scale gradual change or have much noise, the target object type may be a gray scale gradual change type or a noise interference type, and the computer device may acquire a target operator type corresponding to the target object type, so that edge detection on an image with the target object type may be better achieved by an edge difference operator belonging to the target operator type.
Optionally, if the computer device directly obtains the image gradient information of the image to be detected, the computer device may obtain an edge difference operator, perform edge difference processing on the image to be detected by using the edge difference operator to obtain a pixel gradient amplitude of the image to be detected, and combine the pixel gradient amplitude into the image gradient information of the image to be detected. The process of obtaining the pixel gradient amplitude of the image to be detected by using the edge difference operator can be referred to the process of obtaining the pixel gradient amplitude of the de-noised image by using the edge difference operator.
Further, the computer device may perform a mask filtering process on the image to be detected based on the image gradient information to obtain a mask image. Specifically, the image gradient information includes a pixel gradient amplitude of each pixel point to be detected, the computer device may obtain a magnitude relationship between the pixel gradient amplitude of each pixel point to be detected in the image to be detected and a pixel gradient amplitude of a neighborhood pixel point of the pixel point to be detected, and mask filtering processing is performed on the image to be detected based on the magnitude relationship to obtain a mask image. Optionally, if the computer device performs filtering processing on the denoised image and the like based on the image gradient information, reference may also be made to a filtering processing process and the like of the image to be detected based on the image gradient information, which is not described herein again.
Optionally, the computer device may perform edge difference processing on the image to be detected to obtain image gradient information of the image to be detected, and perform boundary filtering processing on the image to be detected according to the image gradient information to obtain a filtered image; and performing edge binary conversion processing on the filtered image to obtain a binary image, identifying an object region where a target object in the binary image is located, and performing pixel filtering on the binary image based on the object region to obtain a mask image. The computer equipment can obtain pixel gradient amplitudes of N to-be-detected pixel points forming the to-be-detected image from the image gradient information; n is a positive integer; acquiring the pixel gradient amplitude of a neighborhood pixel point of an ith pixel point to be detected, and updating the image pixel value of the ith pixel point to be detected into an invalid pixel value if the pixel gradient amplitude of the ith pixel point to be detected is smaller than the pixel gradient amplitude of the neighborhood pixel point; i is a positive integer, i is less than or equal to N; and determining the updated image to be detected as a filtering image. Optionally, the invalid pixel value may be a pixel value smaller than the first pixel threshold, where the invalid pixel value may be the same as the first default pixel value or different from the first default pixel value.
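For illustration, a minimal Python sketch of this boundary filtering rule, assuming a 3 × 3 neighborhood and 0 as the invalid pixel value (both assumptions; the patent leaves them open):

```python
import numpy as np

def boundary_filter(image: np.ndarray, grad: np.ndarray, invalid: int = 0) -> np.ndarray:
    """Set a pixel to the invalid value when its gradient amplitude is smaller
    than that of some pixel in its 3x3 neighbourhood."""
    h, w = grad.shape
    out = image.copy()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # the centre participates in the max, so "< max" means a neighbour is larger
            if grad[y, x] < grad[y - 1:y + 2, x - 1:x + 2].max():
                out[y, x] = invalid
    return out
```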
Step S302, pixel matching identification processing is carried out on the mask image to obtain a first matching pixel and a second matching pixel, pixel position extraction is carried out on the mask image to obtain first position information of the first matching pixel in the mask image and second position information of the second matching pixel in the mask image.
In this embodiment, the computer device may obtain reference point information of the target object, and perform pixel point matching identification processing on the mask image based on the reference point information to obtain a first matching pixel point and a second matching pixel point, where the first matching pixel point and the second matching pixel point are equivalent to the reference point of the target object. The reference point information is used to indicate information of a reference point of the target object, where the reference point refers to an edge point having a position requirement in the target object, for example, if the target object needs to be a linear product, the reference point of the target object may be an end point of the edge point of the target object, and when the target object is in a standard viewing angle, the reference point needs to be maintained in the same ordinate or the same abscissa, and the like. In other words, when acquiring the image to be detected of the target object, the image to be detected may be acquired with a standard view angle. Optionally, the computer device may also obtain a collection view angle corresponding to the image to be detected, obtain reference point information corresponding to the collection view angle, and identify the first matching pixel point and the second matching pixel point from the mask image based on the reference point information. Further, the computer device may extract the pixel position of the mask image to obtain first position information of the first matching pixel in the mask image and second position information of the second matching pixel in the mask image. The pixel value of the first matching pixel point and the pixel value of the second matching pixel point both belong to the edge pixel range, and the distance between the first matching pixel point and the second matching pixel point is larger than the object scale threshold.
Step S303, pixel position matching is carried out according to the first position information and the second position information, and a position difference value between the first matching pixel point and the second matching pixel point is determined.
In this embodiment, the computer device may perform pixel position matching on the first position information and the second position information to obtain a target difference value between the first position information and the second position information, and determine the target difference value as a position difference value between the first matching pixel point and the second matching pixel point. Optionally, the computer device may obtain a position matching direction corresponding to the collecting view angle, obtain a target difference value of the first position information and the second position information in the position matching direction, and determine the target difference value as a position difference value between the first matching pixel point and the second matching pixel point. Optionally, the position matching direction may be a transverse direction, a longitudinal direction, a transverse clockwise rotation of 45 ° or the like, where the transverse direction, the longitudinal direction, or the like is determined based on the image to be detected, and the position matching direction may include a reference direction, a matching angle, or the like.
For example, please refer to fig. 4a and 4b. Fig. 4a is a schematic diagram of an image position direction provided in the embodiment of the present application. As shown in fig. 4a, assume that the size of the mask image 401 is 160 × 90 and that the four sides of the mask image 401 are side 401A, side 401B, side 401C, and side 401D. The intersection of sides 401A and 401D in the mask image 401 is taken as the coordinate origin (0, 0) of the image coordinate system, side 401D is taken as the horizontal coordinate axis, and side 401A is taken as the vertical coordinate axis, where the transverse direction may refer to the direction indicated by the horizontal coordinate axis and the longitudinal direction may refer to the direction indicated by the vertical coordinate axis; other vertexes of the mask image 401 may also be taken as the coordinate origin, which is not described herein again. Alternatively, as shown in fig. 4b, fig. 4b is another schematic diagram of an image position direction provided by the embodiment of the present application: an image coordinate system composed of a horizontal coordinate axis and a vertical coordinate axis is created with the center position of the mask image 401 as the coordinate origin (0, 0), where the transverse direction may refer to the direction indicated by the horizontal coordinate axis and the longitudinal direction may refer to the direction indicated by the vertical coordinate axis. The determination of the transverse and longitudinal directions is not limited to the manners of fig. 4a and 4b.
Step S304, performing edge recognition on the target object from the mask image to obtain an object edge line of the target object.
In this embodiment, the computer device may identify an object edge line of the target object from the mask image, and specifically, the computer device may obtain at least two edge pixel points whose pixel values belong to an edge pixel range from the mask image, determine a detected edge point from the at least two edge pixel points based on a relative position relationship between the edge pixel points, and form the detected edge point into the object edge line of the target object, where a slope of the object edge line is less than or equal to an edge slope threshold.
Step S305, determining the target object as a standard object if the position difference value and the object edge line satisfy the standard object condition.
In this embodiment, the computer device may obtain a standard object condition, and if the position difference value and the object edge line satisfy the standard object condition, determine that the target object is the standard object, that is, may consider that the standard object is an object detection result for the target object, or may determine that the object detection result of the target object is an object standard result; if the position difference value and the object edge line do not satisfy the standard object condition, the target object is determined to be an abnormal object, that is, the abnormal object may be considered to be an object detection result for the target object, or the object detection result of the target object may be determined to be an object abnormal result. Optionally, the computer device may determine an object processing result for the target object based on the object detection result; alternatively, the computer device may transmit the object detection result of the target object to the terminal device, so that the terminal device may determine the object processing result for the target object based on the object detection result. Optionally, if the object detection result of the target object is the object standard result, determining that the object processing result for the target object is the object retention result; and if the object detection result of the target object is an object abnormal result, determining that the object processing result aiming at the target object is an object discarding result or an object repairing result. Optionally, if the object detection result of the target object is an object abnormality result, the computer device may obtain an object abnormality degree of the target object; if the object abnormality degree is larger than or equal to the discarding abnormality threshold, determining an object processing result aiming at the target object as an object discarding result; and if the object abnormality degree is smaller than the discarding abnormality threshold, determining the object processing result aiming at the target object as an object repairing result and the like.
In the embodiment of the application, the computer device can perform image detection on the image to be detected of the target object, acquire the image gradient information of the image to be detected, and filter the image to be detected based on the image gradient information to obtain the mask image, so that the filtering is grounded in the pixel information of the image to be detected, the pixel points representing the target object are better identified, and the identification efficiency and accuracy of the target object are improved. Further, based on the object characteristics of the target object, a first matching pixel point and a second matching pixel point that can serve as reference points of the target object are obtained, and the position difference value between them is used as one detection index of the target object's standardness; the object edge line of the target object is identified based on the object characteristics and used as another detection index, realizing detection of the contour and the like of the target object. The object detection result of the target object is obtained by evaluating these multiple detection indexes, realizing standardized detection of the target object. This process has little dependence on model training and prediction, so it is well suited to situations with small data volumes and has better generalization.
Further, please refer to fig. 5, fig. 5 is a flowchart of an alternative image detection method according to an embodiment of the present disclosure. As shown in fig. 5, taking multiple filtering as an example, a filtered image is obtained, and the image detection process includes the following steps:
step S501, performing edge difference processing on the image to be detected to obtain image gradient information of the image to be detected.
In the embodiment of the application, the computer device can acquire the image gradient information of the image to be detected. Or, the computer device may perform filtering and denoising processing on the image to be detected to obtain a denoised image; acquire an edge difference operator, perform edge difference processing on the denoised image with the edge difference operator to obtain the pixel gradient amplitude of the denoised image, and form the pixel gradient amplitude into the image gradient information of the image to be detected. Specifically, the computer device may obtain a convolution kernel template, obtain the image pixel values of the image to be detected, and perform pixel convolution processing on the convolution kernel template and the image pixel values to obtain denoised pixel values; the denoised pixel values form the denoised image. When a target object such as a metal part is detected, the surface of the metal part may be uneven, and dust or lens noise may exist in the environment, so the image to be detected can be denoised to obtain a denoised image. Specifically, the computer device may obtain denoising hyper-parameters and generate the convolution kernel template according to the denoising hyper-parameters, where the denoising hyper-parameters may include, but are not limited to, the convolution kernel scale, convolution parameters, and the like. The convolution kernel template refers to a convolution kernel used for denoising the image to be detected, such as a Gaussian convolution kernel template, a median convolution kernel template, or a Wiener convolution kernel template, which is not limited here. Optionally, the convolution kernel template may be denoted as G(x′, y′); for example, assuming the convolution kernel template is a Gaussian convolution kernel template, it may be as shown in formula ①:
G(x′, y′) = (1 / (2πσ²)) · exp(−(x′² + y′²) / (2σ²)) ①
In formula ①, the denoising hyper-parameters may include the convolution kernel standard deviation σ and the convolution kernel scale. The computer device may generate the convolution kernel template based on the denoising hyper-parameters and perform convolution processing on the convolution kernel template and the image pixel values to obtain denoised pixel values. Specifically, the convolution kernel template may be regarded as pixel weights: the image pixel value of the jth pixel point to be detected and the image pixel values of the neighborhood pixel points of the jth pixel point are weighted based on the convolution kernel template to obtain the jth denoised pixel value, and the pixel value of the jth pixel point to be detected is updated to the jth denoised pixel value; further, the pixel value of each pixel point to be detected is updated in this way to obtain the denoised image. Here j is a positive integer, and a neighborhood pixel point of the jth pixel point to be detected is a pixel point to be detected adjacent to the jth pixel point. The convolution processing can be shown in formula ②:
F(j)=H(j)*G(x′,y′) ②
In formula ②, H(j) is used to represent the image pixel value of the jth pixel point to be detected in the image H to be detected, and the convolution kernel template G(x′, y′) is equivalent to a convolution window: the computer device moves the convolution kernel template over the image H to be detected and convolves the image pixel value of each pixel point to be detected, obtaining the denoised pixel value corresponding to each pixel point to be detected, where F(j) represents the denoised pixel value corresponding to the jth pixel point to be detected and "*" denotes the convolution operation. For example, if the convolution kernel scale of the convolution kernel template is 3×3, the computer device may take the jth pixel point to be detected as the convolution center point, acquire the neighborhood pixel points corresponding to the jth pixel point to be detected, use the convolution kernel template as the weights of the jth pixel point to be detected and its neighborhood pixel points, and perform a weighted summation over their image pixel values to obtain the denoised pixel value of the jth pixel point to be detected.
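As an illustration, the following NumPy sketch builds a Gaussian convolution kernel template per formula ① and slides it over the image per formula ②. The function names, kernel normalization, and edge padding are illustrative assumptions, not part of the claimed method.

```python
import numpy as np

def gaussian_kernel_template(scale: int, sigma: float) -> np.ndarray:
    """Gaussian convolution kernel template G(x', y') of formula 1."""
    half = scale // 2
    xs, ys = np.mgrid[-half:half + 1, -half:half + 1]
    g = np.exp(-(xs ** 2 + ys ** 2) / (2.0 * sigma ** 2)) / (2.0 * np.pi * sigma ** 2)
    return g / g.sum()  # normalized so the weighted sum preserves overall brightness

def denoise(image: np.ndarray, scale: int = 3, sigma: float = 1.0) -> np.ndarray:
    """Formula 2: weight each pixel to be detected with its neighborhood."""
    kernel = gaussian_kernel_template(scale, sigma)
    half = scale // 2
    padded = np.pad(image.astype(np.float64), half, mode="edge")  # border handling is assumed
    denoised = np.empty(image.shape, dtype=np.float64)
    for r in range(image.shape[0]):
        for c in range(image.shape[1]):
            denoised[r, c] = np.sum(padded[r:r + scale, c:c + scale] * kernel)
    return denoised
```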
Further, the computer device may acquire a target operator type corresponding to the target object, acquire the edge difference operator corresponding to the target operator type, and perform edge difference processing on the denoised image by using the edge difference operator to obtain the pixel gradient amplitudes of the denoised image. The pixel gradient amplitudes may be obtained as shown in formula ③:
d_i(x, y) = F(x, y) * Ope_i(x, y) ③
In formula ③, the subscript i is used to represent the ith denoised pixel point in the denoised image, Ope_i(x, y) is used to represent the edge difference operator, and (x, y) is used to represent the denoised pixel point in the xth row and yth column of the denoised image, that is, the ith denoised pixel point; gradient calculation is performed using the edge difference operator and the denoised pixel value of the ith denoised pixel point to obtain the pixel gradient amplitude of the ith denoised pixel point.
Optionally, taking the Sobel operator type as an example, the edge difference operator includes a longitudinal difference operator and a transverse difference operator. The computer device may acquire the longitudinal difference operator and the transverse difference operator; perform edge difference processing on the denoised image by using the longitudinal difference operator to obtain a longitudinal gradient amplitude, and perform edge difference processing on the denoised image by using the transverse difference operator to obtain a transverse gradient amplitude; and perform gradient fusion processing on the longitudinal gradient amplitude and the transverse gradient amplitude to obtain the pixel gradient amplitude of the denoised image, for example, by determining the sum of the longitudinal gradient amplitude and the transverse gradient amplitude as the pixel gradient amplitude of the denoised image. Specifically, based on formula ③, the computer device may perform edge difference processing on the denoised image by using the longitudinal difference operator to obtain the longitudinal gradient amplitude, denoted d_y, and perform edge difference processing on the denoised image by using the transverse difference operator to obtain the transverse gradient amplitude, denoted d_x. Assuming the sum of the longitudinal gradient amplitude and the transverse gradient amplitude is determined as the pixel gradient amplitude of the denoised image, the pixel gradient amplitude may be determined as shown in formula ④:
M(x, y) = d_x(x, y) + d_y(x, y) ④
In formula ④, the longitudinal gradient amplitude d_y(x, y) and the transverse gradient amplitude d_x(x, y) are fused to obtain the pixel gradient amplitude M(x, y) of the denoised image.
Optionally, if the number of edge difference operators is one, the pixel gradient amplitude of the denoised image is obtained directly based on formula ③.
For example, the transverse difference operator may be denoted Ope_x; taking the standard 3×3 Sobel kernel as an example,

Ope_x =
[ −1  0  +1 ]
[ −2  0  +2 ]
[ −1  0  +1 ]

Performing edge difference processing on the denoised image by using the transverse difference operator yields the transverse gradient amplitude, recorded as d_x(x, y) = F(x, y) * Ope_x(x, y). Similarly, the longitudinal difference operator may be denoted Ope_y; taking the standard 3×3 Sobel kernel as an example,

Ope_y =
[ −1  −2  −1 ]
[  0   0   0 ]
[ +1  +2  +1 ]

Performing edge difference processing on the denoised image by using the longitudinal difference operator yields the longitudinal gradient amplitude, recorded as d_y(x, y) = F(x, y) * Ope_y(x, y).
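A minimal sketch of formulas ③ and ④ with the standard Sobel kernels above; scipy.ndimage.convolve stands in for the sliding-window convolution, and taking absolute values before summing is an implementation choice so that responses of opposite sign do not cancel.

```python
import numpy as np
from scipy.ndimage import convolve

# Standard 3x3 Sobel kernels, following the Ope_x / Ope_y notation above.
OPE_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
OPE_Y = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=np.float64)

def pixel_gradient_amplitude(denoised: np.ndarray) -> np.ndarray:
    """Formula 3 per operator, then the gradient fusion of formula 4."""
    d_x = convolve(denoised.astype(np.float64), OPE_X)  # transverse gradient amplitude
    d_y = convolve(denoised.astype(np.float64), OPE_Y)  # longitudinal gradient amplitude
    return np.abs(d_x) + np.abs(d_y)                    # M(x, y) = d_x + d_y
```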
And step S502, performing boundary filtering processing on the image to be detected according to the image gradient information to obtain a filtered image.
In the embodiment of the application, the image gradient information includes the pixel gradient amplitudes of the N pixel points to be detected constituting the image to be detected, and the computer device may perform non-maximum suppression on the pixel gradient amplitudes of the N pixel points to be detected to obtain the filtered image. Specifically, the computer device may acquire, from the image gradient information, the pixel gradient amplitudes of the N pixel points to be detected constituting the image to be detected, where N is a positive integer; acquire the pixel gradient amplitudes of the neighborhood pixel points of the ith pixel point to be detected, and if the pixel gradient amplitude of the ith pixel point to be detected is smaller than the pixel gradient amplitude of a neighborhood pixel point, update the image pixel value of the ith pixel point to be detected to an invalid pixel value, where i is a positive integer and i is less than or equal to N; and determine the updated image to be detected as the filtered image.
Optionally, if the computer device obtained the image gradient information of the denoised image in step S501, the image gradient information includes the pixel gradient amplitudes of the N denoised pixel points constituting the denoised image, and non-maximum suppression may be performed on the pixel gradient amplitudes of the N denoised pixel points to obtain the filtered image. For details, reference may be made to the above process of performing boundary filtering processing on the image to be detected, which is not repeated here.
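A simplified sketch of the boundary filtering of step S502, assuming the invalid pixel value is 0 and comparing each pixel point against its four-connected neighborhood; classic non-maximum suppression compares only along the gradient direction, so this is one reading of the description rather than a definitive implementation.

```python
import numpy as np

def boundary_filter(image: np.ndarray, magnitude: np.ndarray) -> np.ndarray:
    """Suppress pixels whose gradient amplitude is below a neighborhood amplitude."""
    filtered = image.astype(np.float64).copy()
    rows, cols = magnitude.shape
    for r in range(rows):
        for c in range(cols):
            neighbors = []
            if r > 0:
                neighbors.append(magnitude[r - 1, c])
            if r < rows - 1:
                neighbors.append(magnitude[r + 1, c])
            if c > 0:
                neighbors.append(magnitude[r, c - 1])
            if c < cols - 1:
                neighbors.append(magnitude[r, c + 1])
            if neighbors and magnitude[r, c] < max(neighbors):
                filtered[r, c] = 0  # invalid pixel value (assumed to be 0)
    return filtered
```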
Step S503, performing edge binary conversion processing on the filtered image to obtain a binary image.
In the embodiment of the application, the computer device may acquire the filtered pixel values corresponding to the N filtered pixel points included in the filtered image, where N is a positive integer. The filtered pixel value of each filtered pixel point whose filtered pixel value is smaller than a first pixel threshold is updated to a first default pixel value, and the filtered pixel value of each filtered pixel point whose filtered pixel value is larger than a second pixel threshold is updated to a second default pixel value.

Each filtered pixel point whose filtered pixel value is greater than or equal to the first pixel threshold and less than or equal to the second pixel threshold is marked as an intermediate pixel point; the connected neighborhood pixel points of the intermediate pixel point are acquired, and the pixel value of the intermediate pixel point is updated according to the connected neighborhood pixel points. A connected neighborhood pixel point refers to a pixel point adjacent to the intermediate pixel point, and may be a four-connected or an eight-connected neighborhood pixel point of the intermediate pixel point, which is not limited here. Optionally, the computer device may acquire the updated pixel values of the connected neighborhood pixel points and update the pixel value of the intermediate pixel point based on them: if, among the connected neighborhood pixel points, the number of pixel points whose updated pixel value is the first default pixel value is greater than the number whose updated pixel value is the second default pixel value, the pixel value of the intermediate pixel point is determined as the first default pixel value; if it is smaller, the pixel value of the intermediate pixel point is determined as the second default pixel value; if the two numbers are equal, the pixel value of the intermediate pixel point may be updated based on a binary conversion configuration, where the binary conversion configuration indicates whether, when the number of connected neighborhood pixel points with the first default pixel value equals the number with the second default pixel value, the pixel value of the intermediate pixel point is updated to the first default pixel value or to the second default pixel value. The updated filtered image is determined as the binary image.
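A sketch of the edge binary conversion of step S503, assuming the first and second default pixel values are 0 and 1 and that a tied neighborhood vote falls back to 0 (one possible binary conversion configuration); the thresholds t1 and t2 are process-specific.

```python
import numpy as np

def edge_binary_conversion(filtered: np.ndarray, t1: float, t2: float) -> np.ndarray:
    binary = np.full(filtered.shape, -1, dtype=np.int32)  # -1 marks intermediate pixel points
    binary[filtered < t1] = 0                             # first default pixel value
    binary[filtered > t2] = 1                             # second default pixel value
    rows, cols = binary.shape
    for r in range(rows):
        for c in range(cols):
            if binary[r, c] != -1:
                continue
            # majority vote over the (already updated) eight-connected neighborhood
            window = binary[max(0, r - 1):r + 2, max(0, c - 1):c + 2]
            zeros = np.count_nonzero(window == 0)
            ones = np.count_nonzero(window == 1)
            binary[r, c] = 1 if ones > zeros else 0       # ties fall back to 0 (assumed)
    return binary
```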
Step S504, identifying an object area where a target object in the binary image is located, and performing pixel filtering on the binary image based on the object area to obtain a mask image.
In an embodiment of the present application, in one object region determination manner, the computer device may perform pixel line-by-line projection on the binary image to obtain the line projection value corresponding to each image line, determine the image line with the maximum line projection value as a reference line, and perform line number expansion on the reference line to obtain a transverse region, where line number expansion means sequentially acquiring image lines adjacent to the reference line, starting from the reference line, so as to increase the number of acquired image lines, and the transverse region may or may not include the reference line. Likewise, the computer device may perform pixel column-by-column projection on the binary image to obtain the column projection value corresponding to each image column, determine the image column with the maximum column projection value as a reference column, and perform column number expansion on the reference column to obtain a longitudinal region, where column number expansion means sequentially acquiring image columns adjacent to the reference column, starting from the reference column, so as to increase the number of acquired image columns, and the longitudinal region may or may not include the reference column. The binary image is divided into M sub-regions based on the transverse region and the longitudinal region, the distribution dispersion of the region pixel points corresponding to the second default pixel value in each sub-region is acquired, and a sub-region whose distribution dispersion is greater than or equal to an object distribution threshold is determined as the object region where the target object is located; M is a positive integer, and the second default pixel value refers to a pixel value greater than the second pixel threshold.
For example, please refer to fig. 6; fig. 6 is a schematic diagram of an object region determined based on projection according to an embodiment of the present application. As shown in fig. 6, assuming the binary image is an image of width w and height h, the computer device may perform pixel line-by-line projection on the binary image 601 to obtain the line projection value corresponding to each image line, where the line projection value may be determined according to formula ⑤:
J_x = Σ_{y=0}^{w} B(x, y) ⑤
In formula ⑤, B(x, y) is used to represent the binary image, and J_x is used to represent the line projection value corresponding to the xth image line; that is, the pixel values of the pixel points from the 0th column to the wth column in the xth image line are summed to obtain the line projection value of the xth image line. Similarly, the column projection value may be determined as shown in formula ⑥:
J_y = Σ_{x=0}^{h} B(x, y) ⑥
In formula ⑥, B(x, y) is used to represent the binary image, and J_y is used to represent the column projection value corresponding to the yth image column; that is, the pixel values of the pixel points from the 0th row to the hth row in the yth image column are summed to obtain the column projection value of the yth image column.
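Formulas ⑤ and ⑥ in NumPy, under the convention that x indexes image lines (rows) and y indexes image columns; the function name is illustrative.

```python
import numpy as np

def projections(binary: np.ndarray):
    """Line projection values J_x (formula 5) and column projection values J_y (formula 6)."""
    j_x = binary.sum(axis=1)              # sum over every column in each image line
    j_y = binary.sum(axis=0)              # sum over every row in each image column
    reference_line = int(np.argmax(j_x))  # image line with the maximum line projection value
    reference_col = int(np.argmax(j_y))   # image column with the maximum column projection value
    return j_x, j_y, reference_line, reference_col
```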
Further, the computer device may determine the image line with the largest line projection value as the reference line and perform line number expansion on the reference line to obtain the transverse region. Specifically, a first image line and a second image line may be acquired with the reference line as the center line, and the first image line, the reference line, and the second image line form the horizontal region 6021, where the line identifier of the first image line is smaller than the line identifier of the reference line, the line identifier of the second image line is larger than the line identifier of the reference line, and a line identifier indicates the line position of the corresponding image line in the binary image; for example, the line identifier of the 5th image line is 5. The first image line is adjacent to the reference line and includes one or at least two image lines; the second image line is adjacent to the reference line and includes one or at least two image lines.
Similarly, the computer device may determine the image column with the largest column projection value as the reference column and perform column number expansion on the reference column to obtain the longitudinal region. Specifically, a first image column and a second image column may be acquired with the reference column as the center line, and the first image column, the reference column, and the second image column form the vertical region 6022, where the column identifier of the first image column is smaller than the column identifier of the reference column, the column identifier of the second image column is larger than the column identifier of the reference column, and a column identifier indicates the column position of the corresponding image column in the binary image; for example, the column identifier of the 5th image column is 5. The first image column is adjacent to the reference column and includes one or at least two image columns; the second image column is adjacent to the reference column and includes one or at least two image columns. The binary image 601 is divided into M sub-regions based on the horizontal region 6021 and the vertical region 6022, the M sub-regions including the sub-region 602a, the sub-region 602b, the sub-region 602c, the sub-region 602d, the intersection region of the horizontal region 6021 and the vertical region 6022, the sub-regions into which the horizontal region 6021 is cut by the intersection region, the sub-regions into which the vertical region 6022 is cut by the intersection region, and the like. The distribution dispersion of the region pixel points corresponding to the second default pixel value in each sub-region is acquired, and a sub-region whose distribution dispersion is greater than or equal to the object distribution threshold is determined as the object region of the target object, where the distribution dispersion may be used to characterize, for example, the distances between or the number of the region pixel points corresponding to the second default pixel value in the sub-region.
Optionally, a sub-region whose distribution dispersion is greater than or equal to the object distribution threshold may be determined as a candidate sub-region, and the object region of the target object may be determined from the candidate sub-region based on the distribution dispersion of the region pixel points in the candidate sub-region. Specifically, please refer to fig. 7; fig. 7 is a schematic diagram of a region adjustment scene provided in the embodiment of the present application. As shown in fig. 7, the binary image 701 is divided into M sub-regions, where the M sub-regions include the sub-region 701a, the sub-region 701b, the sub-region 701c, the sub-region 701d, the intersection region of the horizontal region 7011 and the vertical region 7012, the sub-regions into which the horizontal region 7011 is cut by the intersection region, the sub-regions into which the vertical region 7012 is cut by the intersection region, and the like. Suppose that, based on the distribution dispersion of the region pixel points corresponding to the second default pixel value in each sub-region, the candidate sub-region 702 is determined, where the candidate sub-region 702 is a sub-region obtained by cutting the horizontal region 7011 with the intersection region. Since part of the candidate sub-region 702 contains no region pixel points corresponding to the second default pixel value, the object region 703 of the target object may be determined from the candidate sub-region 702 based on the distribution dispersion of the region pixel points corresponding to the second default pixel value in the candidate sub-region 702.
In another object region determination manner, the computer device may perform object recognition detection on the target object in the image to be detected by using an object detection model to obtain an object candidate frame corresponding to the target object, and determine the object region where the target object is located in the binary image based on the region corresponding to the object candidate frame in the binary image.
In yet another object region determination manner, the computer device may acquire the object acquisition direction corresponding to the target object, acquire the transverse region and the longitudinal region from the binary image, determine a target acquisition region from the transverse region and the longitudinal region based on the object acquisition direction, and perform region screening on the target acquisition region based on the distribution dispersion of the pixel points corresponding to the second default pixel value in the target acquisition region to obtain the object region where the target object is located.
Further, the computer device may perform pixel filtering on the binary image based on the object region to obtain the mask image. Specifically, the pixel values of the pixel points located in the background region of the binary image may be updated to the first default pixel value, and the updated binary image is determined as the mask image; the background region refers to the region of the binary image other than the object region. Optionally, the computer device may update the pixel values of the pixel points located in the background region of the binary image to the first default pixel value, perform closing operation processing on the pixel points in the object region, and determine the updated binary image as the mask image. For example, please refer to fig. 8; fig. 8 is a schematic view of a closing operation processing scene provided in this embodiment of the present application. As shown in fig. 8, the computer device may update the pixel values of the pixel points located in the background region 8011 of the binary image 801 to the first default pixel value, perform closing operation processing on the pixel points in the object region 8012, and determine the updated binary image as the mask image 802. Closing operation processing refers to dilating the image first and then eroding it, and is used to fill fine holes in the object, connect adjacent objects, and smooth the boundary.
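A sketch of the pixel filtering and closing operation processing using OpenCV; the 5×5 rectangular structuring element and the (top, bottom, left, right) bounds used to describe the object region are illustrative assumptions.

```python
import cv2
import numpy as np

def build_mask(binary: np.ndarray, top: int, bottom: int,
               left: int, right: int) -> np.ndarray:
    """Set the background region to the first default pixel value (0),
    then apply a closing operation inside the object region."""
    mask = np.zeros_like(binary, dtype=np.uint8)
    mask[top:bottom, left:right] = binary[top:bottom, left:right]
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))  # assumed kernel size
    # closing = dilation followed by erosion: fills fine holes and smooths the boundary
    mask[top:bottom, left:right] = cv2.morphologyEx(
        mask[top:bottom, left:right], cv2.MORPH_CLOSE, kernel)
    return mask
```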
Step S505, performing pixel matching identification processing on the mask image to obtain a first matching pixel and a second matching pixel, and performing pixel position extraction on the mask image to obtain first position information of the first matching pixel in the mask image and second position information of the second matching pixel in the mask image.
In this embodiment of the application, the computer device may perform pixel matching identification processing on the mask image to identify a first matching pixel and a second matching pixel from the mask image, where a pixel value of the first matching pixel and a pixel value of the second matching pixel both belong to an edge pixel range, and a distance between the first matching pixel and the second matching pixel is greater than an object scale threshold.
Specifically, the computer device may acquire at least two edge pixel points in the mask image whose pixel values belong to the edge pixel range; because the mask image is obtained from a binary image, the edge pixel range may be the second default pixel value. Coordinate screening processing is performed on the at least two edge pixel points, and the first matching pixel point and the second matching pixel point are determined among them; the first matching pixel point refers to the pixel point with the minimum longitudinal coordinate in the mask image among the at least two edge pixel points, and the second matching pixel point refers to the pixel point with the maximum longitudinal coordinate in the mask image among the at least two edge pixel points. Specifically, the first matching pixel point may be obtained as shown in formula ⑦-1:
L_y = argmin { y | Mask(x, y) = 1 } ⑦-1
In formula ⑦-1, L_y represents the first matching pixel point, and Mask(x, y) is used to represent the mask image. Assuming the edge pixel range is the second default pixel value and the second default pixel value is 1, "Mask(x, y) = 1" denotes the at least two edge pixel points in the mask image whose pixel values belong to the edge pixel range, and argmin denotes taking, among the at least two edge pixel points, the pixel point with the smallest longitudinal coordinate y.
The second matching pixel point may be obtained as shown in formula ⑦-2:
R_y = argmax { y | Mask(x, y) = 1 } ⑦-2
In formula ⑦-2, R_y represents the second matching pixel point, and Mask(x, y) is used to represent the mask image. Assuming the second default pixel value is 1, "Mask(x, y) = 1" denotes the at least two edge pixel points in the mask image whose pixel values belong to the edge pixel range, and argmax denotes taking, among the at least two edge pixel points, the pixel point with the largest longitudinal coordinate y.
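A sketch of formulas ⑦-1 and ⑦-2, assuming the second default pixel value is 1 and y is the column index of the mask image:

```python
import numpy as np

def matching_pixel_columns(mask: np.ndarray):
    """L_y: smallest column containing an edge pixel point; R_y: largest such column."""
    edge_cols = np.where((mask == 1).any(axis=0))[0]
    if edge_cols.size == 0:
        raise ValueError("no edge pixel points in the mask image")
    return int(edge_cols.min()), int(edge_cols.max())  # (L_y, R_y)
```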
Further optionally, the computer device may take the first matching pixel point as a starting point, sequentially acquire K first extended pixel points along a first direction, perform transverse coordinate matching processing on the first pixel columns where the K first extended pixel points are located, and determine the first position information of the first matching pixel point in the mask image. Specifically, the K first extended pixel points are acquired sequentially along the first direction starting from the first matching pixel point; the first pixel columns in which the K first extended pixel points respectively lie in the mask image are acquired; the edge transverse coordinates of the edge pixel points included in each first pixel column are acquired, and the minimum edge transverse coordinate corresponding to each first pixel column is determined as that column's first transverse coordinate, thereby realizing the transverse coordinate matching processing on the first pixel columns; the mean value of the first transverse coordinates corresponding to the K first pixel columns is then determined as the first position information of the first matching pixel point in the mask image. Similarly, the computer device may take the second matching pixel point as a starting point, sequentially acquire K second extended pixel points along a second direction, acquire the second pixel columns in which the K second extended pixel points respectively lie in the mask image, acquire the edge transverse coordinates of the edge pixel points included in each second pixel column, determine the minimum edge transverse coordinate corresponding to each second pixel column as that column's second transverse coordinate, and determine the mean value of the second transverse coordinates corresponding to the K second pixel columns as the second position information of the second matching pixel point in the mask image. The first direction and the second direction are opposite directions, and K is a positive integer; for example, the first direction may be the direction of increasing column identifiers, and the second direction may be the direction of decreasing column identifiers.
For example, please refer to fig. 9; fig. 9 is a schematic diagram of a position acquisition scenario provided in the embodiment of the present application. As shown in fig. 9, in the mask image 901, K first extended pixel points are sequentially acquired from the first matching pixel point 9011 along the first direction 9021, the first pixel columns 9031 in which the K first extended pixel points respectively lie in the mask image 901 are acquired, the edge transverse coordinates of the edge pixel points included in each first pixel column are acquired, the minimum edge transverse coordinate corresponding to each first pixel column is determined as the first transverse coordinate, and the mean value of the first transverse coordinates corresponding to the K first pixel columns is determined as the first position information of the first matching pixel point in the mask image. The first position information is determined as shown in formula ⑧-1:
h1 = mean_{c = L_y, …, L_y+K} ( min { x | Mask(x, c) = 1 } ) ⑧-1
In formula ⑧-1, starting from the first matching pixel point L_y, the minimum edge transverse coordinate corresponding to each of the K first pixel columns is acquired and recorded as that first pixel column's first transverse coordinate. As shown in formula ⑧-1, the K first pixel columns range from the L_y-th column to the (L_y+K)-th column: in the L_y-th column, the minimum edge transverse coordinate is acquired and determined as the first transverse coordinate of the L_y-th column; in the (L_y+1)-th column, the minimum edge transverse coordinate is acquired and determined as the first transverse coordinate of the (L_y+1)-th column; …; in the (L_y+K)-th column, the minimum edge transverse coordinate is acquired and determined as the first transverse coordinate of the (L_y+K)-th column. Through this process the first transverse coordinates corresponding to the K first pixel columns are obtained, and the mean value of these first transverse coordinates is determined as the first position information h1 of the first matching pixel point in the mask image.
In the mask image 901, K second extended pixel points are sequentially acquired from the second matching pixel point 9012 along the second direction 9022, the second pixel columns 9032 in which the K second extended pixel points respectively lie in the mask image 901 are acquired, the edge transverse coordinates of the edge pixel points included in each second pixel column are acquired, the minimum edge transverse coordinate corresponding to each second pixel column is determined as the second transverse coordinate, and the mean value of the second transverse coordinates corresponding to the K second pixel columns is determined as the second position information of the second matching pixel point in the mask image. The second position information is determined as shown in formula ⑧-2:
h2 = mean_{c = R_y−K, …, R_y} ( min { x | Mask(x, c) = 1 } ) ⑧-2
In formula ⑧-2, starting from the second matching pixel point R_y, the minimum edge transverse coordinate corresponding to each of the K second pixel columns is acquired and recorded as that second pixel column's second transverse coordinate. As shown in formula ⑧-2, the K second pixel columns range from the R_y-th column to the (R_y−K)-th column: in the R_y-th column, the minimum edge transverse coordinate is acquired and determined as the second transverse coordinate of the R_y-th column; in the (R_y−1)-th column, the minimum edge transverse coordinate is acquired and determined as the second transverse coordinate of the (R_y−1)-th column; …; in the (R_y−K)-th column, the minimum edge transverse coordinate is acquired and determined as the second transverse coordinate of the (R_y−K)-th column. Through this process the second transverse coordinates corresponding to the K second pixel columns are obtained, and the mean value of these second transverse coordinates is determined as the second position information h2 of the second matching pixel point in the mask image.
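A sketch of formulas ⑧-1, ⑧-2 and ⑨ under the same coordinate convention; it takes K columns extended inward from each matching pixel point (whether the range spans K or K+1 columns follows the description above; K columns are used here) and assumes each of those columns contains at least one edge pixel point.

```python
import numpy as np

def position_difference(mask: np.ndarray, l_y: int, r_y: int, k: int) -> float:
    """h1, h2: mean of the minimum edge transverse (row) coordinate over K columns;
    returns score1 = h1 - h2 per formula 9."""
    def first_transverse_coordinate(col: int) -> int:
        rows = np.where(mask[:, col] == 1)[0]
        return int(rows.min())  # smallest edge transverse coordinate in this column

    h1 = float(np.mean([first_transverse_coordinate(c) for c in range(l_y, l_y + k)]))
    h2 = float(np.mean([first_transverse_coordinate(c) for c in range(r_y - k + 1, r_y + 1)]))
    return h1 - h2
```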
Step S506, performing pixel position matching according to the first position information and the second position information, and determining a position difference value between the first matching pixel point and the second matching pixel point.
In this embodiment, the computer device may determine the difference between the first position information and the second position information as the position difference value between the first matching pixel point and the second matching pixel point. The position difference value may be shown in formula ⑨:
score1=h1-h2 ⑨
In formula ⑨, score1 is used to represent the position difference value. Alternatively, the computer device may determine the absolute value of the difference between the first position information and the second position information as the position difference value between the first matching pixel point and the second matching pixel point.
Step S507, performing edge recognition on the target object from the mask image to obtain an object edge line of the target object.
In this embodiment, the computer device may acquire at least two edge pixel points in the mask image whose pixel values belong to the edge pixel range. Pixel distribution analysis is performed on the at least two edge pixel points to obtain the third pixel columns respectively corresponding to the at least two edge pixel points; that is, the image columns in which the at least two edge pixel points are respectively distributed in the mask image are analyzed, and the image columns in which the at least two edge pixel points are located are determined as the third pixel columns. Edge detection is performed on each third pixel column to obtain the target edge point corresponding to each third pixel column, where a target edge point is the edge pixel point with the minimum transverse coordinate in its third pixel column. Straight-line detection is performed on the target edge points, detected edge points are determined from the target edge points, and the detected edge points form the object edge line of the target object; the slope of the object edge line is less than or equal to an edge slope threshold. Optionally, please refer to fig. 10; fig. 10 is a schematic view of a target edge point acquisition scene provided in this embodiment of the present application. As shown in fig. 10, the computer device may acquire the edge pixel point with the minimum transverse coordinate in each third pixel column and record it as a candidate edge pixel point 1002, and extend each candidate edge pixel point by P pixels along a third direction 1003 to obtain the target edge points corresponding to each third pixel column; at this time, the target edge points are the edge pixel points included in the region 1004, and the third direction 1003 may be the direction of increasing row identifiers. Extending the candidate edge pixel points in this way reduces the chance that an edge is truncated.
Further, the computer device may acquire the polar coordinate curve corresponding to each target edge point and determine a target polar coordinate curve from among the polar coordinate curves, where intersection points exist between the target polar coordinate curve and d other polar coordinate curves, and d is a positive integer greater than or equal to a straight-line determination threshold. A polar coordinate curve may be regarded as a sinusoidal curve and is used to represent, in polar coordinates, the straight lines passing through the corresponding target edge point, i.e., it can represent all straight lines through that point. The number of curve intersections between each polar coordinate curve and the other polar coordinate curves is acquired, and a polar coordinate curve whose number of curve intersections is greater than or equal to the straight-line determination threshold is determined as a target polar coordinate curve. That is, if intersection points exist between the target polar coordinate curve and d polar coordinate curves, i.e., the number of curve intersections is d, this indicates that at least one straight line passes through the target edge point corresponding to the target polar coordinate curve, and that straight line passes through at least d target edge points. An edge reference line is determined based on the target polar coordinate curve and the target edge point corresponding to the target polar coordinate curve; the target edge points lying on the edge reference line are determined as detected edge points, and the detected edge points are connected to obtain the object edge line of the target object. Optionally, the computer device may detect the object edge line corresponding to the target object from the target edge points directly based on the progressive probabilistic Hough transform or the standard Hough transform.
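A sketch of the optional Hough-based detection using OpenCV's probabilistic Hough transform; the rho, theta, threshold, and line-length parameters are illustrative assumptions, and the final filter mirrors the edge slope threshold described above.

```python
import cv2
import numpy as np

def detect_object_edge_lines(edge_image: np.ndarray, slope_threshold: float = 0.1):
    """edge_image: 8-bit image whose nonzero pixels are the target edge points."""
    lines = cv2.HoughLinesP(edge_image, rho=1, theta=np.pi / 180, threshold=30,
                            minLineLength=20, maxLineGap=5)  # parameters are assumptions
    kept = []
    if lines is not None:
        for x1, y1, x2, y2 in lines.reshape(-1, 4):
            if x2 != x1 and abs((y2 - y1) / (x2 - x1)) <= slope_threshold:
                kept.append((x1, y1, x2, y2))  # near-horizontal object edge line segment
    return kept
```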
When forming the detected edge points into the object edge line of the target object, the computer device may alternatively perform edge point combination on the target edge points to obtain adjacent edge point pairs among the target edge points, calculate the edge slope corresponding to each adjacent edge point pair, and determine the detected edge points based on the edge slopes, where the edge slopes corresponding to the adjacent edge point pairs formed by the detected edge points are smaller than the edge slope threshold; the detected edge points then form the object edge line of the target object.
Step S508, detecting whether the target object satisfies the standard object condition.
In an embodiment of the present application, in one standard detection manner, the standard object condition includes a position difference threshold and a standard object length threshold. If the position difference value is less than or equal to the position difference threshold and the length of the object edge line is greater than or equal to the standard object length threshold, it is determined that the position difference value and the object edge line satisfy the standard object condition, and step S509 is performed; if the position difference value is greater than the position difference threshold, or the length of the object edge line is less than the standard object length threshold, it is determined that the position difference value and the object edge line do not satisfy the standard object condition, and step S510 is performed.
In another standard detection manner, the standard object condition includes an object anomaly threshold. A detection function is generated according to a first detection parameter and a second detection parameter; the value of the first detection parameter in the detection function is determined as the position difference value, the value of the second detection parameter in the detection function is determined as the length of the object edge line, and the object detection quality corresponding to the detection function is obtained. That is, the first detection parameter and the second detection parameter are indexes used for detecting the target object, and the object detection quality is used to characterize the degree of abnormality of the target object. If the object detection quality is greater than the object anomaly threshold, it is determined that the position difference value and the object edge line do not satisfy the standard object condition, and step S510 is performed; if the object detection quality is less than or equal to the object anomaly threshold, it is determined that the position difference value and the object edge line satisfy the standard object condition, and step S509 is performed.
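A sketch of both standard detection manners; the first mirrors the threshold comparisons above, while the patent leaves the form of the detection function in the second manner unspecified, so the weighted combination below is only one plausible choice (w1 and w2 are assumed weights).

```python
def meets_standard_object_condition(score1: float, edge_line_length: float,
                                    diff_threshold: float, length_threshold: float) -> bool:
    """First manner: small position difference value and a long enough object edge line."""
    return score1 <= diff_threshold and edge_line_length >= length_threshold

def object_detection_quality(score1: float, edge_line_length: float,
                             w1: float = 1.0, w2: float = 1.0) -> float:
    """Second manner (assumed form): larger values indicate a more abnormal target object."""
    return w1 * abs(score1) - w2 * edge_line_length
```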
In step S509, if the position difference value and the object edge line satisfy the standard object condition, the target object is determined to be the standard object.
In the embodiment of the application, if the position difference value and the object edge line satisfy the standard object condition, the target object is determined to be the standard object, and the object processing result of the target object is determined to be an object retention result.
In step S510, if the position difference value and the object edge line do not satisfy the standard object condition, the target object is determined to be an abnormal object.
In the embodiment of the application, if the position difference value and the object edge line do not meet the standard object condition, the target object is determined to be an abnormal object, and the object processing result of the target object is determined to be an object discarding result or an object repairing result.
The method can be applied to anomaly detection of MIM products: the target object is an MIM product, the object detection result of the MIM product is determined based on the above process, and the object processing result of the MIM product is determined based on the object detection result.
In the embodiment of the application, image gradient information of an image to be detected is acquired, and the image to be detected is filtered according to the image gradient information to obtain a mask image; a first matching pixel point and a second matching pixel point are identified from the mask image, first position information of the first matching pixel point in the mask image and second position information of the second matching pixel point in the mask image are acquired, and a position difference value between the first matching pixel point and the second matching pixel point is determined according to the first position information and the second position information, where the pixel values of the first matching pixel point and the second matching pixel point both belong to the edge pixel range and the distance between them is greater than the object scale threshold; an object edge line of the target object is acquired from the mask image, and if the position difference value and the object edge line satisfy the standard object condition, the target object is determined to be a standard object. By acquiring the image gradient information of the image to be detected and filtering the image to be detected, the edge points in the image to be detected can be preliminarily screened; because the filtering acts on pixel points, the filtering process is quantitative and objective, avoiding interference from the semantic information of the image and improving the accuracy and efficiency of image detection to a certain extent. Furthermore, based on the characteristics of the target object, the first matching pixel point and the second matching pixel point are identified from the mask image and used as reference points for detecting the target object; detecting their relative positions and detecting the edge line of the target object realizes detection of the flatness and contour line of the target object, so that the target object is detected based on its own object characteristics, and the added objectivity of the detection process improves the accuracy and efficiency of image detection.
Further, please refer to fig. 11; fig. 11 is a schematic diagram of an image detection apparatus according to an embodiment of the present application. The image detection apparatus may be a computer program (including program code, etc.) running on a computer device; for example, the image detection apparatus may be application software. The apparatus may be used to perform the corresponding steps in the methods provided by the embodiments of the present application. As shown in fig. 11, the image detection apparatus 1100 may be used in the computer device in the embodiment corresponding to fig. 3. Specifically, the apparatus may include: a gradient acquisition module 11, an image filtering module 12, a position acquisition module 13, a difference acquisition module 14, an edge line acquisition module 15, and a standard detection module 16.
The gradient obtaining module 11 is configured to perform edge difference processing on an image to be detected to obtain image gradient information of the image to be detected;
the image filtering module 12 is configured to perform mask filtering processing on an image to be detected according to the image gradient information to obtain a mask image;
the position obtaining module 13 is configured to perform pixel matching identification processing on the mask image to obtain a first matching pixel and a second matching pixel in the mask image; the pixel value of the first matching pixel point and the pixel value of the second matching pixel point both belong to the edge pixel range, and the distance between the first matching pixel point and the second matching pixel point is greater than the object scale threshold;
the position acquisition module 13 is further configured to perform pixel point position extraction on the mask image to obtain first position information of the first matching pixel point in the mask image and second position information of the second matching pixel point in the mask image;
the difference obtaining module 14 is configured to perform pixel position matching according to the first position information and the second position information, and determine a position difference value between the first matching pixel point and the second matching pixel point;
an edge line obtaining module 15, configured to perform edge recognition on the target object from the mask image to obtain an object edge line of the target object;
and the standard detection module 16 is configured to determine that the target object is the standard object if the position difference value and the object edge line satisfy the standard object condition.
Wherein, the gradient obtaining module 11 includes:
the image denoising unit 111 is configured to perform filtering denoising processing on the image to be detected to obtain a denoised image;
the amplitude obtaining unit 112 is configured to obtain an edge difference operator, and perform edge difference processing on the denoised image by using the edge difference operator to obtain a pixel gradient amplitude of the denoised image;
and the gradient generating unit 113 is used for forming the pixel gradient amplitude into image gradient information of the image to be detected.
The image denoising unit 111 includes:
the pixel convolution subunit 1111 is configured to obtain a convolution kernel template, obtain an image pixel value of the image to be detected, and perform pixel convolution processing on the convolution kernel template and the image pixel value to obtain a denoised pixel value;
a de-noising generation subunit 1112, configured to compose the de-noised pixel values into a de-noised image.
The edge difference operator comprises a longitudinal difference operator and a transverse difference operator;
the amplitude obtaining unit 112 includes:
an operator obtaining subunit 1121, configured to obtain a longitudinal difference operator and a transverse difference operator;
the difference processing subunit 1122 is configured to perform edge difference processing on the denoised image by using a longitudinal difference operator to obtain a longitudinal gradient amplitude value, and perform edge difference processing on the denoised image by using a transverse difference operator to obtain a transverse gradient amplitude value;
and the gradient fusion subunit 1123 is configured to perform gradient fusion processing on the longitudinal gradient amplitude and the transverse gradient amplitude to obtain a pixel gradient amplitude of the denoised image.
The image filtering module 12 includes:
the boundary filtering unit 121 is configured to perform boundary filtering processing on the image to be detected according to the image gradient information to obtain a filtered image;
a binary conversion unit 122, configured to perform edge binary conversion processing on the filtered image to obtain a binary image;
a region identification unit 123, configured to identify a target region where a target object is located in the binary image;
and the pixel filtering unit 124 is configured to perform pixel filtering on the binary image based on the object region to obtain a mask image.
Wherein, the boundary filtering unit 121 includes:
an amplitude obtaining subunit 1211, configured to obtain, from the image gradient information, pixel gradient amplitudes of N to-be-detected pixel points that form an image to be detected; n is a positive integer;
a pixel updating subunit 1212, configured to obtain a pixel gradient amplitude of a neighboring pixel point of the ith pixel point to be detected, and update an image pixel value of the ith pixel point to be detected to an invalid pixel value if the pixel gradient amplitude of the ith pixel point to be detected is smaller than the pixel gradient amplitude of the neighboring pixel point; i is a positive integer, i is less than or equal to N;
a filter determining subunit 1213 configured to determine the updated image to be detected as a filtered image.
The binary conversion unit 122 includes:
a pixel obtaining subunit 1221, configured to obtain filter pixel values corresponding to N filter pixels included in the filter image; n is a positive integer;
a first updating subunit 1222, configured to update the filter pixel value of the filter pixel point with the filter pixel value smaller than the first pixel threshold to the first default pixel value; updating the filtering pixel value of the filtering pixel point with the filtering pixel value larger than the second pixel threshold value to be a second default pixel value;
a second updating subunit 1223, configured to mark, as an intermediate pixel point, a filtered pixel point whose filtered pixel value is greater than or equal to the first pixel threshold and is less than or equal to the second pixel threshold, obtain a connected neighborhood pixel point of the intermediate pixel point, and update the pixel value of the intermediate pixel point according to the connected neighborhood pixel point; the connected neighborhood pixel points refer to pixel points adjacent to the intermediate pixel points;
a binary determining subunit 1224, configured to determine the updated filtered image as a binary image.
The area identification unit 123 includes:
a transverse projection subunit 1231, configured to perform pixel line-by-line projection on the binary image to obtain line projection values corresponding to each image line, determine the image line with the largest line projection value as a reference line, and perform line number expansion on the reference line to obtain a transverse region;
a longitudinal projection subunit 1232, configured to perform pixel-by-column projection on the binary image to obtain column projection values corresponding to each image column, determine the image column with the largest column projection value as a reference column, and perform column number expansion on the reference column to obtain a longitudinal region;
an image dividing unit 1233, configured to divide the binary image into M sub-regions based on the horizontal region and the vertical region;
a region determining subunit 1234, configured to acquire the distribution dispersion of the region pixel points corresponding to the second default pixel value in each sub-region, and determine a sub-region whose distribution dispersion is greater than or equal to the object distribution threshold as the object region where the target object is located; M is a positive integer, and the second default pixel value refers to a pixel value greater than the second pixel threshold.
Wherein, the area identification unit 123 includes:
the candidate detection subunit 1235 is configured to identify a target object in the image to be detected by using the object detection model, obtain an object candidate frame corresponding to the target object, and determine an object region where the target object is located in the binary image based on the object candidate frame.
The pixel filtering unit 124 is specifically configured to:
updating pixel values of pixel points located in a background area in the binary image to be first default pixel values, and determining the updated binary image as a mask image; the background region refers to a region other than the object region in the binary image.
Wherein, the position obtaining module 13 includes:
an edge point obtaining unit 131, configured to obtain at least two edge pixel points in the mask image, where a pixel value belongs to an edge pixel range;
a matching point obtaining unit 132, configured to perform coordinate screening processing on at least two edge pixel points, and determine a first matching pixel point and a second matching pixel point of the at least two edge pixel points; the first matching pixel points refer to the pixel points with the minimum longitudinal coordinate corresponding to the mask image in the at least two edge pixel points, and the second matching pixel points refer to the pixel points with the maximum longitudinal coordinate corresponding to the mask image in the at least two edge pixel points;
the first position determining unit 133 is configured to sequentially obtain K first extended pixel points along a first direction with the first matching pixel point as a starting point, perform horizontal coordinate matching processing on first pixel columns where the K first extended pixel points are located, and determine first position information of the first matching pixel points in the mask image;
the second position determining unit 134 is configured to sequentially obtain K second extended pixel points along a second direction with the second matching pixel point as a starting point, perform horizontal coordinate matching processing on second pixel columns where the K second extended pixel points are located, and determine second position information of the second matching pixel points in the mask image; the first direction and the second direction are opposite directions, and K is a positive integer.
Wherein, the edge line acquisition module 15 includes:
an edge identification unit 151, configured to obtain at least two edge pixel points in the mask image, where a pixel value belongs to an edge pixel range;
a target acquisition unit 152, configured to perform pixel distribution analysis on the at least two edge pixel points to obtain the third pixel columns respectively corresponding to the at least two edge pixel points, and perform edge detection on each third pixel column to obtain the target edge point corresponding to each third pixel column; the target edge point is the edge pixel point with the minimum transverse coordinate in the third pixel column;
a detection determining unit 153, configured to perform straight-line detection on the target edge points, determine detected edge points from the target edge points, and form the detected edge points into the object edge line of the target object; the slope of the object edge line is less than or equal to the edge slope threshold.
The detection determining unit 153 includes:
a curve determining subunit 1531, configured to obtain a polar coordinate curve corresponding to each target edge point, and determine a target polar coordinate curve from the polar coordinate curves; intersection points exist between the target polar coordinate curve and the d polar coordinate curves, and d is a positive integer which is greater than or equal to the straight line determination threshold;
a reference line determining subunit 1532, configured to determine an edge reference line based on the target polar coordinate curve and the target edge point corresponding to the target polar coordinate curve;
the edge connection subunit 1533 is configured to determine a target edge point located on the edge reference line as a detected edge point, and connect the detected edge point to obtain an object edge line of the target object.
The detection determining unit 153 includes:
a slope acquisition subunit 1534, configured to perform edge point combination on the target edge points, obtain adjacent edge point pairs among the target edge points, calculate the edge slope corresponding to each adjacent edge point pair, and determine the detected edge points based on the edge slopes; the edge slopes corresponding to the adjacent edge point pairs formed by the detected edge points are smaller than the edge slope threshold;
the line generating sub-unit 1535 is configured to compose the detected edge points into object edge lines of the target object.
Wherein the standard object condition comprises a position difference threshold and a standard object length threshold; the apparatus 1100 further comprises:
a standard determining module 17, configured to determine that the position difference value and the object edge line meet a standard object condition if the position difference value is less than or equal to a position difference threshold and the length of the object edge line is greater than or equal to a standard object length threshold;
and an anomaly determination module 18, configured to determine that the position difference value and the object edge line do not satisfy the standard object condition if the position difference value is greater than the position difference threshold or the length of the object edge line is smaller than the standard object length threshold.
Wherein the standard object condition comprises an object anomaly threshold; the apparatus 1100 further comprises:
the quality obtaining module 19 is configured to generate a detection function according to the first detection parameter and the second detection parameter, determine a value of the first detection parameter in the detection function as a position difference value, determine the second detection parameter in the detection function as a length of an object edge line, and obtain an object detection quality corresponding to the detection function;
the anomaly determination module 18 is configured to determine that the position difference value and the object edge line do not satisfy the standard object condition if the object detection quality is greater than the object anomaly threshold;
the standard determination module 17 is configured to determine that the position difference value and the object edge line satisfy the standard object condition if the object detection quality is less than or equal to the object abnormal threshold.
The embodiment of the application provides an image detection device, which can acquire image gradient information of an image to be detected, and filter the image to be detected according to the image gradient information to obtain a mask image; identify a first matching pixel point and a second matching pixel point from the mask image, acquire first position information of the first matching pixel point in the mask image, acquire second position information of the second matching pixel point in the mask image, and determine a position difference value between the first matching pixel point and the second matching pixel point according to the first position information and the second position information, where the pixel values of the first matching pixel point and the second matching pixel point both belong to the edge pixel range and the distance between the two points is greater than the object scale threshold; and acquire an object edge line of the target object from the mask image, and determine the target object as a standard object if the position difference value and the object edge line meet the standard object condition. Acquiring the image gradient information of the image to be detected and filtering the image to be detected allows edge points in the image to be preliminarily screened; because the pixel points are filtered on quantitative gradient values, the filtering process is quantitative and objective, interference from the semantic content of the image is avoided, and the accuracy and efficiency of image detection are improved to a certain extent. Furthermore, based on the characteristics of the target object, the first matching pixel point and the second matching pixel point are identified from the mask image and used as reference points for detecting the target object; detecting their relative positions together with the edge line of the target object realizes detection of the flatness and the contour line of the target object, so that the target object is detected on the basis of its own object characteristics, and the increased objectivity of the detection process further improves the accuracy and efficiency of image detection.
Referring to fig. 12, fig. 12 is a schematic structural diagram of a computer device according to an embodiment of the present application. As shown in fig. 12, the computer device in the embodiment of the present application may include: one or more processors 1201, memory 1202, and input-output interface 1203. The processor 1201, the memory 1202, and the input/output interface 1203 are connected by a bus 1204. The memory 1202 is configured to store a computer program, where the computer program includes program instructions, and the input/output interface 1203 is configured to receive data and output data, for example, for data interaction between a computer device and a collection device, or for data interaction between a computer device and a terminal device; the processor 1201 is configured to execute program instructions stored by the memory 1202.
The processor 1201 may perform the following operations:
performing edge difference processing on an image to be detected to obtain image gradient information of the image to be detected, and performing mask filtering processing on the image to be detected according to the image gradient information to obtain a mask image;
performing pixel matching identification processing on the mask image to obtain a first matching pixel point and a second matching pixel point; the pixel value of the first matching pixel point and the pixel value of the second matching pixel point both belong to the edge pixel range, and the distance between the first matching pixel point and the second matching pixel point is greater than the object scale threshold;
pixel point position extraction is carried out on the mask image to obtain first position information of a first matching pixel point in the mask image and second position information of a second matching pixel point in the mask image, pixel position matching is carried out according to the first position information and the second position information, and a position difference value between the first matching pixel point and the second matching pixel point is determined;
and performing edge recognition on the target object from the mask image to obtain an object edge line of the target object, and determining the target object as a standard object if the position difference value and the object edge line meet the standard object condition.
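For orientation only, the four processor operations above can be traced end to end in the following Python/NumPy sketch. It is a deliberately simplified stand-in, not the claimed implementation: the gradient operator, the mask thresholding scheme and every numeric threshold are assumptions made for illustration, and the sketches accompanying the individual claims below refine the individual steps.

```python
import numpy as np

def detect(image, edge_range=(200, 255), scale_thresh=50,
           pos_diff_thresh=5, length_thresh=80, slope_thresh=0.2):
    """Simplified end-to-end sketch of the four processor operations.
    Every threshold and the gradient/mask scheme are illustrative
    assumptions, not values taken from this application."""
    img = image.astype(np.float64)

    # 1. Edge difference processing: horizontal/vertical differences
    #    fused into a gradient magnitude, then a crude mask obtained by
    #    thresholding (stand-in for the mask filtering processing).
    gx = np.zeros_like(img); gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    grad = np.hypot(gx, gy)
    mask = np.where(grad >= grad.mean() + grad.std(), 255, 0)

    # 2. Pixel matching identification: edge pixels with the minimum and
    #    maximum vertical coordinate, kept only if far enough apart.
    ys, xs = np.nonzero((mask >= edge_range[0]) & (mask <= edge_range[1]))
    if ys.size == 0:
        return False
    p1 = (ys.min(), xs[ys.argmin()])   # first matching pixel point
    p2 = (ys.max(), xs[ys.argmax()])   # second matching pixel point
    if p2[0] - p1[0] <= scale_thresh:
        return False

    # 3. Position difference between the two matching pixel points
    #    (here: difference of their transverse coordinates).
    pos_diff = abs(p1[1] - p2[1])

    # 4. Object edge line: leftmost edge pixel of each row, filtered by
    #    the slope of adjacent point pairs.
    pts = [(r, np.nonzero(mask[r])[0].min())
           for r in range(mask.shape[0]) if mask[r].any()]
    line = [p for a, b in zip(pts, pts[1:])
            for p in (a, b)
            if abs((b[1] - a[1]) / max(b[0] - a[0], 1)) < slope_thresh]

    # Standard object condition: small position difference, long edge line.
    return pos_diff <= pos_diff_thresh and len(set(line)) >= length_thresh
```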
In some possible embodiments, the processor 1201 may be a Central Processing Unit (CPU), or another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 1202 may include both read-only memory and random access memory, and provides instructions and data to the processor 1201 and the input/output interface 1203. A portion of the memory 1202 may also include non-volatile random access memory. For example, the memory 1202 may also store device type information.
In a specific implementation, the computer device may execute, through its built-in functional modules, the implementation manners provided in the steps in fig. 3 or fig. 5; reference may be made to the description of those steps, which is not repeated here.
The embodiment of the present application provides a computer device, including: a processor, an input/output interface and a memory. The processor acquires the computer program in the memory and executes the steps of the method shown in fig. 5 to perform the image detection operation: acquiring image gradient information of an image to be detected, and filtering the image to be detected according to the image gradient information to obtain a mask image; identifying a first matching pixel point and a second matching pixel point from the mask image, acquiring first position information of the first matching pixel point in the mask image, acquiring second position information of the second matching pixel point in the mask image, and determining a position difference value between the first matching pixel point and the second matching pixel point according to the first position information and the second position information, where the pixel values of the first matching pixel point and the second matching pixel point both belong to the edge pixel range and the distance between the two points is greater than the object scale threshold; and acquiring an object edge line of the target object from the mask image, and determining the target object as a standard object if the position difference value and the object edge line meet the standard object condition. Acquiring the image gradient information of the image to be detected and filtering the image to be detected allows edge points in the image to be preliminarily screened; because the pixel points are filtered on quantitative gradient values, the filtering process is quantitative and objective, interference from the semantic content of the image is avoided, and the accuracy and efficiency of image detection are improved to a certain extent. Furthermore, based on the characteristics of the target object, the first matching pixel point and the second matching pixel point are identified from the mask image and used as reference points for detecting the target object; detecting their relative positions together with the edge line of the target object realizes detection of the flatness and the contour line of the target object, so that the target object is detected on the basis of its own object characteristics, and the increased objectivity of the detection process further improves the accuracy and efficiency of image detection.
An embodiment of the present application further provides a computer-readable storage medium storing a computer program. The computer program is suitable for being loaded by the processor to execute the image detection method provided in the steps in fig. 3 or fig. 5; reference may be made to the implementation manners provided in those steps, which are not repeated here, and the beneficial effects of the same method are likewise not repeated. For technical details not disclosed in the embodiments of the computer-readable storage medium of the present application, reference is made to the description of the method embodiments of the present application. By way of example, the computer program can be deployed to be executed on one computer device, or on multiple computer devices located at one site, or on multiple computer devices distributed across multiple sites and interconnected by a communication network.
The computer-readable storage medium may be the image detection apparatus provided in any of the foregoing embodiments, or an internal storage unit of the computer device, such as a hard disk or memory of the computer device. The computer-readable storage medium may also be an external storage device of the computer device, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card provided on the computer device. Further, the computer-readable storage medium may include both an internal storage unit and an external storage device of the computer device. The computer-readable storage medium is used for storing the computer program and other programs and data required by the computer device, and may also be used to temporarily store data that has been output or is to be output.
Embodiments of the present application also provide a computer program product or computer program, the computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device executes the methods provided in the optional manners in fig. 3 or fig. 5, thereby detecting the image to be detected of the target object: the image gradient information of the image to be detected is obtained and the image is filtered, so that edge points in the image are preliminarily screened; because the pixel points are filtered on quantitative gradient values, the filtering process is quantitative and objective, interference from the semantic content of the image is avoided, and the accuracy and efficiency of image detection are improved to a certain extent. Furthermore, based on the characteristics of the target object, the first matching pixel point and the second matching pixel point are identified from the mask image and used as reference points for detecting the target object; detecting their relative positions together with the edge line of the target object realizes detection of the flatness and the contour line of the target object, so that the target object is detected on the basis of its own object characteristics, and the increased objectivity of the detection process further improves the accuracy and efficiency of image detection.
The terms "first," "second," and the like in the description and in the claims and drawings of the embodiments of the present application are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "comprises" and any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, apparatus, product, or apparatus that comprises a list of steps or elements is not limited to the listed steps or modules, but may alternatively include other steps or modules not listed or inherent to such process, method, apparatus, product, or apparatus.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of the examples have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The method and the related apparatus provided by the embodiments of the present application are described with reference to the flowcharts and/or structural diagrams provided by the embodiments; each flow and/or block of the flowcharts and/or structural diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, a special purpose computer, an embedded processor or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device create means for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the structural diagram. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the functions specified in one or more flows of the flowchart and/or one or more blocks of the structural diagram. These computer program instructions may also be loaded onto a computer or other programmable data processing device to cause a series of operational steps to be performed on the computer or other programmable device to produce a computer-implemented process, such that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the structural diagram.
The steps in the method of the embodiment of the application can be sequentially adjusted, combined and deleted according to actual needs.
The modules in the device can be merged, divided and deleted according to actual needs.
The above disclosure describes only preferred embodiments of the present application and is of course not intended to limit the scope of the claims of the present application; equivalent variations and modifications made in accordance with the claims of the present application remain within the scope of the present application.

Claims (19)

1. An image detection method, characterized in that the method comprises:
performing edge difference processing on an image to be detected to obtain image gradient information of the image to be detected, and performing mask filtering processing on the image to be detected according to the image gradient information to obtain a mask image;
performing pixel matching identification processing on the mask image to obtain a first matching pixel point and a second matching pixel point; the pixel value of the first matching pixel point and the pixel value of the second matching pixel point both belong to an edge pixel range, and the distance between the first matching pixel point and the second matching pixel point is greater than an object scale threshold;
performing pixel point position extraction on the mask image to obtain first position information of the first matching pixel point in the mask image and second position information of the second matching pixel point in the mask image, performing pixel position matching according to the first position information and the second position information, and determining a position difference value between the first matching pixel point and the second matching pixel point;
and performing edge identification on the target object from the mask image to obtain an object edge line of the target object, and determining the target object as a standard object if the position difference value and the object edge line meet a standard object condition.
2. The method of claim 1, wherein the performing edge difference processing on the image to be detected to obtain image gradient information of the image to be detected comprises:
performing filtering and de-noising processing on the image to be detected to obtain a de-noised image;
and acquiring an edge difference operator, carrying out edge difference processing on the de-noised image by adopting the edge difference operator to obtain a pixel gradient amplitude of the de-noised image, and forming the pixel gradient amplitude into image gradient information of the image to be detected.
3. The method of claim 2, wherein the performing filtering and de-noising processing on the image to be detected to obtain a de-noised image comprises:
acquiring a convolution kernel template, acquiring an image pixel value of the image to be detected, and performing pixel convolution processing on the convolution kernel template and the image pixel value to obtain a de-noised pixel value;
and combining the de-noised pixel values into a de-noised image.
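A minimal sketch of this convolution step follows; the 3x3 Gaussian kernel is assumed here as one concrete convolution kernel template, since the claim does not fix the kernel:

```python
import numpy as np

def denoise(image, kernel=None):
    """Pixel convolution of a kernel template with the image pixel
    values; a 3x3 Gaussian kernel is assumed for illustration."""
    if kernel is None:
        kernel = np.array([[1, 2, 1],
                           [2, 4, 2],
                           [1, 2, 1]], dtype=np.float64) / 16.0
    img = np.pad(image.astype(np.float64), 1, mode="edge")
    out = np.zeros(image.shape, dtype=np.float64)
    for dy in range(3):          # slide the 3x3 template over the image
        for dx in range(3):
            out += kernel[dy, dx] * img[dy:dy + image.shape[0],
                                        dx:dx + image.shape[1]]
    return out  # de-noised pixel values combined into a de-noised image
```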
4. The method of claim 2, wherein the edge difference operator comprises a vertical difference operator and a horizontal difference operator;
the obtaining of the edge difference operator, performing edge difference processing on the denoised image by using the edge difference operator to obtain a pixel gradient amplitude of the denoised image, includes:
acquiring the longitudinal difference operator and the transverse difference operator;
performing edge difference processing on the denoised image by adopting the longitudinal difference operator to obtain a longitudinal gradient amplitude value, and performing edge difference processing on the denoised image by adopting the transverse difference operator to obtain a transverse gradient amplitude value;
and carrying out gradient fusion processing on the longitudinal gradient amplitude and the transverse gradient amplitude to obtain a pixel gradient amplitude of the de-noised image.
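The following sketch assumes the Sobel pair as one concrete choice of longitudinal and transverse difference operators, and fuses the two amplitudes as the Euclidean magnitude; both choices are illustrative, as the claim itself does not name an operator or a fusion rule:

```python
import numpy as np

# Sobel operators, assumed as the longitudinal (vertical) and
# transverse (horizontal) difference operators of this claim.
SOBEL_Y = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=np.float64)
SOBEL_X = SOBEL_Y.T

def gradient_magnitude(denoised):
    """Edge difference with both operators, then gradient fusion of the
    longitudinal and transverse amplitudes into the pixel gradient
    amplitude."""
    h, w = denoised.shape
    img = np.pad(denoised.astype(np.float64), 1, mode="edge")
    gy = np.zeros((h, w)); gx = np.zeros((h, w))
    for dy in range(3):
        for dx in range(3):
            win = img[dy:dy + h, dx:dx + w]
            gy += SOBEL_Y[dy, dx] * win   # longitudinal gradient amplitude
            gx += SOBEL_X[dy, dx] * win   # transverse gradient amplitude
    return np.hypot(gx, gy)              # fused amplitude sqrt(gx^2+gy^2)
```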
5. The method of claim 1, wherein the performing mask filtering processing on the image to be detected according to the image gradient information to obtain a mask image comprises:
performing boundary filtering processing on the image to be detected according to the image gradient information to obtain a filtered image;
and performing edge binary conversion processing on the filtered image to obtain a binary image, identifying an object region where a target object in the binary image is located, and performing pixel filtering on the binary image based on the object region to obtain a mask image.
6. The method of claim 5, wherein the performing boundary filtering processing on the image to be detected according to the image gradient information to obtain a filtered image comprises:
acquiring pixel gradient amplitudes of N to-be-detected pixel points forming the to-be-detected image from the image gradient information; n is a positive integer;
acquiring a pixel gradient amplitude of a neighborhood pixel point of an ith pixel point to be detected, and updating an image pixel value of the ith pixel point to be detected into an invalid pixel value if the pixel gradient amplitude of the ith pixel point to be detected is smaller than the pixel gradient amplitude of the neighborhood pixel point; i is a positive integer, i is less than or equal to N;
and determining the updated image to be detected as a filtering image.
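Read literally, this claim describes a non-maximum-suppression-style pass over the gradient amplitudes. The sketch below implements a simplified 4-neighbour variant; the choice of neighbourhood and of 0 as the invalid pixel value are assumptions:

```python
import numpy as np

def boundary_filter(image, grad):
    """Keep a pixel only if its gradient amplitude is not smaller than
    that of its neighbourhood pixel points; otherwise its image pixel
    value is updated to an invalid value (0 is assumed here)."""
    out = image.astype(np.float64).copy()
    h, w = grad.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neighbours = (grad[y - 1, x], grad[y + 1, x],
                          grad[y, x - 1], grad[y, x + 1])
            if grad[y, x] < max(neighbours):
                out[y, x] = 0   # invalid pixel value (assumed)
    return out                  # updated image = filtered image
```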
7. The method of claim 5, wherein the performing edge binary conversion processing on the filtered image to obtain a binary image comprises:
acquiring filtering pixel values respectively corresponding to N filtering pixel points included in the filtering image; n is a positive integer;
updating the filtering pixel value of the filtering pixel point with the filtering pixel value smaller than the first pixel threshold value to be a first default pixel value; updating the filtering pixel value of the filtering pixel point with the filtering pixel value larger than a second pixel threshold value to be a second default pixel value;
marking the filtering pixel point with the filtering pixel value larger than or equal to the first pixel threshold value and smaller than or equal to the second pixel threshold value as an intermediate pixel point, acquiring a connected neighborhood pixel point of the intermediate pixel point, and updating the pixel value of the intermediate pixel point according to the connected neighborhood pixel point; the connected neighborhood pixel points refer to pixel points adjacent to the intermediate pixel points;
and determining the updated filtering image as a binary image.
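This double-threshold conversion, with intermediate pixels resolved through their connected neighbourhood, resembles hysteresis thresholding. A single-pass sketch follows; the threshold values and the default pixel values are illustrative assumptions:

```python
import numpy as np

def edge_binarize(filtered, t1=50, t2=150, low=0, high=255):
    """Below t1 -> first default pixel value; above t2 -> second default
    pixel value; intermediate pixels take the high value only if a
    connected-neighbourhood pixel is already high (single pass, so a
    simplification of full hysteresis)."""
    out = np.where(filtered < t1, low,
          np.where(filtered > t2, high, -1)).astype(np.int32)
    ys, xs = np.nonzero(out == -1)          # intermediate pixel points
    for y, x in zip(ys, xs):
        y0, y1 = max(y - 1, 0), min(y + 2, out.shape[0])
        x0, x1 = max(x - 1, 0), min(x + 2, out.shape[1])
        out[y, x] = high if (out[y0:y1, x0:x1] == high).any() else low
    return out.astype(np.uint8)             # binary image
```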
8. The method of claim 5, wherein the identifying the object region in which the target object is located in the binary image comprises:
performing pixel line-by-line projection on the binary image to obtain line projection values corresponding to each image line, determining the image line with the maximum line projection value as a reference line, and performing line number expansion on the reference line to obtain a transverse area;
performing pixel column-by-column projection on the binary image to obtain a column projection value corresponding to each image column, determining the image column with the maximum column projection value as a reference column, and performing column number expansion on the reference column to obtain a longitudinal area;
dividing the binary image into M sub-regions based on the transverse region and the longitudinal region, acquiring the distribution density of the pixel points corresponding to a second default pixel value in each sub-region, and determining the sub-region with the distribution density greater than or equal to an object distribution threshold as the object region where the target object is located; M is a positive integer, and the second default pixel value refers to a pixel value greater than the second pixel threshold.
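A condensed sketch of this projection-based localization follows; for brevity it checks only the intersection of the two expanded bands rather than all M sub-regions, and the expansion width and distribution threshold are placeholders:

```python
import numpy as np

def locate_object_region(binary, expand=20, high=255, density_thresh=0.3):
    """Row-wise and column-wise projection; the row/column with the
    maximum projection value is expanded into a transverse and a
    longitudinal band, and the band intersection is accepted as the
    object region if its density of second-default-value pixels is
    high enough. 'expand' and 'density_thresh' are illustrative."""
    proj_rows = (binary == high).sum(axis=1)   # line projection values
    proj_cols = (binary == high).sum(axis=0)   # column projection values
    r, c = proj_rows.argmax(), proj_cols.argmax()
    band_r = slice(max(r - expand, 0), r + expand)   # transverse area
    band_c = slice(max(c - expand, 0), c + expand)   # longitudinal area
    region = binary[band_r, band_c]
    density = (region == high).mean()   # share of second-default pixels
    return (band_r, band_c) if density >= density_thresh else None
```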
9. The method of claim 5, wherein the pixel filtering the binary image based on the object region to obtain a mask image comprises:
updating pixel values of pixel points located in a background area in the binary image to be first default pixel values, and determining the updated binary image as a mask image; the background region refers to a region other than the object region in the binary image.
10. The method of claim 1, wherein the performing pixel matching identification processing on the mask image to obtain a first matching pixel and a second matching pixel comprises:
acquiring at least two edge pixel points of which the pixel values belong to the edge pixel range in the mask image;
performing coordinate screening processing on the at least two edge pixel points, and determining a first matching pixel point and a second matching pixel point in the at least two edge pixel points; the first matching pixel points refer to the pixel points with the minimum longitudinal coordinate corresponding to the mask image in the at least two edge pixel points, and the second matching pixel points refer to the pixel points with the maximum longitudinal coordinate corresponding to the mask image in the at least two edge pixel points;
the pixel position extraction of the mask image to obtain first position information of the first matching pixel in the mask image and second position information of the second matching pixel in the mask image includes:
sequentially acquiring K first extension pixel points along a first direction by taking the first matching pixel points as starting points, respectively performing transverse coordinate matching processing on first pixel columns where the K first extension pixel points are located, and determining first position information of the first matching pixel points in the mask image;
sequentially acquiring K second extended pixel points along a second direction by taking the second matched pixel points as starting points, respectively performing transverse coordinate matching processing on second pixel columns where the K second extended pixel points are located, and determining second position information of the second matched pixel points in the mask image; the first direction and the second direction are opposite directions, and K is a positive integer.
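One way to realize the coordinate screening and the K-point position extraction of this claim is sketched below; the edge pixel range, the value of K, the reading of the extension points as successive rows, and the use of a mean transverse coordinate as the "position information" are all assumptions:

```python
import numpy as np

def matching_pixel_points(mask, edge_lo=200, edge_hi=255, k=5):
    """Coordinate screening: the edge pixel points with the minimum and
    maximum vertical coordinate become the first/second matching pixel
    points; their position information is then aggregated over K
    extension points taken in opposite directions."""
    ys, xs = np.nonzero((mask >= edge_lo) & (mask <= edge_hi))
    i_min, i_max = ys.argmin(), ys.argmax()
    p1 = (ys[i_min], xs[i_min])   # first matching pixel point
    p2 = (ys[i_max], xs[i_max])   # second matching pixel point

    def mean_x(start_row, step):
        # mean transverse coordinate over K rows from the starting point
        rows = range(start_row, start_row + step * k, step)
        cols = [np.nonzero(mask[r])[0].mean()
                for r in rows if 0 <= r < mask.shape[0] and mask[r].any()]
        return float(np.mean(cols)) if cols else None

    pos1 = mean_x(p1[0], +1)   # first direction (downwards, assumed)
    pos2 = mean_x(p2[0], -1)   # second, opposite direction
    return p1, p2, pos1, pos2
```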
11. The method of claim 1, wherein the edge identifying the target object from the mask image to obtain an object edge line of the target object comprises:
acquiring at least two edge pixel points of which the pixel values belong to the edge pixel range in the mask image;
performing pixel distribution analysis on the at least two edge pixel points to obtain third pixel rows corresponding to the at least two edge pixel points respectively, and performing edge detection on each third pixel row to obtain a target edge point corresponding to each third pixel row; the target edge point is an edge pixel point with the minimum transverse coordinate in the third pixel row;
performing straight line detection on the target edge points, determining detected edge points from the target edge points, and forming the detected edge points into an object edge line of the target object; the slope of the object edge line is less than or equal to an edge slope threshold.
12. The method of claim 11, wherein the performing straight line detection on the target edge points, determining detected edge points from the target edge points, and forming the detected edge points into an object edge line of the target object comprises:
acquiring a polar coordinate curve corresponding to each target edge point, and determining a target polar coordinate curve from the polar coordinate curves; the target polar coordinate curve has intersection points with d of the polar coordinate curves, where d is a positive integer greater than or equal to a straight line determination threshold;
determining an edge reference line based on the target polar coordinate curve and a target edge point corresponding to the target polar coordinate curve;
and determining the target edge points on the edge reference lines as detected edge points, and connecting the detected edge points to obtain object edge lines of the target object.
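The polar coordinate curves of this claim correspond to a Hough transform: each edge point (y, x) maps to the curve rho = x·cos(theta) + y·sin(theta), and an accumulator cell crossed by at least d curves yields the edge reference line. A small accumulator-based sketch, with illustrative resolutions and threshold:

```python
import numpy as np

def hough_edge_line(points, d_thresh=10, n_theta=180, rho_res=1.0):
    """points: iterable of (y, x) target edge points. The peak
    accumulator cell defines the edge reference line; points whose
    curves pass through it become the detected edge points."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    votes = {}
    for y, x in points:
        for ti, t in enumerate(thetas):   # trace the polar curve
            rho = int(round((x * np.cos(t) + y * np.sin(t)) / rho_res))
            votes[(ti, rho)] = votes.get((ti, rho), 0) + 1
    (ti, rho), n = max(votes.items(), key=lambda kv: kv[1])
    if n < d_thresh:                      # straight line determination
        return []
    t = thetas[ti]
    # detected edge points: those lying on the edge reference line
    return [(y, x) for y, x in points
            if int(round((x * np.cos(t) + y * np.sin(t)) / rho_res)) == rho]
```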
13. The method of claim 11, wherein the performing straight line detection on the target edge points, determining detected edge points from the target edge points, and forming the detected edge points into an object edge line of the target object comprises:
performing edge point combination on the target edge points to obtain adjacent edge point pairs in the target edge points, calculating edge slopes corresponding to the adjacent edge point pairs, and determining detected edge points based on the edge slopes; the edge slopes corresponding to adjacent edge point pairs formed by the detected edge points are all smaller than the edge slope threshold;
and forming the detected edge points into object edge lines of the target object.
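A direct sketch of this slope-based alternative; the slope threshold value and the row-major ordering of the target edge points are assumptions:

```python
def slope_filtered_edge_line(points, slope_thresh=0.2):
    """Adjacent edge point pairs whose slope (transverse change per unit
    of vertical change) stays below the edge slope threshold contribute
    their points to the object edge line."""
    pts = sorted(points)                  # (y, x), ordered by row
    detected = set()
    for (y0, x0), (y1, x1) in zip(pts, pts[1:]):
        slope = abs(x1 - x0) / max(abs(y1 - y0), 1)
        if slope < slope_thresh:
            detected.update([(y0, x0), (y1, x1)])
    return sorted(detected)               # object edge line points
```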
14. The method of claim 1, wherein the standard object condition comprises a location variance threshold and a standard object length threshold; the method further comprises the following steps:
if the position difference value is smaller than or equal to the position difference threshold value and the length of the object edge line is larger than or equal to the standard object length threshold value, determining that the position difference value and the object edge line meet a standard object condition;
and if the position difference value is larger than the position difference threshold value or the length of the object edge line is smaller than the standard object length threshold value, determining that the position difference value and the object edge line do not meet the standard object condition.
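In code, this two-threshold standard object condition reduces to a single conjunction; the threshold values below are placeholders:

```python
def is_standard(pos_diff, edge_line_len,
                pos_diff_thresh=5, length_thresh=80):
    """Standard object condition of claim 14; thresholds are assumed."""
    return pos_diff <= pos_diff_thresh and edge_line_len >= length_thresh
```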
15. The method of claim 1, wherein the standard object condition comprises an object anomaly threshold; the method further comprises the following steps:
generating a detection function according to a first detection parameter and a second detection parameter, determining a value of the first detection parameter in the detection function as the position difference value, and determining the second detection parameter in the detection function as the length of the edge line of the object, so as to obtain the object detection quality corresponding to the detection function;
if the object detection quality is greater than the object abnormal threshold, determining that the position difference value and the object edge line do not meet the standard object condition;
and if the object detection quality is less than or equal to the object abnormal threshold, determining that the position difference value and the object edge line meet the standard object condition.
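The claim leaves the form of the detection function open; the weighted combination below is therefore purely an assumed example of a function taking the position difference value and the edge line length as its two detection parameters:

```python
def object_detection_quality(pos_diff, edge_line_len,
                             w1=1.0, w2=0.01, expected_len=100):
    """One possible detection function: larger position differences and
    shorter edge lines raise the quality score (i.e. worse). The weighted
    form and all constants are assumptions, not disclosed values."""
    return w1 * pos_diff + w2 * max(expected_len - edge_line_len, 0)

def satisfies_standard_condition(pos_diff, edge_line_len,
                                 anomaly_thresh=8.0):
    # standard object iff quality does not exceed the anomaly threshold
    return object_detection_quality(pos_diff, edge_line_len) <= anomaly_thresh
```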
16. An image detection apparatus, characterized in that the apparatus comprises:
the gradient acquisition module is used for carrying out edge difference processing on an image to be detected to obtain image gradient information of the image to be detected;
the image filtering module is used for performing mask filtering processing on the image to be detected according to the image gradient information to obtain a mask image;
the position acquisition module is used for carrying out pixel point matching identification processing on the mask image to obtain a first matching pixel point and a second matching pixel point; the pixel value of the first matching pixel point and the pixel value of the second matching pixel point both belong to an edge pixel range, and the distance between the first matching pixel point and the second matching pixel point is greater than an object scale threshold;
the position obtaining module is further configured to perform pixel position extraction on the mask image to obtain first position information of the first matching pixel in the mask image and second position information of the second matching pixel in the mask image;
the difference acquisition module is used for carrying out pixel position matching according to the first position information and the second position information and determining a position difference value between the first matching pixel point and the second matching pixel point;
an edge line obtaining module, configured to perform edge recognition on the target object from the mask image to obtain an object edge line of the target object;
and the standard detection module is used for determining the target object as a standard object if the position difference value and the object edge line meet the standard object condition.
17. A computer device, comprising a processor, a memory and an input/output interface;
the processor is connected to the memory and the input/output interface, respectively, wherein the input/output interface is configured to receive data and output data, the memory is configured to store a computer program, and the processor is configured to call the computer program to enable the computer device to perform the method according to any one of claims 1 to 15.
18. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program adapted to be loaded and executed by a processor to cause a computer device having the processor to perform the method of any of claims 1-15.
19. A computer program product comprising computer programs/instructions, characterized in that the computer programs/instructions, when executed by a processor, implement the method of any of claims 1-15.
CN202111166764.4A 2021-09-30 2021-09-30 Image detection method, image detection device, computer, readable storage medium, and program product Pending CN114331951A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111166764.4A CN114331951A (en) 2021-09-30 2021-09-30 Image detection method, image detection device, computer, readable storage medium, and program product

Publications (1)

Publication Number Publication Date
CN114331951A true CN114331951A (en) 2022-04-12

Family

ID=81045293

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111166764.4A Pending CN114331951A (en) 2021-09-30 2021-09-30 Image detection method, image detection device, computer, readable storage medium, and program product

Country Status (1)

Country Link
CN (1) CN114331951A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116147842A (en) * 2022-11-14 2023-05-23 广州科易光电技术有限公司 Gas leakage detection method and device, equipment and storage medium
CN116147842B (en) * 2022-11-14 2024-04-26 广州科易光电技术有限公司 Gas leakage detection method and device, equipment and storage medium
CN116228698A (en) * 2023-02-20 2023-06-06 北京鹰之眼智能健康科技有限公司 Filler state detection method based on image processing
CN116228698B (en) * 2023-02-20 2023-10-27 北京鹰之眼智能健康科技有限公司 Filler state detection method based on image processing
CN116310745A (en) * 2023-05-10 2023-06-23 北京瑞莱智慧科技有限公司 Image processing method, data processing method, related device and storage medium
CN116310745B (en) * 2023-05-10 2024-01-23 北京瑞莱智慧科技有限公司 Image processing method, data processing method, related device and storage medium
CN116363140A (en) * 2023-06-02 2023-06-30 山东鲁玻玻璃科技有限公司 Method, system and device for detecting defects of medium borosilicate glass and storage medium
CN116363140B (en) * 2023-06-02 2023-08-25 山东鲁玻玻璃科技有限公司 Method, system and device for detecting defects of medium borosilicate glass and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination