CN113160259B - Edge detection method, edge detection device, computer equipment and storage medium - Google Patents


Info

Publication number
CN113160259B
CN113160259B (application CN202110485292.2A)
Authority
CN
China
Prior art keywords
edge
point
distribution curve
rectangular area
detected
Prior art date
Legal status
Active
Application number
CN202110485292.2A
Other languages
Chinese (zh)
Other versions
CN113160259A (en)
Inventor
常英杰
Current Assignee
Chongqing Unisinsight Technology Co Ltd
Original Assignee
Chongqing Unisinsight Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Chongqing Unisinsight Technology Co Ltd filed Critical Chongqing Unisinsight Technology Co Ltd
Priority to CN202110485292.2A priority Critical patent/CN113160259B/en
Publication of CN113160259A publication Critical patent/CN113160259A/en
Application granted granted Critical
Publication of CN113160259B publication Critical patent/CN113160259B/en


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 — Image analysis
    • G06T7/10 — Segmentation; Edge detection
    • G06T7/13 — Edge detection
    • G06T7/187 — Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G06T2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T2207/10 — Image acquisition modality
    • G06T2207/10004 — Still image; Photographic image

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention relates to the technical field of image processing and provides an edge detection method, an edge detection device, computer equipment and a storage medium. The method comprises the following steps: acquiring an image to be detected, wherein the image to be detected comprises a first edge to be detected; determining a first rectangular area and a second rectangular area in the image to be detected, wherein the first central axis of the first rectangular area and the first central axis of the second rectangular area are both parallel to the first edge and at a distance from it smaller than a first preset value; performing gray projection on the first rectangular area and the second rectangular area along their respective first central axes to obtain a first distribution curve and a second distribution curve; determining a target detection area according to the first distribution curve and the second distribution curve; and performing edge detection on the target detection area to obtain the edge of the first edge. The invention is applicable to edge detection of objects to be detected with different surface features and has high compatibility.

Description

Edge detection method, edge detection device, computer equipment and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an edge detection method, an edge detection device, a computer device, and a storage medium.
Background
The purpose of edge detection is to identify points in the digital image where the brightness changes significantly, by which edges of a particular object in the image can be identified.
Existing edge detection methods include: gradually extracting a region of interest (ROI) through region segmentation; manually drawing the ROI and tracking it with a feature-following method; and performing edge detection directly, then selecting the target edge according to features. All of these methods are based on the surface features of the object to be detected, but the surface features of different objects differ greatly. In the existing methods, edge detection therefore has to be performed separately for the surface features of each object, and the methods lack compatibility.
Disclosure of Invention
The invention aims to provide an edge detection method, an edge detection device, computer equipment and a storage medium that mask changes in the surface features of the object to be detected, so that the edge ROI (region of interest) can be detected stably without being affected by such changes. The method is applicable to edge detection of objects with different surface features and has high compatibility.
In order to achieve the above purpose, the technical scheme adopted by the invention is as follows:
In a first aspect, the present invention provides an edge detection method, the method comprising: acquiring an image to be detected, wherein the image to be detected comprises a first edge to be detected; determining a first rectangular area and a second rectangular area in the image to be detected, wherein the first central axis of the first rectangular area and the first central axis of the second rectangular area are parallel to the first edge, and the distance between each first central axis and the first edge is smaller than a first preset value; performing gray projection on the first rectangular area and the second rectangular area along their respective first central axes to obtain a first distribution curve and a second distribution curve; determining a target detection area according to the first distribution curve and the second distribution curve; and performing edge detection on the target detection area to obtain the edge of the first edge.
In a second aspect, the present invention provides an edge detection device, the device comprising: an acquisition module for acquiring an image to be detected, the image to be detected comprising a first edge to be detected; a first determining module for determining a first rectangular area and a second rectangular area in the image to be detected, wherein the first central axis of the first rectangular area and the first central axis of the second rectangular area are parallel to the first edge, and the distance between each first central axis and the first edge is smaller than a first preset value; a projection module for performing gray projection on the first rectangular area and the second rectangular area along their respective first central axes to obtain a first distribution curve and a second distribution curve; a second determining module for determining a target detection area according to the first distribution curve and the second distribution curve; and a detection module for performing edge detection on the target detection area to obtain the edge of the first edge.
In a third aspect, the present invention provides a computer device comprising a memory storing a computer program and a processor that implements the edge detection method described above when executing the computer program.
In a fourth aspect, the present invention provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements an edge detection method as described above.
Compared with the prior art, the invention performs gray projection on the first rectangular area and the second rectangular area in the image to be detected along their respective first central axes to obtain a first distribution curve and a second distribution curve, determines a target detection area according to the two curves, and finally performs edge detection on the target detection area to obtain the edge of the first edge. The invention masks changes in the surface features of the object to be detected and is not affected by them, so it is applicable to edge detection of objects with different surface features and has high compatibility.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and therefore should not be considered limiting of its scope; for a person skilled in the art, other related drawings may be obtained from these drawings without inventive effort.
Fig. 1 is a schematic diagram of a flow of an edge detection method according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a flow of another edge detection method according to an embodiment of the present invention.
Fig. 3 is an exemplary diagram of a first rectangular area according to an embodiment of the present invention.
Fig. 4 is a schematic diagram of a flow of another edge detection method according to an embodiment of the present invention.
Fig. 5 is a schematic diagram of a flow of another edge detection method according to an embodiment of the present invention.
Fig. 6 is a schematic diagram of a flow of another edge detection method according to an embodiment of the present invention.
Fig. 7 is a schematic diagram of a flow of another edge detection method according to an embodiment of the present invention.
Fig. 8 is an exemplary diagram of a first edge point in a first rectangular area according to an embodiment of the present invention.
Fig. 9 is an exemplary diagram of a second rectangular area provided in an embodiment of the present invention.
Fig. 10 is an exemplary diagram of a target detection area provided in an embodiment of the present invention.
Fig. 11 is an exemplary diagram of detected vertical and horizontal edges provided by an embodiment of the present invention.
Fig. 12 is a block diagram of an edge detection device according to an embodiment of the present invention.
Fig. 13 is a block schematic diagram of a computer device according to an embodiment of the present invention.
Icon: 10-a computer device; 11-a processor; 12-memory; 13-bus; 14-a communication interface; 100-edge detection means; 110-an acquisition module; 120-a first determination module; 130-a projection module; 140-a second determination module; 150-a detection module; 160-a position determination module.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
In the description of the present invention, it should be noted that terms such as "upper", "lower", "inner" and "outer", where they indicate an orientation or positional relationship, are based on the orientation or positional relationship shown in the drawings, or on the orientation in which the inventive product is conventionally placed in use. They are used merely for convenience and simplicity of description and do not indicate or imply that the apparatus or element referred to must have a specific orientation or be constructed and operated in a specific orientation, and therefore should not be construed as limiting the present invention.
Furthermore, the terms "first," "second," and the like, if any, are used merely for distinguishing between descriptions and not for indicating or implying a relative importance.
It should be noted that the features of the embodiments of the present invention may be combined with each other without conflict.
Existing edge detection techniques are based on the surface features of the object to be detected, and the surface features of different objects differ greatly, so edge detection has to be performed separately based on the surface features corresponding to each object.
Taking the camera industry as an example: in the production process of a camera, the projection distance between the optical axis of the lens and the center of the image sensor must be kept smaller than a certain design value; otherwise the finished camera will suffer from vignetting or increased peripheral distortion. For high-resolution products this technical index is even stricter. In practice, without special treatment, the design error of the lens structure and the processing error of the printed circuit board (PCB) on which the image sensor sits inevitably cause the finished camera to miss this index.
Therefore, in actual camera production, the position of the image sensor on the PCB must be detected accurately; for position detection, the key is detecting the edge of the image sensor relative to the PCB. Existing edge detection techniques are generally based on the surface features of the image sensor, so each method can only be applied to image sensors with similar surface features.
In view of this, the embodiments of the present invention provide an edge detection method, apparatus, computer device and storage medium that mask changes in the surface features of the image sensor, so that the edge detection method is unaffected by such changes, can be applied to image sensors with different surface features, and has high compatibility. This is described in detail below.
It should be noted that edge detection of an image sensor in the camera industry is only one application scenario of the embodiment of the present invention; in fact, the edge detection method provided by the embodiment of the present invention is also applicable to other scenarios requiring edge detection.
Referring to fig. 1, fig. 1 is a schematic diagram of a flow of an edge detection method according to an embodiment of the present invention, the method includes the following steps:
step S100, an image to be detected is obtained, wherein the image to be detected comprises a first edge to be detected.
In this embodiment, the image to be detected may include an image of a portion of the object to be detected, for example, one corner of the object to be detected, or one side of the object to be detected, and may also include a complete image of the object to be detected.
In this embodiment, the capturing accuracy of the imaging device used to capture the image to be detected may be chosen as needed; for scenes with high accuracy requirements for edge detection, a high-resolution imaging device is used. For example, for edge detection of a camera's image sensor, an industrial camera is generally used to image the image sensor on the PCB.
Step S110, a first rectangular area and a second rectangular area are determined in the image to be detected, wherein a first central axis of the first rectangular area and a first central axis of the second rectangular area are parallel to the first edge, and the distance between the first central axis and the first edge is smaller than a first preset value.
In this embodiment, the first rectangular area and the second rectangular area may each be a rectangle or a square, and their sizes may be the same or different. The first edge to be detected may or may not be parallel to the boundary of the image to be detected.
In this embodiment, the smaller the first preset value, the closer the first central axes of the first and second rectangular areas are to the first edge, and the higher the accuracy of the edge detection result. The distance between a first central axis and the first edge can be expressed as a number of pixels.
And step S120, respectively carrying out gray projection on the first rectangular area and the second rectangular area along respective first central axes to obtain a first distribution curve and a second distribution curve.
In this embodiment, the first rectangular area and the second rectangular area each have their own first central axis; the first distribution curve is obtained by gray projection of the first rectangular area along its first central axis, and the second distribution curve by gray projection of the second rectangular area along its first central axis. The two curves can be generated independently and simultaneously, or one after the other. By converting a two-dimensional rectangular area into a one-dimensional distribution curve, the influence of changes in the surface features of the object to be detected on edge detection is masked, which improves compatibility; the influence of changes in image brightness, dirt on the object's surface and the like is masked as well, so the stability of the method is also improved while edge detection accuracy is preserved.
Step S130, determining a target detection area according to the first distribution curve and the second distribution curve.
In this embodiment, the target detection area is the region where the edge of the first edge is located, also referred to as a region of interest (ROI).
In step S140, edge detection is performed on the target detection area to obtain an edge of the first edge.
In this embodiment, the operators that can be used for edge detection of the target detection area include, but are not limited to, the Canny operator, the Prewitt operator, and the Sobel operator.
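As an illustrative sketch only — the patent does not prescribe a particular implementation — a Sobel gradient-magnitude edge map can be computed in plain NumPy; the kernel is the standard Sobel operator, while the threshold value and the test image are arbitrary choices:

```python
import numpy as np

def sobel_edges(img, thresh=100.0):
    """Boolean edge map from the Sobel gradient magnitude (borders skipped)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    mag = np.zeros((h, w))
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            patch = img[r - 1:r + 2, c - 1:c + 2]
            gx = np.sum(kx * patch)  # horizontal gradient
            gy = np.sum(ky * patch)  # vertical gradient
            mag[r, c] = np.hypot(gx, gy)
    return mag > thresh

# A vertical step edge (dark left half, bright right half):
img = np.zeros((5, 6))
img[:, 3:] = 255.0
edges = sobel_edges(img)
```

The Canny operator would add Gaussian smoothing, non-maximum suppression and hysteresis thresholding on top of such a gradient step.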
According to the method provided by the embodiment of the invention, the corresponding distribution curves are obtained by gray projection of the first and second rectangular areas in the image to be detected, the target detection area is determined from these curves, and finally edge detection is performed on the target detection area to obtain the edge of the first edge. The whole process does not depend on the surface features of the object to be detected in the image, so the method is applicable to edge detection of objects with different surface features and has high compatibility; at the same time, it avoids the unstable edge detection caused when surface dirt disturbs the surface features.
On the basis of fig. 1, the embodiment of the present invention further provides a specific implementation manner of determining the first rectangular area and the second rectangular area, please refer to fig. 2, fig. 2 is a schematic diagram of a flow of another edge detection method provided in the embodiment of the present invention, and step S110 includes the following sub-steps:
In sub-step S1101, a first point and a second point are determined within a preset range of the first edge, wherein the abscissa of the first point and the abscissa of the second point both lie between the first abscissa and the second abscissa, and the distance between the first point and the first edge and the distance between the second point and the first edge are both smaller than the first preset value.
In this embodiment, the first edge includes a first endpoint and a second endpoint; the coordinates of the first endpoint include the first abscissa and the coordinates of the second endpoint include the second abscissa. For example, with the coordinates of the first and second endpoints written as (R_v1, C_v1) and (R_v2, C_v2), the abscissa of the first point as R_p1 and the abscissa of the second point as R_p2, both must satisfy:

R_p1 ∈ (min(R_v1, R_v2), max(R_v1, R_v2)), R_p2 ∈ (min(R_v1, R_v2), max(R_v1, R_v2)).
In this embodiment, after the abscissas of the first point and the second point are determined, their ordinates are chosen so that the distance between the first point and the first edge and the distance between the second point and the first edge are both smaller than the first preset value; that is, the first and second points are placed as close to the first edge as possible. For example, with the distance between the first point and the first edge denoted D_v1 and the distance between the second point and the first edge denoted D_v2, we have D_v1 → 0 and D_v2 → 0.
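The abscissa constraint above can be written as a one-line check; the helper name is hypothetical:

```python
def valid_abscissa(r_p, r_v1, r_v2):
    """True when R_p lies strictly between min(R_v1, R_v2) and max(R_v1, R_v2)."""
    lo, hi = min(r_v1, r_v2), max(r_v1, r_v2)
    return lo < r_p < hi
```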
In sub-step S1102, a first rectangular area is determined with the first point as the center, wherein a first central axis of the first rectangular area is parallel to the first edge and a distance between the first central axis and the first edge is smaller than a first preset value.
In this embodiment, it will be appreciated that, in order to detect the edge of the first edge, both the first rectangular region and the second rectangular region need to include regions that span across both sides of the first edge, that is, both the first rectangular region and the second rectangular region cover regions that are located on both sides of the first edge.
In this embodiment, to show the first rectangular area more clearly, refer to fig. 3, an exemplary diagram of the first rectangular area; fig. 3 is an image to be detected containing one corner of the image sensor. In fig. 3, P_v is the first point, S_w is the half-length of the first central axis, and S_l is the half-length of the second central axis. As a specific embodiment, S_w is chosen so that, in the direction of the first central axis, the first rectangular area contains as far as possible only the photosensitive area of the image sensor; that is, the part of the first rectangular area to the right of S_w contains only the photosensitive area and no regions outside it, so that the subsequent projection result reflects the gray features of the image sensor more accurately. As a specific embodiment, S_l must satisfy S_l > D_v and S_l > 1, where D_v is the distance between the first point and the first edge, expressed as a number of pixels. When D_v → 0, the first edge passes through the middle of the first rectangular area in the direction of the first central axis; that is, the first central axis approximately coincides with the first edge.
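Building the rectangle from the center point P_v and the two half-lengths can be sketched as follows; the helper name, the (row, column) convention, and the assumption that the first central axis runs along image rows are illustrative only:

```python
def rect_from_center(p_v, s_w, s_l):
    """Axis-aligned rectangle centred on p_v = (row, col).

    s_w: half-length along the first central axis (parallel to the first edge),
         assumed here to run along image rows.
    s_l: half-length along the second central axis, perpendicular to the first.
    Returns (r0, c0, r1, c1), the top-left and bottom-right corners.
    """
    r, c = p_v
    return (r - s_w, c - s_l, r + s_w, c + s_l)

rect = rect_from_center((100, 50), 40, 5)  # a long, thin region hugging the edge
```

Since S_l only needs to exceed D_v (which tends to 0) and 1 pixel, the region is deliberately narrow across the edge and long along it.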
In sub-step S1103, a second rectangular area is determined with the second point as the center, where the first central axis of the second rectangular area is parallel to the first edge and the distance between the first central axis and the first edge is smaller than a first preset value.
In this embodiment, the determination manner of the second rectangular area is the same as that of the first rectangular area, except that the first point and the second point are different points, and the specific determination process is not described herein.
According to the method provided by the embodiment of the invention, the first rectangular area and the second rectangular area meeting the conditions can be rapidly determined by determining the central point of the rectangular area and then determining the corresponding rectangular area.
On the basis of fig. 1, the embodiment of the present invention further provides a specific implementation manner of obtaining the first distribution curve and the second distribution curve, please refer to fig. 4, fig. 4 is a schematic diagram of a flow of another edge detection method provided by the embodiment of the present invention, and step S120 includes the following sub-steps:
In sub-step S1201, a first gray value of a first pixel on each column of the first rectangular region parallel to the first central axis of the first rectangular region is counted.
In this embodiment, for the first rectangular area, each column includes a plurality of first pixel points, and each column corresponds to a first gray value, where the first gray value is a sum of gray values of the first pixel points on the column.
It should be noted that when the first central axis is parallel to a coordinate axis of the rectangular plane coordinate system, the coordinates of the first pixel points are all integers, and the gray value at each coordinate can be read directly to compute the first gray value. When the first central axis is not parallel to a coordinate axis, the computed coordinates of a first pixel point may be fractional; in that case, the gray value of the pixel nearest to those coordinates may be used as the gray value of the first pixel point, or another interpolation method may be used to determine it.
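The nearest-pixel fallback for fractional coordinates mentioned above might look like this minimal sketch (bilinear interpolation being the usual alternative):

```python
import numpy as np

def sample_gray(img, r, c):
    """Gray value at possibly fractional (r, c) via nearest-neighbour rounding,
    clamped to the image bounds."""
    ri = min(max(int(round(r)), 0), img.shape[0] - 1)
    ci = min(max(int(round(c)), 0), img.shape[1] - 1)
    return float(img[ri, ci])

img = np.arange(12, dtype=float).reshape(3, 4)
val = sample_gray(img, 1.4, 2.6)  # nearest pixel is (1, 3)
```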
Substep S1202, generating a first distribution curve from the first gray values.
In this embodiment, the first distribution curve is generated from the first gray value corresponding to each column in the first rectangular area. As a specific embodiment, the first distribution curve is expressed as:

Proj_1(c_i) = Σ_{r = r_is}^{r_ie} G(r, c_i)

where r_is is the row start coordinate of the c_i-th column in the first rectangular area, r_ie is the row end coordinate of the c_i-th column, and G(r, c_i) is the gray value of the first pixel point in row r of column c_i.
In sub-step S1203, a second gray value of the second pixel point on each column of the second rectangular region parallel to the first central axis of the second rectangular region is counted.
In this embodiment, for the second rectangular area, each column includes a plurality of second pixel points, and each column corresponds to a second gray value, where the second gray value is a sum of gray values of the second pixel points on the column.
In sub-step S1204, a second distribution curve is generated from the second gray values.
In this embodiment, the second distribution curve is generated from the second gray value corresponding to each column in the second rectangular area. As a specific embodiment, the second distribution curve is expressed as:

Proj_2(c_j) = Σ_{r = r_js}^{r_je} G(r, c_j)

where r_js is the row start coordinate of the c_j-th column in the second rectangular area, r_je is the row end coordinate of the c_j-th column, and G(r, c_j) is the gray value of the second pixel point in row r of column c_j.
According to the method provided by the embodiment of the invention, the result of the gray projection is represented by the first and second distribution curves, so that the target detection area can subsequently be determined from the two curves by mathematical calculation, and determined more accurately.
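When the first central axis is axis-aligned, the column-sum projection of sub-steps S1201–S1204 reduces to a one-line NumPy operation; this sketch assumes the distribution curve is indexed by column, as in the sub-steps above:

```python
import numpy as np

def gray_projection(region):
    """Sum the gray values of each column of a rectangular region, producing a
    one-dimensional distribution curve indexed by column."""
    return region.sum(axis=0).astype(float)

# Two constant halves project to a step-shaped distribution curve:
region = np.zeros((4, 6))
region[:, 3:] = 10.0
curve = gray_projection(region)
```

The step in the curve is what the subsequent differentiation step locates.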
On the basis of fig. 1, the embodiment of the present invention further provides a specific implementation manner of determining the target detection area, referring to fig. 5, fig. 5 is a schematic diagram of a flow of another edge detection method provided by the embodiment of the present invention, and step S130 includes the following sub-steps:
In sub-step S1301, the first distribution curve is differentiated to obtain a plurality of first edge points whose derivative values satisfy a preset condition.
In this embodiment, as a specific implementation, in order to suppress noise in the first rectangular area and make the determined target detection area more accurate, the first distribution curve may first be smoothed with a Gaussian filter, and the filtered curve then differentiated.
In this embodiment, the preset condition is that the absolute value of the derivative value is greater than a preset threshold, and the first edge point is a pixel point whose absolute value of the derivative value is greater than the preset threshold. The edge characteristics of the distribution curve are: when the derivative value is greater than 0 and the absolute value is greater than a preset threshold, the corresponding first edge point is at the edge of the change from dark to light, and when the derivative value is less than 0 and the absolute value is greater than the preset threshold, the corresponding first edge point is at the edge of the change from light to dark. The first edge point may be expressed as:
f′(c_i) > t (dark-to-bright) or f′(c_i) < -t (bright-to-dark), where t is the preset threshold.
In this embodiment, the plurality of first edge points may include pixel points whose derivative values are greater than 0, pixel points whose derivative values are less than 0, or both.
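As a sketch of the smoothing-then-differentiation step, the following numpy-only routine applies a Gaussian kernel, differentiates the curve, and keeps the points whose derivative magnitude exceeds the threshold t; the sigma value and the edge-replicating padding are illustrative assumptions.

```python
import numpy as np

def edge_points(curve, sigma=2.0, t=5.0):
    # Gaussian-smooth the projection curve, then differentiate it and
    # keep indices where |f'| > t. Rising points (f' > t) sit on
    # dark-to-bright edges; falling points (f' < -t) on bright-to-dark.
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    padded = np.pad(curve, radius, mode="edge")  # avoid spurious border edges
    smoothed = np.convolve(padded, kernel, mode="same")[radius:-radius]
    deriv = np.gradient(smoothed)
    return np.flatnonzero(deriv > t), np.flatnonzero(deriv < -t)

# Step curve: a dark plateau followed by a bright plateau
curve = np.r_[np.full(30, 10.0), np.full(30, 200.0)]
rising, falling = edge_points(curve, sigma=2.0, t=5.0)
```

Only rising (dark-to-bright) points near the step at index 30 are reported; the flat plateaus yield no edge points because their smoothed derivative stays near zero.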
In sub-step S1302, the second distribution curve is differentiated to obtain a plurality of second edge points whose derivative values meet the preset condition.
In this embodiment, the second edge point is similar to the first edge point, and will not be described here again.
And substep S1303, determining a first target edge point and a second target edge point from the plurality of first edge points and the plurality of second edge points, where a distance between the first target edge point and the second target edge point is greater than a second preset value.
In this embodiment, taking the determination of the first target edge point as an example: according to the specific application scene, the edge feature matching that scene is determined, and the first target edge point is then selected from the first edge points. For example, when the application scene requires a dark-to-bright edge, the first target edge point is selected from the first edge points whose derivative values are greater than 0. Thus, according to actual needs, the first target edge point may be a first edge point whose derivative value is greater than 0 or one whose derivative value is less than 0. The second target edge point is determined similarly, with the additional requirement that the distance between the first target edge point and the second target edge point be greater than the second preset value.
As one embodiment, the distance between the first target edge point and the second target edge point is l_s, and the second preset value is 2 + 2n_r, where n_r is the number of head and tail edge points removed when fitting the preset straight line. In a specific application scenario, considering that the edge points between the first target edge point and the second target edge point may not all be detected, l_s should be set as large as possible while satisfying l_s > 2 + 2n_r, so that the fitted straight line, and therefore the final edge detection result, is more accurate.
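One way to realize the selection just described is an exhaustive search over candidate pairs that keeps the farthest pair satisfying the l_s > 2 + 2n_r constraint; this brute-force pairing and the helper name are illustrative choices, not the procedure fixed by the patent.

```python
import numpy as np

def pick_target_points(first_edges, second_edges, n_r):
    # Keep the (first, second) candidate pair whose distance exceeds the
    # second preset value 2 + 2*n_r and is otherwise as large as possible,
    # since a longer baseline makes the fitted edge line more accurate.
    min_dist = 2 + 2 * n_r
    best = None
    for p in first_edges:
        for q in second_edges:
            d = float(np.hypot(p[0] - q[0], p[1] - q[1]))
            if d > min_dist and (best is None or d > best[2]):
                best = (p, q, d)
    return best  # (first_target, second_target, distance) or None

pair = pick_target_points([(700, 1296), (700, 1300)], [(900, 1296)], n_r=3)
```

With n_r = 3 the second preset value is 8, and the pair spanning roughly 200 pixels is kept, giving the longest available baseline for the subsequent line fit.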
Substep S1304, generating an edge line from the first target edge point and the second target edge point.
In sub-step S1305, a dilation operation is performed on the edge line to obtain the target detection area.
In this embodiment, the purpose of the dilation operation is to widen the edge line. The idea is as follows: define a structuring element and move it over the whole image to each pixel point; if at least one pixel covered by the structuring element equals a set pixel of the image, the value of that pixel point is retained. The size of the structuring element should be the smallest that allows the target detection area to cover the edge; setting the structuring element too large introduces interference edges.
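The dilation described above can be sketched with a pure-numpy square structuring element; production code would typically call cv2.dilate instead, and the k × k boolean element here is an assumption for illustration.

```python
import numpy as np

def dilate(mask, k):
    # Binary dilation with a k x k square structuring element: an output
    # pixel is set if at least one input pixel under the element is set,
    # which widens a thin edge line into a band.
    pad = k // 2
    padded = np.pad(mask.astype(bool), pad)
    out = np.zeros(mask.shape, dtype=bool)
    for dr in range(k):
        for dc in range(k):
            out |= padded[dr:dr + mask.shape[0], dc:dc + mask.shape[1]]
    return out

# A one-pixel-wide horizontal line dilated by a 3x3 element becomes
# a 3-pixel-high band that also extends one pixel at each end.
line = np.zeros((7, 7), dtype=bool)
line[3, 1:6] = True
roi = dilate(line, 3)
```

The 5-pixel line grows into a 3 × 7 band, illustrating how a fitted edge straight line is turned into a detection region that tolerates small localization errors.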
In the method provided by this embodiment, the first distribution curve and the second distribution curve are differentiated and the first target edge point and the second target edge point are selected according to the derivative values, so the edge straight line is more accurate. As a result, the obtained target detection area is neither too large, which would introduce excessive interference edges, nor too small, which would fail to completely cover the edge of the first edge.
On the basis of fig. 1, the embodiment of the present invention further provides a specific implementation manner of determining the edge of the first edge, referring to fig. 6, fig. 6 is a schematic diagram of a flow of another edge detection method provided by the embodiment of the present invention, and step S140 includes the following sub-steps:
in sub-step S1401, edge detection is performed on the target detection area by using a preset edge detection operator, so as to obtain a plurality of third edge points.
In this embodiment, the third edge point is the output of the preset edge detection operator.
In sub-step S1402, a plurality of third edge points are linearly fitted to obtain an edge function of the first edge.
In this embodiment, the straight line fitting may employ, but is not limited to, least square method, gradient descent method, gauss newton method, or the like.
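For the least squares option, fitting the column coordinate as a function of the row coordinate keeps a near-vertical edge well conditioned; this parametrization and the helper name are assumptions for illustration.

```python
import numpy as np

def fit_edge_line(points):
    # Least-squares fit of the third edge points to the line
    # col = a*row + b; returns (a, b). Parametrizing by row keeps
    # near-vertical edges well conditioned (such an edge has near-infinite
    # slope in the usual col-as-x parametrization).
    pts = np.asarray(points, dtype=np.float64)
    a, b = np.polyfit(pts[:, 0], pts[:, 1], 1)
    return a, b

# Noise-free points on the line col = 0.01*row + 1296
pts = [(r, 0.01 * r + 1296.0) for r in range(700, 720)]
a, b = fit_edge_line(pts)
```

The recovered (a, b) is the edge function of the first edge; with noisy third edge points the least squares solution averages out the detection noise.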
Substep S1403 determines the edge of the first edge from the edge function of the first edge.
According to the method provided by the embodiment of the invention, after the optimal target detection area is determined, the edge detection is performed by adopting the preset edge detection operator, so that the accuracy of the finally determined edge of the first edge is ensured.
It should be noted that the substeps S1101-S1103 of step S110 in fig. 2 may replace step S110 in figs. 4-6, the substeps S1201-S1204 of step S120 in fig. 4 may replace step S120 in figs. 2 and 5-6, the substeps S1301-S1305 of step S130 in fig. 5 may replace step S130 in figs. 2, 4 and 6, and the substeps S1401-S1403 of step S140 in fig. 6 may replace step S140 in figs. 2 and 4-5, to obtain the corresponding technical effects.
In this embodiment, in order to determine the position of the image sensor on the PCB during the production process of the camera, the edges of the two sides of the image sensor may be detected in the above manner, and the position of the image sensor on the PCB is determined according to the detection result, so the embodiment of the present invention further provides a processing manner for determining the position of the image sensor on the PCB, please refer to fig. 7, fig. 7 is a schematic diagram illustrating a flow of another edge detection method provided in the embodiment of the present invention, and the method further includes the following steps:
Step S200, edge detection is carried out on the second edge, and the edge of the second edge is obtained.
In this embodiment, an image sensor located on a PCB is first imaged to obtain an image to be detected, where the image to be detected includes a first edge to be detected and a second edge to be detected, the first edge and the second edge are two edges of the image sensor, and an included angle is formed between the first edge and the second edge.
In this embodiment, the manner of edge detection for the second edge is similar to that for the first edge, except that when edge detection is performed on the second edge, the first central axis of the first rectangular area and the first central axis of the second rectangular area are determined to be parallel to the second edge, with the distance between each first central axis and the second edge smaller than the first preset value. The rest of the detection is similar to that of the first edge and is not repeated here.
Step S210, determining the position of the image sensor in the image to be detected according to the edge of the first side and the edge of the second side.
In order to more clearly illustrate the specific application of the above edge detection method, the embodiment of the invention provides, in combination with a specific application scenario, an example of the detection process for calculating the two edges of an image sensor.
In the security industry, when manufacturing a movement lens, a sheet metal part and a PCB (Printed Circuit Board) carrying an image sensor are fixed together with glue to form a module, and the module is then fixed to the lens through the screw holes of the sheet metal part. In the process of manufacturing the module, the center of the image sensor must be aligned with the optical axis of the lens; in practice this can be done as follows.
First, a device is designed, and a clamp carrier controlled by multiple axes (at least an X translation axis, a Y translation axis and a Z rotation axis) is installed on the device, the clamp being matched to the PCB of the image sensor.
Second, an imaging system is arranged below the clamp carrier to image the PCB of the image sensor. The captured image contains at least one corner of the image sensor, and as a specific implementation, the pixel length of the two edges corresponding to the corner is at least 1/2 of the image size in each dimension.
Third, a module that has been glued and confirmed to meet the technical index for the coincidence of the image sensor center with the optical axis (the standard module) is placed on the clamp carrier.
Fourth, the imaging system is used for collecting the image of the standard module to obtain an image to be detected, and then the edge detection method is used for obtaining the pixel positions of the two edges of the standard module in the image.
Fifth, the PCB of the image sensor to be adjusted is placed on the clamp carrier, the image to be detected is then collected from the imaging system in real time, the two edges of the image sensor are detected in the image using the above edge detection method, and the distances s_h and s_v between the two edges and the corresponding two edges of the standard module, as well as the included angle θ, are calculated.
Finally, the X translation axis, the Y translation axis and the Z rotation axis are adjusted to change the position and horizontal angle of the PCB of the image sensor until s_h, s_v and θ are each smaller than the corresponding index, at which point the coincidence of the image sensor center with the optical axis is considered to meet the design index. The sheet metal part and the adjusted PCB are then glue-dispensed and cured according to the process requirements, completing a replica of the module.
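The comparison of s_h, s_v and θ against the standard module can be sketched as follows; the intercept-difference and slope-angle formulas are simplifying assumptions for illustration and are not fixed by the patent.

```python
import numpy as np

def alignment_error(edge_v, edge_h, std_v, std_h):
    # edge_v/std_v: (a, b) of the near-vertical edge, col = a*row + b;
    # edge_h/std_h: (a, b) of the near-horizontal edge, row = a*col + b.
    # Offsets are taken from the intercept differences, and the rotation
    # from the slope of the vertical edge relative to the standard one.
    s_h = abs(edge_v[1] - std_v[1])
    s_v = abs(edge_h[1] - std_h[1])
    theta = abs(np.degrees(np.arctan(edge_v[0]) - np.arctan(std_v[0])))
    return s_h, s_v, theta

s_h, s_v, theta = alignment_error(
    edge_v=(0.02, 1300.0), edge_h=(0.0, 975.0),
    std_v=(0.0, 1296.0), std_h=(0.0, 972.0))
# Offsets of 4 and 3 pixels and a rotation of about 1.15 degrees; the
# stage axes are adjusted until all three fall below their indices.
```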
In the above method, the edge detection process using the edge detection method provided by the embodiment of the present invention is as follows:
(1) The first point P_v1 is selected. In this embodiment, the pixel length of the two edges of one corner of the photosensitive area of the image sensor is at least 1/2 of the corresponding dimension of the image, and the image has dimensions 2592 × 1944. The lateral edge of the photosensitive area is therefore at least 1296 pixels long and the vertical edge at least 972 pixels long. The first point P_v1 is thus selected as (700, 1296).
(2) The first rectangular region is generated. S_l is set to 1296 and S_w is set to 200 (200 < 972 - 700), which ensures that the generated first rectangular area does not exceed the photosensitive area in the vertical direction while still covering the edge points in the lateral direction without excessive calculation, so that target edge points are not missed. The first rectangular area is shown as the black-bordered rectangular box in fig. 8.
(3) The gray projection is calculated in the first rectangular area to obtain the first distribution curve, Gaussian filtering is applied to the first distribution curve, the filtered curve is differentiated, and the points meeting the requirements are selected. Here the preset threshold t in the preset condition is set to 5, and according to the needs of the actual scene and the edge characteristics of the image sensor, the pixel points on the dark-to-bright edge, i.e. those with f′ > 0, are selected, as indicated by the "x" marks in the black circles in fig. 8.
(4) According to the distribution characteristics of the image sensor edges, the rightmost point in fig. 8 is chosen as the first target edge point p_v1, whose coordinates are denoted (r_v1, c_v1).
(5) The second target edge point on the target vertical edge is calculated by repeating steps (1)-(4). As a specific embodiment, in order to prevent the first rectangular region from exceeding the photosensitive region of the image sensor in the vertical direction, a point on the horizontal edge is first calculated by the same procedure, denoted p_h1, with coordinates (r_h1, c_h1). The projection rectangle center P_h1 selected at this time is (972, 1892), with S_l set to 972 and S_w set to 500. The second point P_v2 is then selected, S_l is set to 40, and S_w is set correspondingly. The projection rectangle thus generated (i.e., the second rectangular region) is shown as the black box in fig. 9. After the projection rectangle is determined, the second target edge point p_v2 is calculated by the same method, as indicated by the "x" in fig. 9.
(6) The line segment connecting p_v1 and p_v2 is dilated to obtain the target detection region (ROI) for edge detection, shown as the black rectangular box in fig. 10. Here the dilation structuring element is chosen to be a square with side length 16.
(7) Edge detection is performed within the ROI. After the third edge points are obtained with the Canny edge detection operator, interference edges are filtered out by their characteristic values, for example by removing edges that are too short and edges whose direction differs significantly from 90 degrees.
(8) After the third edge points are obtained, straight line fitting is performed to obtain the mathematical equation of the target edge. A least squares fit is used here.
(9) The horizontal edge is detected. Since p_h1 has already been calculated, only p_h2 needs to be calculated following the principle of step (5). The mathematical representation of the horizontal edge is then calculated according to the methods of steps (6)-(8). The two edges obtained by the final detection are shown in fig. 11.
It should be noted that in this embodiment, the above determination of the position of the image sensor in the image to be detected is only one application scenario of the above edge detection method. In fact, the edge detection may also be applied to other scenarios, for example calculating the deviation of the image sensor position from its standard position on the PCB using the actual position detected by the above edge detection method and determining whether the deviation is within an acceptable range, or imaging the image sensor so that the image to be detected contains the complete image of the sensor and determining the size of the image sensor by determining the edges of each of its sides.
In order to perform the respective steps of edge detection in the above embodiments and the various possible implementations, an implementation of an edge detection device 100 is given below. Referring to fig. 12, fig. 12 is a block diagram illustrating an edge detection device 100 according to an embodiment of the invention. It should be noted that the basic principle and technical effects of the edge detection device 100 provided in this embodiment are the same as those of the above embodiments; for brevity, points not mentioned in this embodiment may refer to the corresponding content above.
The edge detection device 100 includes an acquisition module 110, a first determination module 120, a projection module 130, a second determination module 140, a detection module 150, and a position determination module 160.
The obtaining module 110 is configured to obtain an image to be detected, where the image to be detected includes a first edge to be detected.
The first determining module 120 is configured to determine a first rectangular area and a second rectangular area in the image to be detected, where a first central axis of the first rectangular area and a first central axis of the second rectangular area are both parallel to the first edge and a distance between the first central axis and the first edge is smaller than a first preset value.
As one specific embodiment, the first edge includes a first endpoint and a second endpoint, the coordinates of the first endpoint include a first abscissa, the coordinates of the second endpoint include a second abscissa, and the first determining module 120 is specifically configured to: determine a first point and a second point within a preset range of the first edge, where the abscissas of the first point and the second point lie between the first abscissa and the second abscissa, and the distances from the first point and the second point to the first edge are both smaller than the first preset value; determine the first rectangular area centered on the first point, where the first central axis of the first rectangular area is parallel to the first edge and its distance to the first edge is smaller than the first preset value; and determine the second rectangular area centered on the second point, where the first central axis of the second rectangular area is parallel to the first edge and its distance to the first edge is smaller than the first preset value.
The projection module 130 is configured to respectively perform gray scale projection on the first rectangular region and the second rectangular region along respective first central axes, so as to obtain a first distribution curve and a second distribution curve.
As one embodiment, the projection module 130 is specifically configured to: counting a first gray value of a first pixel point on each column parallel to a first central axis of the first rectangular region in the first rectangular region; generating a first distribution curve according to the first gray value; counting second gray values of second pixel points on each column parallel to the first central axis of the second rectangular region in the second rectangular region; and generating a second distribution curve according to the second gray value.
As a specific embodiment, the first distribution curve is expressed as: f1(c_i) = Σ_{r = r_is}^{r_ie} G(r, c_i), where r_is is the row start coordinate of the c_i-th column in the first rectangular area, r_ie is the row end coordinate of the c_i-th column in the first rectangular area, and G(r, c_i) is the first gray value of the first pixel point at row r, column c_i; the second distribution curve is expressed as: f2(c_j) = Σ_{r = r_js}^{r_je} G(r, c_j), where r_js is the row start coordinate of the c_j-th column in the second rectangular area, r_je is the row end coordinate of the c_j-th column in the second rectangular area, and G(r, c_j) is the second gray value of the second pixel point at row r, column c_j.
The second determining module 140 is configured to determine the target detection area according to the first distribution curve and the second distribution curve.
As a specific embodiment, the second determining module 140 is specifically configured to: differentiate the first distribution curve to obtain a plurality of first edge points whose derivative values meet the preset condition; differentiate the second distribution curve to obtain a plurality of second edge points whose derivative values meet the preset condition; determine a first target edge point and a second target edge point from the plurality of first edge points and the plurality of second edge points, where the distance between the first target edge point and the second target edge point is greater than a second preset value; generate an edge straight line according to the first target edge point and the second target edge point; and perform a dilation operation on the edge straight line to obtain the target detection area.
The detecting module 150 is configured to perform edge detection on the target detection area to obtain an edge of the first edge.
As one embodiment, the detection module 150 is specifically configured to: performing edge detection on the target detection area by using a preset edge detection operator to obtain a plurality of third edge points; performing straight line fitting on a plurality of third edge points to obtain an edge function of the first edge; an edge of the first edge is determined based on the edge function of the first edge.
The image to be detected further includes a second edge to be detected, the first edge and the second edge are two edges of the image sensor, the first edge and the second edge form an included angle, and the position determining module 160 is configured to: performing edge detection on the second edge to obtain the edge of the second edge; and determining the position of the image sensor in the image to be detected according to the edges of the first side and the edges of the second side.
The embodiment of the present invention further provides a block schematic diagram of a computer device 10 capable of executing the above-mentioned edge detection method, referring to fig. 13, fig. 13 shows a block schematic diagram of a computer device 10 provided by the embodiment of the present invention, where the computer device 10 includes a processor 11, a memory 12, a bus 13, and a communication interface 14. The processor 11 and the memory 12 are connected via a bus 13, and the processor 11 communicates with external devices via a communication interface 14.
The processor 11 may be an integrated circuit chip with signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in the processor 11 or by instructions in the form of software. The processor 11 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
The memory 12 is used for storing a program, such as the edge detection device 100 in the embodiment of the present invention, and each edge detection device 100 includes at least one software functional module that may be stored in the memory 12 in the form of software or firmware (firmware), and the processor 11 executes the program after receiving the execution instruction to implement the edge detection method in the embodiment of the present invention.
The memory 12 may include high-speed random access memory (RAM: Random Access Memory) and may also include non-volatile memory. The memory 12 may be a storage device built into the processor 11, or a storage device independent of the processor 11.
The bus 13 may be an ISA bus, a PCI bus, an EISA bus, or the like. The bus is represented in fig. 13 by only one double-headed arrow, but this does not mean that there is only one bus or one type of bus.
An embodiment of the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements an edge detection method as described above.
In summary, the embodiments of the present invention provide an edge detection method, an edge detection device, a computer device, and a storage medium. The method includes: acquiring an image to be detected, where the image to be detected includes a first edge to be detected; determining a first rectangular area and a second rectangular area in the image to be detected, where the first central axis of the first rectangular area and the first central axis of the second rectangular area are both parallel to the first edge and the distance between each first central axis and the first edge is smaller than a first preset value; performing gray projection on the first rectangular area and the second rectangular area along their respective first central axes to obtain a first distribution curve and a second distribution curve; determining a target detection area according to the first distribution curve and the second distribution curve; and performing edge detection on the target detection area to obtain the edge of the first edge. Because the gray projection masks changes in the surface characteristics of the object to be detected, the method is not affected by such changes, can be applied to edge detection of objects with different surface characteristics, and has high compatibility.
The foregoing is merely illustrative of the present invention, and the present invention is not limited thereto, and any changes or substitutions easily contemplated by those skilled in the art within the scope of the present invention should be included in the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (8)

1. An edge detection method, the method comprising:
acquiring an image to be detected, wherein the image to be detected comprises a first edge to be detected, the first edge comprises a first endpoint and a second endpoint, the coordinates of the first endpoint comprise a first abscissa, and the coordinates of the second endpoint comprise a second abscissa;
determining a first rectangular area and a second rectangular area in the image to be detected, wherein a first central axis of the first rectangular area and a first central axis of the second rectangular area are parallel to the first side, and the distance between the first central axis and the first side is smaller than a first preset value;
respectively carrying out gray projection on the first rectangular region and the second rectangular region along respective first central axes to obtain a first distribution curve and a second distribution curve;
Determining a target detection area according to the first distribution curve and the second distribution curve;
performing edge detection on the target detection area to obtain the edge of the first edge;
the step of determining a first rectangular area and a second rectangular area in the image to be detected comprises the following steps:
determining a first point and a second point in a preset range of the first edge, wherein the abscissa of the first point and the abscissa of the second point are in the first abscissa and the second abscissa range, and the distance between the first point and the first edge and the distance between the second point and the first edge are smaller than the first preset value;
determining a first rectangular area by taking the first point as a center, wherein a first central axis of the first rectangular area is parallel to the first side and the distance between the first central axis and the first side is smaller than the first preset value;
determining a second rectangular area by taking the second point as a center, wherein a first central axis of the second rectangular area is parallel to the first edge and the distance between the first central axis and the first edge is smaller than the first preset value;
the step of determining a target detection area according to the first distribution curve and the second distribution curve includes:
differentiating the first distribution curve to obtain a plurality of first edge points whose derivative values meet a preset condition;
differentiating the second distribution curve to obtain a plurality of second edge points whose derivative values meet the preset condition;
determining a first target edge point and a second target edge point from the plurality of first edge points and the plurality of second edge points, wherein a distance between the first target edge point and the second target edge point is greater than a second preset value;
generating an edge straight line according to the first target edge point and the second target edge point;
and performing a dilation operation on the edge straight line to obtain the target detection area.
2. The edge detection method according to claim 1, wherein the step of performing gray-scale projection on the first rectangular region and the second rectangular region along the respective first central axes to obtain a first distribution curve and a second distribution curve includes:
counting a first gray value of a first pixel point on each column parallel to a first central axis of the first rectangular region in the first rectangular region;
generating a first distribution curve according to the first gray value;
counting second gray values of second pixel points on each column parallel to a first central axis of the second rectangular region in the second rectangular region;
And generating a second distribution curve according to the second gray value.
3. The edge detection method according to claim 2, wherein the first distribution curve is expressed as: f1(c_i) = Σ_{r = r_is}^{r_ie} G(r, c_i), wherein r_is is the row start coordinate of the c_i-th column in the first rectangular area, r_ie is the row end coordinate of the c_i-th column in the first rectangular area, and G(r, c_i) is the first gray value of the first pixel point at row r, column c_i; and the second distribution curve is expressed as: f2(c_j) = Σ_{r = r_js}^{r_je} G(r, c_j), wherein r_js is the row start coordinate of the c_j-th column in the second rectangular area, r_je is the row end coordinate of the c_j-th column in the second rectangular area, and G(r, c_j) is the second gray value of the second pixel point at row r, column c_j.
4. The edge detection method according to claim 1, wherein the step of edge detecting the target detection area to obtain the edge of the first edge includes:
performing edge detection on the target detection area by using a preset edge detection operator to obtain a plurality of third edge points;
performing linear fitting on the plurality of third edge points to obtain an edge function of the first edge;
and determining the edge of the first edge according to the edge function of the first edge.
5. The edge detection method according to claim 1, wherein the image to be detected further includes a second edge to be detected, the first edge and the second edge being two edges of an image sensor, the first edge and the second edge forming an included angle, the method further comprising:
performing edge detection on the second edge to obtain the edge of the second edge;
and determining the position of the image sensor in the image to be detected according to the edge of the first edge and the edge of the second edge.
6. An edge detection device, the device comprising:
the device comprises an acquisition module, a detection module and a detection module, wherein the acquisition module is used for acquiring an image to be detected, the image to be detected comprises a first edge to be detected, the first edge comprises a first endpoint and a second endpoint, the coordinates of the first endpoint comprise a first abscissa, and the coordinates of the second endpoint comprise a second abscissa;
the first determining module is used for determining a first rectangular area and a second rectangular area in the image to be detected, wherein a first central axis of the first rectangular area and a first central axis of the second rectangular area are parallel to the first side, and the distance between the first central axis and the first side is smaller than a first preset value;
The projection module is used for respectively carrying out gray projection on the first rectangular area and the second rectangular area along respective first central axes to obtain a first distribution curve and a second distribution curve;
a second determining module, configured to determine a target detection area according to the first distribution curve and the second distribution curve;
a detection module, configured to perform edge detection on the target detection area to obtain the edge of the first edge;
wherein the first determining module is specifically configured to: determine a first point and a second point within a preset range of the first edge, wherein the abscissa of the first point and the abscissa of the second point are both within the range between the first abscissa and the second abscissa, and the distance from the first point to the first edge and the distance from the second point to the first edge are both smaller than the first preset value; determine the first rectangular area with the first point as its center, wherein the first central axis of the first rectangular area is parallel to the first edge and its distance from the first edge is smaller than the first preset value; and determine the second rectangular area with the second point as its center, wherein the first central axis of the second rectangular area is parallel to the first edge and its distance from the first edge is smaller than the first preset value;
wherein the second determining module is specifically configured to: differentiate the first distribution curve to obtain a plurality of first edge points whose derivative values meet a preset condition; differentiate the second distribution curve to obtain a plurality of second edge points whose derivative values meet the preset condition; determine a first target edge point and a second target edge point from the plurality of first edge points and the plurality of second edge points, wherein the distance between the first target edge point and the second target edge point is greater than a second preset value; generate an edge straight line according to the first target edge point and the second target edge point; and perform a dilation operation on the edge straight line to obtain the target detection area.
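The projection and second determining modules of claim 6 can be sketched compactly under some simplifying assumptions: the first edge is taken as roughly horizontal, so gray projection along the first central axis reduces to a row-wise mean; the "derivative meets a preset condition" test is simplified to picking the largest derivative magnitude; and the morphological dilation is simplified to widening the edge line into a row band. The function names and the `half_width` parameter are illustrative, not from the patent.

```python
import numpy as np

def edge_row(region):
    """Project one rectangular area onto its first central axis.

    Averaging each row across its columns gives the distribution curve;
    the largest derivative magnitude marks the edge point's row.
    """
    curve = region.mean(axis=1)   # gray-projection distribution curve
    deriv = np.diff(curve)        # discrete derivative of the curve
    return int(np.argmax(np.abs(deriv)))

def edge_line_and_band(region1, x1, region2, x2, half_width):
    """Edge straight line through the two target edge points, then
    dilated into a row band serving as the target detection area."""
    y1, y2 = edge_row(region1), edge_row(region2)
    a = (y2 - y1) / (x2 - x1)     # slope of the edge straight line
    b = y1 - a * x1               # intercept
    band = (min(y1, y2) - half_width, max(y1, y2) + half_width)
    return (a, b), band
```

Because the band is only a few rows wide, the subsequent operator-based detection of claim 4 runs on a small strip instead of the whole image, which is the point of determining a target detection area first.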
7. A computer device, comprising a memory and a processor, wherein the memory stores a computer program, and the processor, when executing the computer program, implements the edge detection method according to any one of claims 1 to 5.
8. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the edge detection method according to any one of claims 1 to 5.
CN202110485292.2A 2021-04-30 2021-04-30 Edge detection method, edge detection device, computer equipment and storage medium Active CN113160259B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110485292.2A CN113160259B (en) 2021-04-30 2021-04-30 Edge detection method, edge detection device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113160259A CN113160259A (en) 2021-07-23
CN113160259B (en) 2024-01-30

Family

ID=76873103

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110485292.2A Active CN113160259B (en) 2021-04-30 2021-04-30 Edge detection method, edge detection device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113160259B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113566721B (en) * 2021-07-27 2024-01-23 凌云光技术股份有限公司 Screw aperture measuring method and device

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5317388A (en) * 1992-06-29 1994-05-31 United Parcel Service Of America, Inc. Method and apparatus for determining the displacement of a rectangular object with respect to a reference position
CN103914830A (en) * 2014-02-22 2014-07-09 小米科技有限责任公司 Straight line detection method and device
CN104298753A (en) * 2014-10-17 2015-01-21 重庆市云日信息技术有限公司 Personnel assessment method based on face image processing
CN104981105A (en) * 2015-07-09 2015-10-14 广东工业大学 Detecting and error-correcting method capable of rapidly and accurately obtaining element center and deflection angle
CN108520521A (en) * 2017-04-20 2018-09-11 南京航空航天大学 The method of wheel tread extraction and splicing based on image procossing
CN109978940A (en) * 2019-03-28 2019-07-05 福州大学 A kind of SAB air bag size vision measuring method
CN110992356A (en) * 2019-12-17 2020-04-10 深圳辰视智能科技有限公司 Target object detection method and device and computer equipment
CN111896556A (en) * 2020-08-04 2020-11-06 湖南大学 Glass bottle bottom defect detection method and system based on machine vision
CN112215240A (en) * 2020-10-13 2021-01-12 珠海博明视觉科技有限公司 Optimization method for improving 2D complex edge detection precision

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10789671B2 (en) * 2016-12-28 2020-09-29 Ricoh Company, Ltd. Apparatus, system, and method of controlling display, and recording medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Contour-based corner detection and classification by using mean projection transform; Seyed Mostafa Mousavi Kahaki et al.; Sensors; Vol. 14, No. 3; pp. 4126-4143 *
Line segment detection method based on image segmentation; Zhang Bo et al.; Journal of Applied Technology; Vol. 14, No. 4; pp. 328-331 *

Similar Documents

Publication Publication Date Title
CN110717942B (en) Image processing method and device, electronic equipment and computer readable storage medium
US6768509B1 (en) Method and apparatus for determining points of interest on an image of a camera calibration object
TWI441095B (en) Distance evaluation methods and apparatuses, and machine readable medium thereof
CN109479082B (en) Image processing method and apparatus
EP3595285A1 (en) Photography processing method and device for camera module, and terminal apparatus
CN110517202A (en) A kind of vehicle body camera calibration method and its caliberating device
US20110128354A1 (en) System and method for obtaining camera parameters from multiple images and computer program products thereof
Surh et al. Noise robust depth from focus using a ring difference filter
CN113160161B (en) Method and device for detecting defects at edge of target
CN108986129B (en) Calibration plate detection method
JP2013174547A (en) Stereo three-dimensional measuring instrument
WO2019001164A1 (en) Optical filter concentricity measurement method and terminal device
Farag A comprehensive real-time road-lanes tracking technique for autonomous driving
US20200074660A1 (en) Image processing device, driving assistance system, image processing method, and program
CN113744256A (en) Depth map hole filling method and device, server and readable storage medium
CN113160259B (en) Edge detection method, edge detection device, computer equipment and storage medium
US20040109599A1 (en) Method for locating the center of a fiducial mark
CN113112396B (en) Method for detecting conductive particles
KR20160001868A (en) Method for calibrating distortion of image in camera
CN108734666B (en) Fisheye image correction method and device
KR101574195B1 (en) Auto Calibration Method for Virtual Camera based on Mobile Platform
US20220076428A1 (en) Product positioning method
CN113744200B (en) Camera dirt detection method, device and equipment
CN112734721B (en) Optical axis deflection angle detection method, device, equipment and medium
CN115034988A (en) RGBD camera-based two-stage main body point cloud filtering method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant