WO2019041590A1 - Edge detection method using an arbitrary angle - Google Patents

Edge detection method using an arbitrary angle

Info

Publication number
WO2019041590A1
WO2019041590A1 (PCT/CN2017/112917)
Authority
WO
WIPO (PCT)
Prior art keywords
image
edge
edge detection
pixels
pixel
Prior art date
Application number
PCT/CN2017/112917
Other languages
English (en)
Chinese (zh)
Inventor
刘苏
张劭龙
耿兴光
张以涛
张俊
张海英
Original Assignee
中国科学院微电子研究所
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中国科学院微电子研究所 (Institute of Microelectronics of the Chinese Academy of Sciences)
Publication of WO2019041590A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/13 - Edge detection
    • G06T7/136 - Segmentation; Edge detection involving thresholding
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107 - Static hand or arm

Definitions

  • the present invention relates to an image processing method, and more particularly to an edge detection method at an arbitrary angle.
  • edges give people an impression of the objects in an image and are an important means by which humans understand the world.
  • the abrupt changes, discontinuities, and unstable structures present in an image are called edges.
  • edges often carry a wealth of image information.
  • these edge points make up the contours of objects, and these contours are usually what interests researchers, since they concentrate the characteristics of the target under study; they are extremely important for subsequent image segmentation, image matching, target recognition, and computer vision. How to convert an image with unclear contours into a clear edge image has been an intensively studied problem for many years.
  • people have long introduced mathematical methods to extract and interpret image edges, from the early gradient-based Prewitt and Sobel operators, to the LoG and Canny operators, to wavelet transforms and machine learning; this progression reflects the depth and difficulty of the edge detection problem.
  • multi-angle edge detection algorithms based on the gradient principle convolve the two-dimensional image with an N×N gradient template. Since the template is generally square and its size is at most 5 pixels × 5 pixels, the template can generate at most 16 gradient directions, i.e. the 0°, 30°, 45°, 60°, 90°, 120°, 135°, 150°, 180°, 210°, 225°, 240°, 270°, 300°, 315°, and 330° directions.
  • the classical two-dimensional wavelet-transform modulus-maximum edge detection method can only perform non-maximum suppression by angle class after computing the gradients along the x and y directions.
  • performing arbitrary-angle edge detection on image edges with existing angle edge detection methods basically relies on rotating the image or rotating the coordinates.
  • when the image or the coordinates are rotated, the image is interpolated, which changes the image's gray-level information. Edges recognized after rotating the image or the coordinates therefore cannot be guaranteed to be the true edges of the image. The edge image must also be rotated back to its original position by the rotation angle, which changes the edge information once again.
  • rotating images and coordinates also causes image-size changes and image-boundary problems, which increase the difficulty of image processing.
  • the present invention provides a method for single-pixel, arbitrary-angle edge detection that does not change the image information.
  • the present invention provides an edge detection method at an arbitrary angle, including the following steps:
  • the gray values of a selected subset of the above two-dimensional pixels are extracted by the following rule:
  • the gray values, stored in matrix form, are convolved with the first derivative f_σ(x) of the Gaussian function; the absolute value of the convolution result is taken, and the local maxima of the absolute value are found;
  • the positions of the local maxima are assigned a non-zero gray value, and the gray values of all other pixel positions are set to zero.
  • the first derivative f_σ(x) of the Gaussian function is f_σ(x) = (−x / (√(2π)·σ³)) · exp(−x² / (2σ²)), where σ is a constant with a value ranging from 1 to 10.
  • the non-zero gray value is 255 divided by the number of edge detection angles.
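The one-dimensional detection step described above (convolve a pixel line with the first derivative of a Gaussian, take the absolute value, keep the local maxima) can be sketched as follows. This is a minimal illustration, not the patent's implementation; the sampling radius and the value of σ are my own illustrative choices.

```python
import numpy as np

def gaussian_deriv(sigma, radius=None):
    """First derivative of a Gaussian, sampled on an integer grid."""
    if radius is None:
        radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    return -x / sigma**2 * g  # d/dx of the Gaussian density

def edge_positions(row, sigma=2.0):
    """Convolve one pixel line with f_sigma, take the absolute value,
    and return the positions of its local maxima (plateau right edges
    count as maxima, so an exact two-sample tie at a step is still found)."""
    h = np.abs(np.convolve(row.astype(float), gaussian_deriv(sigma), mode="same"))
    return np.where((h[1:-1] >= h[:-2]) & (h[1:-1] > h[2:]))[0] + 1

# a synthetic pixel line with two gray-level steps
row = np.array([10] * 20 + [200] * 20 + [60] * 20)
print(edge_positions(row))  # includes positions near the steps at indices 20 and 40
```

The detected positions would then be written back as non-zero gray values at the corresponding pixels, with all other positions set to zero.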
  • the method further comprises the step of replacing the corresponding pixels in the original image with the pixels represented by the obtained gray-value matrix.
  • the user performs 4 to 8 independent settings of i, j, r, and k to achieve edge detection at 4 to 8 different extraction angles.
  • the gray-value matrices obtained at the different edge detection angles are superimposed in image-display form; a binarization threshold is set for the gray levels of the superimposed image according to the requirements on the actually needed edge image, and the image is binarized according to this threshold to obtain the desired edge.
  • the desired edge obtained is a single pixel wide edge.
  • compared with the prior art, the pulse recognition method of the present invention has the following beneficial effects:
  • the present invention provides an algorithm that can achieve edge detection at any angle in the range [0°, 360°];
  • the invention can realize edge detection over the whole [0°, 360°] angle interval while applying only [45°, 90°] edge detection angles, reducing the complexity of the image edge detection algorithm;
  • the present invention discloses for the first time a formula for constructing an arbitrary-angle edge detection operator;
  • the edge detection angle construction method of the present invention is easier to implement than the existing angle-based classical operators;
  • the algorithm transforms the two-dimensional image edge recognition problem into a one-dimensional curve signal processing problem, which reduces the complexity of the algorithm;
  • the edges generated by this algorithm are single-pixel-wide edges.
  • Figure 1 is a schematic diagram of a compact connection of k adjacent pixels of an image;
  • Figure 2 is a schematic diagram of a loose connection of k adjacent pixels of an image;
  • Figure 3 is a schematic diagram of an arbitrary-angle composition of k adjacent pixels for image edge detection;
  • Figure 4 is a schematic diagram of a compact connection of two adjacent pixels of an image;
  • Figure 5 is a schematic diagram of a loose connection of two adjacent pixels of an image;
  • Figure 6 is a schematic diagram of an arbitrary-angle composition of two adjacent pixels for image edge detection;
  • Figure 7 is a schematic diagram of zero-padding the parts of the image where the algorithm extends beyond the image boundary;
  • Figure 8 shows the relational expression for superimposing several different detection angles;
  • Figure 9 is a schematic diagram of superimposing several different detection angles;
  • Figure 10 shows the original image and a comparison of angle optimization and multi-angle superposition;
  • Figures 11 to 14 show, for test images (a circle, a circle with letters, and the like), the relationship between the number of detection angles and the number of connected domains and the number of pixels P;
  • Figure 15 is a schematic diagram of the breakpoint connection of the arm edge;
  • Figure 16 is an image of the arm and wrist edges
  • Figure 17 shows the arm-wrist edge transformed into a one-dimensional curve, together with the filtered or high-order-polynomial-fitted curve;
  • Figure 18 is a graph of the arm-wrist curve and its corresponding curvature;
  • Figure 19 is an image of the arm wrist edge with radial artery information
  • Figure 20 is a segmented radial artery image
  • Figure 21 shows the ordinate-averaged and straight-line-fitted curve of the radial artery;
  • Fig. 22 is a coordinate display diagram of the radial artery.
  • the invention discloses an arbitrary-angle edge detection method: the gray values of the image to be detected are acquired; the image is then scanned with pixel lines of different angles, and the gray values of the pixels covered by each pixel line are extracted and stored as a matrix.
  • the obtained matrix is convolved with the first derivative f_σ(x) of the Gaussian function, the absolute value of the convolution result is taken, and the local maxima of the absolute value are found.
  • the positions of the local maxima are assigned a non-zero gray value and the gray values of all other pixel positions are set to 0, yielding intermittent points or lines of local maxima.
  • those skilled in the art can interpolate or fit these points or lines to obtain continuous line segments, can superimpose the results obtained with pixel lines of several different angles and then binarize to obtain the desired edges, and can further apply connected-domain operations to find a continuous, single-valued edge line.
  • the arbitrary-angle edge detection method of the present invention includes the following steps:
  • the gray values of a selected subset of the above two-dimensional pixels are extracted by the following rule:
  • (number of cycles r) × (compact number i) × (number of pixels per line k − 1) + (number of cycles r) × (loose number j) × (number of pixels per line k) + (number of pixels per line k) ≤ (number of columns n); that is, r·i·(k−1) + r·j·k + k ≤ n, so the horizontal span of the pixel line does not exceed the image width;
  • the line segment so obtained, which starts from the pixel at the upper-left corner of the image to be detected and bends when it reaches the bottom row of the image, is referred to as a "pixel line".
  • for each extraction angle (also referred to as an edge detection angle), a different value of k is set, and the gray values of the pixels covered by the corresponding pixel line are extracted.
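The extraction rule above can be sketched as follows. The function names and bookkeeping are my own reading of the compact/loose definitions (a compact connection starts the next row's segment in the column of the previous segment's last pixel, a loose connection one column further right); out-of-range columns are avoided here by choosing a wide enough test image rather than by the zero-padding the patent describes.

```python
import numpy as np

def pixel_line_starts(m, i, j, r, k):
    """Starting column of each row's k-pixel segment.

    A compact connection advances the next row by k-1 columns; a loose
    connection advances it by k columns. The pattern 'i compact then
    j loose' is repeated r times, one connection per pair of adjacent rows.
    """
    steps = ([k - 1] * i + [k] * j) * r
    starts = [0]
    for s in steps[:m - 1]:
        starts.append(starts[-1] + s)
    return np.array(starts)

def extract_pixel_line(img, i, j, r, k):
    """Gray values covered by the pixel line, one k-pixel run per row."""
    starts = pixel_line_starts(img.shape[0], i, j, r, k)
    return np.concatenate([img[row, s:s + k] for row, s in enumerate(starts)])

# 6x9 test image whose values encode their own positions
img = np.arange(54).reshape(6, 9)
print(extract_pixel_line(img, i=1, j=1, r=3, k=2))
```

Storing one such flattened line per matrix row gives the gray-value matrix that the next step convolves with the Gaussian derivative.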
  • the pixel lines, stored in matrix form, are convolved with the first derivative f_σ(x) of the Gaussian function; the absolute value of the convolution result is taken, and the local maxima of the absolute value are found;
  • the first derivative f_σ(x) of the Gaussian function is f_σ(x) = (−x / (√(2π)·σ³)) · exp(−x² / (2σ²)), where σ is a constant with a value ranging from 1 to 10;
  • h_{n,σ}(τ) denotes the result of convolving the gray-value matrix with f_σ.
  • the positions of the local maxima are assigned a non-zero gray value, and the gray values of all other pixel positions are set to zero.
  • the non-zero gray value is, for example, 255 divided by the number of edge detection angles.
  • the gray-value matrices obtained at the different edge detection angles may be superimposed, as gray levels, in image-display form; a binarization threshold is set for the gray levels of the superimposed image according to the requirements on the actually needed edge image, and the image is binarized according to this threshold to obtain the desired edge.
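A minimal sketch of the superposition-and-binarization step, assuming each per-angle result marks its edge pixels with gray value 255/(number of angles) as described above; the threshold choice in the example is illustrative.

```python
import numpy as np

def superimpose_and_binarize(edge_maps, threshold):
    """Each per-angle map marks edge pixels with gray value 255/len(edge_maps)
    and 0 elsewhere; the maps are summed and thresholded to a binary edge."""
    gray = 255.0 / len(edge_maps)
    stacked = sum(gray * (m > 0) for m in edge_maps)
    return (stacked >= threshold).astype(np.uint8)

# toy example with 4 detection angles: keep a pixel only if at least
# 2 of the 4 angle-specific detectors marked it
a = np.zeros((3, 3)); a[1, 1] = 1
b = np.zeros((3, 3)); b[1, 1] = 1; b[0, 0] = 1
out = superimpose_and_binarize(
    [a, b, np.zeros((3, 3)), np.zeros((3, 3))],
    threshold=2 * 255 / 4,
)
print(out)  # only the pixel seen by two detectors survives
```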
  • a specific calculation method is shown by way of example in Figs. 8 and 9, but Figs. 8 and 9 are only schematic and are not intended to limit the present invention.
  • the number of different edge detection angles can be selected, for example, from 4 to 8. As shown in Figures 10 to 14, it has been experimentally verified that the best results are obtained when 4 to 8 different edge detection angles are used.
  • the step of extracting the gray values of the selected subset of two-dimensional pixels with the above rule is based on the following principle:
  • the present invention defines two extraction modes, referred to as compact connection and loose connection:
  • a compact connection means that the first extraction position in the next row of pixels is in the same column as the last extraction position in the previous row; the extracted gray values are represented by the matrix Q_θ2L as follows:
  • a loose connection means that the first extraction position in the next row is one column to the right of the last extraction position in the previous row, i.e. its column index plus one; the extracted gray values are represented by the matrix Q_θ2R as follows:
  • the above compact and loose connections can be mixed according to a given rule, for example i compact connections followed by j loose connections, repeated r times.
  • the above i, j, and r are all positive integers no larger than the number of rows m.
  • the different extraction angles (edge detection angles) are represented numerically by the number of pixels extracted per row, the number of rows, and the numbers of repetitions of compact and loose connections; the specific extraction angle can be determined by setting these parameters.
  • the adjacent pixel point relationship of the image is divided into a compact connection and a loose connection.
  • the compact connection is shown in FIG. 4: starting from a pixel in the leftmost column of the image, the first and last pixels of adjacent rows are connected vertically, and every two rows form a compact connection unit.
  • several compact connection units are connected into a line that extends to the image boundary, and the angle between this line and its y-axis projection is the edge detection angle.
  • Its matrix Q ⁇ 2L is expressed as:
  • the loose connection is shown in Figure 5: starting from the pixel in the leftmost column and top row at the upper-left corner of the image, the first and last pixels of adjacent rows are connected diagonally, and every two rows form a loose connection unit.
  • several loose connection units are connected into a line that extends to the image boundary, and the angle between this line and its y-axis projection is the edge detection direction.
  • Its matrix Q ⁇ 2R is expressed as:
  • the edge detection angle formed by a two-pixel compact connection unit, θ_2L, is the left boundary of this segment's angle interval.
  • the edge detection angle formed by a two-pixel loose connection unit, θ_2R, is the right boundary of this segment's angle interval; the angle interval is therefore (θ_2L, θ_2R).
  • the union of the detected angle-interval boundaries is (θ_1, θ_2) ∪ (θ_3, θ_4) ∪ … ∪ (θ_{n−1}, θ_n); the range of the union is (45°, 90°).
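The interval boundaries can be made concrete under one reading of the construction above (the source reproduces its formulas as images, so this reading is my inference): a k-pixel compact unit advances k−1 columns per row and a loose unit advances k, so the angle measured from the y axis is the arctangent of the per-row advance. On that reading consecutive intervals share endpoints and their union tiles (45°, 90°):

```python
import math

# angle (from the y axis) spanned by k-pixel connection units, assuming a
# compact unit advances k-1 columns per row and a loose unit advances k
for k in range(2, 7):
    left = math.degrees(math.atan(k - 1))   # compact boundary
    right = math.degrees(math.atan(k))      # loose boundary
    print(f"k={k}: ({left:.1f} deg, {right:.1f} deg)")
# k=2 starts at 45 deg; the right boundary approaches 90 deg as k grows,
# and each interval's right boundary is the next interval's left boundary
```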
  • an arbitrary angle within the interval is composed as follows:
  • i compact connections and j loose connections constitute one unit that is repeated r times; the relationship between the number of rows m, the number of columns n, and i, j, and r is:
  • each boundary condition also conforms to the above formula.
  • the pixels in the image can be combined according to the required angle; for the parts where the algorithm extends beyond the image boundary, the image is zero-padded, as shown in FIG. 7.
  • taking the left boundary as the starting point generates the pixel lines X′_1, X′_2, …, X′_m, and taking the upper boundary as the starting point generates Y′_1, …, Y′_{m−1}, where m is the number of rows and k is the number of connected pixels.
  • each pixel line is convolved with the first derivative f_σ(t) of the Gaussian function, and the absolute value of the convolution result is taken:
  • because the constructed pixel lines are convolved and the absolute value is taken, opposite gradient directions become equivalent, so the edge detection angle range is reduced from [0°, 360°] to [0°, 180°]; it is therefore only necessary to process the image over the edge detection angle interval [0°, 180°].
  • the 90° direction segments the image vertically, each column of pixels constituting a pixel line; the detection angle range [45°, 90°] can thus be achieved.
  • the angle range [45°, 90°] can be mapped to [0°, 45°], [90°, 135°], and [135°, 180°] by transposing and flipping the image matrix.
  • the specific method is as follows:
  • the image matrix is flipped horizontally, and the edge detection angle interval is mapped from [45°, 90°] to [90°, 135°]. After the image matrix is transposed, the edge detection angle interval is mapped from [45°, 90°] to [135°, 180°]. After the image matrix is horizontally flipped and transposed, the edge detection angle interval is mapped from [45°, 90°] to [0°, 45°]. Based on the above method, the edge detection of the [0°, 360°] angle interval can be realized only by applying the [45°, 90°] edge detection angle.
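The flip/transpose mappings above can be checked directly with NumPy. Because each transform is invertible, an edge map computed on the transformed image maps back exactly onto the original pixels, so no gray-level information is altered (in contrast to rotation, which interpolates):

```python
import numpy as np

img = np.arange(12).reshape(3, 4)

# a detector built for [45 deg, 90 deg] covers the other intervals by
# transforming the image, detecting, and transforming the result back
to_90_135 = np.fliplr(img)      # [45, 90] -> [90, 135]: horizontal flip
to_135_180 = img.T              # [45, 90] -> [135, 180]: transpose
to_0_45 = np.fliplr(img).T      # [45, 90] -> [0, 45]: flip then transpose

# each mapping inverts exactly, pixel for pixel
assert np.array_equal(np.fliplr(to_90_135), img)
assert np.array_equal(to_135_180.T, img)
assert np.array_equal(np.fliplr(to_0_45.T), img)
print("all mappings invert exactly")
```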
  • the arbitrary-angle edge recognition method of the present invention can be applied to pulse recognition; the pulse recognition method includes, for example, the following steps:
  • pre-processing of the arm-wrist edges further optimizes the edge of the arm wrist and provides a basis for subsequent wrist-pulse recognition. This step specifically includes identifying the maximum connected domain of the arm edge, connecting arm-edge breakpoints, and fitting the arm-wrist curve, as follows:
  • identifying the maximum connected domain of the arm edge: identify the connected domains of the generated edge image and find the largest connected domain at the right edge of the image. If this maximum connected domain runs through the left and right borders of the image, i.e. it contains no breakpoints, it can be considered the edge of the arm wrist.
  • arm-edge breakpoint connection: connect the arm-edge segments to form an integral arm-wrist edge that runs through the left and right borders of the image.
  • if the maximum connected domain is only part of the arm-wrist edge, the other arm-wrist edge segments need to be connected.
  • edge segments are searched for within a range of 2 pixels in the up, upper-left, left, lower-left, and down directions from the left breakpoint of the largest connected domain.
  • the two connected domains are then joined: the pixels between the two segments are filled in at the intermediate breakpoints by interpolation or another fitting method, eventually forming a new connected domain.
  • with the breakpoint on the left side of the new connected domain as the origin, the search for further edge segments continues until the left edge of the image is reached.
  • arm-wrist curve fitting: eliminate the step points generated when the two-dimensional image edge is turned into a one-dimensional curve, so that the transformed one-dimensional arm-edge curve is smoother and the edge features of the arm wrist are highlighted.
  • a styloid-process identification algorithm is used to identify the feature points of the radial styloid process: first, features are extracted from the arm-wrist edge, the depression between the hand and the styloid process is identified, and the lowest point of the depression is found.
  • the radial styloid process at the skin surface is characterized by a maximum-curvature point where the wrist dips toward the arm, i.e. the point where the boundary changes most strongly.
  • radial-artery image segmentation and pulse recognition are used to segment the radial-artery image and fit a linear function reflecting the trend of the radial artery.
  • specifically, the pulse recognition method comprises the following steps:
  • edge recognition produces continuous or interrupted points and/or lines along the edge of the arm wrist.
  • the arm edge is then pre-processed to further optimize the arm-wrist edge and provide a basis for subsequent wrist-pulse recognition.
  • the pre-processing includes identifying the largest connected domain of the arm edge, connecting arm-edge breakpoints, and fitting the arm-wrist curve.
  • identifying the maximum connected domain of the arm edge: identify the connected domains of the generated edge image and find the largest connected domain at the right edge of the image. If this maximum connected domain runs through the left and right borders of the image, i.e. it contains no breakpoints, it can be considered the edge of the arm wrist.
  • the arm-edge breakpoint connection includes the step of joining the arm-edge segments to form an integral arm-wrist edge that runs through the left and right borders of the image.
  • if the maximum connected domain is only part of the arm-wrist edge, the other arm-wrist edge segments need to be connected.
  • edge segments are searched for within a range of 2 pixels in the up, upper-left, left, lower-left, and down directions from the left breakpoint of the largest connected domain.
  • the two connected domains are then joined: the pixels between the two segments are filled in at the intermediate breakpoints by interpolation or another fitting method, eventually forming a new connected domain.
  • with the breakpoint on the left side of the new connected domain as the origin, the search for further edge segments continues until the left edge of the image is reached.
  • the arm-wrist curve fitting includes the following step: a low-pass filter or polynomial curve fitting is used to eliminate the step points generated when the two-dimensional image edge is converted into a one-dimensional curve, so that the converted one-dimensional arm-edge curve is smoother and the edge features of the arm wrist are highlighted.
  • the styloid-process algorithm is used to identify the feature points of the radial styloid process. As shown in Fig. 18, features are first extracted from the arm-wrist edge, and the depression between the hand and the styloid process is identified to find its lowest point.
  • the radial styloid process at the skin surface is characterized by a maximum-curvature point where the wrist dips toward the arm, i.e. the point where the boundary changes most strongly.
  • second, peaks and valleys are sought in the curvature curve near the depression's maximum-curvature point.
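The maximum-curvature search can be sketched on a synthetic depression using the standard curvature formula for a sampled curve y(x) (the source does not spell the formula out, so this choice is an assumption):

```python
import numpy as np

def curvature(y):
    """Curvature of a curve y(x) sampled at unit spacing:
    kappa = |y''| / (1 + y'^2)^(3/2)."""
    dy = np.gradient(y)
    d2y = np.gradient(dy)
    return np.abs(d2y) / (1 + dy**2) ** 1.5

# toy edge: a flat arm contour with a smooth dip (the depression near
# the wrist), centred at x = 5
x = np.linspace(0, 10, 201)
y = -np.exp(-(x - 5) ** 2)
kappa = curvature(y)
print(x[np.argmax(kappa)])  # maximum-curvature point sits at the dip
```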
  • radial-artery image segmentation and pulse recognition: a region is constructed with each pixel of the previously generated edge image (Fig. 19) as its origin.
  • thresholds on the mean and the variance are set according to the statistical behavior of the mean and variance of regions at the radial-artery boundary position; the mean and variance of the pixels in each edge-pixel region are calculated.
  • the mean and variance of the pixels in each generated edge-pixel region are compared with the thresholds in turn, and the regions meeting the threshold conditions are binarized (Fig. 20).
  • the ordinates of the pixels of the binarized pulse image are averaged to obtain a curve describing the radial-artery image.
  • a first-order polynomial (straight-line) fit is performed on this curve to obtain a linear function containing the trend of the radial artery (Fig. 21); substituting the abscissa of the pulse into the linear function gives the ordinate of the pulse.
  • the position of the pulse in the image can be determined, as shown in Figure 22.
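The final fitting step (average the ordinates that share an abscissa, then fit a straight line and evaluate it at the pulse's abscissa) can be sketched as follows; the point data and the function name are hypothetical:

```python
import numpy as np

def radial_artery_line(edge_points):
    """Fit a first-order polynomial (straight line) to segmented artery
    pixels; returns a function mapping an abscissa to the fitted ordinate."""
    xs = np.array(sorted(set(px for px, _ in edge_points)))
    # per-abscissa mean of the ordinates, as in the averaging step above
    ys = np.array([np.mean([py for px, py in edge_points if px == x])
                   for x in xs])
    slope, intercept = np.polyfit(xs, ys, 1)  # straight-line fit
    return lambda x: slope * x + intercept

# hypothetical segmented artery pixels scattered around a rising line
pts = [(0, 3), (0, 4), (2, 4), (4, 5), (6, 6), (8, 7), (10, 8)]
line = radial_artery_line(pts)
print(round(line(5), 2))  # ordinate of the pulse at abscissa x = 5
```

Substituting the pulse's abscissa into the returned function gives its ordinate, fixing the pulse position in the image as in Fig. 22.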

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

An edge detection method using an arbitrary angle comprises: determining the boundaries of an edge detection angle range; determining an arbitrary angle within the edge detection angle range; performing a convolution operation on each of a number of constructed straight pixel lines with the first derivative of a Gaussian function, taking the absolute value of the convolution result, and finding the local maxima of the absolute value; assigning a gray value to the obtained local maxima and setting the gray level of the other, non-maximum positions to 0; replacing the original image pixels with the pixels carrying these gray values; and superimposing the gray levels of the images obtained at the different edge detection angles, setting a binarization threshold for the gray level of the superimposed image according to the requirements on the actually needed edge image, and binarizing the image according to the threshold to obtain the desired edge. The method provides an edge detection algorithm realized with an arbitrary angle and reduces the complexity of image edge detection algorithms.
PCT/CN2017/112917 2017-08-31 2017-11-24 Edge detection method using an arbitrary angle WO2019041590A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710778429 2017-08-31
CN201710778429.7 2017-08-31

Publications (1)

Publication Number Publication Date
WO2019041590A1 true WO2019041590A1 (fr) 2019-03-07

Family

ID=65513630

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/112917 WO2019041590A1 (fr) 2017-08-31 2017-11-24 Edge detection method using an arbitrary angle

Country Status (2)

Country Link
CN (1) CN109427066B (fr)
WO (1) WO2019041590A1 (fr)

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110059706A (zh) * 2019-04-23 2019-07-26 上海工程技术大学 一种用于富椒盐噪声环境下单一直线的检测方法
CN110490889A (zh) * 2019-08-12 2019-11-22 中电科技(合肥)博微信息发展有限责任公司 一种基于边缘检测的雷达目标提取方法
CN110838127A (zh) * 2019-10-30 2020-02-25 合肥工业大学 一种用于智能汽车的特征图像边缘检测方法
CN110956078A (zh) * 2019-10-09 2020-04-03 中国人民解放军战略支援部队信息工程大学 一种电力线检测方法及装置
CN111179291A (zh) * 2019-12-27 2020-05-19 凌云光技术集团有限责任公司 一种基于邻域关系的边缘像素点提取方法及装置
CN111199235A (zh) * 2020-01-03 2020-05-26 深圳市京湾量子遥感科技有限公司 一种图像边缘提取方法
CN111445491A (zh) * 2020-03-24 2020-07-24 山东智翼航空科技有限公司 微型无人机三邻域极大差值边缘检测狭道导引算法
CN111524139A (zh) * 2020-04-02 2020-08-11 西安电子科技大学 一种基于双边滤波器的角点检测方法和检测系统
CN112435235A (zh) * 2020-11-23 2021-03-02 西安理工大学 一种基于图像分析的籽棉含杂率检测方法
CN112489066A (zh) * 2020-11-30 2021-03-12 国网山西省电力公司晋城供电公司 一种配电设备红外热成像图像边缘的提取方法
CN113706522A (zh) * 2021-09-08 2021-11-26 常州市新创智能科技有限公司 玻纤表面纸板屑检测方法、装置、存储介质和电子设备
CN113727050A (zh) * 2021-11-04 2021-11-30 山东德普检测技术有限公司 面向移动设备的视频超分辨率处理方法、装置、存储介质
WO2021253732A1 (fr) * 2020-06-18 2021-12-23 飞依诺科技(苏州)有限公司 Procédé et appareil de traitement d'image médicale, dispositif informatique et support d'enregistrement
CN113850800A (zh) * 2021-10-15 2021-12-28 郑州磨料磨具磨削研究所有限公司 一种硬脆材料划切缝崩边检测方法
CN113870297A (zh) * 2021-12-02 2021-12-31 暨南大学 一种图像边缘检测方法、装置及存储介质
CN114187267A (zh) * 2021-12-13 2022-03-15 沭阳县苏鑫冲压件有限公司 基于机器视觉的冲压件缺陷检测方法
CN114677340A (zh) * 2022-03-14 2022-06-28 上海第二工业大学 一种基于图像边缘的混凝土表面粗糙度的检测方法
CN115060754A (zh) * 2022-04-29 2022-09-16 江苏隧锦五金制造有限公司 一种不锈钢制品表面质量检测方法
CN115131387A (zh) * 2022-08-25 2022-09-30 山东鼎泰新能源有限公司 基于图像处理的汽油机喷雾撞壁参数自动提取方法及系统
CN115256400A (zh) * 2022-08-26 2022-11-01 北京理工大学 机器人三自由度电驱动耦合关节的运动可行范围线性界定方法
CN115564728A (zh) * 2022-09-30 2023-01-03 苏州大学 一种图像角点检测方法、装置、设备及应用
CN115578732A (zh) * 2022-11-21 2023-01-06 山东爱福地生物股份有限公司 一种肥料生产线的标签识别方法
CN115797925A (zh) * 2023-02-13 2023-03-14 青岛佳美洋食品有限公司 一种鱼肉加工异物检测方法
CN115908429A (zh) * 2023-03-08 2023-04-04 山东歆悦药业有限公司 一种泡脚药粉研磨精度检测方法及系统
CN116596924A (zh) * 2023-07-17 2023-08-15 山东唐乐生物科技股份有限公司 基于机器视觉的甜菊糖苷质量检测方法及系统
CN116416268B (zh) * 2023-06-09 2023-08-18 浙江双元科技股份有限公司 基于递归二分法的锂电池极片边缘位置检测方法及装置
CN116645297A (zh) * 2023-07-24 2023-08-25 济宁龙纳智能科技有限公司 基于人工智能的agv叉车控制方法
CN116863249A (zh) * 2023-09-01 2023-10-10 山东拓新电气有限公司 基于人工智能的煤矿传送带跑偏识别方法
CN116883407A (zh) * 2023-09-08 2023-10-13 山东省永星食品饮料有限公司 基于人工智能的瓶装水杂质检测方法

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110264488B (zh) * 2019-06-20 2021-03-16 合肥工业大学 一种二值图像边缘提取装置
CN110264489B (zh) * 2019-06-24 2022-07-05 北京奇艺世纪科技有限公司 一种图像边界检测方法、装置及终端
CN111127498B (zh) * 2019-12-12 2023-07-25 重庆邮电大学 一种基于边缘自生长的Canny边缘检测方法
CN111882570A (zh) * 2020-07-28 2020-11-03 浙江水晶光电科技股份有限公司 边缘定位方法、装置、存储介质及电子设备
CN114332140B (zh) * 2022-03-16 2022-07-12 北京文安智能技术股份有限公司 一种交通道路场景图像的处理方法

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020191812A1 (en) * 2001-04-24 2002-12-19 Nam-Deuk Kim Object edge watermarking
CN104156956A (zh) * 2014-08-06 2014-11-19 中国科学院生物物理研究所 一种基于高斯小波一维峰值识别的多角度边缘检测方法
CN104156958A (zh) * 2014-08-06 2014-11-19 中国科学院生物物理研究所 一种电路板布线边缘提取方法及提取平台
CN104732556A (zh) * 2015-04-13 2015-06-24 南通理工学院 基于染色矩阵算法的图像边缘检测方法

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104537646B (zh) * 2014-12-12 2017-06-27 南京理工大学 遥感图像的多角度自动mtf估计方法
CN104715491B (zh) * 2015-04-09 2017-07-21 大连理工大学 一种基于一维灰度矩的亚像素边缘检测方法
CN105740869B (zh) * 2016-01-28 2019-04-12 北京工商大学 一种基于多尺度多分辨率的方形算子边缘提取方法及系统
CN105894521A (zh) * 2016-04-25 2016-08-24 中国电子科技集团公司第二十八研究所 基于高斯拟合的亚像素边缘检测方法
CN105975974A (zh) * 2016-05-10 2016-09-28 深圳市金脉智能识别科技有限公司 一种手指静脉识别中提取roi图像的方法

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020191812A1 (en) * 2001-04-24 2002-12-19 Nam-Deuk Kim Object edge watermarking
CN104156956A (zh) * 2014-08-06 2014-11-19 中国科学院生物物理研究所 一种基于高斯小波一维峰值识别的多角度边缘检测方法
CN104156958A (zh) * 2014-08-06 2014-11-19 中国科学院生物物理研究所 一种电路板布线边缘提取方法及提取平台
CN104732556A (zh) * 2015-04-13 2015-06-24 南通理工学院 基于染色矩阵算法的图像边缘检测方法

Cited By (52)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110059706A (zh) * 2019-04-23 2019-07-26 上海工程技术大学 Method for detecting a single straight line in an environment with heavy salt-and-pepper noise
CN110059706B (zh) * 2019-04-23 2023-04-07 上海工程技术大学 Method for detecting a single straight line in an environment with heavy salt-and-pepper noise
CN110490889A (zh) * 2019-08-12 2019-11-22 中电科技(合肥)博微信息发展有限责任公司 Radar target extraction method based on edge detection
CN110490889B (zh) * 2019-08-12 2023-05-09 中电科技(合肥)博微信息发展有限责任公司 Radar target extraction method based on edge detection
CN110956078A (zh) * 2019-10-09 2020-04-03 中国人民解放军战略支援部队信息工程大学 Power line detection method and device
CN110956078B (zh) * 2019-10-09 2023-06-30 中国人民解放军战略支援部队信息工程大学 Power line detection method and device
CN110838127A (zh) * 2019-10-30 2020-02-25 合肥工业大学 Feature image edge detection method for intelligent vehicles
CN111179291A (zh) * 2019-12-27 2020-05-19 凌云光技术集团有限责任公司 Method and device for extracting edge pixels based on neighborhood relations
CN111179291B (zh) * 2019-12-27 2023-10-03 凌云光技术股份有限公司 Method and device for extracting edge pixels based on neighborhood relations
CN111199235A (zh) * 2020-01-03 2020-05-26 深圳市京湾量子遥感科技有限公司 Image edge extraction method
CN111445491B (zh) * 2020-03-24 2023-09-15 山东智翼航空科技有限公司 Narrow-passage guidance method for micro UAVs based on three-neighborhood maximum-difference edge detection
CN111445491A (zh) * 2020-03-24 2020-07-24 山东智翼航空科技有限公司 Narrow-passage guidance algorithm for micro UAVs based on three-neighborhood maximum-difference edge detection
CN111524139A (zh) * 2020-04-02 2020-08-11 西安电子科技大学 Corner detection method and system based on a bilateral filter
CN111524139B (zh) * 2020-04-02 2023-03-31 西安电子科技大学 Corner detection method and system based on a bilateral filter
WO2021253732A1 (fr) * 2020-06-18 2021-12-23 飞依诺科技(苏州)有限公司 Medical image processing method and apparatus, computer device, and storage medium
CN112435235B (zh) * 2020-11-23 2024-02-02 西安理工大学 Method for detecting trash content in seed cotton based on image analysis
CN112435235A (zh) * 2020-11-23 2021-03-02 西安理工大学 Method for detecting trash content in seed cotton based on image analysis
CN112489066A (zh) * 2020-11-30 2021-03-12 国网山西省电力公司晋城供电公司 Method for extracting edges from infrared thermal images of power distribution equipment
CN112489066B (zh) * 2020-11-30 2023-07-04 国网山西省电力公司晋城供电公司 Method for extracting edges from infrared thermal images of power distribution equipment
CN113706522A (zh) * 2021-09-08 2021-11-26 常州市新创智能科技有限公司 Method, device, storage medium, and electronic apparatus for detecting cardboard debris on glass fiber surfaces
CN113706522B (zh) * 2021-09-08 2024-05-31 常州市新创智能科技有限公司 Method, device, storage medium, and electronic apparatus for detecting cardboard debris on glass fiber surfaces
CN113850800A (zh) * 2021-10-15 2021-12-28 郑州磨料磨具磨削研究所有限公司 Method for detecting edge chipping along scribe cuts in hard and brittle materials
CN113850800B (zh) * 2021-10-15 2024-04-30 郑州磨料磨具磨削研究所有限公司 Method for detecting edge chipping along scribe cuts in hard and brittle materials
CN113727050A (zh) * 2021-11-04 2021-11-30 山东德普检测技术有限公司 Video super-resolution processing method, device, and storage medium for mobile devices
CN113727050B (zh) * 2021-11-04 2022-03-01 山东德普检测技术有限公司 Video super-resolution processing method, device, and storage medium for mobile devices
CN113870297A (zh) * 2021-12-02 2021-12-31 暨南大学 Image edge detection method, device, and storage medium
CN113870297B (zh) * 2021-12-02 2022-02-22 暨南大学 Image edge detection method, device, and storage medium
CN114187267B (zh) * 2021-12-13 2023-07-21 沭阳县苏鑫冲压件有限公司 Machine-vision-based defect detection method for stamped parts
CN114187267A (zh) * 2021-12-13 2022-03-15 沭阳县苏鑫冲压件有限公司 Machine-vision-based defect detection method for stamped parts
CN114677340A (zh) * 2022-03-14 2022-06-28 上海第二工业大学 Method for detecting concrete surface roughness based on image edges
CN114677340B (zh) * 2022-03-14 2024-05-24 上海第二工业大学 Method for detecting concrete surface roughness based on image edges
CN115060754B (zh) * 2022-04-29 2024-05-24 上海沛圣科技有限公司 Surface quality inspection method for stainless steel products
CN115060754A (zh) * 2022-04-29 2022-09-16 江苏隧锦五金制造有限公司 Surface quality inspection method for stainless steel products
CN115131387A (zh) * 2022-08-25 2022-09-30 山东鼎泰新能源有限公司 Method and system for automatically extracting gasoline engine spray wall-impingement parameters based on image processing
CN115131387B (zh) * 2022-08-25 2023-01-24 山东鼎泰新能源有限公司 Method and system for automatically extracting gasoline engine spray wall-impingement parameters based on image processing
CN115256400B (zh) * 2022-08-26 2024-05-28 北京理工大学 Linear delimitation method for the feasible motion range of a robot three-degree-of-freedom electrically driven coupled joint
CN115256400A (zh) * 2022-08-26 2022-11-01 北京理工大学 Linear delimitation method for the feasible motion range of a robot three-degree-of-freedom electrically driven coupled joint
CN115564728A (zh) * 2022-09-30 2023-01-03 苏州大学 Image corner detection method, device, equipment, and application
CN115564728B (zh) * 2022-09-30 2023-08-11 苏州大学 Image corner detection method, device, equipment, and application
CN115578732A (zh) * 2022-11-21 2023-01-06 山东爱福地生物股份有限公司 Label recognition method for a fertilizer production line
CN115797925B (zh) * 2023-02-13 2023-04-28 青岛佳美洋食品有限公司 Method for detecting foreign matter in fish meat processing
CN115797925A (zh) * 2023-02-13 2023-03-14 青岛佳美洋食品有限公司 Method for detecting foreign matter in fish meat processing
CN115908429A (zh) * 2023-03-08 2023-04-04 山东歆悦药业有限公司 Method and system for detecting the grinding precision of foot-bath medicinal powder
CN116416268B (zh) * 2023-06-09 2023-08-18 浙江双元科技股份有限公司 Method and device for detecting the edge position of lithium battery electrode sheets based on recursive bisection
CN116596924B (zh) * 2023-07-17 2023-10-20 山东唐乐生物科技股份有限公司 Machine-vision-based quality inspection method and system for steviol glycosides
CN116596924A (zh) * 2023-07-17 2023-08-15 山东唐乐生物科技股份有限公司 Machine-vision-based quality inspection method and system for steviol glycosides
CN116645297B (zh) * 2023-07-24 2023-11-07 济宁龙纳智能科技有限公司 Artificial-intelligence-based AGV forklift control method
CN116645297A (zh) * 2023-07-24 2023-08-25 济宁龙纳智能科技有限公司 Artificial-intelligence-based AGV forklift control method
CN116863249B (zh) * 2023-09-01 2023-11-21 山东拓新电气有限公司 Artificial-intelligence-based method for identifying coal mine conveyor belt deviation
CN116863249A (zh) * 2023-09-01 2023-10-10 山东拓新电气有限公司 Artificial-intelligence-based method for identifying coal mine conveyor belt deviation
CN116883407B (zh) * 2023-09-08 2023-11-24 山东省永星食品饮料有限公司 Artificial-intelligence-based method for detecting impurities in bottled water
CN116883407A (zh) * 2023-09-08 2023-10-13 山东省永星食品饮料有限公司 Artificial-intelligence-based method for detecting impurities in bottled water

Also Published As

Publication number Publication date
CN109427066B (zh) 2021-11-05
CN109427066A (zh) 2019-03-05

Similar Documents

Publication Publication Date Title
WO2019041590A1 (fr) Edge detection method at an arbitrary angle
CN103310453B (zh) Fast image registration method based on sub-image corner features
US20200167596A1 (en) Method and device for determining handwriting similarity
CN108010082B (zh) Geometric matching method
CN112381183B (zh) Target detection method and device, electronic equipment, and storage medium
Prakash et al. Human recognition using 3D ear images
CN108257155B (zh) Extended target stable tracking point extraction method based on local and global coupling
CN111091075A (zh) Face recognition method and device, electronic equipment, and storage medium
CN108154147A (zh) Region-of-interest detection method based on a visual attention model
CN104091145A (zh) Human palm vein feature image acquisition method
Chaman et al. Real-time hand gesture communication system in Hindi for speech and hearing impaired
Nag et al. New cold feature based handwriting analysis for enthnicity/nationality identification
Donati et al. An accurate system for fashion hand-drawn sketches vectorization
CN115240224A (zh) Gesture feature extraction method based on fusion of three-dimensional hand keypoints and image features
Wagdy et al. Document image skew detection and correction method based on extreme points
JP4710426B2 (ja) Image processing apparatus, image processing method, and image processing program
Wu et al. Image Edge Detection Based on Sobel with Morphology
US20050089225A1 (en) Method for aligning gesture features of image
Ribarić et al. Personal recognition based on the Gabor features of colour palmprint images
CN116912604A (zh) Model training method, image recognition method, device, and computer storage medium
CN116563582A (zh) Image template matching method and device based on domestic CPUs and OpenCV
JP4509512B2 (ja) Skew detection
Cohen et al. 3D iris model and reader for iris identification
Belan et al. A homogenous parameter set for image recognition based on area
Varghese et al. Hexagonal image enhancement using Hex-Gabor filter for machine vision applications

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17923424

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17923424

Country of ref document: EP

Kind code of ref document: A1