CN116740115B - Image edge detection method and device - Google Patents

Image edge detection method and device Download PDF

Info

Publication number
CN116740115B
CN116740115B CN202311017604.2A
Authority
CN
China
Prior art keywords
image
determining
target image
template
template image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311017604.2A
Other languages
Chinese (zh)
Other versions
CN116740115A (en)
Inventor
李晓明
李勇
贾江凯
郝怡
孙博
李慧超
郑斌
刘丹
刘明明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Digital Technology Holdings Co ltd
State Grid E Commerce Technology Co Ltd
Original Assignee
State Grid Digital Technology Holdings Co ltd
State Grid E Commerce Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Digital Technology Holdings Co ltd, State Grid E Commerce Technology Co Ltd filed Critical State Grid Digital Technology Holdings Co ltd
Priority to CN202311017604.2A priority Critical patent/CN116740115B/en
Publication of CN116740115A publication Critical patent/CN116740115A/en
Application granted granted Critical
Publication of CN116740115B publication Critical patent/CN116740115B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10032 Satellite or aerial image; Remote sensing
    • G06T2207/10044 Radar image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure relates to an image edge detection method and apparatus in the field of computer vision. The method comprises: acquiring a target image; determining parameters of a template image according to the target image; determining a template image set according to the parameters; for each pixel in the target image, determining a ratio gradient of that pixel according to the pixel and the template image set; and determining the edge of the target image according to the ratio gradients of the pixels. With the method and apparatus, the parameters of the template image can be determined from information in the target image itself, and the ratio gradient of each pixel can be determined from the resulting template image set, thereby realizing edge detection of the target image and improving the accuracy of the edge detection.

Description

Image edge detection method and device
Technical Field
The present application relates to the technical field of computers, in particular to computer vision, and specifically to an image edge detection method and apparatus.
Background
Edges are important information and features of image data. Due to limitations of the imaging mechanism, SAR (Synthetic Aperture Radar) images always contain speckle noise. This noise weakens the saliency of edge features in SAR images, makes them difficult to extract accurately, and adversely affects subsequent edge-based tasks such as target detection and recognition.
At present, the accuracy of algorithms for SAR image edge extraction still needs improvement.
Disclosure of Invention
The embodiment of the disclosure provides an image edge detection method and device.
In a first aspect, embodiments of the present disclosure provide an image edge detection method, including: acquiring a target image; determining parameters of a template image according to the target image; determining a template image set according to the parameters; for each pixel point in the target image, determining the ratio gradient of each pixel point according to the pixel point and the template image set; and determining the edge of the target image according to the ratio gradient of each pixel point.
In a second aspect, embodiments of the present disclosure provide an image edge detection apparatus, including: an image acquisition unit configured to acquire a target image; a parameter determination unit configured to determine parameters of the template image from the target image; an image determining unit configured to determine a set of template images according to the parameters; a gradient determining unit configured to determine, for each pixel in the target image, a ratio gradient of each pixel from the pixel and the set of template images; and an edge detection unit configured to determine an edge of the target image according to the ratio gradient of each pixel point.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is an exemplary system architecture diagram to which one embodiment of the image edge detection method of the present disclosure may be applied;
FIG. 2 is a flow chart of one embodiment of an image edge detection method of the present disclosure;
FIG. 3 is a flow chart illustrating another embodiment of an image edge detection method of the present disclosure;
FIG. 4 is a schematic illustration of a jet;
FIG. 5 is a schematic diagram of a template image collection;
FIG. 6 is a schematic diagram illustrating the structure of an embodiment of an image edge detection apparatus of the present disclosure;
Detailed Description
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the present disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of exemplary embodiments in accordance with the present disclosure. As used herein, the singular is also intended to include the plural unless the context clearly indicates otherwise, and furthermore, it is to be understood that the terms "comprises" and/or "comprising" when used in this specification are taken to specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof.
Embodiments of the present disclosure and features of embodiments may be combined with each other without conflict.
In order to make the technical scheme and advantages of the present disclosure more apparent, the present disclosure will be further described in detail below with reference to the accompanying drawings and specific embodiments.
Fig. 1 illustrates an exemplary system architecture 100 to which embodiments of the image edge detection methods or image edge detection apparatus of the present disclosure may be applied.
As shown in fig. 1, the system architecture 100 may include flying platforms 101, 102, 103, a network 104, and a server 105. Synthetic aperture radars may be mounted on the flying platforms 101, 102, 103 to acquire SAR images of the ground. The flying platforms 101, 102, 103 may include aircraft, satellites, spacecraft, and the like. The network 104 serves as a medium providing a communication link between the flying platforms 101, 102, 103 and the server 105. The server 105 may be a server providing various services, such as a background server that performs edge detection on the SAR images acquired by the synthetic aperture radars and feeds the edge detection results back to the user.
It should be noted that the image edge detection method provided in the embodiments of the present disclosure is generally performed by the server 105. Accordingly, the image edge detection device is generally provided in the server 105.
It should be understood that the numbers of flying platforms, networks, and servers in fig. 1 are merely illustrative. There may be any number of flying platforms, networks, and servers, as required by the implementation.
Fig. 2 illustrates a flow 200 of one embodiment of an image edge detection method of the present disclosure. As shown in fig. 2, the image edge detection method of the present embodiment may include the steps of:
in step 201, a target image is acquired.
In the present embodiment, the execution subject of the image edge detection method (e.g., the server 105 shown in fig. 1) may acquire the target image in various ways. The target image may be a SAR image, which may be acquired by a synthetic aperture radar mounted on the flying platform. The target image may include a plurality of objects including, for example, buildings, trees, greenbelts, and the like.
Step 202, determining parameters of the template image according to the target image.
After acquiring the target image, the execution subject may determine the parameters of the template image. Here, the template image is a template used for calculating the ratio gradient. In this embodiment, the template image has a specific shape, which may be, for example, a sector, a rectangle, or an ellipse, and may include at least one region of that shape. By computing over the pixels in the area covered by the template image, the ratio gradient of each pixel in the target image can be calculated. The parameters of the template image characterize the specific shape: if the shape in the template image is a sector, the parameters may include a radius and a central angle; if the shape is a rectangle, the parameters may include a length and a width.
The execution subject may first acquire the value range of each parameter of the template image, and then select suitable values from those ranges based on information about the objects contained in the target image. Specifically, if the number of objects in the target image is greater than a threshold N and the area they occupy is smaller than a threshold P, the minimum of each value range may be taken as the parameter value. Alternatively, if the number of objects in the target image is smaller than a threshold M, the combination of values that minimizes the area of the specific shape may be chosen from the value ranges as the parameter values.
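The selection heuristic above can be sketched as follows. The thresholds N, P, M and the value ranges are left open in the description, so the defaults below (`n_thresh`, `p_thresh`, `m_thresh` and the sector parameter ranges) are purely illustrative assumptions:

```python
import math

def choose_sector_params(num_objects, object_area_frac,
                         r_range=(8, 20), theta_range=(8, 30),
                         n_thresh=10, p_thresh=0.3, m_thresh=3):
    """Pick the sector radius r and central angle theta from their value ranges.

    Mirrors the heuristic in the description: many small objects -> take the
    minimum of each range; few objects -> take the (r, theta) pair whose
    sector area (pi * r^2 * theta / 360) is smallest.  The concrete threshold
    values N, P, M are left open in the text, so the defaults here are
    illustrative assumptions only.
    """
    if num_objects > n_thresh and object_area_frac < p_thresh:
        return r_range[0], theta_range[0]
    if num_objects < m_thresh:
        return min(((r, t)
                    for r in range(r_range[0], r_range[1] + 1)
                    for t in range(theta_range[0], theta_range[1] + 1)),
                   key=lambda rt: math.pi * rt[0] ** 2 * rt[1] / 360.0)
    return r_range[0], theta_range[0]  # fallback: smallest template
```

With the default ranges both branches reduce to the smallest radius and angle, since sector area grows monotonically in both r and θ.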
Step 203, determining a template image set according to the parameters.
After determining the above parameters, the execution subject may determine the template image set. Specifically, the execution subject may construct the specific shape according to the parameters, and then combine shapes at different positions and different orientations to obtain a plurality of template images, thereby obtaining the template image set.
Step 204, for each pixel in the target image, determining a ratio gradient of each pixel according to the pixel and the template image set.
After obtaining the template image set, the execution subject may calculate a ratio gradient for each pixel. Specifically, for each pixel, the execution subject may place the center of each template image at that pixel, and determine a pixel mean for each region of the template from the values of the target-image pixels it covers. Each template image thus yields region means whose ratio gives one candidate value, and the template image set yields a set of such values. The maximum (or minimum) of these values may then be taken as the ratio gradient of the pixel.
Step 205, determining the edge of the target image according to the ratio gradient of each pixel point.
After the ratio gradient of each pixel is determined, the edge of the target image can be obtained. Pixels located on edges have ratio-gradient values much larger than those of non-edge pixels, so edge pixels stand out prominently and the edge of the target image can be extracted.
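As one concrete realization of this final step, the ratio-gradient map can be thresholded globally. The mean-plus-two-standard-deviations default below is an illustrative choice, not a rule prescribed by the source text:

```python
import numpy as np

def edges_from_gradient(grad, thresh=None):
    """Turn a ratio-gradient map into a binary edge map.

    The description only states that edge pixels have much larger ratio
    gradients than non-edge pixels; the global mean + 2*std threshold used
    by default here is an illustrative choice, not the patent's rule.
    """
    g = np.asarray(grad, dtype=float)
    if thresh is None:
        thresh = g.mean() + 2.0 * g.std()
    return g > thresh
```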
The image edge detection method provided by the embodiment of the present disclosure may determine parameters of a template image according to information in a target image. And determining the ratio gradient of each pixel point in the target image according to the determined template image set, thereby realizing the edge detection of the target image and improving the accuracy of the edge detection.
With continued reference to fig. 3, a flow 300 of another embodiment of an image edge detection method according to the present disclosure is shown. As shown in fig. 3, the method of the present embodiment may include the steps of:
in step 301, a target image is acquired.
Step 302, determining parameters of the template image according to pixel information of the target image and a preset sector area value.
The execution subject may determine the contrast and/or brightness of the target image from its pixel information, and then determine the parameters of the template image from the contrast and/or brightness in combination with a preset sector-area value. Specifically, if the shape in the template image is a sector and the contrast of the target image is high, the radius of the sector should be set to a large value and the central angle to a small value. It will be appreciated that the radius and central-angle values lie within preset value ranges, and that the error between the sector area determined by the radius and central angle and the preset sector-area value should remain within a certain range.
In some specific practices, the template image T includes a sector pair, i.e., at least two sector areas. The positions of the sector areas may be set according to the actual application scene or at random; sector pairs combined at different positions serve as different template images. The template image T is a binary (0, 1) image of size (2r+1) × (2r+1) containing two jet-shaped areas (value 1), A and B. A is a sector with origin at T(r+1, r+1), radius r, and opening angle θ, obtained by rotating the ray from T(r+1, r+1) through T(r+1, 2r+1) clockwise and counterclockwise by θ/2 about the point T(r+1, r+1). B is a sector with origin at T(r+1, r+1), radius r, and opening angle θ, obtained by rotating the ray from T(r+1, r+1) through T(r+1, 1) clockwise and counterclockwise by θ/2 about the point T(r+1, r+1).
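A minimal numpy sketch of such a template, where each sector is defined by a distance test and an angular test around the center. This is an illustrative reconstruction: the exact membership rule at sector borders, and the exclusion of the center pixel (which keeps A and B disjoint), are discretization assumptions not fixed by the text:

```python
import numpy as np

def sector_pair_template(r, theta_deg):
    """Binary (2r+1) x (2r+1) sector-pair template.

    Returns boolean masks A (opening toward +x, i.e. toward T(r+1, 2r+1))
    and B (opening toward -x), each of radius r and opening angle theta_deg.
    Border-pixel membership and center exclusion are assumptions.
    """
    size = 2 * r + 1
    yy, xx = np.mgrid[0:size, 0:size]
    dist = np.hypot(xx - r, yy - r)                      # distance from center
    ang = np.degrees(np.arctan2(yy - r, xx - r)) % 360.0  # direction from center
    inside = (dist <= r) & (dist > 0)                    # disk minus center pixel
    half = theta_deg / 2.0

    def sector(center_deg):
        # smallest absolute angular difference to the sector's central direction
        diff = np.abs((ang - center_deg + 180.0) % 360.0 - 180.0)
        return inside & (diff <= half)

    return sector(0.0), sector(180.0)
```

By symmetry of the pixel grid about the center, A and B always contain the same number of pixels.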
An image can be regarded as an energy field in which different areas carry different "potentials"; diffusion and collision between these potentials form the edges between the areas of the image. The gradient at a point on an edge is the difference between the "potential energies" on the two sides of the edge at that point. This potential-energy difference acts along the normal direction of the edge and gradually diffuses toward both sides of the normal. Physically, the shape of the diffusion area is clearly neither rectangular nor elliptical; the "jet" phenomenon suggests its true shape. Imagine a wall holding back water at high pressure: if a small hole is made in the wall, water sprays out of the hole because of the pressure difference between the two sides, and the spray takes a fan shape, see fig. 4. This is why the present disclosure designs the ratio-gradient calculation region to be fan-shaped: the design of this shape has a sound explanation in a real physical phenomenon.
In some alternative implementations of the present embodiment, the execution subject may determine parameters of the template image by: determining the contrast of the target image according to the pixel information of the target image; and determining parameters of the template image according to the contrast and the preset area value of the sector area.
Since the generated template image T is discrete, reasonable settings of the parameters r and θ must be considered. To denoise effectively without drawing non-homogeneous regions into the computation, the discrete area of the sector should be set between 20 and 25 pixels; both r and θ affect this area. In this implementation, if the contrast of the target image is high, the radius may be set to a larger value and the central angle to a smaller value, in combination with the preset sector-area value, for example with r ∈ [8, 20] and θ ∈ [8, 30]. The parameters may be set to r = 9, θ = 30, giving a sector area of 22, or to r = 18, θ = 8, giving a sector area of 25.
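Whether a given (r, θ) pair lands in the 20–25 band can be checked with the continuous-area formula πr²θ/360 as a stand-in for the discrete pixel count (an approximation — the true discrete area depends on how the sector is rasterized):

```python
import math

def discrete_sector_area(r, theta_deg):
    """Continuous sector area pi * r^2 * theta / 360, rounded to whole pixels.

    An approximation of the discrete sector area; the exact count depends on
    the rasterization rule used for the template.
    """
    return round(math.pi * r * r * theta_deg / 360.0)

# All (r, theta) pairs in the stated ranges whose area lands in the 20-25 band
candidates = [(r, t)
              for r in range(8, 21)
              for t in range(8, 31)
              if 20 <= discrete_sector_area(r, t) <= 25]
```

Both example settings from the text, (r = 9, θ = 30) and (r = 18, θ = 8), fall inside this band under the approximation.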
In some alternative implementations of the present embodiment, the execution subject may determine parameters of the template image by: determining detail information of the target image according to pixel information of the target image; and determining parameters of the template image according to the detail information and the preset sector area value.
In this implementation, the execution subject may determine the detail information of the target image from its pixel information. Specifically, the execution subject may judge, from the pixel values within each unit area of the target image, whether that area contains detail. If the amount of detail in the target image is large (greater than a preset number), the radius of the sector may be set to a smaller value and the central angle to a larger value.
Step 303, determining a sector pair according to the radius and the central angle by taking the origin of a preset coordinate system as a central point; and rotating the sector pair for N times by taking the origin as a center point, and determining a template image set according to the sector pair obtained by each rotation.
In this embodiment, after determining the radius and central angle of the sector, the origin of the preset coordinate system may be taken as the center point, and the sector pair may be determined according to the radius and central angle. Specifically, the included angle between the middle lines of the two sectors in the sector pair can be set according to the actual application scene. In some specific practices, the angle between the midlines of the two sectors may be 180 °.
The execution body may take a sector pair as a template image. Then, the execution subject may rotate the sector pair in the template image with the origin of the coordinate system as a center point, and may obtain one template image every rotation. And a plurality of template images can be obtained through multiple times of rotation, so that a template image set is obtained.
In some alternative implementations of this embodiment, the sectors A and B in the image T are rotated counterclockwise simultaneously about T(r+1, r+1), N−1 times, by an angle α each time, yielding N−1 further template images with different sector positions. Together with the initial template, these form the template image set, denoted TN = {T1, T2, ..., TN}, where N × α = 180°. The template image set is shown in fig. 5.
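Since each sector is defined by an angular test around the template center, "rotating" the pair reduces to offsetting the sector's central direction. A sketch of generating the set TN under that assumption (border-pixel membership and center exclusion are, again, discretization assumptions):

```python
import numpy as np

def template_set(r, theta_deg, n):
    """Build n binary sector-pair templates T1..Tn of size (2r+1) x (2r+1).

    The pair's axis is offset by alpha = 180/n degrees per step, so that
    n * alpha = 180 degrees as in the description.  The exact pixel
    membership rule at the sector borders is an assumption.
    """
    size = 2 * r + 1
    yy, xx = np.mgrid[0:size, 0:size]
    dist = np.hypot(xx - r, yy - r)
    ang = np.degrees(np.arctan2(yy - r, xx - r)) % 360.0
    inside = (dist <= r) & (dist > 0)   # center excluded so A and B stay disjoint
    half = theta_deg / 2.0

    def sector(center_deg):
        diff = np.abs((ang - center_deg + 180.0) % 360.0 - 180.0)
        return inside & (diff <= half)

    alpha = 180.0 / n
    return [(sector(k * alpha), sector(k * alpha + 180.0)) for k in range(n)]
```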
Step 304, for each pixel point in the target image, the center of each template image in the template image set is placed at the pixel point, and the ratio gradient of the corresponding area of each template image is calculated respectively.
After the template image set is obtained, the execution subject may process each pixel in the target image. Specifically, for each pixel, the execution subject may place the center of each template image in the template image set at the pixel and compute the pixel values of the region covered by each template; from these values, the ratio gradient corresponding to the pixel is obtained.
In some alternative implementations of this embodiment, the executing body may first calculate a gray-level mean for the corresponding region of each template image: the pixel values within the region are summed and the sum is divided by the number of pixels in the region. The gray means are then combined into a set of processing-result values, and one of these values is selected as the ratio gradient of the pixel.
For a pixel I(i, j) in a SAR image I of size M × N, with i ∈ [r+1, M−r] and j ∈ [r+1, N−r], the template image set TN = {T1, T2, ..., TN} is used to calculate the ratio gradient of the pixel, giving N different results, denoted SN = {S1, S2, ..., SN}.
The specific calculation is as follows:
For the k-th template image Tk, the center Tk(r+1, r+1) of the template is aligned with the pixel I(i, j); the sectors A and B of Tk then identify two sector regions IkA and IkB of the image I. The gray-level means mkA and mkB of the regions IkA and IkB are computed, giving the result Sk = mkA / mkB. Applying this to the whole template set TN = {T1, T2, ..., TN} yields the N results SN = {S1, S2, ..., SN}.
Then all values in SN = {S1, S2, ..., SN} are inverted to obtain SN' = {1/S1, 1/S2, ..., 1/SN}. The maximum over SN and SN', Smax = max{S1, S2, ..., SN, 1/S1, 1/S2, ..., 1/SN}, is taken as the ratio gradient G(i, j) of the pixel I(i, j).
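The per-pixel computation of Sk = mkA / mkB and G(i, j) = max(SN ∪ SN') can be sketched as below; the small `eps` guard against a zero denominator is an addition not present in the source text:

```python
import numpy as np

def ratio_gradient(img, i, j, templates):
    """Ratio gradient G(i, j) = max over all templates of max(S_k, 1/S_k),
    where S_k is the ratio of the gray means of regions A and B under the
    k-th template centered on pixel (i, j).

    `templates` is a list of (A, B) boolean masks, each of size
    (2r+1) x (2r+1).  The eps guard against zero means is an assumption.
    """
    r = templates[0][0].shape[0] // 2
    patch = img[i - r:i + r + 1, j - r:j + r + 1].astype(float)
    eps = 1e-12
    best = 0.0
    for A, B in templates:
        m_a = patch[A].mean()   # gray mean of region IkA
        m_b = patch[B].mean()   # gray mean of region IkB
        s = (m_a + eps) / (m_b + eps)
        best = max(best, s, 1.0 / s)
    return best
```

On a step edge the ratio of the two region means deviates strongly from 1 in one orientation, while on homogeneous regions it stays close to 1, which is what makes the measure noise-robust under multiplicative speckle.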
Step 305, determining the edge of the target image according to the ratio gradient of each pixel point.
According to the image edge detection method provided by this embodiment of the disclosure, the jet-shaped (fan-shaped) design of the ratio-gradient calculation region allows accurate image-edge ratio-gradient information to be computed even under multiplicative noise, makes the extracted image edges more distinct, and improves the quality of subsequent advanced applications of the image data (such as target detection, recognition, and tracking). The jet-shaped calculation region can also adapt to extracting edge contours with large bending changes.
With further reference to fig. 6, as an implementation of the method shown in the foregoing figures, the present disclosure provides an embodiment of an image edge detection apparatus, where the embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 2, and the apparatus may be specifically applied to various electronic devices.
As shown in fig. 6, the image edge detection apparatus 600 of the present embodiment includes: an image acquisition unit 601, a parameter determination unit 602, an image determination unit 603, a gradient determination unit 604, and an edge detection unit 605.
An image acquisition unit 601 is configured to acquire a target image.
The parameter determination unit 602 is configured to determine parameters of the template image from the target image.
An image determination unit 603 is configured to determine a set of template images from the parameters.
The gradient determining unit 604 is configured to determine, for each pixel point in the target image, a ratio gradient of each pixel point from the pixel point and the template image set.
An edge detection unit 605 configured to determine an edge of the target image from the ratio gradient of each pixel point.
In summary, in the technical scheme of the disclosure, the jet-shaped (fan-shaped) design of the ratio-gradient calculation region allows accurate image-edge ratio-gradient information to be computed even under multiplicative noise, makes the extracted image edges more distinct, and improves the quality of subsequent advanced applications of the image data (such as target detection, recognition, and tracking). The jet-shaped calculation region can also adapt to extracting edge contours with large bending changes.
The foregoing description of the preferred embodiments of the present disclosure is not intended to limit the disclosure, but rather to cover all modifications, equivalents, improvements and alternatives falling within the spirit and principles of the present disclosure.

Claims (8)

1. An image edge detection method, comprising:
acquiring a target image;
determining parameters of a template image according to the target image, wherein the template image comprises a sector pair, and the parameters comprise the radius and central angle of the sector;
determining a template image set according to the parameters;
for each pixel point in the target image, determining the ratio gradient of each pixel point according to the pixel point and the template image set;
determining the edge of the target image according to the ratio gradient of each pixel point;
the determining parameters of the template image according to the target image comprises the following steps:
determining the contrast and/or brightness of the target image according to the pixel information of the target image;
determining parameters of the template image according to the contrast and/or brightness in combination with a preset sector area value;
for each pixel point in the target image, determining the ratio gradient of each pixel point according to the pixel point and the template image set, including:
and for each pixel point in the target image, placing the center of each template image in the template image set at the pixel point, and respectively calculating the ratio gradient of the corresponding area of each template image.
2. The method of claim 1, wherein the determining parameters of the template image according to the pixel information of the target image and the preset sector area value comprises:
determining the contrast of the target image according to the pixel information of the target image;
and determining parameters of the template image according to the contrast ratio and the preset sector area value.
3. The method of claim 1, wherein the determining parameters of the template image according to the pixel information of the target image and the preset sector area value comprises:
determining detail information of the target image according to pixel information of the target image;
and determining parameters of the template image according to the detail information and the preset sector area value.
4. The method of claim 1, wherein the determining a set of template images from the parameters comprises:
taking an origin of a preset coordinate system as a center point, and determining a sector pair according to the radius and the central angle;
and rotating the sector pair for N times by taking the origin as a center point, and determining the template image set according to the sector pair obtained by each rotation.
5. The method of claim 1, wherein the midlines of the two sectors in the sector pair are collinear.
6. The method of claim 4, wherein said rotating the pair of sectors N times comprises:
the fan-shaped pair is rotated by a preset angle each time in the same direction until the end of N rotations, wherein n×preset angle=180°.
7. The method of claim 1, wherein the separately computing the ratio gradient of the corresponding region of each template image comprises:
for a corresponding region of each template image, determining a gray average value of the corresponding region;
from each gray level mean, a ratio gradient is determined.
8. An image edge detection apparatus comprising:
an image acquisition unit configured to acquire a target image;
a parameter determination unit configured to determine a parameter of a template image from the target image, wherein the template image includes a pair of sectors, the parameter including a radius and a central angle of the sector;
an image determining unit configured to determine a set of template images according to the parameters;
a gradient determining unit configured to determine, for each pixel in the target image, a ratio gradient of each pixel from the pixel and the set of template images;
an edge detection unit configured to determine an edge of the target image according to a ratio gradient of each pixel point;
the parameter determination unit is further configured to:
determining the contrast and/or brightness of the target image according to the pixel information of the target image;
determining parameters of the template image according to the contrast and/or brightness in combination with a preset sector area value;
the gradient determination unit is further configured to:
and for each pixel point in the target image, placing the center of each template image in the template image set at the pixel point, and respectively calculating the ratio gradient of the corresponding area of each template image.
CN202311017604.2A 2023-08-14 2023-08-14 Image edge detection method and device Active CN116740115B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311017604.2A CN116740115B (en) 2023-08-14 2023-08-14 Image edge detection method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311017604.2A CN116740115B (en) 2023-08-14 2023-08-14 Image edge detection method and device

Publications (2)

Publication Number Publication Date
CN116740115A CN116740115A (en) 2023-09-12
CN116740115B true CN116740115B (en) 2023-11-17

Family

ID=87910083

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311017604.2A Active CN116740115B (en) 2023-08-14 2023-08-14 Image edge detection method and device

Country Status (1)

Country Link
CN (1) CN116740115B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108765288A (en) * 2018-05-25 2018-11-06 杭州电子科技大学 Edge-preservation-based POCS image super-resolution reconstruction method
CN110533679A (en) * 2019-07-29 2019-12-03 西安电子科技大学 SAR image edge detection method based on logarithmic transformation and Gabor convolution
CN111382658A (en) * 2019-11-14 2020-07-07 北京航空航天大学 Road traffic sign detection method in natural environment based on image gray gradient consistency
CN112561940A (en) * 2020-12-08 2021-03-26 中国人民解放军陆军工程大学 Dense multi-target parameter extraction method and device and terminal equipment
CN114332182A (en) * 2022-03-14 2022-04-12 北京化工大学 SAR image registration method, equipment and medium based on multi-feature constraint
CN114757950A (en) * 2022-06-15 2022-07-15 深圳瀚维智能医疗科技有限公司 Ultrasonic image processing method, device and computer readable storage medium
CN115439607A (en) * 2022-09-01 2022-12-06 中国民用航空总局第二研究所 Three-dimensional reconstruction method and device, electronic equipment and storage medium
CN115457063A (en) * 2022-08-23 2022-12-09 武汉海微科技有限公司 Method, device and equipment for extracting edge of circular hole of PCB (Printed Circuit Board) and storage medium
CN116205907A (en) * 2023-04-26 2023-06-02 苏州特铭精密科技有限公司 Decorative plate defect detection method based on machine vision

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"An Improved SAR Image Edge Detection Algorithm"; Bai Zhengyao et al.; Journal of Signal Processing (《信号处理》); pp. 133-136 *

Also Published As

Publication number Publication date
CN116740115A (en) 2023-09-12

Similar Documents

Publication Publication Date Title
CN107301661B (en) High-resolution remote sensing image registration method based on edge point features
US11443437B2 (en) Vibe-based three-dimensional sonar point cloud image segmentation method
CN109949349B (en) Multi-mode three-dimensional image registration and fusion display method
CN108596980A (en) Circular target vision positioning precision assessment method, device, storage medium and processing equipment
CN109325971B (en) Image registration method and device
JP2004516533A (en) Synthetic aperture radar and forward-looking infrared image superposition method
CN106373128B (en) Method and system for accurately positioning lips
CN106569191A (en) Method of acquiring target RCS by using high resolution imaging
CN112767245B (en) System and method for map splicing construction based on real-time video images of multiple unmanned aerial vehicles
CN102750691B (en) Corner pair-based image registration method for Cauchy-Schwarz (CS) divergence matching
CN112037287A (en) Camera calibration method, electronic device and storage medium
CN114114267A (en) Target attitude estimation method based on projection matching of spin space target model
CN114519778B (en) Target three-dimensional reconstruction method, device, equipment and medium of multi-angle SAR data
CN113419242A (en) Tomographic SAR whole-scene point cloud acquisition method and device
CN117094917A (en) Cardiovascular 3D printing data processing method
US20140092217A1 (en) System for correcting rpc camera model pointing errors using 2 sets of stereo image pairs and probabilistic 3-dimensional models
CN113781478B (en) Oil tank image detection method, oil tank image detection device, electronic equipment and computer readable medium
CN115164900A (en) Omnidirectional camera based visual aided navigation method and system in urban environment
CN116740115B (en) Image edge detection method and device
CN113658196A (en) Method and device for detecting ship in infrared image, electronic equipment and medium
EP4009275A1 (en) Golf ball top-view detection method and system, and storage medium
CN116740332B (en) Method for positioning center and measuring angle of space target component on satellite based on region detection
CN109345583B (en) SAR target image geometric dimension estimation method based on OMP
Zhao et al. Principal direction-based Hough transform for line detection
CN116704590A (en) Iris image correction model training method, iris image correction device and iris image correction medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant