CN113034576B - High-precision positioning method, system and medium based on contour - Google Patents

High-precision positioning method, system and medium based on contour

Info

Publication number
CN113034576B
CN113034576B (application CN202110181351.7A)
Authority
CN
China
Prior art keywords
image
pixel
sub
template
positioning
Prior art date
Legal status
Active
Application number
CN202110181351.7A
Other languages
Chinese (zh)
Other versions
CN113034576A (en)
Inventor
刘彬
仝西领
Current Assignee
Shandong Yingxin Computer Technology Co Ltd
Original Assignee
Shandong Yingxin Computer Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shandong Yingxin Computer Technology Co Ltd filed Critical Shandong Yingxin Computer Technology Co Ltd
Priority to CN202110181351.7A priority Critical patent/CN113034576B/en
Publication of CN113034576A publication Critical patent/CN113034576A/en
Application granted granted Critical
Publication of CN113034576B publication Critical patent/CN113034576B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/13: Edge detection
    • G06T 7/136: Segmentation; Edge detection involving thresholding

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a contour-based high-precision positioning method, which comprises the following steps: acquiring a first image and performing an optimization processing step on the first image to obtain a corresponding second image; setting a gradient change threshold and performing a sub-pixel optimization step on the second image based on the gradient change threshold to obtain a first sub-pixel outline corresponding to the second image; performing a template creation step based on the first sub-pixel outline to obtain a plurality of first image templates; and acquiring a first image to be positioned, comparing it against the first image templates to obtain a positioning template, and positioning the first image to be positioned based on the positioning template. The method optimizes the image to be processed with HDR technology and improves its precision with sub-pixel-level optimization, finally achieving high-precision, fast positioning of the object in the area to be processed, saving time and improving the processing quality of the visual positioning method.

Description

High-precision positioning method, system and medium based on contour
Technical Field
The invention relates to the technical field of visual positioning, in particular to a high-precision positioning method, a high-precision positioning system and a high-precision positioning medium based on contours.
Background
In server production, positioning is required when firmware is tested or installed. Existing positioning methods are based on gray values, fixed shapes or traditional positioning techniques, so they cannot be applied flexibly to various scenes and cannot accurately and quickly position an object in the area that needs to be processed.
Disclosure of Invention
The invention mainly solves the problem that existing visual positioning methods cannot accurately and quickly position the object in the area that needs to be processed.
In order to solve the above technical problem, the invention adopts the following technical solution: a contour-based high-precision positioning method comprising the following steps:
acquiring a first image, and performing an optimization processing step on the first image to obtain a second image;
setting a gradient change threshold, and executing a sub-pixel optimization step on the second image based on the gradient change threshold to obtain a first sub-pixel outline;
executing a template creating step based on the first sub-pixel outline to obtain a first image template;
acquiring a first image to be positioned, and comparing the first image to be positioned based on the first image template to obtain a positioning template;
and positioning the first image to be positioned based on the positioning template.
As an improvement, the sub-pixel optimization step includes:
configuring an edge detection algorithm and a bilinear interpolation algorithm;
acquiring a first pixel point of the second image based on the edge detection algorithm and the gradient change threshold;
and acquiring the first sub-pixel outline based on the bilinear interpolation algorithm and the first pixel point.
As an improvement, the template creating step includes:
setting a scaling range, an angle range and an offset step length;
calculating the number of templates based on the angle range and the offset step length;
performing contour offset processing on the first sub-pixel contour based on the angle range, the offset step length and the scaling range to obtain second sub-pixel contours corresponding to the number of the templates;
and setting the second sub-pixel outline as the first image template.
As an improvement, the step of performing a comparison step on the first image to be positioned based on the first image template further includes:
configuring a semantic segmentation algorithm, and processing the first image to be positioned by adopting the semantic segmentation algorithm to obtain a second image to be positioned;
executing the optimization processing step on the second image to be positioned to obtain a third image to be positioned;
performing the sub-pixel optimization step on the third image to be positioned to obtain a third sub-pixel outline;
performing the comparing step on the third sub-pixel outline and the first image template.
As an improvement, the step of obtaining a first pixel point of the second image based on the edge detection algorithm and the gradient change threshold further includes:
extracting a second pixel point of the second image;
acquiring a first color value of the second pixel point;
acquiring a gradient change parameter of the first color value by adopting the edge detection algorithm;
and selecting, as the first pixel point, the second pixel point whose gradient change parameter meets the gradient change threshold.
As an improvement, the step of obtaining the first sub-pixel outline based on the bilinear interpolation algorithm and the first pixel point further includes:
acquiring a first pixel coordinate of the first pixel point in the second image;
setting a first proximity threshold and a first proximity range of the first pixel coordinate;
selecting, as a second pixel coordinate, a coordinate that lies within the first proximity range and whose distance from the first pixel coordinate equals the first proximity threshold;
and acquiring the first sub-pixel outline based on the second pixel coordinate and the bilinear interpolation algorithm.
As an improvement, the step of obtaining the first sub-pixel outline based on the second pixel coordinate and the bilinear interpolation algorithm further comprises:
calculating a sub-pixel value of the second pixel coordinate by adopting the bilinear interpolation algorithm;
fitting the sub-pixel values by adopting a least square method to obtain a first graph;
acquiring a central coordinate of the first graph, and setting the central coordinate as a sub-pixel coordinate corresponding to the first pixel coordinate;
and integrating the sub-pixel coordinates to obtain the first sub-pixel outline.
As an improvement, the comparison step comprises:
setting a first offset threshold and a second offset threshold;
comparing the third sub-pixel outline with the second sub-pixel outline, and calculating an angle offset value and a scaling offset value of the second sub-pixel outline relative to the third sub-pixel outline based on the scaling range and the angle range;
comparing whether the angle offset value and the scaling offset value are not greater than the first offset threshold and the second offset threshold, respectively; and if so, selecting the second sub-pixel outline corresponding to the angle offset value and the scaling offset value as the positioning template of the first image to be positioned.
The invention also provides a high-precision positioning system based on the contour, which comprises:
the system comprises an image optimization module, a sub-pixel processing module, a template creating module and an image positioning module;
the image optimization module is used for acquiring a first image and executing an optimization processing step on the first image to obtain a second image;
the sub-pixel processing module is used for setting a gradient change threshold value and executing a sub-pixel optimization step on the second image based on the gradient change threshold value to obtain a first sub-pixel outline;
the template creating module is used for executing a template creating step according to the first sub-pixel outline to obtain a first image template;
the image positioning module is used for acquiring a first image to be positioned and comparing the first image to be positioned based on the first image template to obtain a positioning template; and the image positioning module positions the first image to be positioned through the positioning template.
The invention also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, carries out the steps of the contour-based high-precision positioning method.
The invention has the beneficial effects that:
1. The contour-based high-precision positioning method optimizes the image to be processed with HDR technology, improves its precision with sub-pixel-level optimization, and, through efficient comparison against a plurality of sub-pixel-level templates, finally achieves high-precision, fast positioning of the object in the area to be processed. This saves time, improves the processing quality of the visual positioning method, and fills a gap in the prior art.
2. The contour-based high-precision positioning system, through the cooperation of the image optimization module, the sub-pixel processing module, the template creating module and the image positioning module, optimizes the image to be processed with HDR technology, improves its precision with sub-pixel-level optimization, executes efficient comparison against a plurality of sub-pixel-level templates, and finally achieves high-precision, fast positioning of the object in the area to be processed, likewise saving time, improving the processing quality of the visual positioning method and filling a gap in the prior art.
3. The computer-readable storage medium guides the image optimization module, the sub-pixel processing module, the template creating module and the image positioning module to cooperate, realizing the same HDR optimization, sub-pixel-level precision improvement and efficient template comparison, and finally the high-precision, fast positioning of the object in the area to be processed; it saves time, improves the processing quality of the visual positioning method, fills a gap in the prior art, and effectively increases the operability of the contour-based high-precision positioning method.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of a contour-based high-precision positioning method according to embodiment 1 of the present invention;
FIG. 2 is a schematic diagram of a contour-based high-precision positioning method according to embodiment 1 of the present invention;
FIG. 3 is a schematic diagram illustrating an implementation effect of the contour-based high-precision positioning method according to embodiment 1 of the present invention;
fig. 4 is an architecture diagram of a contour-based high-precision positioning system according to embodiment 2 of the present invention.
Detailed Description
The following detailed description of the preferred embodiments of the present invention, taken in conjunction with the accompanying drawings, will make the advantages and features of the invention easier for those skilled in the art to understand, and thus more clearly define the scope of the invention.
In the description of the present invention, it should be noted that the described embodiments are only a part of the embodiments of the present invention, not all of them; all other embodiments obtained by a person of ordinary skill in the art without creative effort based on the embodiments of the present invention fall within the protection scope of the present invention.
In the description of the present invention, it should be noted that HDR (High Dynamic Range) is a high-dynamic-range rendering technique, Canny is an edge detection algorithm, RGB (Red, Green, Blue) is a color standard, and DenseNets is a semantic segmentation algorithm.
In the description of the present invention, it should be noted that the terms "first", "second", "third", and "fourth" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the description of the present invention, it should be noted that, unless explicitly specified or limited otherwise, the terms "optimization processing step", "sub-pixel optimization step", "sub-pixel outline", "template creation step", "image template", "image to be positioned", "comparison step", "edge detection algorithm", "bilinear interpolation algorithm", "gradient change threshold", "zoom range", "angle range", "offset step", "gradient change parameter", "color value", "positioning template", "outline offset processing", "semantic segmentation algorithm" should be understood in a broad sense. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
Example 1
The present embodiment provides a method for high-precision positioning based on a contour, as shown in fig. 1 to 3, including the following steps:
s100, acquiring a first image, and performing an optimization processing step on the first image to obtain a second image;
step S100 specifically includes:
s101, configuring an image acquisition device, and capturing a plurality of first images with the same specification and different exposure levels by using the image acquisition device;
the optimization processing step is to adopt HDR technology to carry out optimization processing on a plurality of first images to obtain second images;
for example: acquire the first exposure value corresponding to each captured first image; acquire the response function of the image acquisition device; acquire the pixel values of 256 pixel points in each first image; set a first threshold and a second threshold; and judge the nature of the response function. If the response function is linear, screen out the pixel values that are greater than or equal to the first threshold and less than or equal to the second threshold; the pixel points corresponding to these values are normally exposed, normally bright pixel points and are defined as good pixel points. The pixel points corresponding to the remaining pixel values are over-exposed or under-exposed, i.e. too bright or too dark, and are defined as bad pixel points. Divide the pixel value of each bad pixel point by the exposure value of the first image it belongs to (the exposure value refers to the exposure time) to obtain the brightness value of that bad pixel point; distribute this brightness value equally among the good pixel points; and synthesize the good pixel points to obtain the second image of this embodiment. If the response function is nonlinear, evaluate the response function, convert it into a linear function, and then perform the above steps;
in the step, the image is optimized through the HDR technology, so that the obtained image is stronger in applicability, and the application range of the step is further widened.
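As an illustration only, the following Python sketch shows a simplified linear-response version of the optimization processing step described above; the well-exposed thresholds (20 and 235), the function name and the equal-weight averaging are assumptions made for the example, not values taken from the patent.

import numpy as np

def merge_exposures(first_images, exposure_times, low=20, high=235):
    # Simplified HDR-style merge assuming a linear camera response:
    # pixels inside [low, high] are treated as well exposed; each value is
    # divided by its exposure time to estimate brightness, and the estimates
    # from the well-exposed captures are averaged per pixel.
    stack = [img.astype(np.float64) for img in first_images]
    acc = np.zeros_like(stack[0])
    weight = np.zeros_like(stack[0])
    for img, t in zip(stack, exposure_times):
        good = ((img >= low) & (img <= high)).astype(np.float64)
        acc += good * (img / t)
        weight += good
    radiance = acc / np.maximum(weight, 1e-6)
    # Rescale to an 8-bit "second image" for the later contour steps.
    second_image = radiance / max(radiance.max(), 1e-6) * 255.0
    return second_image.astype(np.uint8)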
S200, performing a sub-pixel optimization step on the second image to obtain a first sub-pixel outline of the second image; executing a template creating step based on the sub-pixel outline to obtain a plurality of image templates;
step S200 specifically includes:
acquiring the image specification parameters of the second image, and setting the image template area specification according to the image specification parameters; acquiring the field of view of the image acquisition device and the position variation of the object to be positioned: when the field of view of the image acquisition device is large and the position variation of the object to be positioned is small, the image template area specification is set to 3-5 times the image specification parameter of the second image (the multiple is not limited and can be set according to the actual situation); when the field of view of the image acquisition device is large and the position variation of the object to be positioned is large, the image template area specification is set to the field-of-view specification of the whole camera; this step further determines the ROI region for template positioning;
setting an angle range and an offset step length; calculating the number of image templates based on the angle range and the offset step length; the number of the image templates = angle range/offset step length, the angle range and the offset step length can be set according to each test condition, and the specific value range is not limited;
setting a gradient change threshold; acquiring the first gray value or RGB value (i.e. the first color value) of each second pixel point of the second image; detecting the gradient change parameter of the first gray value or RGB value with the Canny edge detection algorithm; extracting, from the second pixel points, those whose gradient change parameter meets the gradient change threshold and defining them as first pixel points; and acquiring the first pixel coordinate of each first pixel point in the second image. The gradient change threshold is likewise set according to each test condition; normally the second pixel points whose gradient changes quickly are selected, and the gradient change threshold is not limited;
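A minimal sketch of this selection, assuming OpenCV is available; the Canny thresholds (50, 150), the default gradient change threshold and the function name are illustrative assumptions, and the Sobel gradient magnitude stands in for the "gradient change parameter" of the text.

import cv2
import numpy as np

def first_pixel_points(second_image, grad_threshold=80.0):
    # Candidate edge pixels from Canny, kept only where the Sobel gradient
    # magnitude (the "gradient change parameter") meets the threshold.
    gray = second_image if second_image.ndim == 2 else cv2.cvtColor(second_image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    magnitude = np.hypot(gx, gy)
    ys, xs = np.where((edges > 0) & (magnitude >= grad_threshold))
    return np.stack([xs, ys], axis=1)  # first pixel coordinates as (x, y)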
setting a first proximity threshold and a first proximity range for the first pixel coordinate, and acquiring, as second pixel coordinates, the coordinates that lie within the first proximity range of the first pixel coordinate and whose distance from it equals the first proximity threshold; the first proximity threshold and the first proximity range are set according to the specific first pixel coordinate, and normally four corresponding second pixel coordinates are finally obtained;
calculating the sub-pixel value of each second pixel coordinate with a bilinear interpolation algorithm; fitting the sub-pixel values with the least square method to obtain a first graph; and acquiring the central coordinate of the first graph, which is the sub-pixel coordinate corresponding to the first pixel coordinate;
for example: if the obtained first graph is a circle, obtaining the circle center coordinate of the circle; the center coordinate is the sub-pixel coordinate corresponding to the first pixel coordinate.
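One plausible reading of this fitting step, sketched below, treats the interpolated neighbourhood samples as 2-D points and fits the circle of the example by a least-squares (Kasa) fit; the function name and this interpretation are assumptions for illustration, not the patent's prescribed implementation.

import numpy as np

def fit_circle_center(points):
    # Least-squares circle fit: solve x^2 + y^2 + D*x + E*y + F = 0 for D, E, F
    # and return the center (-D/2, -E/2) as the sub-pixel coordinate.
    pts = np.asarray(points, dtype=np.float64)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x ** 2 + y ** 2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    return (-D / 2.0, -E / 2.0)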
The bilinear interpolation algorithm works as follows: set a bilinear interpolation formula; calculate the gray value or RGB value of each second pixel coordinate; calculate the weight of each second pixel coordinate with respect to the first pixel coordinate; and substitute the weights and the gray values/RGB values into the bilinear interpolation formula to obtain the sub-pixel value;
for example: the bilinear interpolation formula is set as f(i, j) = w1*p1 + w2*p2 + w3*p3 + w4*p4; if the first pixel coordinate is (2.5, 4.5), the corresponding second pixel coordinates are (2, 4), (2, 5), (3, 4) and (3, 5); pi (i = 1, 2, 3, 4) is the gray value/RGB value of the i-th second pixel coordinate, and wi (i = 1, 2, 3, 4) is the weight corresponding to that second pixel coordinate.
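The formula above maps directly to a few lines of Python; this sketch is illustrative and the function name is an assumption.

import numpy as np

def bilinear_subpixel(image, x, y):
    # f(i, j) = w1*p1 + w2*p2 + w3*p3 + w4*p4 over the four surrounding
    # integer coordinates (the second pixel coordinates); the weights come
    # from the fractional parts of the first pixel coordinate (x, y).
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = x0 + 1, y0 + 1
    dx, dy = x - x0, y - y0
    p1, p2 = image[y0, x0], image[y0, x1]
    p3, p4 = image[y1, x0], image[y1, x1]
    w1, w2 = (1 - dx) * (1 - dy), dx * (1 - dy)
    w3, w4 = (1 - dx) * dy, dx * dy
    return w1 * p1 + w2 * p2 + w3 * p3 + w4 * p4

# e.g. bilinear_subpixel(gray, 2.5, 4.5) combines the values at
# (2, 4), (3, 4), (2, 5) and (3, 5), matching the example above.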
Integrating the sub-pixel coordinates to obtain a first sub-pixel outline of the second image;
setting a zooming range, wherein the zooming range is an interval in which an object to be positioned is zoomed in or zoomed out;
creating image templates (i.e. second sub-pixel contours), one for each combination of angle, offset step and scaling within the set ranges, up to the calculated number of templates; for this, the first sub-pixel contour is subjected to contour offset processing, which changes the corresponding sub-pixel coordinates. Contour offset processing, for example: if the angle range is 0 to 5 degrees, the offset step is 1 and the zoom range is 10%, the first sub-pixel contour is correspondingly shifted 5 times within 0 to 5 degrees, and a correspondingly 10%-reduced and 10%-enlarged contour is generated for each; this example only illustrates the step and does not limit the angle range, offset step or zoom range;
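A minimal sketch of contour offset processing using the example values above (0 to 5 degrees, offset step 1, 10% zoom); the function name, the centroid as rotation center and the list-of-dictionaries template format are assumptions made for the illustration.

import numpy as np

def create_templates(first_contour, angle_start=0.0, angle_end=5.0,
                     offset_step=1.0, zoom_range=0.10):
    # Each template (second sub-pixel contour) is the first sub-pixel contour
    # rotated about its centroid by one angle in the range, at the original,
    # reduced and enlarged scales.
    contour = np.asarray(first_contour, dtype=np.float64)
    centroid = contour.mean(axis=0)
    angles = np.arange(angle_start, angle_end + offset_step, offset_step)
    scales = (1.0 - zoom_range, 1.0, 1.0 + zoom_range)
    templates = []
    for angle in angles:
        theta = np.deg2rad(angle)
        rot = np.array([[np.cos(theta), -np.sin(theta)],
                        [np.sin(theta),  np.cos(theta)]])
        for scale in scales:
            points = (contour - centroid) @ rot.T * scale + centroid
            templates.append({"angle": angle, "scale": scale, "points": points})
    return templates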
in the step, the image is optimized through a unique sub-pixel optimization step, so that the image contour with higher precision is obtained, the sub-pixel optimization step in the step can greatly shorten the image processing time, and the image processing efficiency is improved.
S300, obtaining an image to be positioned, performing an approximate screening step on the image to be positioned to obtain a third image, and performing an optimization processing step and a sub-pixel optimization step on the third image to obtain a second sub-pixel outline; and performing a comparison step on the second sub-pixel outline and the image template, and positioning the image to be positioned based on the comparison step.
Step S300 specifically includes:
screening the image to be positioned (i.e. the first image to be positioned) with the DenseNets semantic segmentation algorithm and a deep learning algorithm to obtain a third image (i.e. a second image to be positioned); the algorithm repeatedly labels and trains on a plurality of images to be positioned so as to generate an image closer to the part to be positioned. The point of screening with the DenseNets semantic segmentation algorithm and deep learning is to reduce redundancy and improve the efficiency of the visual positioning of the image;
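The DenseNets segmentation network itself is not reproduced here; the sketch below only illustrates the screening outcome, assuming a binary mask from such a model is already available, and crops the image to the masked part to reduce redundancy before the comparison step. The function name and the margin value are assumptions.

import numpy as np

def crop_to_mask(image, mask, margin=10):
    # Keep only the region around the part to be positioned, given a binary
    # segmentation mask (assumed to come from the semantic segmentation model).
    ys, xs = np.where(mask > 0)
    if ys.size == 0:
        return image  # nothing segmented; fall back to the full image
    y0 = max(int(ys.min()) - margin, 0)
    y1 = min(int(ys.max()) + margin + 1, image.shape[0])
    x0 = max(int(xs.min()) - margin, 0)
    x1 = min(int(xs.max()) + margin + 1, image.shape[1])
    return image[y0:y1, x0:x1]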
comparing the second sub-pixel outline with the image templates in turn, and for each comparison calculating a first difference (i.e. an angle offset value) between the angle of the second sub-pixel outline and that of the image template, and a second difference (i.e. a scaling offset value) between their scalings;
setting a first similarity (namely a first offset threshold) and a second similarity (namely a second offset threshold), and comparing the first difference and the second difference with the first similarity and the second similarity respectively;
if the first difference and the second difference are respectively less than or equal to the first similarity and the second similarity, selecting the image template corresponding to the first difference and the second difference; this image template represents the positioning result and is the positioning template. The similarity can be set according to the situation and is generally set between 0.0 and 1; within this range the positioning accuracy of the whole image is better, but the range of 0.0 to 1 is not a limit on the similarity;
Through this step, the pre-established templates can be compared a number of times, finally yielding a high-precision positioning template corresponding to the image to be positioned; the object in the image can then be positioned with this positioning template.
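A minimal sketch of this comparison, under the assumption that the angle and scale of the contour extracted from the image to be positioned have already been estimated (e.g. by registering it against the reference contour); the threshold defaults and the dictionary template format follow the earlier sketch and are not values from the patent.

def select_positioning_template(contour_angle, contour_scale, templates,
                                angle_threshold=1.0, scale_threshold=0.05):
    # Pick the template whose angle offset and scaling offset from the contour
    # of the image to be positioned both fall within the offset thresholds;
    # among the candidates, prefer the smallest combined offset.
    best = None
    for template in templates:
        angle_offset = abs(template["angle"] - contour_angle)
        scale_offset = abs(template["scale"] - contour_scale)
        if angle_offset <= angle_threshold and scale_offset <= scale_threshold:
            score = angle_offset + scale_offset
            if best is None or score < best[0]:
                best = (score, template)
    return None if best is None else best[1]  # the positioning template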
When the contour-based high-precision positioning method described in this embodiment is applied to a production and assembly process, it can position the object in the processing area with high precision, so that the robot used for production and assembly avoids collisions or damage when gripping the object or loading it into a server; it prevents the front/back orientation of the object in the processing area from being misidentified; and it greatly improves production efficiency, saves manpower, and ensures the quality and safety of the object in the processing area.
Example 2
The present embodiment provides a contour-based high-precision positioning system, as shown in fig. 4, including:
the system comprises an image optimization module, a sub-pixel processing module, a template creating module and an image positioning module;
the image optimization module is used for acquiring a first image, and performing HDR optimization processing on the first image to obtain a second image;
after the second image is obtained, the image optimization module sends a first signal to the sub-pixel processing module; after receiving the first signal, the sub-pixel processing module executes a sub-pixel optimization step on the second image to obtain a first sub-pixel outline of the second image; the sub-pixel processing module executes a template establishing step on the sub-pixel outline to obtain a plurality of image templates;
specifically: the sub-pixel processing module sets a gradient change threshold; the sub-pixel processing module acquires the first gray value or RGB value (i.e. the first color value) of each second pixel point of the second image; the sub-pixel processing module detects the gradient change parameter of the first gray value or RGB value with the Canny edge detection algorithm; the sub-pixel processing module extracts, from the second pixel points, those whose gradient change parameter meets the gradient change threshold and defines them as first pixel points; the sub-pixel processing module acquires the first pixel coordinate of each first pixel point in the second image;
the sub-pixel processing module sets a first proximity threshold and a first proximity range for the first pixel coordinate, and acquires, as second pixel coordinates, the coordinates that lie within the first proximity range of the first pixel coordinate and whose distance from it equals the first proximity threshold;
the sub-pixel processing module calculates the sub-pixel value of each second pixel coordinate by adopting a bilinear interpolation algorithm; fitting the sub-pixel values by adopting a least square method to obtain a first graph; acquiring a central coordinate of the first graph; the coordinates are sub-pixel coordinates corresponding to the first pixel coordinates respectively;
integrating the sub-pixel coordinates by a sub-pixel processing module to obtain a first sub-pixel outline of the second image; after the sub-pixel processing module obtains the first sub-pixel outline, a second signal is sent to the template creating module, and after the template creating module receives the second signal, corresponding steps are executed to create an image template;
specifically: the template creating module acquires the image specification parameters of the second image and sets the image template area specification according to the image specification parameters; the template creating module acquires the field of view of the image acquisition device and the position variation of the object to be positioned; when the field of view of the image acquisition device is large and the position variation of the object to be positioned is small, the template creating module sets the image template area specification to 3-5 times the image specification parameter of the second image (the multiple is not limited); when the field of view of the image acquisition device is large and the position variation of the object to be positioned is large, the template creating module sets the image template area specification to the field-of-view specification of the whole camera;
the template creating module sets an angle range and an offset step; the template creating module calculates the number of image templates based on the angle range and the offset step, where the number of image templates = angle range / offset step; the template creating module sets a zoom range, which is the interval in which the object to be positioned may be zoomed in or out; the template creating module creates, for the first sub-pixel outline, image templates corresponding to the number of templates with different angles, offset steps and zoom factors.
The template creating module sends a third signal to the image positioning module after obtaining the image template, the image positioning module obtains an image to be positioned after receiving the third signal, and performs an approximate screening step on the image to be positioned to obtain a third image, and the image positioning module performs an optimization processing step and a sub-pixel optimization step on the third image to obtain a second sub-pixel outline; the image positioning module executes a comparison step on the second sub-pixel outline and the image template, and positions the image to be positioned based on the comparison step;
specifically: the image positioning module screens the image to be positioned with the DenseNets semantic segmentation algorithm and a deep learning algorithm to obtain a third image; the image positioning module compares the second sub-pixel outline with the image templates in turn, and for each comparison calculates a first difference between the angles of the second sub-pixel outline and the image template and a second difference between their scalings; the image positioning module sets a first similarity and a second similarity, and compares the first difference and the second difference with the first similarity and the second similarity respectively; if the first difference and the second difference are respectively less than or equal to the first similarity and the second similarity, the image positioning module selects the image template corresponding to the first difference and the second difference as the positioning template.
Based on the same inventive concept as the contour-based high-precision positioning method in the foregoing embodiments, the present specification further provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the steps of the contour-based high-precision positioning method.
Different from the prior art, the contour-based high-precision positioning method, system and medium described above optimize the image to be processed with HDR technology, improve its precision with sub-pixel-level optimization, and execute efficient comparison steps over a plurality of sub-pixel-level templates.
The numbers of the embodiments disclosed in the embodiments of the present invention are merely for description, and do not represent the merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium, and the storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes performed by the present specification and drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (8)

1. A high-precision positioning method based on a contour is characterized by comprising the following steps:
acquiring a first image, and performing an optimization processing step on the first image to obtain a second image;
setting a gradient change threshold, and executing a sub-pixel optimization step on the second image based on the gradient change threshold to obtain a first sub-pixel outline;
executing a template creating step based on the first sub-pixel outline to obtain a first image template;
acquiring a first image to be positioned, and comparing the first image to be positioned based on the first image template to obtain a positioning template;
positioning the first image to be positioned based on the positioning template;
the template creating step includes: setting a scaling range, an angle range and an offset step length; calculating the number of templates based on the angle range and the offset step; performing contour offset processing on the first sub-pixel contour based on the angle range, the offset step length and the scaling range to obtain second sub-pixel contours corresponding to the number of the templates; setting the second sub-pixel outline as the first image template;
the step of performing a comparison step on the first image to be positioned based on the first image template further comprises: configuring a semantic segmentation algorithm, and processing the first image to be positioned by adopting the semantic segmentation algorithm to obtain a second image to be positioned; executing the optimization processing step on the second image to be positioned to obtain a third image to be positioned; performing the sub-pixel optimization step on the third image to be positioned to obtain a third sub-pixel outline; performing the comparing step on the third sub-pixel outline and the first image template.
2. The contour-based high-precision positioning method according to claim 1, characterized in that: the sub-pixel optimization step comprises:
configuring an edge detection algorithm and a bilinear interpolation algorithm;
acquiring a first pixel point of the second image based on the edge detection algorithm and the gradient change threshold;
and acquiring the first sub-pixel outline based on the bilinear interpolation algorithm and the first pixel point.
3. The contour-based high-precision positioning method according to claim 2, characterized in that: the step of obtaining a first pixel point of the second image based on the edge detection algorithm and the gradient change threshold further comprises:
extracting a second pixel point of the second image;
acquiring a first color value of the second pixel point;
acquiring a gradient change parameter of the first color value by adopting the edge detection algorithm;
and selecting, as the first pixel point, the second pixel point whose gradient change parameter meets the gradient change threshold.
4. A contour-based high-precision positioning method according to claim 2 or 3, characterized in that: the step of obtaining the first sub-pixel outline based on the bilinear interpolation algorithm and the first pixel point further includes:
acquiring a first pixel coordinate of the first pixel point in the second image;
setting a first proximity threshold and a first proximity range of the first pixel coordinate;
selecting, as a second pixel coordinate, a coordinate that lies within the first proximity range and whose distance from the first pixel coordinate equals the first proximity threshold;
and acquiring the first sub-pixel outline based on the second pixel coordinate and the bilinear interpolation algorithm.
5. The contour-based high-precision positioning method according to claim 4, characterized in that: the step of obtaining the first sub-pixel outline based on the second pixel coordinates and the bilinear interpolation algorithm further comprises:
calculating a sub-pixel value of the second pixel coordinate by adopting the bilinear interpolation algorithm;
fitting the sub-pixel values by adopting a least square method to obtain a first graph;
acquiring a central coordinate of the first graph, and setting the central coordinate as a sub-pixel coordinate corresponding to the first pixel coordinate;
and integrating the sub-pixel coordinates to obtain the first sub-pixel outline.
6. The contour-based high-precision positioning method according to claim 1, characterized in that: the comparison step comprises:
setting a first offset threshold and a second offset threshold;
comparing the third sub-pixel outline with the second sub-pixel outline, and calculating an angle offset value and a scaling offset value of the second sub-pixel outline relative to the third sub-pixel outline based on the scaling range and the angle range;
comparing whether the angle offset value and the scaling offset value are not greater than the first offset threshold and the second offset threshold, respectively; and if so, selecting the second sub-pixel outline corresponding to the angle offset value and the scaling offset value as the positioning template of the first image to be positioned.
7. A contour-based high precision positioning system, comprising: the system comprises an image optimization module, a sub-pixel processing module, a template creating module and an image positioning module;
the image optimization module is used for acquiring a first image and performing an optimization processing step on the first image to obtain a second image;
the sub-pixel processing module is used for setting a gradient change threshold value and executing a sub-pixel optimization step on the second image based on the gradient change threshold value to obtain a first sub-pixel outline;
the template creating module is used for executing a template creating step according to the first sub-pixel outline to obtain a first image template; the template creating module is also used for setting a zooming range, an angle range and an offset step length; the template creation module calculates the number of templates based on the angle range and the offset step; the template creating module carries out contour offset processing on the first sub-pixel contours based on the angle range, the offset step length and the scaling range to obtain second sub-pixel contours corresponding to the number of the templates; the template creation module sets the second sub-pixel outline as the first image template;
the image positioning module is used for acquiring a first image to be positioned and comparing the first image to be positioned based on the first image template to obtain a positioning template; the image positioning module positions the first image to be positioned through the positioning template; the image positioning module is also used for configuring a semantic segmentation algorithm, and the image positioning module processes the first image to be positioned by adopting the semantic segmentation algorithm to obtain a second image to be positioned; the image positioning module executes the optimization processing step on the second image to be positioned to obtain a third image to be positioned; the image positioning module executes the sub-pixel optimization step on the third image to be positioned to obtain a third sub-pixel outline; the image positioning module performs the comparing step on the third sub-pixel outline and the first image template.
8. A computer-readable storage medium, having a computer program stored thereon, wherein the computer program, when being executed by a processor, implements the steps of the contour-based high-precision positioning method according to any one of claims 1 to 6.
CN202110181351.7A 2021-02-10 2021-02-10 High-precision positioning method, system and medium based on contour Active CN113034576B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110181351.7A CN113034576B (en) 2021-02-10 2021-02-10 High-precision positioning method, system and medium based on contour

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110181351.7A CN113034576B (en) 2021-02-10 2021-02-10 High-precision positioning method, system and medium based on contour

Publications (2)

Publication Number Publication Date
CN113034576A CN113034576A (en) 2021-06-25
CN113034576B true CN113034576B (en) 2023-03-21

Family

ID=76461193

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110181351.7A Active CN113034576B (en) 2021-02-10 2021-02-10 High-precision positioning method, system and medium based on contour

Country Status (1)

Country Link
CN (1) CN113034576B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109003258A (en) * 2018-06-15 2018-12-14 广东工业大学 A kind of high-precision sub-pix circular pieces measurement method
CN109472271A (en) * 2018-11-01 2019-03-15 凌云光技术集团有限责任公司 Printed circuit board image contour extraction method and device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10281741A (en) * 1997-04-07 1998-10-23 Nikon Corp Image processor
US7162073B1 (en) * 2001-11-30 2007-01-09 Cognex Technology And Investment Corporation Methods and apparatuses for detecting classifying and measuring spot defects in an image of an object
CN106969706A (en) * 2017-04-02 2017-07-21 聊城大学 Workpiece sensing and three-dimension measuring system and detection method based on binocular stereo vision
CN109409385B (en) * 2018-10-16 2021-02-19 南京鑫和汇通电子科技有限公司 Automatic identification method for pointer instrument
CN110930359A (en) * 2019-10-21 2020-03-27 浙江科技学院 Method and system for detecting automobile shifting fork

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109003258A (en) * 2018-06-15 2018-12-14 广东工业大学 A kind of high-precision sub-pix circular pieces measurement method
CN109472271A (en) * 2018-11-01 2019-03-15 凌云光技术集团有限责任公司 Printed circuit board image contour extraction method and device

Also Published As

Publication number Publication date
CN113034576A (en) 2021-06-25

Similar Documents

Publication Publication Date Title
CN111758024B (en) Defect detection method and device
CN107507173B (en) No-reference definition evaluation method and system for full-slice image
CN108629343B (en) License plate positioning method and system based on edge detection and improved Harris corner detection
JP5699788B2 (en) Screen area detection method and system
CN110648349A (en) Weld defect segmentation method based on background subtraction and connected region algorithm
JP2013089234A (en) Image processing system
CN105160682B (en) Method for detecting image edge and device
CN107274452B (en) Automatic detection method for acne
CN110298344A (en) A kind of positioning of instrument knob and detection method based on machine vision
CN107369176B (en) System and method for detecting oxidation area of flexible IC substrate
CN110569774B (en) Automatic line graph image digitalization method based on image processing and pattern recognition
CN116258722B (en) Intelligent bridge building detection method based on image processing
EP4276755A1 (en) Image segmentation method and apparatus, computer device, and readable storage medium
CN113066088A (en) Detection method, detection device and storage medium in industrial detection
CN114881965A (en) Wood board joint detection method based on artificial intelligence and image processing
CN110188640B (en) Face recognition method, face recognition device, server and computer readable medium
CN113610091A (en) Intelligent identification method and device for air switch state and storage medium
JPH10149449A (en) Picture division method, picture identification method, picture division device and picture identification device
CN113034576B (en) High-precision positioning method, system and medium based on contour
CN106934846B (en) Cloth image processing method and system
CN109003268B (en) Method for detecting appearance color of ultrathin flexible IC substrate
CN109426770B (en) Iris identification method
CN115760825A (en) Rammed earth wall apparent crack intelligent detection system
CN113643290B (en) Straw counting method and device based on image processing and storage medium
KR101881795B1 (en) Method for Detecting Edges on Color Image Based on Fuzzy Theory

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant