CN114359383A - Image positioning method, device, equipment and storage medium


Info

Publication number
CN114359383A
CN114359383A
Authority
CN
China
Prior art keywords
area
image
edge
detection
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111615125.1A
Other languages
Chinese (zh)
Inventor
唐铭志
周钟海
姚毅
杨艺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Luster LightTech Co Ltd
Suzhou Luster Vision Intelligent Device Co Ltd
Suzhou Lingyunguang Industrial Intelligent Technology Co Ltd
Original Assignee
Luster LightTech Co Ltd
Suzhou Luster Vision Intelligent Device Co Ltd
Suzhou Lingyunguang Industrial Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Luster LightTech Co Ltd, Suzhou Luster Vision Intelligent Device Co Ltd, Suzhou Lingyunguang Industrial Intelligent Technology Co Ltd filed Critical Luster LightTech Co Ltd
Priority to CN202111615125.1A
Publication of CN114359383A
Legal status: Pending


Abstract

The invention discloses an image positioning method, apparatus, device and storage medium, wherein the method comprises the following steps: performing edge point detection on a target area of an image to be processed to obtain preliminary edge points, wherein the target area is a standard graphic area; constructing detection areas on the image to be processed according to the preliminary edge points, wherein each detection area includes the target area and the background area outside the target area; determining accurate edge points in the detection areas according to the gray-value changes between adjacent pixel points in the detection areas; and determining the position information of the target area in the image to be processed according to the accurate edge points. The technical scheme provided by the embodiments of the invention positions the target area in an image accurately and provides a new scheme for target-area positioning in image processing.

Description

Image positioning method, device, equipment and storage medium
Technical Field
The embodiment of the invention relates to the technical field of image processing, in particular to an image positioning method, an image positioning device, image positioning equipment and a storage medium.
Background
In scenes where a target area is processed based on image processing techniques, locating the target area is a precondition for any subsequent processing operation: if the target area cannot be located, the subsequent operations cannot proceed at all.
However, owing to unstable imaging and external interference, positioning of the target area often suffers large deviations, so that the located target area is displaced and, in severe cases, positioning fails outright, which degrades the results of subsequent detection processing on the image.
Therefore, how to accurately position the target area in the image is a problem to be solved urgently at present.
Disclosure of Invention
The invention provides an image positioning method, an image positioning device, image positioning equipment and a storage medium, which can accurately position a target area in an image and provide a new scheme for a target area positioning technology in image processing.
In a first aspect, an embodiment of the present invention provides an image positioning method, where the method includes:
carrying out edge point detection on a target area of an image to be processed to obtain a preliminary edge point; wherein the target area is a standard graphic area;
constructing a detection area on the image to be processed according to the preliminary edge point; wherein the detection area includes: a target area and a background area other than the target area;
determining accurate edge points in the detection area according to the gray value change condition between adjacent pixel points in the detection area;
and determining the position information of the target area in the image to be processed according to the accurate edge point.
In a second aspect, an embodiment of the present invention further provides an image positioning apparatus, including:
the acquisition module is used for detecting edge points of a target area of an image to be processed to obtain preliminary edge points; wherein the target area is a standard graphic area;
the construction module is used for constructing a detection area on the image to be processed according to the preliminary edge point; wherein the detection area includes: a target area and a background area other than the target area;
the edge point determining module is used for determining accurate edge points in the detection area according to the gray value change condition between adjacent pixel points in the detection area;
and the information determining module is used for determining the position information of the target area in the image to be processed according to the accurate edge point.
In a third aspect, an embodiment of the present invention further provides an electronic device, where the electronic device includes:
one or more processors;
a memory for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the image positioning method as provided by any of the embodiments of the present invention.
In a fourth aspect, the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program is used to execute the image positioning method provided in any embodiment of the present invention when the computer program is executed by a processor.
The method comprises the steps of carrying out edge point detection on a target area of an image to be processed to obtain a preliminary edge point, constructing a detection area on the image to be processed according to the preliminary edge point, and determining an accurate edge point in the detection area according to the gray value change condition between adjacent pixel points in the detection area; and determining the position information of the target area in the image to be processed according to the accurate edge point. By the technical scheme provided by the embodiment of the invention, the target area in the image can be accurately positioned, and a new scheme is provided for the target area positioning technology in image processing.
Drawings
Fig. 1A is a flowchart of an image positioning method according to an embodiment of the present invention;
fig. 1B is a schematic view of a detection area according to a first embodiment of the present invention;
fig. 2 is a flowchart of an image positioning method according to a second embodiment of the present invention;
fig. 3 is a flowchart of an image positioning method according to a third embodiment of the present invention;
fig. 4A is a flowchart of an image positioning method according to a fourth embodiment of the present invention;
fig. 4B is a schematic diagram illustrating a matching effect of a target area according to a fourth embodiment of the present invention;
fig. 5 is a block diagram of an image positioning apparatus according to a fifth embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device according to a sixth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1A is a flowchart of an image positioning method according to an embodiment of the present invention, and fig. 1B is a schematic diagram of a detection area according to the same embodiment. The method is applicable to positioning a target area in an image. It may be executed by an image positioning apparatus, which may be implemented in software and/or hardware and integrated in an electronic device having an image positioning function. As shown in figs. 1A-1B, the image positioning method of this embodiment specifically includes:
s101, performing edge point detection on a target area of an image to be processed to obtain a preliminary edge point.
The image to be processed refers to an image in which a target area needs to be located. Specifically, the image to be processed may be an image acquired by an image sensor, for example a microscopic image acquired by a microscope camera, or an image downloaded from a database or the Internet. The target area refers to the area of the image to be processed that needs to be positioned. The target area in this embodiment is a standard graphic area, such as a rectangular or triangular area. A preliminary edge point is an edge pixel point of the target area that is determined preliminarily in the image to be processed.
Optionally, in this embodiment, edge points of the target area of the image to be processed may be detected in many ways. One possible implementation is to input the image to be processed into an edge point detection model, which performs edge point detection on the target area and outputs each detected edge point as a preliminary edge point. Another possible implementation is to perform edge point detection on the target area based on an image edge detection algorithm, such as the Laplacian edge detection algorithm, to obtain the preliminary edge points.
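By way of illustration, the Laplacian-based variant of this step may be sketched as follows in Python; the use of OpenCV/NumPy and the response threshold are assumptions of the sketch, not part of this embodiment.

```python
import cv2
import numpy as np

def preliminary_edge_points(image_bgr, threshold=30.0):
    """Return (x, y) coordinates of candidate edge pixels (a sketch)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Second-derivative response: a large magnitude marks an abrupt
    # change of gray value, i.e. a candidate edge pixel.
    response = cv2.Laplacian(gray, cv2.CV_64F, ksize=3)
    # The threshold is an illustrative value, not specified by the embodiment.
    ys, xs = np.where(np.abs(response) > threshold)
    return np.column_stack([xs, ys])
```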
Optionally, in order to ensure the accuracy of the determined preliminary edge point of the image to be processed, it may be determined whether the image to be processed meets the positioning requirement, and the preliminary edge point is further determined according to the determination result, and correspondingly, the edge point detection is performed on the target area of the image to be processed to obtain the preliminary edge point, which may include:
and if the image to be processed does not meet the positioning requirement, performing image enhancement on the image to be processed, and performing edge point detection on the target area of the image to be processed after the image enhancement to obtain a preliminary edge point.
The positioning requirement refers to a requirement that needs to be met when the image to be processed is positioned, and specifically, the positioning requirement may include a requirement on indexes such as definition, brightness, exposure rate or contrast of the image to be processed. Image enhancement processing is an image processing technique that can enhance useful information in an image and improve the visual effect of the image.
Optionally, index thresholds for image definition, brightness, exposure and contrast may be preset, and whether the image to be processed meets the positioning requirement is then determined by judging whether these indexes of the image satisfy the preset conditions. Specifically, if the definition, brightness, exposure and contrast of the image to be processed all reach the preset index thresholds, the image may be considered to meet the positioning requirement; otherwise it may be considered not to. A determination model may also be used: the image to be processed is input into a preset determination model, which analyzes the image and outputs whether it meets the positioning requirement.
Optionally, if it is determined in the foregoing manner that the to-be-processed image does not meet the positioning requirement, an image enhancement technique may be used to purposefully emphasize the overall or local characteristics of the to-be-processed image, for example, the to-be-processed image is processed according to an index that does not meet the requirement, or some interesting features in the to-be-processed image are emphasized, so that differences between features of different objects in the image are enlarged, that is, the to-be-processed image is subjected to image enhancement processing. And then, carrying out edge point detection on the target area of the image to be processed after the image enhancement processing to obtain a preliminary edge point. It should be noted that, in this way, the accuracy of determining the preliminary edge point can be improved.
Optionally, if the image to be processed meets the positioning requirement through the judgment in the above manner, the edge point detection may be directly performed on the target area of the image to be processed, so as to obtain a preliminary edge point.
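A minimal sketch of this requirement check follows, assuming Laplacian variance for definition, mean gray value for brightness, gray standard deviation for contrast, and CLAHE as the enhancement step; all of these concrete choices and thresholds are stand-ins for illustration and are not fixed by the embodiment.

```python
import cv2

def meets_positioning_requirement(gray, min_sharpness=100.0,
                                  min_brightness=40.0, min_contrast=20.0):
    # Definition via Laplacian variance, brightness via mean gray value,
    # contrast via gray standard deviation (illustrative index choices).
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    return (sharpness >= min_sharpness and gray.mean() >= min_brightness
            and gray.std() >= min_contrast)

def enhance_if_needed(gray):
    if meets_positioning_requirement(gray):
        return gray
    # CLAHE: local contrast enhancement, one possible enhancement step.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(gray)
```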
And S102, constructing a detection area on the image to be processed according to the preliminary edge points.
The detection area refers to an area where a preliminary edge point determined in the image to be processed is located. Referring to fig. 1B, the detection area includes: a target area and a background area other than the target area. Preferably, this embodiment may set at least one detection area on each area side in the target area. When a detection area is provided on each area side, the detection area needs to include a plurality of preliminary edge points. This embodiment preferably provides at least two detection areas on each area side. It should be noted that fig. 1B only shows a case where the detection region is constructed on the left region side of the target region, and the detection regions also need to be constructed in the same manner for the remaining three region sides. The background area refers to an area other than the target area of the standard pattern in each detection area of the image to be processed.
Optionally, after the preliminary edge points are obtained, a detection area containing at least two preliminary edge points can be constructed directly with the preliminary edge points as reference, according to their positions and a preset area specification. Alternatively, representative target edge points may first be selected from the preliminary edge points according to a preset screening rule, and the detection areas are then constructed based on those target edge points. Specifically, when the preset screening rule extracts a preset number of preliminary edge points as target edge points, constructing the detection area on the image to be processed according to the preliminary edge points may include: for each region side of the target area, extracting at least two target edge points from the preliminary edge points, and constructing, with each target edge point as reference, a detection area corresponding to that target edge point according to the preset area specification.
Wherein the target edge points refer to at least two edge points determined from the preliminary edge points. The predetermined area specification refers to a predetermined pixel size specification of the detection area.
Optionally, for each region edge of the target region, at least two target edge points may be randomly extracted from the preliminary edge points, and a detection region corresponding to the target edge point is constructed according to a preset pixel size specification of the detection region with each target edge point as a reference; or based on a certain preset condition, selecting an edge point satisfying the preset condition from the preliminary edge points as a target edge point, for example, extracting one edge point every preset distance as the target edge point. And further constructing a detection area corresponding to the target edge point according to the pixel size specification of a preset detection area by taking each target edge point as a reference.
For example, referring to fig. 1B, for the left region side of the target rectangular area, target edge points are extracted at a spacing of a preset number of pixel points (for example, every 10 pixel points), and, with each target edge point as reference, a detection area corresponding to that target edge point is constructed on the image to be processed according to a pixel-matrix specification of preset size (e.g., 100 × 1), i.e., the preset area specification.
It should be noted that, by extracting the target edge points from the preliminary edge points and further constructing the detection region based on the target edge points, the accuracy of the constructed target region can be ensured and the efficiency of constructing the detection region can be improved.
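For the fig. 1B example, the construction of detection areas along one vertical region side may be sketched as follows; the sampling step and the 100 × 1 window size are the example values above, and the horizontal orientation of the window is an assumption valid for a left or right region side.

```python
def build_detection_regions(edge_points, step=10, width=100):
    """edge_points: list of (x, y) points along one vertical region side."""
    regions = []
    # Every `step`-th preliminary point, approximating a fixed pixel spacing.
    for x, y in edge_points[::step]:
        half = width // 2
        # A horizontal 100 x 1 window centered on the edge point: it spans
        # the region side, so it contains target and background pixels.
        regions.append((x - half, y, x + half, y))
    return regions
```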
S103, determining accurate edge points in the detection area according to the gray value change condition between adjacent pixel points in the detection area.
The accurate edge points refer to at least two edge pixel points on each zone edge of the finally determined target zone. The gray value refers to the color depth of a pixel point in an image, and generally ranges from 0 to 255.
Optionally, after the detection area is constructed, information of all pixel points in the detection area can be input into a preset accurate edge point determination model, so that the accurate edge point determination model outputs accurate edge points in the detection area according to the gray value change condition between adjacent pixel points; the gray values of all the pixel points in each detection area can be counted, the gray value change conditions between adjacent pixel points in the detection areas are analyzed based on preset rules, the pixel points meeting the preset rules are determined, and the accurate edge points in the detection areas are determined according to the pixel points meeting the preset rules.
And S104, determining the position information of the target area in the image to be processed according to the accurate edge point.
The position information refers to the azimuth coordinate information of the target area in the image to be processed.
Optionally, after the accurate edge points in the detection area are determined, the determined accurate edge points may be classified first, and the accurate edge points are processed according to the classification result, so as to determine the position information of the target area in the image to be processed. Specifically, the accurate edge points may be classified based on each region edge of the target region, for example, if the target region includes four region edges, the determined accurate edge points may be classified into four corresponding types, and further, for each region edge, the position information of the target region in the image to be processed is determined based on the position coordinates of the accurate edge points corresponding to the region edge.
Optionally, after the position information of the target region in the image to be processed is determined, the accurate position of the target region in the image to be processed may be further determined according to the position information of the target region in the image to be processed, so as to achieve accurate positioning of the target region of the image to be processed.
The method comprises the steps of carrying out edge point detection on a target area of an image to be processed to obtain a preliminary edge point, constructing a detection area on the image to be processed according to the preliminary edge point, and determining an accurate edge point in the detection area according to the gray value change condition between adjacent pixel points in the detection area; and determining the position information of the target area in the image to be processed according to the accurate edge point. By the technical scheme provided by the embodiment of the invention, the target area in the image can be accurately positioned, and a new scheme is provided for the target area positioning technology in image processing.
Example two
Fig. 2 is a flowchart of an image positioning method according to a second embodiment of the present invention, and this embodiment further explains "determining an accurate edge point in a detection area according to a variation of a gray value between adjacent pixel points in the detection area" in detail based on the above embodiment, and as shown in fig. 2, the image positioning method according to this embodiment specifically includes:
s201, performing edge point detection on a target area of the image to be processed to obtain a preliminary edge point.
S202, constructing a detection area on the image to be processed according to the preliminary edge points.
S203, determining the gray gradient between each pixel point and the adjacent pixel point according to the gray value of each pixel point in the detection area.
The gray gradient represents the deviation of the gray value between a pixel point and an adjacent pixel point.
Optionally, after the detection area is constructed, for each pixel point in the detection area, obtaining a gray value corresponding to the pixel point and a gray value of an adjacent pixel point of the pixel point, and further, inputting the gray values of the pixel point and the adjacent pixel point into a preset gray gradient determination model, and outputting a gray gradient between the pixel point and the adjacent pixel point; or substituting the gray values of the pixel point and the adjacent pixel points into a preset calculation formula according to a certain calculation rule to determine the gray gradient between the pixel point and the adjacent pixel points.
And S204, if the gray gradient is larger than the gradient threshold, taking the pixel point as an accurate edge point in the detection area.
The gradient threshold may be a preset threshold for measuring whether the gray level change between a pixel point and an adjacent pixel point reaches a condition that the pixel point is used as an accurate edge point.
Optionally, after determining the gray scale gradient between the pixel point and the adjacent pixel point, the relationship between the gray scale gradient and the gradient threshold value can be further determined, and if the gray scale gradient is smaller than the gradient threshold value, the pixel point is rejected; and if the gray gradient is larger than the gradient threshold, taking the pixel point as an accurate edge point in the detection area. Specifically, each detection area may determine at least one accurate edge point.
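A sketch of S203-S204 for a single 100 × 1 detection window follows; the simple neighbor difference as the gray gradient and the numeric threshold are assumptions of the sketch.

```python
import numpy as np

def precise_edge_offsets(profile, gradient_threshold=25):
    """profile: 1-D array of gray values across one detection window."""
    # Gray gradient between each pixel and its neighbor (simple difference).
    gradient = np.abs(np.diff(profile.astype(np.int32)))
    # Keep the pixels whose gradient exceeds the gradient threshold;
    # the threshold value is an assumed tuning parameter.
    return np.where(gradient > gradient_threshold)[0]
```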
Optionally, after the accurate edge point in the detection area is determined, the accurate edge point may be updated according to a preset rule. Specifically, the offset error of the precise edge point in the detection area may be determined first; and updating the accurate edge point in the detection area according to the offset error and/or the position relation between the accurate edge point and the background area.
The offset error refers to a coordinate deviation between the accurate edge points, and may represent a fluctuation condition between the accurate edge points.
Optionally, after determining the accurate edge points, the offset error of the accurate edge points may be calculated according to the position coordinates of the accurate edge points by further using a preset algorithm, that is, the offset error of the accurate edge points in the detection area is determined. For example, for each region edge, an average value of the position coordinates of each accurate edge point on the region edge may be calculated first, and a deviation between the position coordinates of each accurate edge point and the average value may be further used as the offset error of the accurate edge point.
Optionally, after determining the offset error of the accurate edge point in the detection area, the corresponding accurate edge point with the offset error greater than the offset threshold may be removed as an interference point according to the offset error, and the corresponding accurate edge point with the offset error smaller than the offset threshold is used as an updated accurate edge point, so as to update the accurate edge point; or the accurate edge point with the position coordinate closer to the background area can be used as the updated accurate edge point only according to the position relation between the accurate edge point and the background area, so that the accurate edge point is updated; and combining the offset error and the position relation between the accurate edge point and the background area, and taking the accurate edge point of which the position coordinate is closer to the background area and of which the offset error is smaller than the deviation threshold as the updated accurate edge point.
It should be noted that, by updating the accurate edge points, the accuracy of the determined accurate edge points can be ensured, the interference points are eliminated, and the influence of the interference points on the positioning result is avoided.
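One possible form of this update step uses the deviation from the per-side mean as the offset error; the offset threshold is an assumed parameter, and the position relation to the background area is omitted here for brevity.

```python
import numpy as np

def update_edge_points(coords, offset_threshold=3.0):
    """coords: 1-D array, e.g. x-coordinates of edge points on a left side."""
    coords = np.asarray(coords, dtype=float)
    errors = np.abs(coords - coords.mean())   # offset error per edge point
    return coords[errors < offset_threshold]  # discard interference points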
And S205, determining the position information of the target area in the image to be processed according to the accurate edge point.
After the detection area is constructed, the gray gradient between each pixel point and the adjacent pixel point is further determined according to the gray value of each pixel point in the detection area, if the gray gradient is larger than a gradient threshold value, the pixel point is used as an accurate edge point in the detection area, and finally the position information of the target area in the image to be processed is determined according to the accurate edge point. By the method, accurate and precise edge points can be obtained more effectively, so that the target area in the image can be precisely positioned.
EXAMPLE III
Fig. 3 is a flowchart of an image positioning method according to a third embodiment of the present invention. Based on the above embodiments, this embodiment further details "determining the position information of the target area in the image to be processed according to the accurate edge points". As shown in fig. 3, the image positioning method of this embodiment specifically includes:
s301, performing edge point detection on a target area of the image to be processed to obtain a preliminary edge point.
And S302, constructing a detection area on the image to be processed according to the preliminary edge points.
S303, determining accurate edge points in the detection area according to the gray value change condition between adjacent pixel points in the detection area.
S304, fitting the accurate edge points in the detection areas belonging to the same area side in the target area to obtain the fitting edge line of each area side.
The fitting edge line refers to an edge line of the target area obtained by a fitting technology.
Optionally, for each region edge of the target region, the accurate edge point of the corresponding region edge in the detection region may be determined first, and the accurate edge point of the region edge is fitted by using a fitting algorithm to obtain a fitting edge line of the corresponding region edge. The fitting algorithm may adopt a least square method or other fitting algorithms, which is not limited in the present invention.
Optionally, fitting the accurate edge points in the detection areas belonging to the same area edge in the target area, and further obtaining a fitting edge line of each area edge according to a preset rule, which may specifically include: fitting accurate edge points in detection areas belonging to the same area side in a target area, and determining a fitting coefficient and a correlation coefficient of each area side; and if the correlation coefficient of each region edge meets the correlation requirement, determining the fitting edge line of each region edge according to the fitting coefficient of each region edge.
Wherein, the fitting coefficient refers to a relevant fitting coefficient for determining the fitting edge line. The correlation coefficient represents the degree of conformity of the functional relationship between the two variables to the linear relationship. The correlation requirement refers to a requirement for a value of an absolute value of a correlation coefficient, and specifically, if the absolute value of the correlation coefficient is 0, the correlation requirement is considered to be not satisfied, and in addition, the closer the absolute value of the correlation coefficient is to 1, the better the fitting effect is, and therefore, the correlation requirement may include a preset coefficient threshold value of the absolute value of the correlation coefficient.
Optionally, after the accurate edge points in the detection area are determined, the accurate edge points in the detection area belonging to the same area side in the target area can be fitted, the fitting coefficient and the correlation coefficient of each area side are determined, whether the absolute value of the correlation coefficient of each area side reaches a preset coefficient threshold value is further judged, if yes, the correlation coefficient of the corresponding area side meets the correlation requirement, and when the correlation coefficient of each area side meets the correlation requirement, the fitting edge line of each area side can be determined according to the fitting coefficient of each area side.
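As a sketch, an ordinary least-squares fit with a Pearson correlation check can realize this step; np.polyfit is one concrete choice of fitting algorithm (the embodiment leaves the algorithm open), and min_abs_r is an assumed coefficient threshold.

```python
import numpy as np

def fit_edge_line(xs, ys, min_abs_r=0.95):
    # For a near-vertical region side, swap xs and ys before calling,
    # so the least-squares slope stays well conditioned.
    slope, intercept = np.polyfit(xs, ys, deg=1)  # fitting coefficients
    r = np.corrcoef(xs, ys)[0, 1]                 # correlation coefficient
    if abs(r) < min_abs_r:  # assumed coefficient threshold
        return None         # correlation requirement not met
    return slope, intercept
```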
And S305, determining the intersection point of the fitting edge lines of the region sides as the top point of the target region.
The vertex of the target region refers to the vertex of the target standard graph region.
Optionally, after the fitted edge lines of the respective region sides are obtained, the coordinate information of the intersection points of the fitted edge lines of the respective region sides may be calculated according to a preset calculation rule, or the relevant parameter information of the fitted edge lines of the respective region sides may be input into a preset intersection point determination model to determine the coordinate information of the intersection points of the fitted edge lines. After the coordinate information of the intersection points of the fitting edge lines is determined, the intersection points of the fitting edge lines of the sides of each region are further determined according to the coordinate information of the intersection points, and the intersection points of the fitting edge lines are used as the vertexes of the target region.
Optionally, if the target region is a rectangular graphic region, the intersection point of the four fitting edge lines may be further determined according to the determined fitting edge lines of the four region sides, and the intersection point is used as the vertex of the target region. If the target area is a triangular graphic area, the intersection point of the three fitting edge lines can be further determined according to the determined fitting edge lines of the three area sides, and the intersection point is used as the vertex of the target area.
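For two fitted edge lines in slope-intercept form, the vertex computation of S305 reduces to a line intersection, as in the sketch below; near-vertical sides would need a different parameterization.

```python
def line_intersection(a1, b1, a2, b2):
    """Intersection of y = a1*x + b1 and y = a2*x + b2 (one vertex)."""
    if a1 == a2:
        return None  # parallel fitted edge lines: no intersection
    x = (b2 - b1) / (a1 - a2)
    return x, a1 * x + b1
```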
S306, determining the position information of the target area in the image to be processed according to the vertex of the target area.
The position information refers to the azimuth information of the target area in the image to be processed.
Optionally, after the vertices of the target area are determined, the position coordinates of each vertex are determined in the coordinate system of the image to be processed according to the vertex position information. The vertices of the target area are then connected, so that each accurate region side of the target area can be determined, and the orientation of the whole target area in the image to be processed, i.e., the position information of the target area in the image to be processed, is determined from the relationship between the region sides.
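One way to express the resulting position information for a rectangular target area, assumed here for illustration and not fixed by the embodiment, is as center, size and rotation angle derived from the four vertices:

```python
import numpy as np

def region_pose(vertices):
    """vertices: four (x, y) vertices ordered around the rectangle."""
    v = np.asarray(vertices, dtype=float)
    center = v.mean(axis=0)
    top = v[1] - v[0]                               # top region side
    angle = np.degrees(np.arctan2(top[1], top[0]))  # rotation angle
    size = (np.linalg.norm(v[1] - v[0]), np.linalg.norm(v[2] - v[1]))
    # This (center, size, angle) representation is one possible choice.
    return center, size, angle
```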
After the accurate edge points in the detection area are determined, the accurate edge points in the detection area belonging to the same area side in the target area are fitted to obtain the fitting edge lines of the area sides, the intersection points of the fitting edge lines of the area sides are determined to be used as the top points of the target area, and the position information of the target area in the image to be processed is determined according to the top points of the target area. The position information of the target area determined in the mode in the image to be processed is more accurate, and therefore accurate positioning of the target area in the image can be achieved.
Example four
Fig. 4A is a flowchart of an image positioning method according to a fourth embodiment of the present invention, and fig. 4B is a schematic diagram of the matching effect for a target area according to the same embodiment. Based on the above embodiments, this embodiment further details "performing edge point detection on the target area of the image to be processed to obtain preliminary edge points". As shown in fig. 4A, the image positioning method of this embodiment specifically includes:
s401, according to the matching template of the target area, carrying out area matching processing on the image to be processed to obtain a matching area.
The matching template includes a target region and a background region, and specifically, the matching template may be a shape correlation matching template or a grayscale correlation matching template. The shape correlation matching template is a template for matching based on a geometric shape, and the grayscale correlation matching template is a template for matching based on a grayscale value. The matching area refers to an area in the image to be processed, which is matched with the template.
Optionally, if the target area is a rectangular area, the number of the matching templates is at least four.
Optionally, if the shape of the target region has more features, the shape correlation matching template may be used as the matching template of the target region, otherwise, the grayscale correlation matching template may be used as the matching template of the target region.
Optionally, after the matching template of the target region is determined, the determined matching template of the target region and the image to be processed may be input into the matching model together, so that the matching model performs region matching processing on the image to be processed based on each matching template, and a region with the highest similarity degree with the determined matching template in the image to be processed is determined and is used as the matching region. Or according to the matching template of the determined target area and the image to be processed, performing area matching processing on the image to be processed by using an image processing technology, determining an area with the highest similarity degree with the determined matching template in the image to be processed, and taking the area as a matching area.
For example, referring to fig. 4B, the target area is a thick solid line rectangular area, the four dashed line areas are four matching areas of the target area, and the four matching areas are located in areas where four vertices of the target rectangular area are located respectively. Fig. 4B shows the matching effect of the image to be processed after the region matching process is performed using the matching template.
A matching area of the target area is thus obtained once the area matching processing has been performed on the image to be processed.
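The gray-correlation variant of this matching step may be sketched with OpenCV's normalized cross-correlation; the template image and the similarity threshold are assumptions of the sketch, and shape-based matching would be handled differently.

```python
import cv2

def match_region(gray, template, min_score=0.8):
    """Locate the area of `gray` most similar to `template` (a sketch)."""
    result = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val < min_score:  # assumed similarity threshold
        return None          # no sufficiently similar area found
    h, w = template.shape[:2]
    x, y = max_loc
    return (x, y, x + w, y + h)  # matching area rectangle
```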
S402, determining a boundary area of the target area according to the matching area.
The boundary area refers to an area where the boundary of the target area is located.
Optionally, after the matching areas of the target area are determined, a boundary determination model may be used: the image to be processed, labeled with the matching areas, is input into the preset model, which outputs the boundary area of the target area. Alternatively, based on the matching areas, the boundary area of each region side of the target area may be determined by a preset rule to obtain the boundary area of the target area. For example, taking the boundary area of the upper region side: the center points of the two matching areas on the upper region side are determined first, and a preset number of pixel specification distances (for example, the distance of two pixel points) is reserved above and below the center points, giving the boundary area of the upper region side of the target area; as shown in fig. 4B, this boundary area is the thin solid-line rectangular area. Alternatively, edge detection may be performed on the matching areas using a conventional image edge detection algorithm to determine the boundary area of the target area within the matching areas.
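A sketch of the rule-based boundary-region construction for the upper region side described above; `margin` stands in for the "preset number of pixel specification distances" and is an assumed value.

```python
def upper_boundary_region(match_a, match_b, margin=2):
    """match_a/match_b: (x0, y0, x1, y1) matching areas on the upper side."""
    cy_a = (match_a[1] + match_a[3]) / 2
    cy_b = (match_b[1] + match_b[3]) / 2
    y = (cy_a + cy_b) / 2                  # line through both center points
    x0 = min(match_a[0], match_b[0])
    x1 = max(match_a[2], match_b[2])
    # Reserve `margin` pixels above and below the center line (assumed value).
    return (x0, y - margin, x1, y + margin)
```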
And S403, extracting a preliminary edge point from the boundary area.
Optionally, after determining the boundary region of the target region, an edge point detection model or an edge detection algorithm may be further adopted to extract preliminary edge points in the boundary region.
And S404, constructing a detection area on the image to be processed according to the preliminary edge points.
S405, determining accurate edge points in the detection area according to the gray value change situation between adjacent pixel points in the detection area.
And S406, determining the position information of the target area in the image to be processed according to the accurate edge point.
According to the embodiment of the invention, the image to be processed is subjected to area matching processing according to the matching template of the target area to obtain the matching area, the boundary area of the target area is determined according to the matching area, the preliminary edge point is extracted from the boundary area, the detection area is constructed according to the preliminary edge point, the accurate edge point is determined, and finally the position information of the target area in the image to be processed is determined. The image to be processed is subjected to area matching processing, namely preliminary processing, by utilizing the matching template, more accurate preliminary edge points can be determined, and follow-up accurate positioning of an image target area is facilitated.
EXAMPLE five
Fig. 5 is a block diagram of an image positioning apparatus according to a fifth embodiment of the present invention, where the image positioning apparatus according to the fifth embodiment of the present invention is capable of executing an image positioning method according to any embodiment of the present invention, and has functional modules and beneficial effects corresponding to the executed method. The image localization apparatus may include an acquisition module 501, a construction module 502, an edge point determination module 503, and an information determination module 504.
An obtaining module 501, configured to perform edge point detection on a target area of an image to be processed to obtain a preliminary edge point; wherein the target area is a standard graphic area;
a constructing module 502, configured to construct a detection region on the image to be processed according to the preliminary edge point; wherein the detection area includes: a target area and a background area other than the target area;
an edge point determining module 503, configured to determine an accurate edge point in the detection area according to a gray value change condition between adjacent pixel points in the detection area;
an information determining module 504, configured to determine, according to the accurate edge point, position information of the target area in the image to be processed.
The method comprises the steps of carrying out edge point detection on a target area of an image to be processed to obtain a preliminary edge point, constructing a detection area on the image to be processed according to the preliminary edge point, and determining an accurate edge point in the detection area according to the gray value change condition between adjacent pixel points in the detection area; and determining the position information of the target area in the image to be processed according to the accurate edge point. By the technical scheme provided by the embodiment of the invention, the target area in the image can be accurately positioned, and a new scheme is provided for the target area positioning technology in image processing.
Further, the building module 502 is specifically configured to:
and aiming at each region edge of the target region, extracting at least two target edge points from the preliminary edge points, and constructing a detection region corresponding to the target edge points according to a preset region specification by taking each target edge point as a reference.
Further, the edge point determining module 503 may include:
the gradient determining unit is used for determining the gray gradient between each pixel point and the adjacent pixel point according to the gray value of each pixel point in the detection area;
and the edge point determining unit is used for taking the pixel point as an accurate edge point in the detection area if the gray gradient is greater than a gradient threshold.
Further, the above apparatus further comprises:
the offset error determining module is used for determining the offset error of the accurate edge point in the detection area;
and the updating module is used for updating the accurate edge point in the detection area according to the offset error and/or the position relation between the accurate edge point and the background area.
Further, the information determining module 504 may include:
the edge line determining unit is used for fitting the accurate edge points in the detection areas belonging to the same area edge in the target area to obtain the fitting edge line of each area edge;
the vertex determining unit is used for determining the intersection point of the fitting edge lines of each region side as the vertex of the target region;
and the position information determining unit is used for determining the position information of the target area in the image to be processed according to the vertex of the target area.
Further, the edge line determination unit may include:
the coefficient determining subunit is used for fitting the accurate edge points in the detection areas belonging to the same area side in the target area and determining the fitting coefficient and the correlation coefficient of each area side;
and the edge line determining subunit is used for determining the fitting edge line of each region edge according to the fitting coefficient of each region edge if the correlation coefficient of each region edge meets the correlation requirement.
Further, the obtaining module 501 is specifically configured to:
and if the image to be processed does not meet the positioning requirement, performing image enhancement processing on the image to be processed, and performing edge point detection on a target area of the image to be processed after the image enhancement processing to obtain a preliminary edge point.
Further, the obtaining module 501 may include:
the matching region determining unit is used for performing region matching processing on the image to be processed according to a matching template of a target region to obtain a matching region; wherein the matching template contains the target region and the background region;
a boundary region determining unit, configured to determine a boundary region of the target region according to the matching region;
an extraction unit for extracting preliminary edge points from the boundary area.
EXAMPLE six
Fig. 6 is a schematic structural diagram of an electronic device according to a sixth embodiment of the present invention. FIG. 6 illustrates a block diagram of an exemplary device suitable for use to implement embodiments of the present invention. The device shown in fig. 6 is only an example and should not bring any limitation to the function and the scope of use of the embodiments of the present invention.
As shown in FIG. 6, electronic device 12 is embodied in the form of a general purpose computing device. The components of electronic device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including the system memory 28 and the processing unit 16.
Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA (EISA) bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
Electronic device 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by electronic device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 28 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 30 and/or cache memory 32. The electronic device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 6, commonly referred to as a "hard drive"). Although not shown in FIG. 6, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM (Compact Disc Read-Only Memory), a DVD-ROM (Digital Versatile Disc Read-Only Memory), or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. System memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the application.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in system memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 42 generally perform the functions and/or methodologies of the embodiments described herein.
Electronic device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), with one or more devices that enable a user to interact with electronic device 12, and/or with any devices (e.g., network card, modem, etc.) that enable electronic device 12 to communicate with one or more other computing devices. This communication may be via an Input/Output (I/O) interface 22. Also, the electronic device 12 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public Network, such as the Internet) via the Network adapter 20. As shown, the network adapter 20 communicates with other modules of the electronic device 12 via the bus 18. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with electronic device 12, including but not limited to: microcode, device drivers, Redundant processing units, external disk drive Arrays, RAID (Redundant Arrays of Independent Disks) systems, tape drives, and data backup storage systems, among others.
The processing unit 16 executes various functional applications and data processing, such as implementing an image positioning method provided by an embodiment of the present invention, by running a program stored in the system memory 28.
EXAMPLE seven
The seventh embodiment of the present invention further provides a computer-readable storage medium, on which a computer program (or referred to as computer-executable instructions) is stored, where the computer program is used for executing the image positioning method provided by the embodiment of the present invention when the computer program is executed by a processor.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM), an optical fiber, a portable compact disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for embodiments of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, or C++, as well as conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the embodiments of the present invention have been described in more detail through the above embodiments, the embodiments of the present invention are not limited to the above embodiments, and many other equivalent embodiments may be included without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (11)

1. An image localization method, comprising:
carrying out edge point detection on a target area of an image to be processed to obtain a preliminary edge point; wherein the target area is a standard graphic area;
constructing a detection area on the image to be processed according to the preliminary edge point; wherein the detection area includes: a target area and a background area other than the target area;
determining accurate edge points in the detection area according to the gray value change condition between adjacent pixel points in the detection area;
and determining the position information of the target area in the image to be processed according to the accurate edge point.
2. The method according to claim 1, wherein the constructing a detection region on the image to be processed according to the preliminary edge point comprises:
and aiming at each region edge of the target region, extracting at least two target edge points from the preliminary edge points, and constructing a detection region corresponding to the target edge points according to a preset region specification by taking each target edge point as a reference.
3. The method according to claim 1, wherein the determining the accurate edge point in the detection area according to the gray value variation between the adjacent pixel points in the detection area comprises:
determining the gray gradient between each pixel point and the adjacent pixel point according to the gray value of each pixel point in the detection area;
and if the gray gradient is larger than a gradient threshold value, taking the pixel point as an accurate edge point in the detection area.
4. The method of claim 3, further comprising:
determining an offset error of a precise edge point in the detection area;
and updating the accurate edge point in the detection area according to the offset error and/or the position relation between the accurate edge point and the background area.
5. The method according to claim 1, wherein the determining the position information of the target area in the image to be processed according to the precise edge point comprises:
fitting the accurate edge points in the detection areas belonging to the same area side in the target area to obtain a fitting edge line of each area side;
determining the intersection point of the fitting edge lines of each region side as the top point of the target region;
and determining the position information of the target area in the image to be processed according to the vertex of the target area.
6. The method of claim 5, wherein fitting the accurate edge points in the detection areas belonging to the same area edge to obtain a fitted edge line for each area edge comprises:
fitting the accurate edge points in the detection areas belonging to the same area edge, and determining a fitting coefficient and a correlation coefficient for each area edge; and
if the correlation coefficient of an area edge meets the correlation requirement, determining the fitted edge line of that area edge according to its fitting coefficient.
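A least-squares reading of this claim is sketched below: np.polyfit supplies the fitting coefficients and np.corrcoef the correlation coefficient, with the 0.99 acceptance bound being an assumed stand-in for the unspecified correlation requirement:

```python
import numpy as np

def fit_edge_line(points, min_corr=0.99):
    """Fit one area edge and accept the line only when the correlation
    coefficient meets the requirement; min_corr is an assumed value."""
    pts = np.asarray(points, dtype=np.float64)
    k, b = np.polyfit(pts[:, 0], pts[:, 1], 1)    # fitting coefficients
    r = np.corrcoef(pts[:, 0], pts[:, 1])[0, 1]   # correlation coefficient
    return (k, b) if abs(r) >= min_corr else None
```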
7. The method according to claim 1, wherein performing edge point detection on the target area of the image to be processed to obtain the preliminary edge points comprises:
if the image to be processed does not meet a positioning requirement, performing image enhancement processing on the image to be processed, and performing edge point detection on the target area of the enhanced image to obtain the preliminary edge points.
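The claim names neither the positioning requirement nor the enhancement method; the sketch below assumes a simple gray-level standard-deviation test and uses CLAHE (contrast-limited adaptive histogram equalization) purely as a stand-in enhancement:

```python
import cv2

def enhance_if_needed(img, min_contrast=25.0):
    """If the image fails an assumed positioning requirement (low gray-level
    spread), enhance it before edge point detection; CLAHE and the contrast
    bound are stand-ins, not taken from the patent."""
    if img.std() >= min_contrast:
        return img
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(img)
```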
8. The method according to claim 1 or 7, wherein performing edge point detection on the target area of the image to be processed to obtain the preliminary edge points comprises:
performing area matching processing on the image to be processed according to a matching template of the target area to obtain a matching area, wherein the matching template contains the target area and the background area;
determining a boundary area of the target area according to the matching area; and
extracting the preliminary edge points from the boundary area.
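One way to realize this with standard tools is sketched below: normalized cross-correlation locates the matching area, a thin band around its border serves as the boundary area, and preliminary edge points are collected inside that band; Canny, TM_CCOEFF_NORMED, and the band width are all assumed choices:

```python
import cv2
import numpy as np

def preliminary_points_by_matching(img, template, band=6):
    """Template-match the target area, take a band around the matched
    rectangle's border as the boundary area, and return the preliminary
    edge points found inside it (method choices are assumptions)."""
    res = cv2.matchTemplate(img, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, (x, y) = cv2.minMaxLoc(res)           # top-left of best match
    th, tw = template.shape[:2]

    # Boundary area: pixels within `band` pixels of the matched border.
    mask = np.zeros(img.shape[:2], np.uint8)
    cv2.rectangle(mask, (x - band, y - band), (x + tw + band, y + th + band), 255, -1)
    cv2.rectangle(mask, (x + band, y + band), (x + tw - band, y + th - band), 0, -1)

    return np.column_stack(np.nonzero(cv2.Canny(img, 50, 150) & mask))
```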
9. An image positioning apparatus, comprising:
an acquisition module, configured to perform edge point detection on a target area of an image to be processed to obtain preliminary edge points, wherein the target area is a standard graphic area;
a construction module, configured to construct a detection area on the image to be processed according to the preliminary edge points, wherein the detection area includes both the target area and a background area outside the target area;
an edge point determining module, configured to determine accurate edge points in the detection area according to the variation in gray values between adjacent pixel points in the detection area; and
an information determining module, configured to determine the position information of the target area in the image to be processed according to the accurate edge points.
10. An electronic device, comprising:
one or more processors;
a memory for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the image positioning method of any one of claims 1-8.
11. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the image positioning method of any one of claims 1-8.
CN202111615125.1A 2021-12-27 2021-12-27 Image positioning method, device, equipment and storage medium Pending CN114359383A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111615125.1A CN114359383A (en) 2021-12-27 2021-12-27 Image positioning method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111615125.1A CN114359383A (en) 2021-12-27 2021-12-27 Image positioning method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114359383A 2022-04-15

Family

ID=81102756

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111615125.1A Pending CN114359383A (en) 2021-12-27 2021-12-27 Image positioning method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114359383A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114993348A (en) * 2022-05-30 2022-09-02 中国第一汽车股份有限公司 Map precision testing method and device, electronic equipment and storage medium
CN116030065A (en) * 2023-03-31 2023-04-28 云南琰搜电子科技有限公司 Road quality detection method based on image recognition


Similar Documents

Publication Publication Date Title
CN108228798B (en) Method and device for determining matching relation between point cloud data
KR20200045522A (en) Methods and systems for use in performing localization
WO2021052283A1 (en) Method for processing three-dimensional point cloud data and computing device
CN108564082B (en) Image processing method, device, server and medium
CN109712071B (en) Unmanned aerial vehicle image splicing and positioning method based on track constraint
CN114359383A (en) Image positioning method, device, equipment and storage medium
KR101618996B1 (en) Sampling method and image processing apparatus for estimating homography
US11170501B2 (en) Image analysis device
CN116168351B (en) Inspection method and device for power equipment
CN114511661A (en) Image rendering method and device, electronic equipment and storage medium
CN110942473A (en) Moving target tracking detection method based on characteristic point gridding matching
CN112634235A (en) Product image boundary detection method and electronic equipment
CN113971728A (en) Image recognition method, model training method, device, equipment and medium
CN111598917B (en) Data embedding method, device, equipment and computer readable storage medium
CN103837135A (en) Workpiece detecting method and system
CN112857746A (en) Tracking method and device of lamplight detector, electronic equipment and storage medium
CN114494323A (en) Obstacle detection method, device, equipment and storage medium
CN115345895B (en) Image segmentation method and device for visual detection, computer equipment and medium
CN111368915A (en) Drawing verification method, device, equipment and storage medium
CN111815748A (en) Animation processing method and device, storage medium and electronic equipment
CN115719436A (en) Model training method, target detection method, device, equipment and storage medium
JP5712859B2 (en) Image recognition apparatus and image recognition method
CN114220011A (en) Goods quantity identification method and device, electronic equipment and storage medium
CN113469087A (en) Method, device, equipment and medium for detecting picture frame in building drawing
CN113706705A (en) Image processing method, device and equipment for high-precision map and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination