CN111861979A - Positioning method, positioning equipment and computer readable storage medium

Positioning method, positioning equipment and computer readable storage medium

Info

Publication number
CN111861979A
Authority
CN
China
Prior art keywords
image
target object
threshold condition
area
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202010473979.XA
Other languages
Chinese (zh)
Inventor
罗文君
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei Lianbao Information Technology Co Ltd
Original Assignee
Hefei Lianbao Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hefei Lianbao Information Technology Co Ltd filed Critical Hefei Lianbao Information Technology Co Ltd
Priority to CN202010473979.XA priority Critical patent/CN111861979A/en
Publication of CN111861979A publication Critical patent/CN111861979A/en
Withdrawn legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0004 Industrial image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/187 Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras

Abstract

The invention discloses a positioning method, a positioning device, and a computer-readable storage medium, wherein the method comprises the following steps: performing binarization processing on a designated image to obtain a binarized image, wherein the designated image comprises a target object; determining a designated area in the binarized image according to prior information corresponding to the target object; and performing positioning processing on the designated area to determine a target image corresponding to the target object, thereby improving the stability of the appearance detection process.

Description

Positioning method, positioning equipment and computer readable storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a positioning method, a positioning device, and a computer-readable storage medium.
Background
In the process of using a machine to detect the appearance of a product, determining the positioning information of the product by the machine is a necessary step in defect detection. For example, in the process of defect detection of a screen's appearance by a machine, the machine needs to determine the specific position information of the screen in the image. However, due to the influence of environmental factors such as placement position, illumination change, uneven brightness and linear defects, and/or factors of the product itself, the machine may determine the product positioning information with low accuracy. Improving the accuracy of determining the product positioning information is beneficial to improving the stability of the appearance detection process.
Disclosure of Invention
The embodiment of the invention provides a positioning method, positioning equipment and a computer readable storage medium, which have the effect of improving the stability in the appearance detection process.
An aspect of an embodiment of the present invention provides a positioning method, where the method includes: carrying out binarization processing on a designated image to obtain a binarized image, wherein the designated image comprises a target object; determining a designated area in the binary image according to prior information corresponding to the target object; and positioning the designated area, and determining a target image corresponding to the target object.
In an implementation manner, the determining a designated region in the binarized image according to the prior information corresponding to the target object includes: carrying out connected domain segmentation on the binary image to obtain a plurality of connected domain areas; sequencing the connected domain regions based on a set rule to determine candidate regions; and judging the candidate region based on a threshold condition corresponding to the prior information to determine a specified region meeting the threshold condition.
In one embodiment, the threshold condition comprises at least one of: a first threshold condition corresponding to size prior information of the target object; a second threshold condition corresponding to area prior information of the target object; and a third threshold condition corresponding to prior information on the degree of deviation between the target object and the specified image.
In an implementation manner, the determining the candidate region based on a threshold condition corresponding to the prior information to determine the specified region meeting the threshold condition includes: judging whether a first threshold condition is met or not according to the size information of the minimum circumscribed image corresponding to the candidate area, and obtaining a first judgment result; wherein a shape of the minimal enclosing image is consistent with a shape of the target object; and/or judging whether a second threshold condition is met according to the area information of the candidate region to obtain a second judgment result; and/or judging whether a third threshold condition is met or not according to the deviation degree of the candidate region and the specified image to obtain a third judgment result; and when judging that the corresponding threshold condition is met according to the first judgment result, the second judgment result and/or the third judgment result, determining the candidate area as the designated area meeting the threshold condition.
In an embodiment, the positioning the designated area and determining the target image corresponding to the target object includes: determining a line scanning area based on the designated area, and performing line scanning processing on the line scanning area to obtain a maximum gray level jump point in the line scanning area; performing line fitting processing on the maximum gray level jump point to obtain a boundary line corresponding to the target object; and determining a target image corresponding to the target object according to the boundary line.
In an implementation manner, before the binarizing processing is performed on the specified image to obtain a binarized image, the method further includes: filtering the designated image to obtain a filtered image; and the filtered image is used for carrying out binarization processing.
In an implementation manner, after the positioning processing is performed on the designated area and a target image corresponding to the target object is determined, the method further includes: and carrying out affine transformation on the target image to obtain a corrected image corresponding to the target image.
Another aspect of an embodiment of the present invention provides a positioning apparatus, where the apparatus includes: the processing module is used for carrying out binarization processing on the specified image to obtain a binarized image, wherein the specified image comprises a target object; a determining module, configured to determine a specified region in the binarized image according to prior information corresponding to the target object; and the positioning module is used for positioning the designated area and determining a target image corresponding to the target object.
In an embodiment, the determining module includes: the segmentation submodule is used for carrying out connected domain segmentation on the binary image to obtain a plurality of connected domain areas; the sorting submodule is used for sorting the connected domain areas based on a set rule and determining candidate areas; and the judgment submodule is used for judging the candidate region based on a threshold condition corresponding to the prior information so as to determine the specified region meeting the threshold condition.
In one embodiment, the threshold condition comprises at least one of: a first threshold condition corresponding to size prior information of the target object; a second threshold condition corresponding to area prior information of the target object; and a third threshold condition corresponding to prior information on the degree of deviation between the target object and the specified image.
In an embodiment, the determining sub-module includes: judging whether a first threshold condition is met or not according to the size information of the minimum circumscribed image corresponding to the candidate area, and obtaining a first judgment result; wherein a shape of the minimal enclosing image is consistent with a shape of the target object; and/or judging whether a second threshold condition is met according to the area information of the candidate region to obtain a second judgment result; and/or judging whether a third threshold condition is met or not according to the deviation degree of the candidate region and the specified image to obtain a third judgment result; and when judging that the corresponding threshold condition is met according to the first judgment result, the second judgment result and/or the third judgment result, determining the candidate area as the designated area meeting the threshold condition.
In an embodiment, the positioning module includes: the scanning submodule determines a line scanning area based on the designated area, performs line scanning processing on the line scanning area and acquires a maximum gray level jump point in the line scanning area; the fitting submodule is used for performing line fitting processing on the maximum gray level jump point to obtain a boundary line corresponding to the target object; and the determining submodule is used for determining a target image corresponding to the target object according to the boundary line.
In an embodiment, the apparatus further comprises: the filtering module is used for carrying out filtering processing on the specified image to obtain a filtered image; and the filtered image is used for carrying out binarization processing.
In an embodiment, the apparatus further comprises: and the transformation module is used for carrying out affine transformation on the target image to obtain a corrected image corresponding to the target image.
Another aspect of the invention provides a computer-readable storage medium comprising a set of computer-executable instructions, which when executed, perform any of the above-described positioning methods.
The positioning method, the positioning device and the computer readable storage medium provided by the embodiment of the invention are used for positioning the target object so as to accurately determine the specific position of the target object and avoid error positioning caused by objective or subjective reasons.
Drawings
The above and other objects, features and advantages of exemplary embodiments of the present invention will become readily apparent from the following detailed description read in conjunction with the accompanying drawings. Several embodiments of the invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:
in the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Fig. 1 is a schematic flow chart illustrating an implementation of a positioning method according to an embodiment of the present invention;
fig. 2 is a schematic flow chart illustrating an implementation of determining a designated area by a positioning method according to an embodiment of the present invention;
fig. 3 is a schematic diagram illustrating an implementation flow of positioning processing of a positioning method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an image filtered in a scene where a positioning method according to an embodiment of the present invention is implemented;
FIG. 5 is a schematic diagram of a binarized image in a positioning method implementation scene according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a minimum circumscribed rectangle in an implementation scenario of a positioning method according to an embodiment of the present invention;
FIG. 7 is a schematic view of a line scanning area in a scene where a positioning method is implemented according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of coordinates of a maximum gray level transition point in an implementation scene of a positioning method according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of a fitted straight line in an implementation scenario of a positioning method according to an embodiment of the present invention;
FIG. 10 is a diagram illustrating a corrected screen image in a scene where a positioning method is implemented according to an embodiment of the present invention;
fig. 11 is a schematic diagram of an implementation module of a positioning apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the objects, features and advantages of the present invention more obvious and understandable, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a schematic flow chart illustrating an implementation of a positioning method according to an embodiment of the present invention.
Referring to fig. 1, in one aspect, an embodiment of the present invention provides a positioning method, where the method includes: an operation 101, performing binarization processing on a designated image to obtain a binarized image, wherein the designated image comprises a target object; an operation 102 of determining a designated area in the binary image according to the prior information corresponding to the target object; in operation 103, the designated area is located, and a target image corresponding to the target object is determined.
The positioning method provided by the embodiment of the invention is used to position the target object so as to accurately determine its specific position and avoid erroneous positioning caused by objective or subjective factors. The method comprises performing binarization processing on a designated image so that the designated image has a clear black-and-white contrast, determining a designated area corresponding to the target object in the binarized image according to prior information, and performing positioning processing on the designated area, thereby determining the target image corresponding to the target object in the designated image. The target object may be a screen, glass, a metal piece, a curtain, or any other object that differs from the background at its placement position. The objective factors may include the placement position, the placement angle, the size of the background area, dark-line interference, interference from other objects in the background area, and the like. The subjective factors may be human factors. By positioning the target image, subsequent image recognition can be performed on the target image obtained by the positioning method, improving the accuracy of image recognition. Subsequent image recognition includes, but is not limited to, defect recognition, character recognition, and content recognition.
For example, when the target object is a screen, the specific position of the screen image can be positioned and determined in the whole image including the screen image by the positioning method provided by the method, and subsequent defect analysis is performed through the screen image, so that the problem that the background affects the screen image in the analysis process to reduce the identification accuracy can be avoided.
In operation 101, the designated image may be a picture or a video containing the target object, and the image is binarized using the maximum between-class variance (Otsu) method to obtain a binarized image. Before binarization, the method may also preprocess the designated image to reduce the influence of image noise on the binarized image. For example, the designated image is filtered, and the filtered image is used for the binarization processing. Through filtering, the noise of the image is suppressed while the detail features of the image are preserved as much as possible, so that image noise does not introduce errors and extra information into the image. The filtering processing of the method includes, but is not limited to, applying a non-linear filter, a median filter, a morphological filter, a bilateral filter, etc., according to the image noise problem at hand.
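A minimal sketch of operation 101 using OpenCV is given below, assuming the designated image has been loaded as a single-channel grayscale array; the function name, the choice of a median filter and the 5x5 kernel size are illustrative assumptions rather than details taken from the patent.

```python
import cv2
import numpy as np

def preprocess_and_binarize(image_gray: np.ndarray) -> np.ndarray:
    """Filter the designated image, then binarize it with Otsu's method.

    `image_gray` is a single-channel 8-bit image containing the target object.
    The 5x5 median kernel is an illustrative choice; the patent only requires
    that some filtering (median, bilateral, morphological, ...) precede the
    binarization.
    """
    filtered = cv2.medianBlur(image_gray, 5)  # suppress noise while keeping edges
    _, binary = cv2.threshold(filtered, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # Otsu picks the threshold
    return binary
```

Any of the other filters named above could be substituted for the median blur without changing the rest of the pipeline.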
In operation 102, the a priori information includes, but is not limited to, size information of the target object, position information of the target object in the designated image, and the like. Based on the prior information, a specified region associated with the target object can be determined in the binarized image. Further, it is to be understood that, in operation 102, to facilitate the segmentation of the designated region, before the determination is performed by using the prior information, the connected domain segmentation may be performed on the binarized image, and the connected domain may be filtered according to the prior information to determine the designated region associated with the target object.
In operation 103, the positioning process specifically includes calculating according to the designated area, obtaining a line scanning area corresponding to the area associated with the target object, and performing line scanning on the line scanning area to position the area associated with the target object, so as to achieve the purpose of determining the target image corresponding to the target object.
Fig. 2 is a schematic flow chart illustrating an implementation of determining a designated area by a positioning method according to an embodiment of the present invention.
Referring to fig. 2, in an implementation, the operation 102 of determining a designated region in the binarized image according to the prior information corresponding to the target object includes: operation 1021, performing connected domain segmentation on the binary image to obtain a plurality of connected domain areas; operation 1022, sorting the connected domain regions based on the set rule, and determining candidate regions; in operation 1023, the candidate region is determined based on the threshold condition corresponding to the prior information to determine the designated region meeting the threshold condition.
In operation 102, the setting rule includes, but is not limited to, an area-based rule, a size-based rule, a position-based rule, a color-based rule, a shape-based rule, and the like. The method adopts area-based sorting, so the connected domain regions are sorted from large to small by area, and the connected domain region with the largest area is determined as the candidate region. Further, the connected domain regions may be named in order, for example the first candidate region, the second candidate region, the third candidate region, and so on; after the first candidate region is determined to be a non-designated region through the subsequent operations, the second candidate region, the third candidate region, and so on may be judged in turn until a designated region meeting the threshold condition is obtained, which is not described in detail below. It will be appreciated that determining the candidate regions in operation 1022 also includes determining an order of the candidate regions. The threshold condition is a condition set according to the prior information. For example, according to the aspect ratio of the target object, an allowable aspect ratio range may be preset as the threshold condition; when the aspect ratio of the candidate region fails to satisfy this threshold condition, the candidate region may be considered not to be the designated region. Besides the aspect ratio, the threshold condition may also be a position deviation threshold: for example, the prior information gives approximate position information of the image corresponding to the target object within the designated image, and this position information is used as the threshold condition, so that when the candidate region deviates too far from this position, the candidate region may be considered not to be the designated region. Similarly, when the candidate region satisfies the preset threshold condition, the candidate region can be regarded as the designated region.
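As an illustration of operations 1021 and 1022, the following sketch segments the binarized image into connected domains and orders them by area; it assumes OpenCV's connected-component labelling, and the function name is illustrative.

```python
import cv2
import numpy as np

def candidate_regions_by_area(binary: np.ndarray):
    """Split the binarized image into connected domains and sort them by area.

    Returns a list of (label, stats_row) tuples ordered from largest to
    smallest area, so the first entry is the first candidate region, the
    second entry the second candidate region, and so on, together with the
    label image for later masking.
    """
    num, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    # label 0 is the background; keep the foreground components only
    order = sorted(range(1, num), key=lambda i: stats[i, cv2.CC_STAT_AREA], reverse=True)
    return [(i, stats[i]) for i in order], labels
```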
In one embodiment, the threshold condition includes at least one of the following conditions: a first threshold condition corresponding to size prior information of the target object; a second threshold condition corresponding to area prior information of the target object; and a third threshold condition corresponding to prior information on the degree of deviation between the target object and the specified image. The first threshold condition may be determined according to the aspect ratio or other size ratios of the target object, depending on the shape of the target object; when the aspect ratio of the candidate region does not satisfy the threshold condition, the candidate region is determined to be a non-designated region. It is understood that, since the target object may appear at different angles in different designated images, the threshold condition may be a threshold range obtained according to the target object, and a candidate region outside the threshold range is considered not to satisfy the threshold condition. The second threshold condition is determined according to the area of the target object, and the third threshold condition is determined according to the degree of position deviation of the target object in the designated image. Similarly, since the target object may appear at different angles and at different distances in different designated images, these threshold conditions may also be threshold ranges obtained according to the target object, and a candidate region outside such a range is considered not to satisfy the corresponding threshold condition.
In an implementation, operation 1023, determining a candidate region based on a threshold condition corresponding to the prior information to determine a specified region meeting the threshold condition includes: judging whether a first threshold condition is met or not according to the size information of the minimum circumscribed image corresponding to the candidate area, and obtaining a first judgment result; wherein the shape of the minimum circumscribed image is consistent with the shape of the target object; and/or judging whether a second threshold condition is met according to the area information of the candidate region to obtain a second judgment result; and/or judging whether a third threshold condition is met or not according to the deviation degree of the candidate region and the specified image, and obtaining a third judgment result; and when the corresponding threshold condition is judged to be met according to the first judgment result, the second judgment result and/or the third judgment result, determining the candidate area as the specified area meeting the threshold condition.
The specific method for judging the candidate region differs according to the threshold condition used. When the candidate region is judged by the first threshold condition, the minimum circumscribed image of the candidate region needs to be obtained first. The minimum circumscribed image is determined according to the specific shape of the target object: when the target object is a rectangle, the minimum circumscribed image is a minimum circumscribed rectangle consistent with the shape of the target object; when the target object is a triangle, the minimum circumscribed image is a minimum circumscribed triangle consistent with the shape of the target object; and so on, which is not described in detail below. It should be added that, when the target object has an irregular shape, the minimum circumscribed image may be a minimum circumscribed image consistent with the irregular shape, or a minimum circumscribed regular image corresponding to the irregular shape, such as a minimum circumscribed rectangle. Specifically, the aspect ratio of the minimum bounding rectangle of the candidate region is calculated as
ratio_w_h = rect_w / rect_h
where rect_w and rect_h denote the width and the height of the minimum circumscribed image (the original gives these symbols only as formula images), and ratio_w_h is the obtained aspect ratio. The obtained aspect ratio is used for judgment of the first threshold condition to obtain a first judgment result. Specifically, the judgment condition is:
ratio_w_h_thod_min < ratio_w_h < ratio_w_h_thod_max
where ratio_w_h_thod_min is a preset minimum aspect ratio threshold and ratio_w_h_thod_max is a preset maximum aspect ratio threshold. The minimum aspect ratio threshold and the maximum aspect ratio threshold may be determined from statistically derived prior information.
When the candidate region is judged by the second threshold condition, the area of the candidate region can be determined from the number of pixels of the candidate region. The obtained area is used for judgment of the second threshold condition to obtain a second judgment result. Specifically, the judgment condition is:
area_region_thod_min < area_region < area_region_thod_max
where area_region_thod_min is a preset minimum area threshold, area_region is the obtained area of the candidate region, and area_region_thod_max is a preset maximum area threshold. Similarly, the minimum area threshold and the maximum area threshold may be determined according to statistically obtained prior information.
When the candidate region is judged by the third threshold condition, the image deviation degree is calculated from the deviation of the pixels of the candidate region from the center position of the designated image; the original calculation formula is given only as an image, its inputs being the width w and height h of the image, the number of pixels n of the candidate region, and the pixel coordinates of the candidate region, and its output being dis_degree, the degree of deviation of the candidate region from the specified image. The obtained image deviation degree is used for judgment of the third threshold condition to obtain a third judgment result. Specifically, the judgment condition is:
dis_degree < dis_degree_thod_max
where dis_degree_thod_max is a preset maximum image deviation threshold, which may be determined from statistically obtained prior information.
When the threshold condition is judged, according to the preset condition, the method has the following conditions:
the method can judge based on the first threshold condition and determine that the candidate area meeting the first threshold condition is the designated area.
The method can judge based on the second threshold condition and determine the candidate area meeting the second threshold condition as the designated area.
The method can judge based on the third threshold condition and determine the candidate area meeting the third threshold condition as the designated area.
The method can judge based on the first threshold condition and the second threshold condition, and determine the candidate area meeting the first threshold condition and the second threshold condition as the designated area.
The method can perform judgment based on the first threshold condition and the third threshold condition, and determine the candidate area meeting the first threshold condition and the third threshold condition as the designated area.
The method can perform judgment based on the second threshold condition and the third threshold condition, and determine the candidate area meeting the second threshold condition and the third threshold condition as the designated area.
The method can perform judgment based on the first threshold condition, the second threshold condition and the third threshold condition, and determine the candidate area meeting the first threshold condition, the second threshold condition and the third threshold condition as the designated area.
The method can perform judgment based on a first threshold condition, a second threshold condition and a third threshold condition, and determine a candidate area satisfying any two of the first threshold condition, the second threshold condition and the third threshold condition as a designated area.
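The three judgments can be sketched as follows for the case in which the first, second and third threshold conditions must all hold; every threshold value is an illustrative placeholder (the patent derives them from statistical prior information), and the deviation-degree formula is an assumption, since the original expression is given only as an image.

```python
import numpy as np

def satisfies_thresholds(region_mask: np.ndarray, rect_w: float, rect_h: float,
                         img_w: int, img_h: int,
                         ratio_w_h_thod_min=1.2, ratio_w_h_thod_max=2.2,
                         area_region_thod_min=50_000, area_region_thod_max=2_000_000,
                         dis_degree_thod_max=0.25) -> bool:
    """Check the first/second/third threshold conditions for one candidate region.

    `region_mask` is a boolean mask of the candidate region in the designated
    image, and `rect_w`/`rect_h` are the width and height of its minimum
    bounding rectangle.  All threshold values are illustrative placeholders.
    """
    # First condition: aspect ratio of the minimum bounding rectangle.
    ratio_w_h = rect_w / rect_h
    ok_ratio = ratio_w_h_thod_min < ratio_w_h < ratio_w_h_thod_max

    # Second condition: region area measured as the number of pixels.
    area_region = int(region_mask.sum())
    ok_area = area_region_thod_min < area_region < area_region_thod_max

    # Third condition: deviation of the region's pixels from the image centre.
    # The exact formula in the patent appears only as an image; a normalised
    # mean distance to the centre is used here as a plausible stand-in.
    ys, xs = np.nonzero(region_mask)
    dis_degree = np.mean(np.hypot((xs - img_w / 2) / img_w,
                                  (ys - img_h / 2) / img_h))
    ok_dis = dis_degree < dis_degree_thod_max

    return ok_ratio and ok_area and ok_dis
```

Any of the other combinations listed above can be obtained by dropping the corresponding term from the final conjunction.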
Further, when a plurality of regions are designated, the designated regions may be sorted according to the degree of deviation from the threshold condition, and the subsequent positioning operation may be performed on the designated region having the smallest degree of deviation.
Further, if all the candidate regions are finally non-specified regions, the purpose of re-determining the candidate regions can be achieved by changing the filtering processing method, the binarization processing method or other processing parameters, so as to judge the candidate regions and obtain the specified regions.
Fig. 3 is a schematic diagram illustrating an implementation flow of positioning processing of a positioning method according to an embodiment of the present invention.
Referring to fig. 3, in an implementation, the operation 103 of performing a positioning process on the designated area and determining a target image corresponding to the target object includes: operation 1031, determining a line scanning area based on the designated area, performing line scanning processing on the line scanning area, and acquiring a maximum gray level jump point in the line scanning area; operation 1032, performing line fitting processing on the maximum gray level jump point to obtain a boundary line corresponding to the target object; in operation 1033, a target image corresponding to the target object is determined according to the boundary line.
After the designated area is determined, positioning processing is performed on the designated area. First, the coordinates of the designated area are scaled to determine a line scanning area corresponding to the designated area. Specifically, when the designated area is a rectangle, the coordinates of the line scanning area may be obtained with the vertex coordinates of the minimum bounding rectangle corresponding to the designated area as reference. Taking the line segment AD of the minimum bounding rectangle as an example, the line scanning area through which AD passes is found; the upper left point of this area is denoted pt_left and the lower right point pt_right (the original symbol and coordinate formulas are given only as images). The coordinates of pt_left and pt_right are calculated from the vertex coordinates of A and D, expanded outward by extend_row and extend_col, whose values are one tenth of the width and the height of the minimum bounding rectangle, respectively. In this way the line scanning area corresponding to the line segment AD is obtained, and the line scanning areas corresponding to the line segments AB, BC and CD are obtained in the same manner. It should be noted that extend_row and extend_col are preset values and can be adjusted according to actual conditions. Here, pt_left_x and pt_left_y are the x-axis and y-axis coordinates of pt_left, pt_right_x and pt_right_y are the x-axis and y-axis coordinates of pt_right, and A_x, A_y, D_x, D_y are the x-axis and y-axis coordinates of A and D, respectively.
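Because the coordinate formulas appear only as images in the original text, the following sketch is an assumed reconstruction of the line scanning area for edge AD: the segment's bounding box is expanded outward by extend_row and extend_col. The name pt_right and the exact expressions are assumptions, not the patent's own formulas.

```python
def line_scan_area_for_edge(A, D, rect_w, rect_h):
    """Assumed reconstruction of the line scanning area around edge AD.

    A and D are (x, y) vertex coordinates of the minimum bounding rectangle.
    The patent states only that extend_row and extend_col are one tenth of the
    rectangle's width and height; the symmetric expansion below is an
    assumption standing in for the original formula images.
    """
    extend_row = rect_w / 10.0
    extend_col = rect_h / 10.0
    pt_left_x = min(A[0], D[0]) - extend_col   # upper-left corner of the scan area
    pt_left_y = min(A[1], D[1]) - extend_row
    pt_right_x = max(A[0], D[0]) + extend_col  # lower-right corner of the scan area
    pt_right_y = max(A[1], D[1]) + extend_row
    return (pt_left_x, pt_left_y), (pt_right_x, pt_right_y)
```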
After the line scanning area is obtained, line scanning processing is performed on it to obtain the maximum gray level jump points in the line scanning area, namely the boundary points between the target image and the background. Line fitting processing is then performed on the maximum gray level jump points to obtain the boundary lines between the target image and the background; specifically, the jump points are fitted by the least squares method to obtain four straight lines. The target image corresponding to the target object is determined according to the boundary lines.
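The line scanning and line fitting of operations 1031 and 1032 might look like the sketch below: each scan line keeps the position of its largest gray-level difference as the boundary point, and the points are fitted with least squares. The function names and the gradient-based reading of the "maximum gray level jump point" are plausible interpretations, not the patent's literal implementation; for the near-vertical boundaries the columns would be scanned (axis=0) and x fitted as a function of y.

```python
import numpy as np

def max_gray_jump_points(gray: np.ndarray, top_left, bottom_right, axis=1):
    """Return, for each scan line in the region, the point of maximum gray jump.

    `gray` is the (filtered) designated image; the region is cropped from
    `top_left`/`bottom_right`.  With axis=1 each row is scanned left-to-right
    and the column with the largest absolute gray-level difference is taken as
    that row's boundary point.
    """
    x0, y0 = map(int, top_left)
    x1, y1 = map(int, bottom_right)
    roi = gray[y0:y1, x0:x1].astype(np.float32)
    diff = np.abs(np.diff(roi, axis=axis))
    idx = np.argmax(diff, axis=axis)            # position of the largest jump per line
    if axis == 1:   # horizontal scan: one point per row
        return np.stack([x0 + idx, y0 + np.arange(roi.shape[0])], axis=1)
    return np.stack([x0 + np.arange(roi.shape[1]), y0 + idx], axis=1)

def fit_boundary_line(points: np.ndarray):
    """Least-squares fit y = k*x + b through the jump points (one boundary line)."""
    k, b = np.polyfit(points[:, 0], points[:, 1], 1)
    return k, b
```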
In an implementation manner, after the positioning processing is performed on the designated area and the target image corresponding to the target object is determined in operation 103, the method further includes: and carrying out affine transformation on the target image to obtain a corrected image corresponding to the target image.
Further, in order to improve the accuracy of the target image, the method can also obtain the target image after the proportion and the position are corrected through affine transformation, and the proportion and the position of the target image can be corrected according to the prior information corresponding to the target image.
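A sketch of the correction step follows. The patent speaks of an affine transformation based on the four vertex coordinates; a four-point perspective warp is used here as one common concrete realization, and out_w/out_h stand for the corrected proportion taken from prior information, so both the transform choice and the parameter names are assumptions.

```python
import cv2
import numpy as np

def correct_target_image(image: np.ndarray, corners: np.ndarray,
                         out_w: int, out_h: int) -> np.ndarray:
    """Warp the located target region to an upright out_w x out_h image.

    `corners` are the four intersection points of the fitted boundary lines,
    ordered top-left, top-right, bottom-right, bottom-left.  A perspective
    warp is used as a stand-in for the affine correction described in the
    patent; out_w and out_h would come from the prior information about the
    target object's size.
    """
    dst = np.array([[0, 0], [out_w - 1, 0],
                    [out_w - 1, out_h - 1], [0, out_h - 1]], dtype=np.float32)
    M = cv2.getPerspectiveTransform(corners.astype(np.float32), dst)
    return cv2.warpPerspective(image, M, (out_w, out_h))
```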
In an implementation manner, before performing binarization processing on the designated image to obtain a binarized image in operation 101, the method further includes: filtering the designated image to obtain a filtered image; the filtered image is used for binarization processing.
In order to improve the accuracy of screen positioning, the method can reduce noise by filtering the specified image before the binarization processing, and the filtered image is used for the binarization processing.
To facilitate understanding of the above embodiments, a specific implementation scenario is provided below, in which the method is applied to a screen positioning device, the target object is a rectangular display screen of an electronic device, and the designated image is a shot screen photo with the rectangular display screen.
After the screen positioning device obtains the designated image, the screen positioning device performs filtering processing on the screen photo to reduce image noise, and obtains a filtered image as shown in fig. 4.
Then, the device performs binarization processing on the image by using an inter-class variance method (OTSU) to obtain a binarized image as shown in fig. 5, performs connected domain segmentation on the binarized image to obtain a plurality of connected domain regions, sorts each connected domain region from large to small according to the region area, and determines the connected domain region with the largest region area as a first screen candidate region.
Then, the minimum bounding rectangle of the first screen candidate region is calculated, and the coordinates of its four vertexes shown in fig. 6 are obtained; whether the aspect ratio of the minimum bounding rectangle falls within the aspect ratio range of the screen is judged according to the preset aspect ratio condition; whether the area of the first screen candidate region falls within the area range of the screen is judged according to the preset area condition; and whether the position deviation of the first screen candidate region falls within the deviation range from the image center position is judged according to the preset position deviation condition. If the first screen candidate region does not satisfy these conditions, the second screen candidate region is judged; if the second screen candidate region still does not satisfy the conditions, the third screen candidate region, the fourth screen candidate region, and so on are judged in turn until a screen region satisfying the conditions is obtained, which is not described in detail below. Further, when the area of a candidate region is smaller than the minimum area threshold, the subsequent candidate regions may not be judged, and the segmentation parameters or the binarization parameters can be adjusted to segment the connected domains again.
Then, four line scanning areas related to the boundary line of the screen area are calculated and obtained according to the vertex coordinates of the minimum bounding rectangle of the screen area, and the four line scanning areas are shown in fig. 7. The maximum gray scale jumping point coordinates shown in fig. 8 are obtained by performing line scanning in the four line scanning areas, respectively. The maximum gray-scale jump point coordinates are fitted using the least squares method to obtain four straight lines as shown in fig. 9. The intersection points of the four straight lines are determined as four vertex coordinates of the screen, and after the proportion and the position are corrected through affine transformation according to the four vertex coordinates, a screen image as shown in fig. 10 is obtained.
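Chaining the helpers sketched above, the screen-positioning scenario could be driven roughly as follows; the image path, the use of the axis-aligned bounding box in place of the minimum (rotated) bounding rectangle, and the default thresholds are all illustrative simplifications.

```python
import cv2
import numpy as np

# Hypothetical end-to-end flow mirroring the screen-positioning scenario,
# reusing the helper functions sketched above (all names are illustrative).
gray = cv2.imread("screen_photo.png", cv2.IMREAD_GRAYSCALE)   # the designated image
binary = preprocess_and_binarize(gray)
candidates, labels = candidate_regions_by_area(binary)

screen_region = None
for label, stats in candidates:
    mask = labels == label
    w, h = stats[cv2.CC_STAT_WIDTH], stats[cv2.CC_STAT_HEIGHT]
    if satisfies_thresholds(mask, w, h, gray.shape[1], gray.shape[0]):
        screen_region = (stats[cv2.CC_STAT_LEFT], stats[cv2.CC_STAT_TOP], w, h)
        break   # first candidate meeting all threshold conditions is the designated area
```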
Fig. 11 is a schematic diagram of an implementation module of a positioning apparatus according to an embodiment of the present invention.
Referring to fig. 11, another aspect of the embodiment of the present invention provides a positioning apparatus, including: a processing module 1101, configured to perform binarization processing on a designated image to obtain a binarized image, where the designated image includes a target object; a determining module 1102, configured to determine a specified region in the binarized image according to prior information corresponding to the target object; a positioning module 1103, configured to perform positioning processing on the designated area, and determine a target image corresponding to the target object.
In one embodiment, the determining module 1102 includes: a segmentation submodule 11021 configured to perform connected domain segmentation on the binarized image to obtain a plurality of connected domain regions; a sorting submodule 11022, configured to sort the connected domain regions based on a set rule, and determine a candidate region; a judging sub-module 11023, configured to judge the candidate region based on a threshold condition corresponding to the prior information, so as to determine a specified region meeting the threshold condition.
In one embodiment, the threshold condition includes at least one of the following conditions: a first threshold condition corresponding to size prior information of the target object; a second threshold condition corresponding to area prior information of the target object; and a third threshold condition corresponding to prior information on the degree of deviation between the target object and the specified image.
In one embodiment, the determining sub-module 11023 includes: judging whether a first threshold condition is met or not according to the size information of the minimum circumscribed image corresponding to the candidate area, and obtaining a first judgment result; wherein the shape of the minimum circumscribed image is consistent with the shape of the target object; and/or judging whether a second threshold condition is met according to the area information of the candidate region to obtain a second judgment result; and/or judging whether a third threshold condition is met or not according to the deviation degree of the candidate region and the specified image, and obtaining a third judgment result; and when the corresponding threshold condition is judged to be met according to the first judgment result, the second judgment result and/or the third judgment result, determining the candidate area as the specified area meeting the threshold condition.
In one embodiment, the positioning module 1103 includes: a scanning submodule 11031, configured to determine a line scanning area based on the designated area, perform line scanning processing on the line scanning area, and acquire the maximum gray level jump points in the line scanning area; a fitting submodule 11032, configured to perform line fitting processing on the maximum gray level jump points to obtain a boundary line corresponding to the target object; and a determining submodule 11033, configured to determine a target image corresponding to the target object according to the boundary line.
In one embodiment, the apparatus further comprises: a filtering module 1104, configured to perform filtering processing on the specified image to obtain a filtered image; the filtered image is used for binarization processing.
In one embodiment, the apparatus further comprises: a transforming module 1105, configured to perform affine transformation on the target image to obtain a corrected image corresponding to the target image.
Another aspect of the embodiments of the present invention provides a computer-readable storage medium, which includes a set of computer-executable instructions, and when executed, is configured to perform any one of the above positioning methods.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A method of positioning, the method comprising:
carrying out binarization processing on a designated image to obtain a binarized image, wherein the designated image comprises a target object;
determining a designated area in the binary image according to prior information corresponding to the target object;
and positioning the designated area, and determining a target image corresponding to the target object.
2. The method according to claim 1, wherein the determining a specified region in the binarized image based on prior information corresponding to a target object comprises:
carrying out connected domain segmentation on the binary image to obtain a plurality of connected domain areas;
sequencing the connected domain regions based on a set rule to determine candidate regions;
and judging the candidate region based on a threshold condition corresponding to the prior information to determine a specified region meeting the threshold condition.
3. The method of claim 2, wherein the threshold condition comprises at least one of:
a first threshold condition corresponding to size prior information of the target object;
a second threshold condition corresponding to area prior information of the target object;
a third threshold condition corresponding to prior information on the degree of deviation between the target object and the specified image.
4. The method according to claim 3, wherein the determining the candidate region based on a threshold condition corresponding to the prior information to determine the specified region meeting the threshold condition comprises:
judging whether a first threshold condition is met or not according to the size information of the minimum circumscribed image corresponding to the candidate area, and obtaining a first judgment result; wherein a shape of the minimal enclosing image is consistent with a shape of the target object;
And/or judging whether a second threshold condition is met according to the area information of the candidate region to obtain a second judgment result;
and/or judging whether a third threshold condition is met or not according to the deviation degree of the candidate region and the specified image to obtain a third judgment result;
and when judging that the corresponding threshold condition is met according to the first judgment result, the second judgment result and/or the third judgment result, determining the candidate area as the designated area meeting the threshold condition.
5. The method according to claim 1, wherein the positioning the designated area and determining the target image corresponding to the target object comprises:
determining a line scanning area based on the designated area, and performing line scanning processing on the line scanning area to obtain a maximum gray level jump point in the line scanning area;
performing line fitting processing on the maximum gray level jump point to obtain a boundary line corresponding to the target object;
and determining a target image corresponding to the target object according to the boundary line.
6. The method according to claim 1, wherein before the binarizing processing on the specified image to obtain a binarized image, the method further comprises:
Filtering the designated image to obtain a filtered image; and the filtered image is used for carrying out binarization processing.
7. The method according to claim 1, wherein after the positioning processing of the designated area and the determination of the target image corresponding to the target object, the method further comprises:
and carrying out affine transformation on the target image to obtain a corrected image corresponding to the target image.
8. A positioning apparatus, characterized in that the apparatus comprises:
the processing module is used for carrying out binarization processing on the specified image to obtain a binarized image, wherein the specified image comprises a target object;
a determining module, configured to determine a specified region in the binarized image according to prior information corresponding to the target object;
and the positioning module is used for positioning the designated area and determining a target image corresponding to the target object.
9. The apparatus of claim 8, wherein the determining module comprises:
the segmentation submodule is used for carrying out connected domain segmentation on the binary image to obtain a plurality of connected domain areas;
the sorting submodule is used for sorting the connected domain areas based on a set rule and determining candidate areas;
And the judgment submodule is used for judging the candidate region based on a threshold condition corresponding to the prior information so as to determine the specified region meeting the threshold condition.
10. A computer-readable storage medium comprising a set of computer-executable instructions that, when executed, perform the positioning method of any one of claims 1-7.
CN202010473979.XA 2020-05-29 2020-05-29 Positioning method, positioning equipment and computer readable storage medium Withdrawn CN111861979A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010473979.XA CN111861979A (en) 2020-05-29 2020-05-29 Positioning method, positioning equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010473979.XA CN111861979A (en) 2020-05-29 2020-05-29 Positioning method, positioning equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN111861979A true CN111861979A (en) 2020-10-30

Family

ID=72985887

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010473979.XA Withdrawn CN111861979A (en) 2020-05-29 2020-05-29 Positioning method, positioning equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111861979A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112598746A (en) * 2020-12-18 2021-04-02 北京声智科技有限公司 Elevator door opening and closing detection method and device, readable medium and electronic equipment
CN112598746B (en) * 2020-12-18 2022-03-11 北京声智科技有限公司 Elevator door opening and closing detection method and device, readable medium and electronic equipment
CN112883963A (en) * 2021-02-01 2021-06-01 合肥联宝信息技术有限公司 Positioning correction method, device and computer readable storage medium
CN112883963B (en) * 2021-02-01 2022-02-01 合肥联宝信息技术有限公司 Positioning correction method, device and computer readable storage medium
CN113033550A (en) * 2021-03-15 2021-06-25 合肥联宝信息技术有限公司 Image detection method and device and computer readable medium
CN113033550B (en) * 2021-03-15 2022-04-15 合肥联宝信息技术有限公司 Image detection method and device and computer readable medium
CN117746027A (en) * 2024-02-07 2024-03-22 深圳新视智科技术有限公司 Edge searching method, device and equipment in diaphragm detection

Similar Documents

Publication Publication Date Title
CN111861979A (en) Positioning method, positioning equipment and computer readable storage medium
CN108629775B (en) Thermal state high-speed wire rod surface image processing method
CN114140679B (en) Defect fusion method, device, recognition system and storage medium
CN110717489B (en) Method, device and storage medium for identifying text region of OSD (on Screen display)
CN111179243A (en) Small-size chip crack detection method and system based on computer vision
CN111968144B (en) Image edge point acquisition method and device
CN109472271B (en) Printed circuit board image contour extraction method and device
CN115170669B (en) Identification and positioning method and system based on edge feature point set registration and storage medium
CN116559183B (en) Method and system for improving defect judging efficiency
CN111539938B (en) Method, system, medium and electronic terminal for detecting curvature of rolled strip steel strip head
CN109886960A (en) The method of glass edge defects detection based on machine vision
CN104331695A (en) Robust round identifier shape quality detection method
CN112330678A (en) Product edge defect detection method
JPH03257304A (en) Position recognizing apparatus
CN114022439A (en) Flexible circuit board defect detection method based on morphological image processing
CN114972575A (en) Linear fitting algorithm based on contour edge
US11216905B2 (en) Automatic detection, counting, and measurement of lumber boards using a handheld device
KR102242996B1 (en) Method for atypical defects detect in automobile injection products
CN112419225B (en) SOP type chip detection method and system based on pin segmentation
CN109977959B (en) Train ticket character area segmentation method and device
CN106951902B (en) Image binarization processing method and device
CN112419207A (en) Image correction method, device and system
US10970592B2 (en) Adhering substance detection apparatus and adhering substance detection method
CN116664570A (en) Flat connector production defect detection method based on industrial vision
CN115393290A (en) Edge defect detection method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20201030