CN111079723B - Target positioning method and device, computer equipment and storage medium - Google Patents

Target positioning method and device, computer equipment and storage medium

Info

Publication number
CN111079723B
CN111079723B (application CN202010205449.7A)
Authority
CN
China
Prior art keywords
edge point
target
point
edge
distance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010205449.7A
Other languages
Chinese (zh)
Other versions
CN111079723A (en)
Inventor
周柔刚
周才健
杨亮亮
盛锦华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Guangyuan Intelligent Technology Co ltd
Jinhua Mstar Intelligent Technology Co ltd
Suzhou Huicui Intelligent Technology Co ltd
Hangzhou Huicui Intelligent Technology Co ltd
Original Assignee
Guangdong Guangyuan Intelligent Technology Co ltd
Jinhua Mstar Intelligent Technology Co ltd
Suzhou Huicui Intelligent Technology Co ltd
Hangzhou Huicui Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Guangyuan Intelligent Technology Co ltd, Jinhua Mstar Intelligent Technology Co ltd, Suzhou Huicui Intelligent Technology Co ltd, Hangzhou Huicui Intelligent Technology Co ltd filed Critical Guangdong Guangyuan Intelligent Technology Co ltd
Priority to CN202010205449.7A priority Critical patent/CN111079723B/en
Publication of CN111079723A publication Critical patent/CN111079723A/en
Application granted granted Critical
Publication of CN111079723B publication Critical patent/CN111079723B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads

Abstract

The application relates to a target object positioning method and apparatus, computer equipment and a storage medium. The method comprises the following steps: establishing an edge reference data table according to a template image; querying the edge reference data table according to the included angle of the gradient vectors of the edge points of the target image to obtain the reference included angle, the distance between the first edge point and the second edge point, and the distance between the first edge point and the reference point; calculating a voting scaling value according to the distance between the first edge point and the second edge point and the distance between the first target edge point and the second target edge point; calculating the position of a preselected target point according to the voting scaling value, the reference included angle, the distance between the first edge point and the reference point, and the angle of the gradient vector of the first target edge point; and calculating the positions of preselected target points for all edge points of the target image, and taking the position of the preselected target point with the highest score as the position of the target object. By adopting the method, the calculation efficiency can be improved.

Description

Target positioning method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method and an apparatus for positioning an object, a computer device, and a storage medium.
Background
With the development of image processing technology, the position of a target object in an image is computed by image algorithms; for example, in the field of unmanned driving, obstacles are located through machine vision. Machine vision applications place high real-time requirements on image algorithms, and as image resolution keeps increasing, the algorithms face huge challenges; at present, fast target positioning is often achieved only by continually increasing hardware cost.
Existing positioning algorithms involve a huge amount of computation, are time-consuming, and place high demands on hardware, which increases the cost of machine vision applications.
Disclosure of Invention
In view of the above, it is necessary to provide a method, an apparatus, a computer device, and a storage medium capable of improving the efficiency of calculating the position of a target object.
A method of target object localization, the method comprising:
establishing an edge reference data table according to a template image, wherein the edge reference data table comprises an included angle of gradient vectors, a reference included angle corresponding to the included angle of the gradient vectors, a distance between a first edge point and a reference point, and a distance between the first edge point and a second edge point; the connecting line of the first edge point and the second edge point is perpendicular to the gradient vector of the first edge point;
querying the edge reference data table according to the included angle of the gradient vectors of a first target edge point and a second target edge point of a target image to obtain the reference included angle, the distance between the first edge point and the second edge point, and the distance between the first edge point and the reference point; wherein the connecting line of the first target edge point and the second target edge point is perpendicular to the gradient vector of the first target edge point;
calculating a voting scaling value according to the distance between the first edge point and the second edge point and the distance between the first target edge point and the second target edge point;
calculating the position of a preselected target point according to the voting scaling value, the reference included angle, the distance between the first edge point and the reference point, and the angle of the gradient vector of the first target edge point;
and calculating the positions of preselected target points for all edge points of the target image, and taking the position of the preselected target point with the highest score as the position of the target object.
In one embodiment, the establishing an edge reference data table according to a template image, wherein the edge reference data table comprises an included angle of gradient vectors, a reference included angle corresponding to the included angle of the gradient vectors, a distance between a first edge point and a reference point, and a distance between the first edge point and a second edge point, comprises: acquiring the position of a first edge point and the gradient vector of the first edge point according to the template image, the first edge point being any edge point of the template image; calculating the position of a second edge point and the gradient vector of the second edge point according to the position of the first edge point and the gradient vector of the first edge point, the connecting line of the first edge point and the second edge point being perpendicular to the gradient vector of the first edge point; calculating the included angle of the gradient vectors according to the gradient vector of the first edge point and the gradient vector of the second edge point; calculating the distance between the first edge point and the second edge point according to the position of the first edge point and the position of the second edge point; acquiring the position of a reference point in the template image, the reference point being any point selected from the interior of the template image; calculating the distance between the first edge point and the reference point according to the position of the first edge point and the position of the reference point; and calculating the reference included angle according to the position of the first edge point, the position of the reference point and the gradient vector of the first edge point.
In one embodiment, the calculating a voting scaling value according to the distance between the first edge point and the second edge point and the distance between the first target edge point and the second target edge point includes: calculating the distance between the first target edge point and the second target edge point according to the position of the first target edge point and the position of the second target edge point; and calculating the ratio of the distance between the first edge point and the second edge point to the distance between the first target edge point and the second target edge point to obtain a voting scaling value.
In one embodiment, before the querying the edge reference data table according to the included angle of the gradient vectors of a first target edge point and a second target edge point of a target image to obtain the reference included angle, the distance between the first edge point and the second edge point, and the distance between the first edge point and the reference point, the method further comprises: acquiring a first target edge point and a second target edge point of the target image, and the gradient vector of the first target edge point and the gradient vector of the second target edge point; wherein the connecting line of the first target edge point and the second target edge point is perpendicular to the gradient vector of the first target edge point.
In one embodiment, the calculating the position of the preselected target point according to the voting scaling value, the reference included angle, the distance between the first edge point and the reference point, and the angle of the gradient vector of the first target edge point comprises: scaling the distance between the first edge point and the reference point according to the voting scaling value to obtain the distance between the preselected target point and the first target edge point; and calculating the position of the preselected target point according to the distance between the preselected target point and the first target edge point, the reference included angle, the angle of the gradient vector of the first target edge point, and the included angle of the gradient vectors of the first target edge point and the second target edge point.
In one embodiment, the calculating the position of the preselected target point according to the distance between the preselected target point and the first target edge point, the reference included angle, the angle of the gradient vector of the first target edge point, and the included angle of the gradient vector of the first target edge point and the second target edge point includes: calculating a direction angle of a connecting line of a preselected target point and the first target edge point according to the angle of the gradient vector of the first target edge point and the included angle of the gradient vector of the first target edge point and the gradient vector of the second target edge point; and calculating the position of the preselected target point according to the direction angle and the distance between the preselected target point and the first target edge point.
In one embodiment, the calculating the positions of the preselected target points for all the edge points of the target image, and using the position of the preselected target point with the highest score as the position of the target object, includes: repeating the step of calculating the positions of preselected target points for all the edge points of the target image, and adding 1 to the score of the preselected target points when the preselected target points are obtained by calculating the edge points of the target image; and taking the position of the preselected target point with the highest score as the position of the target object.
An object positioning device, the device comprising:
the reference data table establishing module is used for establishing an edge reference data table according to the template image, wherein the edge reference data table comprises an included angle of gradient vectors, a reference included angle corresponding to the included angle of the gradient vectors, a distance between a first edge point and a reference point, and a distance between the first edge point and a second edge point; the connecting line of the first edge point and the second edge point is perpendicular to the gradient vector of the first edge point;
the query module is used for querying the edge reference data table according to the included angle of the gradient vectors of the first target edge point and the second target edge point of the target image to obtain the reference included angle, the distance between the first edge point and the second edge point, and the distance between the first edge point and the reference point; wherein the connecting line of the first target edge point and the second target edge point is perpendicular to the gradient vector of the first target edge point;
the voting scaling value calculation module is used for calculating a voting scaling value according to the distance between the first edge point and the second edge point and the distance between the first target edge point and the second target edge point;
the preselected target point position calculation module is used for calculating the position of the preselected target point according to the voting scaling value, the reference included angle, the distance between the first edge point and the reference point, and the angle of the gradient vector of the first target edge point;
and the target positioning module is used for calculating the positions of the preselected target points of all edge points of the target image and taking the position of the preselected target point with the highest score as the position of the target object.
A computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
establishing an edge reference data table according to the template image, wherein the edge reference data table comprises an included angle of gradient vectors, a reference included angle corresponding to the included angle of the gradient vectors, a distance between a first edge point and a reference point, and a distance between the first edge point and a second edge point; the connecting line of the first edge point and the second edge point is perpendicular to the gradient vector of the first edge point;
querying the edge reference data table according to the included angle of the gradient vectors of the first target edge point and the second target edge point of the target image to obtain the reference included angle, the distance between the first edge point and the second edge point, and the distance between the first edge point and the reference point; wherein the connecting line of the first target edge point and the second target edge point is perpendicular to the gradient vector of the first target edge point;
calculating a voting scaling value according to the distance between the first edge point and the second edge point and the distance between the first target edge point and the second target edge point;
calculating the position of a preselected target point according to the voting scaling value, the reference included angle, the distance between the first edge point and the reference point, and the angle of the gradient vector of the first target edge point;
and calculating the positions of preselected target points for all edge points of the target image, and taking the position of the preselected target point with the highest score as the position of the target object.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
establishing an edge reference data table according to the template image, wherein the edge reference data table comprises an included angle of gradient vectors, a reference included angle corresponding to the included angle of the gradient vectors, a distance between a first edge point and a reference point, and a distance between the first edge point and a second edge point; the connecting line of the first edge point and the second edge point is perpendicular to the gradient vector of the first edge point;
querying the edge reference data table according to the included angle of the gradient vectors of the first target edge point and the second target edge point of the target image to obtain the reference included angle, the distance between the first edge point and the second edge point, and the distance between the first edge point and the reference point; wherein the connecting line of the first target edge point and the second target edge point is perpendicular to the gradient vector of the first target edge point;
calculating a voting scaling value according to the distance between the first edge point and the second edge point and the distance between the first target edge point and the second target edge point;
calculating the position of a preselected target point according to the voting scaling value, the reference included angle, the distance between the first edge point and the reference point, and the angle of the gradient vector of the first target edge point;
and calculating the positions of preselected target points for all edge points of the target image, and taking the position of the preselected target point with the highest score as the position of the target object.
According to the target object positioning method and apparatus, the computer equipment and the storage medium, the voting scaling value is calculated for the online target image using the edge reference data table built offline, so that redundant online calculation of the reference included angle, the distance between the edge point and the reference point, and the distance between edge points is avoided, improving the efficiency of calculating the position of the target object. Meanwhile, because the edge reference data table is queried through the voting scaling value and the included angle of the gradient vectors, scaling invariance and rotation invariance are ensured in the calculation process. Furthermore, the position of the target object is calculated from the edge points of the template image and the target image rather than by traversing the whole image at a fixed step, which reduces the amount of calculation and improves calculation efficiency.
Drawings
FIG. 1 is a schematic flow chart diagram of a method for locating a target object in one embodiment;
FIG. 2 is a flowchart illustrating the steps of creating an edge reference data table from a template image according to one embodiment;
FIG. 3 is a diagram of a template image in one embodiment;
FIG. 4 is an edge reference data representation in one embodiment;
FIG. 5 is a block diagram of a target locating device in one embodiment;
FIG. 6 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In one embodiment, as shown in fig. 1, there is provided a target object positioning method, comprising the steps of:
s110, establishing an edge reference data table according to the template image, wherein the edge reference data table comprises an included angle of a gradient vector, a reference included angle corresponding to the included angle of the gradient vector, a distance between a first edge point and a reference point and a distance between the first edge point and a second edge point; and the connecting line of the first edge point and the second edge point is perpendicular to the gradient vector of the first edge point.
The positions of the edge points of the template image are known (they can be obtained, for example, by manual calibration), and the gradient vector of each edge point is calculated from the edge-point positions of the template image. The edge reference data table is built from the included angle of the gradient vectors, the reference included angle corresponding to that included angle, the distance between the first edge point and the reference point, and the distance between the first edge point and the second edge point. The first edge point can be any edge point on the template image; the second edge point is the point at which the straight line through the first edge point, perpendicular to the gradient vector of the first edge point, intersects the template image edge; the included angle of the gradient vectors is the angle between the gradient vector of the first edge point and the gradient vector of the second edge point; the reference point can be any point inside the template image, optionally its center of gravity; and the reference included angle is the angle between the gradient vector of the first edge point and the line connecting the first edge point to the reference point. For example, as shown in FIG. 3, the gray area is the template image: the first edge point is C(Xc, Yc) with gradient vector Vck; the second edge point is D(Xd, Yd) with gradient vector Vdk; CD is perpendicular to the gradient vector Vck of the first edge point; the reference point is O(Xo, Yo); the included angle between the gradient vectors of the first and second edge points is βk; the reference included angle is αck; the distance between the first edge point and the reference point is L1k; and the distance between the first edge point and the second edge point is L2k.
Optionally, the included angle of the gradient vectors ranges from 0° to 359°. As shown in FIG. 4, each included angle i corresponds to a reference included angle αck(i), a distance L1k(i) between the first edge point and the reference point, and a distance L2k(i) between the first edge point and the second edge point. Of course, the range of the included angle of the gradient vectors can be set as required, for example 0° to 180°.
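As a sketch of how such a table might be stored in practice (an assumption on our part; the patent does not prescribe a data structure), the quantized included angle i can index a dictionary of (αck, L1k, L2k) rows, with a list per angle so that several template edge pairs can share one bin. All names here are illustrative:

```python
import math

def angle_deg(vx, vy):
    """Direction of a vector in degrees, normalized to [0, 360)."""
    return math.degrees(math.atan2(vy, vx)) % 360.0

def build_edge_reference_table(rows):
    """rows: iterable of (beta_k, alpha_ck, L1k, L2k) tuples precomputed from
    the template image; beta_k is the included angle (degrees) between the
    gradient vectors of the first and second edge points."""
    table = {}
    for beta, alpha, l1, l2 in rows:
        i = int(round(beta)) % 360  # quantize into the 0..359 degree bins
        table.setdefault(i, []).append((alpha, l1, l2))
    return table
```

Quantizing βk to integer degrees matches the 0°-359° table of FIG. 4; a coarser or finer binning is equally possible.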
S120, querying the edge reference data table according to the included angle of the gradient vectors of the first target edge point and the second target edge point of the target image to obtain the reference included angle, the distance between the first edge point and the second edge point, and the distance between the first edge point and the reference point; the connecting line of the first target edge point and the second target edge point is perpendicular to the gradient vector of the first target edge point.
The edge reference data table comprises the included angle of the gradient vectors, the reference included angle corresponding to that included angle, the distance between the first edge point and the reference point, and the distance between the first edge point and the second edge point. As shown in FIG. 4, when the included angle between the gradient vector of the first target edge point and that of the second target edge point is i, the table is queried for the row whose βk equals i, yielding the reference included angle αck(i), the distance L1k(i) between the first edge point and the reference point, and the distance L2k(i) between the first edge point and the second edge point.
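Assuming the table is stored as a mapping from the integer included angle i to rows of (αck(i), L1k(i), L2k(i)), an illustrative layout rather than one fixed by the patent, the query in S120 reduces to a dictionary access:

```python
def query_edge_reference_table(table, i):
    """Return all (alpha_ck, L1k, L2k) rows whose beta_k equals the quantized
    included angle i; an empty list means the target image produced an angle
    that no template edge pair generated."""
    return table.get(int(round(i)) % 360, [])

# Illustrative usage with a hand-built one-row table:
table = {90: [(30.0, 10.0, 20.0)]}
alpha, L1, L2 = query_edge_reference_table(table, 90.2)[0]
```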
S130, calculating a voting scaling value according to the distance between the first edge point and the second edge point and the distance between the first target edge point and the second target edge point.
The voting scaling value relates the distance between the first edge point and the second edge point to the distance between the first target edge point and the second target edge point. For example, for each edge point C'(xc', yc') on the target image, according to its gradient vector V'ck, the point D' at which the straight line through C', perpendicular to the gradient, intersects the edge of the target image is calculated, and the gradient vector V'dk of D' is obtained; the included angle β'k between the two vectors V'ck and V'dk and the length Lc'd' of the segment C'D' are then calculated, and the voting scaling value is computed as scale = Lc'd' / L2k. Voting scaling values are calculated for all edge points of the target image, and the maximum scale value is taken as the voting scaling value.
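Under the formula for S130, the scale computation is a single division; the sketch below assumes pixel coordinates for C' and D' and the template distance L2k looked up from the edge reference data table:

```python
import math

def voting_scale(c_prime, d_prime, L2k):
    """scale = L_{c'd'} / L2k: the length of the target segment C'D' divided
    by the template distance L2k from the edge reference data table."""
    Lcd = math.hypot(d_prime[0] - c_prime[0], d_prime[1] - c_prime[1])
    return Lcd / L2k
```

A target twice the size of the template yields scale = 2, so distances looked up from the table are doubled before voting.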
S140, calculating the position of the preselected target point according to the voting scaling value, the reference included angle, the distance between the first edge point and the reference point, and the angle of the gradient vector of the first target edge point.
The distance between the first edge point and the reference point is scaled by the voting scaling value to obtain the distance between the preselected target point and the first target edge point; the direction of the preselected target point relative to the first target edge point is then obtained from the reference included angle and the angle of the gradient vector of the first target edge point, so that the preselected target point is determined.
S150, calculating the positions of the preselected target points of all the edge points of the target image, and taking the position of the preselected target point with the highest score as the position of the target object.
All edge points of the target image have corresponding preselected target points, and the preselected target points calculated from some edge points coincide. The number of times a preselected target point is calculated is used as its score, and the position of the preselected target point with the highest score, i.e., the one supported by the most edge points, is taken as the position of the target object. For example, suppose the target image has four edge points A1, A2, A3 and A4 in total: edge point A1 yields preselected target point B1, edge point A2 also yields B1, and edge point A3 also yields B1, so the score of B1 is 3; edge point A4 yields preselected target point B2, so the score of B2 is 1. The position of preselected target point B1 is therefore taken as the position of the target object.
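The scoring in the example above can be sketched as a vote accumulator. Quantizing positions to grid cells is our assumption (the patent does not state how coincident points are matched); it makes nearly identical preselected points share a score:

```python
from collections import Counter

def locate_target(preselected_points, grid=1.0):
    """One vote per preselected target point; the grid cell with the highest
    score is returned as the target position together with that score."""
    votes = Counter()
    for x, y in preselected_points:
        votes[(round(x / grid), round(y / grid))] += 1
    (cx, cy), score = votes.most_common(1)[0]
    return (cx * grid, cy * grid), score
```

With the four edge points of the example, three votes land on B1 and one on B2, so B1's position is returned with score 3.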
In the target object positioning method, the voting scaling value is calculated for the online target image using the edge reference data table built offline, so that redundant online calculation of the reference included angle, the distance between the edge point and the reference point, and the distance between edge points is avoided, improving the efficiency of calculating the position of the target object. Meanwhile, because the edge reference data table is queried through the voting scaling value and the included angle of the gradient vectors, scaling invariance and rotation invariance are ensured in the calculation process. Furthermore, the position of the target object is calculated from the edge points of the template image and the target image rather than by traversing the whole image at a fixed step, which reduces the amount of calculation and improves calculation efficiency.
In one embodiment, as shown in fig. 2, step S110 includes: s111, acquiring the position of a first edge point and a gradient vector of the first edge point according to a template image; the first edge point is any edge point of the template image; s112, calculating the position of a second edge point and the gradient vector of the second edge point according to the position of the first edge point and the gradient vector of the first edge point; the connecting line of the first edge point and the second edge point is vertical to the gradient vector of the first edge point; s113, calculating an included angle of the gradient vector according to the gradient vector of the first edge point and the gradient vector of the second edge point; s114, calculating the distance between the first edge point and the second edge point according to the position of the first edge point and the position of the second edge point; s115, acquiring the position of a reference point in the template image; the reference point is any one point selected from the interior of the template image; s116, calculating the distance between the first edge point and the reference point according to the position of the first edge point and the position of the reference point; and S117, calculating a reference included angle according to the position of the first edge point, the position of the reference point and the gradient vector of the first edge point.
The position of the first edge point and its gradient vector can be obtained by the Canny operator; the first edge point is any edge point selected from the template image. The second edge point is the point at which the straight line through the first edge point, perpendicular to the gradient vector of the first edge point, intersects the template image edge; the second edge point and its gradient vector are obtained by searching for the point whose connecting line with the first edge point is perpendicular to the gradient vector of the first edge point. The reference point is any point selected inside the template image; for ease of calculation, the reference point may be taken as the center of gravity of the template image. The length and direction of the line connecting the first edge point and the reference point can be calculated from their positions, and the reference included angle equals the angle between that connecting line and the gradient vector of the first edge point.
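The search for the second edge point described above can be sketched as a brute-force scan over the template's edge points, keeping the farthest point whose connecting line with C is perpendicular to C's gradient (the perpendicular line through C re-intersects the contour on the far side). The tolerance and the helper name are illustrative assumptions:

```python
import math

def find_second_edge_point(c, grad_c, edge_points, tol=0.5):
    """Return the edge point D such that CD is (approximately) perpendicular
    to grad_c, i.e. the projection of CD onto grad_c is within tol pixels."""
    gx, gy = grad_c
    gn = math.hypot(gx, gy)
    best, best_dist = None, 0.0
    for p in edge_points:
        dx, dy = p[0] - c[0], p[1] - c[1]
        dist = math.hypot(dx, dy)
        if dist == 0.0 or gn == 0.0:
            continue  # skip C itself and degenerate gradients
        # projection of CD onto the unit gradient; near zero means perpendicular
        if abs(dx * gx + dy * gy) / gn <= tol and dist > best_dist:
            best, best_dist = p, dist
    return best
```

In practice the positions and gradients would come from an edge detector such as Canny, as the text notes.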
In one embodiment, step S130 includes: calculating the distance between the first target edge point and the second target edge point according to the position of the first target edge point and the position of the second target edge point; and calculating the ratio of the distance between the first edge point and the second edge point to the distance between the first target edge point and the second target edge point to obtain a voting scaling value.
In one embodiment, step S120 is preceded by: acquiring a first target edge point and a second target edge point of the target image, and the gradient vector of the first target edge point and the gradient vector of the second target edge point; the connecting line of the first target edge point and the second target edge point is perpendicular to the gradient vector of the first target edge point.
In one embodiment, step S140 includes: scaling the distance between the first edge point and the reference point according to the voting scaling value to obtain the distance between a preselected target point and the first target edge point; and calculating the position of the preselected target point according to the distance between the preselected target point and the first target edge point, the reference included angle, and the angle of the gradient vector of the first target edge point.
In one embodiment, the calculating the position of the preselected target point according to the distance between the preselected target point and the first target edge point, the reference included angle, and the angle of the gradient vector of the first target edge point includes: calculating a direction angle of a connecting line of a preselected target point and the first target edge point according to the angle of the gradient vector of the first target edge point and the reference included angle; and calculating the position of the preselected target point according to the direction angle and the distance between the preselected target point and the first target edge point.
Specifically, given the first target edge point $C'(x_{c'}, y_{c'})$, let $\theta'_{ck}$ denote the angle of the gradient vector of the first target edge point and $O'(x_{o'}, y_{o'})$ the preselected target point. The direction angle of $C'O'$ is $\theta_{c'o'} = \theta'_{ck} + \alpha_{ck}$, where $\alpha_{ck}$ is the reference included angle. With $d'$ denoting the distance between the preselected target point and the first target edge point obtained in the previous step, the position of the preselected target point is:

$$x_{o'} = x_{c'} + d' \cos\theta_{c'o'}, \qquad y_{o'} = y_{c'} + d' \sin\theta_{c'o'}$$
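The position calculation above translates directly into code. A minimal sketch under the stated definitions; the function name is an assumption, and `d` is assumed to be the distance |C'O'| already obtained by scaling the template distance with the voting scaling value (step S140):

```python
import math

def preselected_target_point(xc, yc, theta_ck, alpha_ck, d):
    """Position of the preselected target point O' from the first target
    edge point C'(xc, yc): the direction angle of C'O' is
    theta_ck + alpha_ck and the distance |C'O'| is d."""
    theta = theta_ck + alpha_ck
    return xc + d * math.cos(theta), yc + d * math.sin(theta)
```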
In one embodiment, step S150 includes: repeating the step of calculating the position of a preselected target point for all edge points of the target image, and adding 1 to the score of a preselected target point each time it is obtained from an edge point of the target image; and taking the position of the preselected target point with the highest score as the position of the target object. The initial score of each preselected target point is 0.
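The scoring procedure just described is an accumulator vote, as in a generalized Hough transform. A minimal sketch, assuming the candidate positions have already been computed for every edge-point pair; the grid quantization of the accumulator (so that nearby votes fall into the same cell) is an assumption of this sketch, not stated in the text:

```python
from collections import Counter

def locate_target(candidates, grid=1.0):
    """Accumulate votes for preselected target points and return the cell
    with the highest score (step S150).  `candidates` is an iterable of
    (x, y) preselected target points, one per processed edge-point pair;
    positions are quantized to `grid`-sized accumulator cells so that
    nearby votes coincide."""
    acc = Counter()
    for x, y in candidates:
        acc[(round(x / grid), round(y / grid))] += 1
    (cx, cy), score = acc.most_common(1)[0]
    return (cx * grid, cy * grid), score
```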
In the target positioning method of this embodiment, in scenes where the target object in the target image is small and the search range is large, this way of calculating the position of the target object greatly reduces the amount of computation and improves computational efficiency compared with a global search.
It should be understood that, although the steps in the flowcharts of figs. 1-2 are displayed in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated otherwise herein, the steps are not strictly limited in order and may be performed in other orders. Moreover, at least some of the steps in figs. 1-2 may include multiple sub-steps or stages; these are not necessarily performed at the same moment but may be performed at different moments, and they are not necessarily performed sequentially but may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 5, there is provided an object positioning device including: a reference data table creation module 210, a query module 220, a vote scaling value calculation module 230, a preselected target point location calculation module 240, and a deletion module 250, wherein:
a reference data table establishing module 210, configured to establish an edge reference data table according to a template image, where the edge reference data table includes an included angle of a gradient vector, a reference included angle corresponding to the included angle of the gradient vector, a distance between a first edge point and a reference point, and a distance between the first edge point and a second edge point; and the connecting line of the first edge point and the second edge point is perpendicular to the gradient vector of the first edge point.
The query module 220 is configured to query the edge reference data table according to the included angle between the gradient vectors of a first target edge point and a second target edge point of a target image, to obtain the reference included angle, the distance between the first edge point and the second edge point, and the distance between the first edge point and the reference point; the connecting line of the first target edge point and the second target edge point is perpendicular to the gradient vector of the first target edge point.
The voting scaling value calculation module 230 is configured to calculate a voting scaling value according to the distance between the first edge point and the second edge point and the distance between the first target edge point and the second target edge point.
The preselected target point position calculation module 240 is configured to calculate the position of a preselected target point according to the voting scaling value, the reference included angle, the distance between the first edge point and the reference point, and the angle of the gradient vector of the first target edge point.
The deletion module 250 is configured to perform the position calculation of preselected target points for all edge points of the target image and to take the position of the preselected target point with the highest score as the position of the target object.
In one embodiment, the reference data table establishing module 210 includes: a first edge point acquisition unit, configured to acquire the position of a first edge point and the gradient vector of the first edge point according to a template image, the first edge point being any edge point of the template image; a second edge point calculation unit, configured to calculate the position of a second edge point and the gradient vector of the second edge point according to the position of the first edge point and the gradient vector of the first edge point, the connecting line of the first edge point and the second edge point being perpendicular to the gradient vector of the first edge point; a gradient vector included angle calculation unit, configured to calculate the included angle of the gradient vectors according to the gradient vector of the first edge point and the gradient vector of the second edge point; a first distance calculation unit, configured to calculate the distance between the first edge point and the second edge point according to the position of the first edge point and the position of the second edge point; a reference point acquisition unit, configured to acquire the position of a reference point in the template image, the reference point being any point selected from the interior of the template image; a second distance calculation unit, configured to calculate the distance between the first edge point and the reference point according to the position of the first edge point and the position of the reference point; and a reference included angle calculation unit, configured to calculate a reference included angle according to the position of the first edge point, the position of the reference point, and the gradient vector of the first edge point.
In one embodiment, the vote scaling value calculation module 230 includes: a first target distance calculating unit, configured to calculate a distance between the first target edge point and the second target edge point according to the position of the first target edge point and the position of the second target edge point; and the voting scaling value calculating unit is used for calculating the ratio of the distance between the first edge point and the second edge point to the distance between the first target edge point and the second target edge point to obtain a voting scaling value.
In one embodiment, the target positioning device further comprises: a target edge point obtaining module, configured to obtain a first target edge point and a second target edge point of the target image, and the gradient vector of the first target edge point and the gradient vector of the second target edge point; the connecting line of the first target edge point and the second target edge point is perpendicular to the gradient vector of the first target edge point.
In one embodiment, the location calculation module 240 for the preselected target point comprises: a second target distance calculation unit, configured to scale the distance between the first edge point and the reference point according to the voting scaling value to obtain the distance between a preselected target point and the first target edge point; and a position calculation unit of the preselected target point, configured to calculate the position of the preselected target point according to the distance between the preselected target point and the first target edge point, the reference included angle, and the angle of the gradient vector of the first target edge point.
In one embodiment, the location calculation unit of the preselected target point comprises: the direction angle calculating subunit is used for calculating a direction angle of a connecting line between a preselected target point and the first target edge point according to the angle of the gradient vector of the first target edge point and the reference included angle; and the position calculating subunit is used for calculating the position of the preselected target point according to the direction angle and the distance between the preselected target point and the first target edge point.
In one embodiment, the deleting module 250 includes: a scoring unit for repeating the step of calculating the positions of preselected target points for all edge points of the target image, and adding 1 to the score of the preselected target points when the edge points of the target image are calculated to obtain the preselected target points; and the deleting unit is used for taking the position of the preselected target point with the highest score as the position of the target object.
For the specific definition of the target positioning device, reference may be made to the definition of the target positioning method above, which is not repeated here. Each module in the target positioning device described above may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded in, or independent of, a processor in the computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 6. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing data of the edge reference data table. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a method of object localization.
Those skilled in the art will appreciate that the architecture shown in fig. 6 is merely a block diagram of part of the structure related to the present solution and does not limit the computer devices to which the present solution applies; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
establishing an edge reference data table according to the template image, wherein the edge reference data table comprises an included angle of a gradient vector, a reference included angle corresponding to the included angle of the gradient vector, a distance between a first edge point and a reference point, and a distance between the first edge point and a second edge point; the connecting line of the first edge point and the second edge point is perpendicular to the gradient vector of the first edge point;
inquiring the edge reference data table according to the included angle of the gradient vectors of a first target edge point and a second target edge point of the target image to obtain the reference included angle, the distance between the first edge point and the second edge point, and the distance between the first edge point and the reference point; wherein the connecting line of the first target edge point and the second target edge point is perpendicular to the gradient vector of the first target edge point;
calculating a voting scaling value according to the distance between the first edge point and the second edge point and the distance between the first target edge point and the second target edge point;
calculating the position of a preselected target point according to the voting scaling value, the reference included angle, the distance between the first edge point and the reference point, and the angle of the gradient vector of the first target edge point;
and calculating the positions of preselected target points of all edge points of the target image, and taking the position of the preselected target point with the highest score as the position of the target object.
In one embodiment, the processor, when executing the computer program, further performs the steps of: acquiring a first target edge point and a second target edge point of the target image, and the gradient vector of the first target edge point and the gradient vector of the second target edge point; the connecting line of the first target edge point and the second target edge point is perpendicular to the gradient vector of the first target edge point.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
establishing an edge reference data table according to the template image, wherein the edge reference data table comprises an included angle of a gradient vector, a reference included angle corresponding to the included angle of the gradient vector, a distance between a first edge point and a reference point, and a distance between the first edge point and a second edge point; the connecting line of the first edge point and the second edge point is perpendicular to the gradient vector of the first edge point;
inquiring the edge reference data table according to the included angle of the gradient vectors of a first target edge point and a second target edge point of the target image to obtain the reference included angle, the distance between the first edge point and the second edge point, and the distance between the first edge point and the reference point; wherein the connecting line of the first target edge point and the second target edge point is perpendicular to the gradient vector of the first target edge point;
calculating a voting scaling value according to the distance between the first edge point and the second edge point and the distance between the first target edge point and the second target edge point;
calculating the position of a preselected target point according to the voting scaling value, the reference included angle, the distance between the first edge point and the reference point, and the angle of the gradient vector of the first target edge point;
and calculating the positions of preselected target points of all edge points of the target image, and taking the position of the preselected target point with the highest score as the position of the target object.
In one embodiment, the computer program, when executed by the processor, further performs the steps of: acquiring a first target edge point and a second target edge point of the target image, and the gradient vector of the first target edge point and the gradient vector of the second target edge point; the connecting line of the first target edge point and the second target edge point is perpendicular to the gradient vector of the first target edge point.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, or the like. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features of the above embodiments are described; however, as long as a combination of these technical features involves no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and while their description is specific and detailed, it should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, and these fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A method for locating an object, the method comprising:
establishing an edge reference data table according to the template image, wherein the edge reference data table comprises an included angle of a gradient vector, a reference included angle corresponding to the included angle of the gradient vector, a distance between a first edge point and a reference point, and a distance between the first edge point and a second edge point; the connecting line of the first edge point and the second edge point is perpendicular to the gradient vector of the first edge point; the included angle of the gradient vector is the included angle between the gradient vector of the first edge point and the gradient vector of the second edge point;
inquiring the edge reference data table according to the included angle of the gradient vectors of a first target edge point and a second target edge point of the target image to obtain the reference included angle, the distance between the first edge point and the second edge point, and the distance between the first edge point and the reference point; wherein the connecting line of the first target edge point and the second target edge point is perpendicular to the gradient vector of the first target edge point;
calculating a voting scaling value according to the distance between the first edge point and the second edge point and the distance between the first target edge point and the second target edge point; wherein the voting scaling value is equal to a ratio of a distance between the first edge point and the second edge point to a distance between the first target edge point and the second target edge point;
calculating the position of a preselected target point according to the voting scaling value, the reference included angle, the distance between the first edge point and the reference point, and the angle of the gradient vector of the first target edge point;
calculating the positions of preselected target points of all edge points of the target image, and taking the position of the preselected target point with the highest score as the position of the target object; and the preselected target point with the highest score is the point with the most corresponding edge points.
2. The method according to claim 1, wherein the building an edge reference data table according to the template image, the edge reference data table including an included angle of a gradient vector, a reference included angle corresponding to the included angle of the gradient vector, a distance between a first edge point and a reference point, and a distance between the first edge point and a second edge point comprises:
acquiring the position of a first edge point and a gradient vector of the first edge point according to a template image; the first edge point is any edge point of the template image;
calculating the position of a second edge point and the gradient vector of the second edge point according to the position of the first edge point and the gradient vector of the first edge point; the connecting line of the first edge point and the second edge point is perpendicular to the gradient vector of the first edge point;
calculating an included angle of the gradient vector according to the gradient vector of the first edge point and the gradient vector of the second edge point;
calculating the distance between the first edge point and the second edge point according to the position of the first edge point and the position of the second edge point;
acquiring the position of a reference point in the template image; the reference point is any one point selected from the interior of the template image;
calculating the distance between the first edge point and the reference point according to the position of the first edge point and the position of the reference point;
and calculating a reference included angle according to the position of the first edge point, the position of the reference point and the gradient vector of the first edge point.
3. The method of claim 2, wherein calculating a vote scaling value based on the distance between the first edge point and the second edge point and the distance between the first target edge point and the second target edge point comprises:
calculating the distance between the first target edge point and the second target edge point according to the position of the first target edge point and the position of the second target edge point;
and calculating the ratio of the distance between the first edge point and the second edge point to the distance between the first target edge point and the second target edge point to obtain a voting scaling value.
4. The method according to claim 1, wherein before querying the edge reference data table according to an angle between gradient vectors of a first target edge point and a second target edge point of a target image to obtain the reference angle, a distance between the first edge point and the second edge point, and a distance between the first edge point and a reference point, the method comprises:
acquiring a first target edge point and a second target edge point of the target image, and the gradient vector of the first target edge point and the gradient vector of the second target edge point; the connecting line of the first target edge point and the second target edge point is perpendicular to the gradient vector of the first target edge point.
5. The method of claim 1, wherein said calculating the location of a preselected target point from said vote scaling value, said reference angle, the distance of said first edge point from a reference point, and the angle of the gradient vector of said first target edge point comprises:
scaling the distance between the first edge point and the reference point according to the voting scaling value to obtain the distance between a preselected target point and the first target edge point;
and calculating the position of the preselected target point according to the distance between the preselected target point and the first target edge point, the reference included angle and the angle of the gradient vector of the first target edge point.
6. The method of claim 5, wherein calculating the position of the preselected target point based on the distance of the preselected target point from the first target edge point, the reference angle, and the angle of the gradient vector of the first target edge point comprises:
calculating a direction angle of a connecting line of a preselected target point and the first target edge point according to the angle of the gradient vector of the first target edge point and the reference included angle;
and calculating the position of the preselected target point according to the direction angle and the distance between the preselected target point and the first target edge point.
7. The method of claim 1, wherein the calculating the positions of the preselected target points for all edge points of the target image, and the position of the preselected target point with the highest score as the position of the target object comprises:
repeating the step of calculating the positions of preselected target points for all the edge points of the target image, and adding 1 to the score of the preselected target points when the preselected target points are obtained by calculating the edge points of the target image;
and taking the position of the preselected target point with the highest score as the position of the target object.
8. An object positioning device, the device comprising:
the reference data table establishing module is used for establishing an edge reference data table according to the template image, wherein the edge reference data table comprises an included angle of a gradient vector, a reference included angle corresponding to the included angle of the gradient vector, a distance between a first edge point and a reference point, and a distance between the first edge point and a second edge point; the connecting line of the first edge point and the second edge point is perpendicular to the gradient vector of the first edge point; the included angle of the gradient vector is the included angle between the gradient vector of the first edge point and the gradient vector of the second edge point;
the query module is used for querying the edge reference data table according to the included angle of the gradient vectors of a first target edge point and a second target edge point of the target image to obtain the reference included angle, the distance between the first edge point and the second edge point, and the distance between the first edge point and the reference point; wherein the connecting line of the first target edge point and the second target edge point is perpendicular to the gradient vector of the first target edge point;
the voting scaling value calculation module is used for calculating a voting scaling value according to the distance between the first edge point and the second edge point and the distance between the first target edge point and the second target edge point; wherein the voting scaling value is equal to a ratio of a distance between the first edge point and the second edge point to a distance between the first target edge point and the second target edge point;
a position calculation module of a preselected target point, configured to calculate the position of the preselected target point according to the voting scaling value, the reference included angle, the distance between the first edge point and the reference point, and the angle of the gradient vector of the first target edge point;
the deletion module is used for calculating the positions of preselected target points of all edge points of the target image and taking the position of the preselected target point with the highest score as the position of the target object; and the preselected target point with the highest score is the point with the most corresponding edge points.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN202010205449.7A 2020-03-23 2020-03-23 Target positioning method and device, computer equipment and storage medium Active CN111079723B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010205449.7A CN111079723B (en) 2020-03-23 2020-03-23 Target positioning method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010205449.7A CN111079723B (en) 2020-03-23 2020-03-23 Target positioning method and device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111079723A CN111079723A (en) 2020-04-28
CN111079723B true CN111079723B (en) 2020-06-30

Family

ID=70324665

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010205449.7A Active CN111079723B (en) 2020-03-23 2020-03-23 Target positioning method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111079723B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008003800A (en) * 2006-06-21 2008-01-10 Toyota Motor Corp Image processing apparatus and image processing program
CN106485284A (en) * 2016-10-19 2017-03-08 哈尔滨工业大学 A kind of element localization method based on template matching

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109214256A (en) * 2017-07-07 2019-01-15 深圳市保千里电子有限公司 A kind of communication chart object detection method, device and vehicle

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008003800A (en) * 2006-06-21 2008-01-10 Toyota Motor Corp Image processing apparatus and image processing program
CN106485284A (en) * 2016-10-19 2017-03-08 哈尔滨工业大学 A kind of element localization method based on template matching

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Tilted license plate location based on fast generalized Hough transform; Lin Shuqing et al.; Computer Development & Applications (电脑开发与应用); 2013-12-31; Vol. 26, No. 3; pp. 75-78 *

Also Published As

Publication number Publication date
CN111079723A (en) 2020-04-28

Similar Documents

Publication Publication Date Title
CN109727288B (en) System and method for monocular simultaneous localization and mapping
CN112179330B (en) Pose determination method and device of mobile equipment
CN108955718B (en) Visual odometer and positioning method thereof, robot and storage medium
WO2019119328A1 (en) Vision-based positioning method and aerial vehicle
CN111754579B (en) Method and device for determining external parameters of multi-view camera
WO2018120040A1 (en) Obstacle detection method and device
CN109186618B (en) Map construction method and device, computer equipment and storage medium
CN111127524A (en) Method, system and device for tracking trajectory and reconstructing three-dimensional image
CN108648141B (en) Image splicing method and device
Das et al. Entropy based keyframe selection for multi-camera visual slam
Fanani et al. Keypoint trajectory estimation using propagation based tracking
CN113393524B (en) Target pose estimation method combining deep learning and contour point cloud reconstruction
CN109313809B (en) Image matching method, device and storage medium
WO2022062355A1 (en) Fusion positioning method and apparatus
CN111079723B (en) Target positioning method and device, computer equipment and storage medium
CN113034347A (en) Oblique photographic image processing method, device, processing equipment and storage medium
CN115861860B (en) Target tracking and positioning method and system for unmanned aerial vehicle
CN116630442A (en) Visual SLAM pose estimation precision evaluation method and device
CN112000109A (en) Position correction method for power inspection robot, power inspection robot and medium
CN115272248B (en) Intelligent detection method for fan gesture and electronic equipment
CN116662600A (en) Visual positioning method based on lightweight structured line map
CN114757834B (en) Panoramic image processing method and panoramic image processing device
CN111079535A (en) Human skeleton action recognition method and device and terminal
CN112419399A (en) Image ranging method, device, equipment and storage medium
CN110020577B (en) Face key point expansion calculation method, storage medium, electronic device and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant