CN105719271A - Method and apparatus for determination of target object - Google Patents


Info

Publication number
CN105719271A
CN105719271A (application CN201410736411.7A)
Authority
CN
China
Prior art keywords
point, coordinate, characteristic, feature point, collection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410736411.7A
Other languages
Chinese (zh)
Other versions
CN105719271B (en)
Inventor
郑杰 (Zheng Jie)
段思九 (Duan Sijiu)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba China Co Ltd
Original Assignee
Autonavi Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Autonavi Software Co Ltd
Priority to CN201410736411.7A
Publication of CN105719271A
Application granted
Publication of CN105719271B
Status: Active

Landscapes

  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method and apparatus for determining a target object, to solve the problem in the prior art that the position of a target object is determined inaccurately and inefficiently. The method comprises: extracting feature points from the panoramic street view image corresponding to each collection point; for every two adjacent collection points, matching the feature points of the panoramic street view images corresponding to the two adjacent collection points to obtain at least one feature point pair; for each feature point pair, determining, according to the relative longitude-latitude coordinates of the feature points in the pair and the longitude-latitude coordinates of the collection points of the panoramic street view images in which those feature points lie, whether the pair is a valid feature point pair, and, when it is, generating the three-dimensional coordinate point corresponding to the valid feature point pair; and clustering the generated three-dimensional coordinate points to obtain at least one cluster, and determining the position of the corresponding target object according to the three-dimensional coordinate points contained in each cluster.

Description

Method and apparatus for determining a target object
Technical field
The present invention relates to the technical field of image processing, and in particular to a method and apparatus for determining a target object.
Background technology
With the rapid development of science and technology, camera techniques are widely used in many fields, for example to monitor particular scenes and to identify target objects.
When the images captured by a camera are processed, it is generally necessary to determine the position of a target object from the captured images. Referring to Fig. 1, to locate a target object (for example a certain building), a street view image containing the target object usually has to be collected manually on site (for example by a street view collecting vehicle), and the position of the collection point recorded; afterwards, the position of the target object is estimated manually from the relative position of the target object in the street view image and the position of the collection point.
In general, a target object is not built right at the roadside but stands at some distance from the road, so the collection point at which the collecting vehicle captures the street view image is actually some distance away from the target object. Moreover, if the shooting angle and direction differ between captures, the relative position of the target object in the street view image also differs visually. Manually estimating the position of the target object from the position of the collection point and the relative position of the target object in the street view image therefore carries a large error, and, being a manual estimate, may introduce additional human error. In addition, the volume of street view image data collected by a street view collecting vehicle is large, so manual processing is inefficient and slow.
Summary of the invention
Embodiments of the present invention provide a method and apparatus for determining a target object, so as to solve the problem in the prior art that the position of a target object is determined inaccurately and inefficiently.
The specific technical solutions provided by the embodiments of the present invention are as follows:
A method for determining a target object, comprising:
obtaining the panoramic street view image corresponding to each collection point collected by a street view collecting vehicle along its route;
extracting feature points from the panoramic street view image corresponding to each collection point, and determining, from the position of each feature point in the panoramic street view image, the relative longitude-latitude coordinates of the feature point in that image;
for every two adjacent collection points, matching the feature points of the panoramic street view images corresponding to the two adjacent collection points to obtain at least one feature point pair;
for each feature point pair, determining, according to the relative longitude-latitude coordinates of the feature points in the pair and the longitude-latitude coordinates of the collection points of the panoramic street view images in which those feature points lie, whether the pair is a valid feature point pair, and, when it is determined to be valid, generating the three-dimensional coordinate point corresponding to the valid feature point pair;
clustering the generated three-dimensional coordinate points to obtain at least one cluster, wherein the three-dimensional coordinate points contained in each cluster describe the same target object; and
for each cluster, determining the position of the corresponding target object according to the three-dimensional coordinate points contained in that cluster.
An apparatus for determining a target object, comprising:
a street view image acquiring unit, configured to obtain the panoramic street view image corresponding to each collection point collected by a street view collecting vehicle along its route;
a feature point extraction unit, configured to extract feature points from the panoramic street view image corresponding to each collection point;
a determining unit, configured to determine, from the position in the panoramic street view image of each feature point extracted by the feature point extraction unit, the relative longitude-latitude coordinates of the feature point in that image;
a feature point pair acquiring unit, configured to, for every two adjacent collection points, match the feature points of the panoramic street view images corresponding to the two adjacent collection points to obtain at least one feature point pair;
a three-dimensional coordinate point generating unit, configured to, for each feature point pair obtained by the feature point pair acquiring unit, determine, according to the relative longitude-latitude coordinates (obtained by the determining unit) of the feature points in the pair and the longitude-latitude coordinates of the collection points of the panoramic street view images in which those feature points lie, whether the pair is a valid feature point pair, and, when it is determined to be valid, generate the three-dimensional coordinate point corresponding to the valid feature point pair;
a clustering unit, configured to cluster the three-dimensional coordinate points generated by the three-dimensional coordinate point generating unit to obtain at least one cluster, wherein the three-dimensional coordinate points contained in each cluster describe the same target object; and
a target object determining unit, configured to, for each cluster obtained by the clustering unit, determine the position of the corresponding target object according to the three-dimensional coordinate points contained in that cluster.
In the embodiments of the present invention, the panoramic street view image corresponding to each collection point collected by a street view collecting vehicle along its route is obtained; feature points are extracted from the panoramic street view image corresponding to each collection point, and the relative longitude-latitude coordinates of each feature point in the panoramic street view image are determined from its position in the image; for every two adjacent collection points, the feature points of the corresponding panoramic street view images are matched to obtain at least one feature point pair; for each feature point pair, whether the pair is valid is determined according to the relative longitude-latitude coordinates of its feature points and the longitude-latitude coordinates of the collection points of the images in which those feature points lie, and, when the pair is valid, the corresponding three-dimensional coordinate point is generated; the generated three-dimensional coordinate points are clustered to obtain at least one cluster; and, for each cluster, the position of the corresponding target object is determined according to the three-dimensional coordinate points contained in that cluster. With this technical solution, because the collection points are fairly dense along the collecting vehicle's route, the same target object is likely to appear in the panoramic street view images collected at adjacent collection points. After feature points are extracted from the image corresponding to each collection point, a valid feature point pair obtained by matching the feature points of the images of two adjacent collection points is very likely to be the same physical point on the same target object; the three-dimensional coordinate point generated from such a valid pair is therefore, to a great extent, exactly a shape point of that target object. After clustering, the three-dimensional coordinate points gathered into one cluster are, to a great extent, all shape points of the same target object, so the position of the target object obtained from the shape points that make it up is comparatively accurate. In addition, this solution determines the position of the target object automatically from the panoramic street view images of the collection points, without manually processing each panoramic street view image, which improves processing efficiency and speed.
Brief description of the drawings
Fig. 1 is a flow chart of determining a target object in an embodiment of the present invention;
Fig. 2 is a schematic diagram of a collected panoramic street view image in an embodiment of the present invention;
Fig. 3 is a schematic diagram of dividing a panoramic street view image into a grid in an embodiment of the present invention;
Fig. 4 is a schematic diagram of the collected panoramic street view image after feature points have been extracted with the SURF method and paired, in an embodiment of the present invention;
Fig. 5 is a flow chart of generating three-dimensional coordinate points in an embodiment of the present invention;
Fig. 6 is a diagram of the mathematical model for obtaining the three-dimensional coordinate points of a panoramic street view image in an embodiment of the present invention;
Fig. 7 is a schematic diagram of the three-dimensional coordinate points obtained from the street view images corresponding to two adjacent collection points in an embodiment of the present invention;
Fig. 8 is a three-dimensional street view image obtained in an embodiment of the present invention;
Fig. 9 is a schematic structural diagram of an apparatus for determining a target object in an embodiment of the present invention.
Detailed description of the invention
To solve the problem in the prior art that the position of a target object is determined inaccurately and inefficiently, in the embodiments of the present invention feature points are extracted from the panoramic street view image corresponding to each collection point collected by a street view collecting vehicle along its route. When the collection points are fairly dense, the panoramic street view images corresponding to every two adjacent collection points will contain the same target object; after the feature points of those two images are matched, the valid feature point pairs obtained will include pairs that represent the same physical point on the same target object. The three-dimensional coordinate point generated from such a valid pair is therefore, to a great extent, exactly a shape point of that target object; and after clustering, the three-dimensional coordinate points gathered into one cluster are, to a great extent, all shape points of the same target object, so the position of the target object obtained from those shape points is comparatively accurate. In addition, the embodiments of the present invention determine the position of the target object automatically from the panoramic street view images of the collection points, without manually processing each panoramic street view image, which improves processing efficiency and speed.
Preferred embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Referring to Fig. 1, in an embodiment of the present invention, the flow of determining a target object includes:
Step 100: obtain the panoramic street view image corresponding to each collection point collected by a street view collecting vehicle along its route.
In the embodiment of the present invention, the street view collecting vehicle captures panoramic street view images in sequence along a predetermined path. The acquisition frequency can be set in advance as required, for example one capture every 5 s. The street view collecting vehicle stores the panoramic street view image captured at each collection point together with the longitude-latitude coordinates of that collection point.
Step 110: extract feature points from the panoramic street view image corresponding to each collection point, and determine, from the position of each feature point in the panoramic street view image, its relative longitude-latitude coordinates in that image.
In the embodiment of the present invention, the feature points extracted from a panoramic street view image are points with large gray-value variation; such points are usually edge points that delineate target objects in the image.
Optionally, in the embodiment of the present invention, the feature points of a panoramic street view image can be extracted with, but not limited to, the following methods: SIFT (Scale-Invariant Feature Transform), SURF (Speeded-Up Robust Features), FAST (Features from Accelerated Segment Test), BRIEF (Binary Robust Independent Elementary Features) or ORB (Oriented FAST and Rotated BRIEF). Besides the above, many other methods can extract the feature points of a panoramic street view image; they are not enumerated here.
In the embodiment of the present invention, determining the relative longitude-latitude coordinates of a feature point in the panoramic street view image from its position in the image can be implemented as follows:
The panoramic street view image is divided into a grid of 180 rows by 360 columns; the 180 rows correspond to latitudes 0° to 180° and the 360 columns correspond to longitudes 0° to 360°, as shown in Fig. 3. For each feature point in the image, the grid cell in which it lies is determined, and the row and column of that cell are taken as the feature point's relative longitude-latitude coordinates in the image. For example, if feature point A lies in row 20, column 300 of the panoramic street view image, its relative longitude-latitude coordinates are (300°, 20°).
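The grid-cell mapping above can be sketched in a few lines of Python. This is a minimal illustration, not part of the patent: the function name and the one-degree-per-cell assumption (180 rows × 360 columns on an equirectangular panorama) are ours.

```python
def pixel_to_relative_coords(row, col, rows=180, cols=360):
    """Map a feature point's grid cell in an equirectangular panorama to a
    relative (longitude, latitude) pair in degrees, following the 180-row /
    360-column division described in the text."""
    rel_lon = col * (360.0 / cols)  # column index -> relative longitude
    rel_lat = row * (180.0 / rows)  # row index -> relative latitude
    return rel_lon, rel_lat

# The text's example: a feature point in row 20, column 300
print(pixel_to_relative_coords(20, 300))  # (300.0, 20.0)
```

With a one-cell-per-degree grid the mapping is the identity on the indices; the scale factors matter only if a finer grid is used.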
Step 120: for every two adjacent collection points, match the feature points of the panoramic street view images corresponding to the two adjacent collection points to obtain at least one feature point pair.
In the embodiment of the present invention, step 120 can be implemented as follows: for each feature point of the panoramic street view image corresponding to one of the two adjacent collection points, the feature information of that feature point is compared with the feature information of each feature point of the panoramic street view image corresponding to the other collection point; if they match, the two feature points constitute a feature point pair.
In the embodiment of the present invention, the feature information of a feature point can include its pixel value. If the pixel value of a feature point a of the panoramic street view image corresponding to the first collection point is consistent with that of a feature point b of the image corresponding to the other collection point, the feature information of a and b is determined to match, and a and b constitute a feature point pair.
Alternatively, the feature information of a feature point can be a sequence of values composed of the feature point's own pixel value and the pixel values of its surrounding pixels (for example the four pixels above, below, left and right of it). If the sequence of a feature point a of the image corresponding to the first collection point is identical to that of a feature point b of the image corresponding to the other collection point, or the Euclidean distance between the two sequences is less than or equal to a preset distance threshold, the feature information of a and b is determined to match, and a and b constitute a feature point pair.
For example, assume the feature points extracted from the panoramic street view image of collection point A are {a1, a2, a3, ..., an} and those extracted from the image of the adjacent collection point B are {b1, b2, b3, ..., bk}. Each feature point of A is then matched against each feature point of B: a1 is compared with b1, b2, b3, ..., bk in turn, and if the feature information of a1 matches that of b3, a1 and b3 form a feature point pair. Fig. 4 shows the image generated after extracting feature points from the panoramic street view image of Fig. 2 and pairing them.
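The exhaustive matching described above can be sketched as a brute-force nearest-descriptor search. The patent does not prescribe a particular matching routine; the function name, the descriptor tuples and the distance threshold below are illustrative assumptions.

```python
import math

def match_features(desc_a, desc_b, max_dist=10.0):
    """Brute-force matching sketch: pair each descriptor from panorama A with
    the closest descriptor from panorama B, accepting the pair only if the
    Euclidean distance between the value sequences is within max_dist."""
    pairs = []
    for i, da in enumerate(desc_a):
        best_j, best_d = None, float("inf")
        for j, db in enumerate(desc_b):
            d = math.dist(da, db)  # Euclidean distance between sequences
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None and best_d <= max_dist:
            pairs.append((i, best_j))
    return pairs
```

For example, `match_features([(10.0, 10.0), (200.0, 200.0)], [(199.0, 201.0), (11.0, 9.0)], max_dist=5.0)` pairs index 0 with 1 and index 1 with 0. In practice a library matcher would be used; this only mirrors the exhaustive comparison in the text.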
Step 130: for each feature point pair, determine, according to the relative longitude-latitude coordinates of its feature points and the longitude-latitude coordinates of the collection points of the panoramic street view images in which those feature points lie, whether the pair is a valid feature point pair, and, when it is determined to be valid, generate the corresponding three-dimensional coordinate point.
Referring to Fig. 5, step 130 is implemented as follows:
Step a1: establish a three-dimensional coordinate system according to the longitude-latitude coordinates of the two collection points corresponding to the panoramic street view images in which the two feature points of the pair lie. The origin of the coordinate system is the coordinate point of the earlier of the two collection points; the x axis is the line from the earlier collection point to the later one; the z axis is the axis through the origin perpendicular to the horizontal plane; and the y axis is the axis through the origin perpendicular to both the x axis and the z axis.
Step a2: calculate the straight-line distance between the two collection points from their longitude-latitude coordinates.
Step a3: calculate, from this straight-line distance and the relative longitude coordinates of the two feature points of the pair, the (x, y) coordinates, in the three-dimensional coordinate system, of the three-dimensional coordinate point corresponding to the pair.
Step a4: calculate, from the straight-line distance, the relative height coordinates of the two feature points of the pair and the (x, y) coordinates of the three-dimensional coordinate point, the height value of each of the two feature points in the coordinate system.
Step a5: judge whether the difference between the height values of the two feature points in the coordinate system is greater than or equal to a preset height threshold; if so, determine the pair to be an invalid feature point pair; if not, determine it to be a valid feature point pair.
Assume that feature points a and b form a feature point pair, that the collection points of the panoramic street view images in which a and b lie are collection point A (x1, y1) and collection point B (x2, y2) respectively, that A and B are adjacent with A the earlier collection point, and that the relative longitude-latitude coordinates of a and b are (θ1, α1) and (θ2, α2) respectively. A three-dimensional coordinate system is established from A and B as shown in Fig. 6: A is the origin, the line from A to B is the x axis, the z axis is the axis through A perpendicular to the horizontal plane, and the y axis is the axis through A perpendicular to both the x axis and the z axis. The distance between the two collection points, calculated from their longitude-latitude coordinates, is L; the coordinates of A in the system are (0, 0, 0) and those of B are (L, 0, 0).
In step a3, the (x, y) coordinates in the three-dimensional coordinate system of the three-dimensional coordinate point V corresponding to a and b are calculated from the straight-line distance and the relative longitude coordinates of the two feature points with the following formula (1):
x = L·tan θ2 / (tan θ2 − tan θ1)
y = L·tan θ1·tan θ2 / (tan θ2 − tan θ1)    Formula (1)
In formula (1), x is the x coordinate of the three-dimensional coordinate point (for example the aforementioned point V) in the three-dimensional coordinate system, y is its y coordinate, L is the straight-line distance between the two collection points (for example the aforementioned A and B), θ1 is the relative longitude coordinate of the feature point corresponding to the earlier collection point (for example the aforementioned feature point a), and θ2 is the relative longitude coordinate of the feature point corresponding to the later collection point (for example the aforementioned feature point b).
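Formula (1) is the intersection of the two horizontal view rays: the ray from A at azimuth θ1 and the ray from B (at distance L along the x axis) at azimuth θ2. A minimal sketch, with the function name and degree-valued angles as our own assumptions:

```python
import math

def triangulate_xy(L, theta1_deg, theta2_deg):
    """Formula (1): intersect y = x*tan(theta1) (ray from the earlier
    collection point at the origin) with y = (x - L)*tan(theta2) (ray from
    the later collection point at (L, 0))."""
    t1 = math.tan(math.radians(theta1_deg))
    t2 = math.tan(math.radians(theta2_deg))
    x = L * t2 / (t2 - t1)
    y = L * t1 * t2 / (t2 - t1)
    return x, y
```

As a sanity check, with L = 2 and azimuths 45° and 135° the rays from (0, 0) and (2, 0) meet at (1, 1), which is what the formula returns (up to floating-point error).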
Optionally, in step a4, the height value of each of the two feature points of the pair in the three-dimensional coordinate system is calculated from the straight-line distance, the relative height coordinates of the two feature points and the (x, y) coordinates of the three-dimensional coordinate point, with the following formula (2):
z1 = tan α1 · √(x² + y²)
z2 = tan α2 · √((x − L)² + y²)    Formula (2)
In formula (2), z1 is the height value in the three-dimensional coordinate system of the feature point corresponding to the earlier collection point (for example feature point a of the aforementioned collection point A), z2 is the height value of the feature point corresponding to the later collection point (for example feature point b of the aforementioned collection point B), α1 and α2 are the relative height coordinates of those two feature points, x and y are the coordinates of the three-dimensional coordinate point in the system, and L is the straight-line distance between the two collection points (for example the aforementioned A and B).
Based on the above formulas, when the difference between z1 and z2 is greater than or equal to the preset height threshold, the feature point pair is determined to be invalid and should be discarded; when the difference is less than the preset height threshold, the pair is determined to be a valid feature point pair. The preset height threshold can be set in advance according to the specific application scenario.
With the above technical solution, the valid feature point pairs of the panoramic street view images corresponding to every two adjacent collection points are screened and the invalid pairs discarded, which effectively ensures the accuracy of the position information of the target object finally obtained.
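Formula (2) and the step-a5 validity check can be sketched together. The function names and the example threshold are illustrative, not the patent's:

```python
import math

def heights(L, x, y, alpha1_deg, alpha2_deg):
    """Formula (2): recover the feature point's height as seen from each
    collection point, using the elevation angle and the horizontal distance
    from that collection point to the triangulated (x, y) point."""
    z1 = math.tan(math.radians(alpha1_deg)) * math.hypot(x, y)      # from A
    z2 = math.tan(math.radians(alpha2_deg)) * math.hypot(x - L, y)  # from B
    return z1, z2

def is_valid_pair(z1, z2, height_threshold=0.5):
    """Step a5: keep the pair only if the two height estimates agree
    to within the preset height threshold."""
    return abs(z1 - z2) < height_threshold
```

For a genuinely matched point the two height estimates coincide, so the check mainly rejects mismatched pairs, whose estimates rarely agree.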
In the aforementioned step 130, the three-dimensional coordinate point corresponding to a valid feature point pair is generated as follows: the mean of the height values of the pair's two feature points in the three-dimensional coordinate system is taken as the height value of the three-dimensional coordinate point; the sum of the point's x coordinate in the system and the longitude coordinate of the earlier collection point is taken as the longitude coordinate of the three-dimensional coordinate point; and the sum of the point's y coordinate in the system and the latitude coordinate of the earlier collection point is taken as its latitude coordinate. For example, in Fig. 6, the longitude coordinate of the three-dimensional coordinate point V is x(v) = x1 + x, its latitude coordinate is y(v) = y1 + y, and its height coordinate is z(v) = (z1 + z2)/2, where x and y are the point's coordinates in the three-dimensional coordinate system.
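The combination rule above amounts to one line of arithmetic; the helper below is a hypothetical illustration of it (the patent describes only the arithmetic, not an API):

```python
def make_3d_point(acq_a, x, y, z1, z2):
    """Combine per-pair quantities into one 3-D coordinate point: the x, y
    offsets are added to the earlier collection point's coordinates, and the
    height is the mean of the two height estimates from formula (2)."""
    x1, y1 = acq_a  # coordinates of the earlier collection point
    return (x1 + x, y1 + y, (z1 + z2) / 2.0)
```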
Step 140: cluster the generated three-dimensional coordinate points to obtain at least one cluster.
In the embodiment of the present invention, a clustering operation is performed on all the generated three-dimensional coordinate points to obtain at least one cluster, wherein the three-dimensional coordinate points contained in each cluster describe the same target object.
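The patent does not fix a particular clustering algorithm. As one possible sketch, points closer than a distance threshold can be merged into the same cluster via union-find (single linkage); the `eps` value and this choice of algorithm are our assumptions:

```python
import math

def cluster_points(points, eps=2.0):
    """Group 3-D coordinate points so that any two points closer than eps
    end up in the same cluster (single-linkage grouping via union-find)."""
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if math.dist(points[i], points[j]) < eps:
                parent[find(i)] = find(j)  # merge the two clusters

    clusters = {}
    for i in range(len(points)):
        clusters.setdefault(find(i), []).append(points[i])
    return list(clusters.values())
```

Four points at x = 0, 1, 10, 11 with eps = 2 yield two clusters of two points each; a density-based method such as DBSCAN would serve the same role on real data.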
Step 150: for each cluster, determine the position of the corresponding target object according to the three-dimensional coordinate points contained in that cluster.
In the embodiment of the present invention, step 150 can be implemented as follows: for each cluster obtained above, the mean longitude coordinate and mean latitude coordinate of the three-dimensional coordinate points it contains are calculated, and these means are taken as the longitude-latitude coordinates of the position of the target object. Preferably, the embodiment of the present invention can also reconstruct the target object to obtain its three-dimensional model; to this end, the following step follows step 150:
Step 160: for each cluster, triangulate the three-dimensional coordinate points it contains and reconstruct the three-dimensional model of the target object, thereby obtaining the outline of the target object.
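The mean-position computation of step 150 can be sketched as follows (the function name is illustrative; the height coordinate is ignored, as the position is a longitude-latitude pair):

```python
def cluster_position(cluster):
    """Step 150: the target object's position is the mean longitude and mean
    latitude of the 3-D coordinate points in the cluster."""
    lon = sum(p[0] for p in cluster) / len(cluster)
    lat = sum(p[1] for p in cluster) / len(cluster)
    return lon, lat

print(cluster_position([(0.0, 0.0, 5.0), (2.0, 4.0, 6.0)]))  # (1.0, 2.0)
```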
Based on said process, consult Fig. 7 and show in the embodiment of the present invention to the image shown in Fig. 3 to carry out after the some acquisition of above-mentioned validity feature processes, the three-dimensional coordinate point schematic diagram under the depression angle of generation;Consult shown in Fig. 8, for, in the embodiment of the present invention, after three-dimensional coordinate point shown in Fig. 7 is carried out trigonometric ratio process, generating the threedimensional model of target object.
Based on the above technical solution, and referring to Fig. 9, an apparatus for determining a target object in the embodiment of the present invention includes a street view image acquiring unit 90, a feature point extraction unit 91, a determining unit 92, a feature point pair acquiring unit 93, a three-dimensional coordinate point generating unit 94, a clustering unit 95, and a target object determining unit 96, wherein:
the street view image acquiring unit 90 is configured to acquire the panoramic street view image corresponding to each collection point collected by the street view collecting vehicle along its route;
the feature point extraction unit 91 is configured to extract feature points from the panoramic street view image corresponding to each collection point;
the determining unit 92 is configured to determine, according to the position of each feature point extracted by the feature point extraction unit 91 in its panoramic street view image, the relative longitude-latitude coordinates of the feature point in that panoramic street view image;
the feature point pair acquiring unit 93 is configured to, for every two adjacent collection points, match the feature points of the panoramic street view images corresponding to the two adjacent collection points to obtain at least one feature point pair;
the three-dimensional coordinate point generating unit 94 is configured to, for each feature point pair obtained by the feature point pair acquiring unit 93, determine whether the feature point pair is a valid feature point pair according to the relative longitude-latitude coordinates of the feature points in the pair obtained by the determining unit 92 and the longitude-latitude coordinates of the collection points of the panoramic street view images in which the feature points are located, and, when the pair is determined to be a valid feature point pair, generate the three-dimensional coordinate point corresponding to the valid feature point pair;
the clustering unit 95 is configured to cluster the three-dimensional coordinate points generated by the three-dimensional coordinate point generating unit 94 to obtain at least one cluster, wherein the three-dimensional coordinate points contained in each cluster describe the same target object;
the target object determining unit 96 is configured to, for each cluster obtained by the clustering unit 95, determine the position of the corresponding target object according to the three-dimensional coordinate points contained in that cluster.
Specifically, the feature point pair acquiring unit 93 is configured to: for each feature point of the panoramic street view image corresponding to one of the adjacent collection points, match the characteristic information of this feature point against the characteristic information of each feature point of the panoramic street view image corresponding to the other collection point; if a match is found, this feature point and the matched feature point in the panoramic street view image corresponding to the other collection point constitute a feature point pair.
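The matching performed by the feature point pair acquiring unit 93 can be sketched as a nearest-neighbour search over descriptor vectors. The descriptor format, the Euclidean distance, and the `max_dist` threshold below are illustrative assumptions, since the text only says that the characteristic information of the points is compared:

```python
def match_feature_points(desc_a, desc_b, max_dist=0.5):
    """Pair feature points of two adjacent panoramas by descriptor distance.

    desc_a, desc_b : dicts mapping point id -> descriptor vector (list of floats)
    Returns a list of (id_in_a, id_in_b) pairs whose best match is closer
    than max_dist; a simple nearest-neighbour sketch, not the patent's
    (unspecified) matching procedure.
    """
    pairs = []
    for ida, da in desc_a.items():
        best, best_d = None, max_dist
        for idb, db in desc_b.items():
            d = sum((u - v) ** 2 for u, v in zip(da, db)) ** 0.5
            if d < best_d:
                best, best_d = idb, d
        if best is not None:
            pairs.append((ida, best))
    return pairs

pairs = match_feature_points({"a1": [1.0, 0.0], "a2": [0.0, 1.0]},
                             {"b1": [0.98, 0.05], "b2": [5.0, 5.0]})
# only a1 finds a counterpart within the threshold: [("a1", "b1")]
```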
Optionally, when the three-dimensional coordinate point generating unit 94 determines, according to the relative longitude-latitude coordinates of the feature points in a feature point pair and the longitude-latitude coordinates of the collection points of the panoramic street view images in which the feature points are located, whether the feature point pair is a valid feature point pair, it specifically: establishes a three-dimensional coordinate system according to the longitude-latitude coordinates of the two collection points corresponding to the panoramic street view images in which the two feature points of the feature point pair are located, wherein the origin of the three-dimensional coordinate system is the coordinate point of the former of the two collection points, the x-axis is the line connecting the coordinate points of the former and the latter collection points, the z-axis is the axis perpendicular to the horizontal plane starting from the origin, and the y-axis is the axis perpendicular to both the x-axis and the z-axis starting from the origin; calculates the straight-line distance between the two collection points according to their longitude-latitude coordinates; calculates the longitude coordinate and the latitude coordinate, in the three-dimensional coordinate system, of the three-dimensional coordinate point corresponding to the feature point pair according to the straight-line distance and the relative longitude coordinates of the two feature points of the pair; calculates the respective height values of the two feature points of the pair in the three-dimensional coordinate system according to the straight-line distance, the relative height coordinates of the two feature points, and the longitude-latitude coordinates of the three-dimensional coordinate point in the three-dimensional coordinate system; and judges whether the difference between the height values of the two feature points in the three-dimensional coordinate system is greater than or equal to a preset height threshold: if so, the feature point pair is determined to be an invalid feature point pair; if not, the feature point pair is determined to be a valid feature point pair.
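The second sub-step above computes the baseline length L from the two collection points' longitude-latitude coordinates. A minimal sketch, assuming the haversine great-circle formula (the text does not fix a particular distance formula):

```python
import math

def straight_line_distance(lon1, lat1, lon2, lat2):
    """Distance in metres between two collection points given in degrees.

    The haversine formula is used here as one common choice; the patent
    only requires some distance derived from the longitude-latitude
    coordinates of the two collection points.
    """
    R = 6371000.0  # mean Earth radius, metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = p2 - p1
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

# Two collection points about 85 m apart along one street (illustrative values)
L = straight_line_distance(116.0, 39.9, 116.001, 39.9)
```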
Optionally, when the three-dimensional coordinate point generating unit 94 calculates, according to the straight-line distance and the relative longitude coordinates of the two feature points of the feature point pair, the longitude-latitude coordinates, in the three-dimensional coordinate system, of the three-dimensional coordinate point corresponding to the feature point pair, it does so according to the following formulas:
x = L × tanθ2 / (tanθ2 − tanθ1)
y = L × tanθ1 × tanθ2 / (tanθ2 − tanθ1)
where x is the longitude coordinate of the three-dimensional coordinate point in the three-dimensional coordinate system, y is the latitude coordinate of the three-dimensional coordinate point in the three-dimensional coordinate system, L is the straight-line distance between the two collection points, θ1 is the relative longitude coordinate of the feature point corresponding to the former of the two collection points, and θ2 is the relative longitude coordinate of the feature point corresponding to the latter of the two collection points;
when the three-dimensional coordinate point generating unit 94 calculates the respective height values of the two feature points of the feature point pair in the three-dimensional coordinate system according to the straight-line distance, the relative height coordinates of the two feature points, and the longitude-latitude coordinates of the three-dimensional coordinate point in the three-dimensional coordinate system, it does so according to the following formulas:
z1 = tanα1 × √(x² + y²)
z2 = tanα2 × √((x − L)² + y²)
where z1 is the height value, in the three-dimensional coordinate system, of the feature point corresponding to the former collection point, z2 is the height value, in the three-dimensional coordinate system, of the feature point corresponding to the latter collection point, α1 is the relative height coordinate of the feature point corresponding to the former collection point, α2 is the relative height coordinate of the feature point corresponding to the latter collection point, x is the longitude coordinate of the three-dimensional coordinate point in the three-dimensional coordinate system, y is the latitude coordinate of the three-dimensional coordinate point in the three-dimensional coordinate system, and L is the straight-line distance between the two collection points.
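Under the stated geometry (former collection point at the origin, latter at (L, 0) on the x-axis), the formulas above can be sketched as follows; the angles are assumed to be in radians, and the `height_threshold` default is an assumed value rather than one fixed by the patent:

```python
import math

def triangulate(L, theta1, theta2, alpha1, alpha2, height_threshold=1.0):
    """Intersect the two viewing rays in the local frame, then apply the
    height-difference validity test from the text.

    L              : straight-line distance between the two collection points
    theta1, theta2 : relative longitude coordinates (horizontal angles)
    alpha1, alpha2 : relative height coordinates (elevation angles)
    Returns (x, y, z1, z2, valid).
    """
    x = L * math.tan(theta2) / (math.tan(theta2) - math.tan(theta1))
    y = L * math.tan(theta1) * math.tan(theta2) / (math.tan(theta2) - math.tan(theta1))
    z1 = math.tan(alpha1) * math.hypot(x, y)       # height seen from the former point
    z2 = math.tan(alpha2) * math.hypot(x - L, y)   # height seen from the latter point
    valid = abs(z1 - z2) < height_threshold        # consistent heights -> valid pair
    return x, y, z1, z2, valid
```

As a sanity check, a target at local position (5, 5) with height 5, observed from collection points 10 units apart, is recovered exactly and judged valid.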
Optionally, when the three-dimensional coordinate point generating unit 94 generates the three-dimensional coordinate point corresponding to a valid feature point pair, it specifically: determines the average of the respective height values of the two feature points of the valid feature point pair in the three-dimensional coordinate system as the height value of the three-dimensional coordinate point; determines the sum of the longitude coordinate of the corresponding three-dimensional coordinate point in the three-dimensional coordinate system and the longitude coordinate of the former of the two collection points as the longitude coordinate of the three-dimensional coordinate point; and determines the sum of the latitude coordinate of the corresponding three-dimensional coordinate point in the three-dimensional coordinate system and the latitude coordinate of the former collection point as the latitude coordinate of the three-dimensional coordinate point.
Optionally, when determining, for each cluster, the position of the corresponding target object according to the three-dimensional coordinate points contained in that cluster, the target object determining unit 96 is specifically configured to: for each cluster, calculate the average longitude coordinate and the average latitude coordinate of the three-dimensional coordinate points contained in the cluster, and take the longitude-latitude coordinates formed by the calculated averages as the position of the target object.
The apparatus further includes a target object reconstruction unit 97, configured to, for each cluster, perform triangulation on the three-dimensional coordinate points contained in the cluster to obtain the outline of the target object.
In the embodiment of the present invention, both the process of obtaining the position of a target object from panoramic street view images and the process of reconstructing the three-dimensional model of a target object from panoramic street view images may be performed by a terminal having image processing and data processing capabilities.
In summary, the panoramic street view image corresponding to each collection point collected by the street view collecting vehicle along its route is acquired; feature points are extracted from the panoramic street view image corresponding to each collection point, and the relative longitude-latitude coordinates of each feature point in its panoramic street view image are determined according to the position of the feature point in that image; for every two adjacent collection points, the feature points of the panoramic street view images corresponding to the two adjacent collection points are matched to obtain at least one feature point pair; for each feature point pair, whether the pair is a valid feature point pair is determined according to the relative longitude-latitude coordinates of the feature points in the pair and the longitude-latitude coordinates of the collection points of the panoramic street view images in which the feature points are located, and, when the pair is determined to be valid, the three-dimensional coordinate point corresponding to the valid feature point pair is generated; the generated three-dimensional coordinate points are clustered to obtain at least one cluster; and, for each cluster, the position of the corresponding target object is determined according to the three-dimensional coordinate points contained in that cluster. With the technical solution of the present invention, because the collection points are relatively dense during the collecting process of the street view collecting vehicle, the same target object is likely to appear in the panoramic street view images collected at adjacent collection points (that is, the panoramic street view images of adjacent collection points may contain the same target object). After feature points are extracted from the panoramic street view image corresponding to each collection point, a valid feature point pair obtained by matching the feature points of the panoramic street view images corresponding to every two adjacent collection points is very likely to be the same feature point on the same target object; therefore, the three-dimensional coordinate point generated from such a valid feature point pair is, to a great extent, exactly a shape point of that target object. After the three-dimensional coordinate points are clustered, the points gathered in one cluster are, to a great extent, all shape points of the same target object, so the position of the target object obtained from the shape points constituting it is relatively accurate. In addition, this solution can automatically determine the position of a target object from the panoramic street view images of the collection points, without manually processing each panoramic street view image, which improves the processing efficiency and speed.
Those skilled in the art should appreciate that embodiments of the present invention may be provided as a method, a system, or a computer program product. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk memory, CD-ROM, and optical memory) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, may be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce a device for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to work in a specific way, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that realizes the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operation steps is performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present invention have been described, those skilled in the art, once aware of the basic inventive concept, may make other changes and modifications to these embodiments. Therefore, the appended claims are intended to be construed as including the preferred embodiments and all changes and modifications falling within the scope of the present invention.
Obviously, those skilled in the art may make various changes and modifications to the embodiments of the present invention without departing from the spirit and scope of the embodiments of the present invention. Thus, if these modifications and variations of the embodiments of the present invention fall within the scope of the claims of the present invention and their equivalent technologies, the present invention is also intended to encompass these changes and modifications.

Claims (10)

1. A target object determining method, characterized by comprising:
acquiring the panoramic street view image corresponding to each collection point collected by a street view collecting vehicle along its route;
extracting feature points from the panoramic street view image corresponding to each collection point, and determining, according to the position of each feature point in its panoramic street view image, the relative longitude-latitude coordinates of the feature point in that panoramic street view image;
for every two adjacent collection points, matching the feature points of the panoramic street view images corresponding to the two adjacent collection points to obtain at least one feature point pair;
for each feature point pair, determining whether the feature point pair is a valid feature point pair according to the relative longitude-latitude coordinates of the feature points in the pair and the longitude-latitude coordinates of the collection points of the panoramic street view images in which the feature points are located, and, when the pair is determined to be a valid feature point pair, generating the three-dimensional coordinate point corresponding to the valid feature point pair;
clustering the generated three-dimensional coordinate points to obtain at least one cluster, wherein the three-dimensional coordinate points contained in each cluster describe the same target object; and
for each cluster, determining the position of the corresponding target object according to the three-dimensional coordinate points contained in that cluster.
2. The method according to claim 1, characterized in that, for every two adjacent collection points, matching the feature points of the panoramic street view images corresponding to the two adjacent collection points to obtain at least one feature point pair specifically comprises:
for each feature point of the panoramic street view image corresponding to one of the adjacent collection points, matching the characteristic information of this feature point against the characteristic information of each feature point of the panoramic street view image corresponding to the other collection point, and, if a match is found, constituting a feature point pair from this feature point and the matched feature point in the panoramic street view image corresponding to the other collection point.
3. The method according to claim 1 or 2, characterized in that determining whether the feature point pair is a valid feature point pair according to the relative longitude-latitude coordinates of the feature points in the pair and the longitude-latitude coordinates of the collection points of the panoramic street view images in which the feature points are located specifically comprises:
establishing a three-dimensional coordinate system according to the longitude-latitude coordinates of the two collection points corresponding to the panoramic street view images in which the two feature points of the feature point pair are located;
calculating the straight-line distance between the two collection points according to their longitude-latitude coordinates;
calculating the longitude-latitude coordinates, in the three-dimensional coordinate system, of the three-dimensional coordinate point corresponding to the feature point pair according to the straight-line distance and the relative longitude coordinates of the two feature points of the pair;
calculating the respective height values of the two feature points of the pair in the three-dimensional coordinate system according to the straight-line distance, the relative height coordinates of the two feature points, and the longitude-latitude coordinates of the three-dimensional coordinate point in the three-dimensional coordinate system; and
judging whether the difference between the height values of the two feature points of the pair in the three-dimensional coordinate system is greater than or equal to a preset height threshold: if so, determining that the feature point pair is an invalid feature point pair; if not, determining that the feature point pair is a valid feature point pair.
4. The method according to claim 3, characterized in that calculating the longitude-latitude coordinates, in the three-dimensional coordinate system, of the three-dimensional coordinate point corresponding to the feature point pair according to the straight-line distance and the relative longitude coordinates of the two feature points of the pair is performed according to the following formulas:
x = L × tanθ2 / (tanθ2 − tanθ1)
y = L × tanθ1 × tanθ2 / (tanθ2 − tanθ1)
where x is the longitude coordinate of the three-dimensional coordinate point in the three-dimensional coordinate system, y is the latitude coordinate of the three-dimensional coordinate point in the three-dimensional coordinate system, L is the straight-line distance between the two collection points, θ1 is the relative longitude coordinate of the feature point corresponding to the former of the two collection points, and θ2 is the relative longitude coordinate of the feature point corresponding to the latter of the two collection points;
and calculating the respective height values of the two feature points of the pair in the three-dimensional coordinate system according to the straight-line distance, the relative height coordinates of the two feature points, and the longitude-latitude coordinates of the three-dimensional coordinate point in the three-dimensional coordinate system is performed according to the following formulas:
z1 = tanα1 × √(x² + y²)
z2 = tanα2 × √((x − L)² + y²)
where z1 is the height value, in the three-dimensional coordinate system, of the feature point corresponding to the former of the two collection points, z2 is the height value, in the three-dimensional coordinate system, of the feature point corresponding to the latter of the two collection points, α1 is the relative height coordinate of the feature point corresponding to the former collection point, α2 is the relative height coordinate of the feature point corresponding to the latter collection point, x is the longitude coordinate of the three-dimensional coordinate point in the three-dimensional coordinate system, y is the latitude coordinate of the three-dimensional coordinate point in the three-dimensional coordinate system, and L is the straight-line distance between the two collection points.
5. The method according to claim 3, characterized in that generating the three-dimensional coordinate point corresponding to the valid feature point pair specifically comprises:
determining the average of the respective height values of the two feature points of the valid feature point pair in the three-dimensional coordinate system as the height value of the three-dimensional coordinate point; and
determining the sum of the longitude coordinate of the three-dimensional coordinate point corresponding to the valid feature point pair in the three-dimensional coordinate system and the longitude coordinate of the former of the two collection points as the longitude coordinate of the three-dimensional coordinate point, and determining the sum of the latitude coordinate of the corresponding three-dimensional coordinate point in the three-dimensional coordinate system and the latitude coordinate of the former collection point as the latitude coordinate of the three-dimensional coordinate point.
6. The method according to claim 1, characterized in that, for each cluster, determining the position of the corresponding target object according to the three-dimensional coordinate points contained in that cluster specifically comprises:
for each cluster, calculating the average longitude coordinate and the average latitude coordinate of the three-dimensional coordinate points contained in the cluster, and taking the longitude-latitude coordinates formed by the calculated averages as the position of the target object.
7. The method according to claim 1 or 2, characterized by further comprising:
for each cluster, performing triangulation on the three-dimensional coordinate points contained in the cluster to obtain the outline of the target object.
8. A target object determining apparatus, characterized by comprising:
a street view image acquiring unit, configured to acquire the panoramic street view image corresponding to each collection point collected by a street view collecting vehicle along its route;
a feature point extraction unit, configured to extract feature points from the panoramic street view image corresponding to each collection point;
a determining unit, configured to determine, according to the position of each feature point extracted by the feature point extraction unit in its panoramic street view image, the relative longitude-latitude coordinates of the feature point in that panoramic street view image;
a feature point pair acquiring unit, configured to, for every two adjacent collection points, match the feature points of the panoramic street view images corresponding to the two adjacent collection points to obtain at least one feature point pair;
a three-dimensional coordinate point generating unit, configured to, for each feature point pair obtained by the feature point pair acquiring unit, determine whether the feature point pair is a valid feature point pair according to the relative longitude-latitude coordinates of the feature points in the pair obtained by the determining unit and the longitude-latitude coordinates of the collection points of the panoramic street view images in which the feature points are located, and, when the pair is determined to be a valid feature point pair, generate the three-dimensional coordinate point corresponding to the valid feature point pair;
a clustering unit, configured to cluster the three-dimensional coordinate points generated by the three-dimensional coordinate point generating unit to obtain at least one cluster, wherein the three-dimensional coordinate points contained in each cluster describe the same target object; and
a target object determining unit, configured to, for each cluster obtained by the clustering unit, determine the position of the corresponding target object according to the three-dimensional coordinate points contained in that cluster.
9. The apparatus according to claim 8, characterized in that, when determining whether the feature point pair is a valid feature point pair according to the relative longitude-latitude coordinates of the feature points in the pair and the longitude-latitude coordinates of the collection points of the panoramic street view images in which the feature points are located, the three-dimensional coordinate point generating unit is specifically configured to:
establish a three-dimensional coordinate system according to the longitude-latitude coordinates of the two collection points corresponding to the panoramic street view images in which the two feature points of the feature point pair are located;
calculate the straight-line distance between the two collection points according to their longitude-latitude coordinates;
calculate the longitude-latitude coordinates, in the three-dimensional coordinate system, of the three-dimensional coordinate point corresponding to the feature point pair according to the straight-line distance and the relative longitude coordinates of the two feature points of the pair;
calculate the respective height values of the two feature points of the pair in the three-dimensional coordinate system according to the straight-line distance, the relative height coordinates of the two feature points, and the longitude-latitude coordinates of the three-dimensional coordinate point in the three-dimensional coordinate system; and
judge whether the difference between the height values of the two feature points of the pair in the three-dimensional coordinate system is greater than or equal to a preset height threshold: if so, determine that the feature point pair is an invalid feature point pair; if not, determine that the feature point pair is a valid feature point pair.
10. The apparatus according to claim 8 or 9, characterized by further comprising:
a target object reconstruction unit, configured to, for each cluster, perform triangulation on the three-dimensional coordinate points contained in the cluster to obtain the outline of the target object.
CN201410736411.7A 2014-12-04 2014-12-04 A kind of target object determines method and device Active CN105719271B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410736411.7A CN105719271B (en) 2014-12-04 2014-12-04 A kind of target object determines method and device

Publications (2)

Publication Number Publication Date
CN105719271A true CN105719271A (en) 2016-06-29
CN105719271B CN105719271B (en) 2018-09-28

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106228509A (en) * 2016-07-22 2016-12-14 网易(杭州)网络有限公司 Performance methods of exhibiting and device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070008312A1 (en) * 2005-07-08 2007-01-11 Hui Zhou Method for determining camera position from two-dimensional images that form a panorama
CN101901481A (en) * 2010-08-11 2010-12-01 深圳市蓝韵实业有限公司 Image mosaic method
CN103198488A (en) * 2013-04-16 2013-07-10 北京天睿空间科技有限公司 PTZ surveillance camera realtime posture rapid estimation method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YAZHOU LIU et al.: "Nonparametric Background Generation", The 18th International Conference on Pattern Recognition (ICPR '06) *
CHENG Baozhi et al.: "Anomaly Target Detection in Hyperspectral Images Based on Particle Swarm Optimization Clustering", Journal of Optoelectronics·Laser *

Also Published As

Publication number Publication date
CN105719271B (en) 2018-09-28

Similar Documents

Publication Publication Date Title
CN110458895B (en) Image coordinate system conversion method, device, equipment and storage medium
CN108876804B (en) Matting model training and image matting method, device and system and storage medium
JP6111745B2 (en) Vehicle detection method and apparatus
CN110220493A (en) A kind of binocular distance measuring method and its device
CN102236798B (en) Image matching method and device
CN109035327B (en) Panoramic camera attitude estimation method based on deep learning
CN112070782B (en) Method, device, computer readable medium and electronic equipment for identifying scene contour
CN109636820B (en) Electronic map lane line correction method, device and computer readable storage medium
CN110930503B (en) Clothing three-dimensional model building method, system, storage medium and electronic equipment
CN112287865B (en) Human body posture recognition method and device
CN109871829A (en) A kind of detection model training method and device based on deep learning
CN113759338B (en) Target detection method and device, electronic equipment and storage medium
CN108734773A (en) A kind of three-dimensional rebuilding method and system for mixing picture
CN108596032B (en) Detection method, device, equipment and medium for fighting behavior in video
CN114332134B (en) Building facade extraction method and device based on dense point cloud
EP2990995A2 (en) Line parametric object estimation
CN113256683B (en) Target tracking method and related equipment
CN114429530A (en) Method, system, storage medium and device for automatically extracting three-dimensional model of building
CN116778094B (en) Building deformation monitoring method and device based on optimal viewing angle shooting
JP2017130049A (en) Image processor, image processing method and program
CN105719271A (en) Method and apparatus for determination of target object
CN116051980B (en) Building identification method, system, electronic equipment and medium based on oblique photography
CN104616035A (en) Visual Map rapid matching method based on global image feature and SURF algorithm
JP2003187220A (en) Object detector and its detecting method
CN115115847B (en) Three-dimensional sparse reconstruction method and device and electronic device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20200423

Address after: 310012 room 508, floor 5, building 4, No. 699, Wangshang Road, Changhe street, Binjiang District, Hangzhou City, Zhejiang Province

Patentee after: Alibaba (China) Co.,Ltd.

Address before: 102200, No. 8, No., Changsheng Road, Changping District science and Technology Park, Beijing, China. 1-5

Patentee before: AUTONAVI SOFTWARE Co.,Ltd.