CN104422441B - Electronic device and positioning method
- Publication number: CN104422441B (application CN201310392868.6A)
- Authority: CN (China)
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
Abstract
The invention discloses an electronic device and a positioning method. The method includes: extracting at least two target points from an input image; obtaining spatial position information corresponding to each target point by performing feature extraction on the target point and searching a feature point data set; and generating positioning information according to the obtained spatial position information of the at least two target points and the depth information of the corresponding target points. By means of the invention, electronic device positioning with higher precision, less environmental influence, and better versatility can be achieved.
Description
Technical Field
The present invention relates to the field of positioning technologies, and in particular, to an electronic device and a positioning method.
Background
Accurately and quickly obtaining the position of an electronic device in an unknown environment is key to making the device intelligent, and is the basis for the device to execute other tasks. Currently, commonly used positioning methods include: positioning based on the Global Positioning System (GPS), positioning based on an inertial navigation system, positioning based on visual information, and the like.
The GPS-based positioning method has low accuracy and is ill-suited to high-precision applications; moreover, it can position only while receiving satellite signals, which limits the usable environments and degrades real-time performance. The method based on an inertial navigation system must continuously accumulate sensor readings to produce a position, so errors accumulate over time and degrade positioning accuracy. The method based on visual information relies on a visual sensor carried by the electronic device, which senses its own position by processing image data in real time; common implementations achieve positioning by recognizing markers, which must be defined and learned in advance, so versatility is poor.
Therefore, how to provide a positioning method for electronic devices with high precision, low sensitivity to the environment, and good versatility is a problem to be solved.
Disclosure of Invention
In view of the above, the present invention is directed to an electronic device and a positioning method, so as to achieve electronic device positioning with at least higher accuracy, less environmental impact, and better versatility.
In order to achieve the purpose, the technical scheme of the invention is realized as follows:
the invention provides a positioning method, which is applied to electronic equipment and comprises the following steps:
extracting at least two target points from the input image;
obtaining spatial position information corresponding to the target point by performing feature extraction on the target point and searching a feature point data set;
and generating positioning information according to the obtained spatial position information of the at least two target points and the depth information of the corresponding target points.
The present invention also provides an electronic device, comprising:
a target point extracting unit for extracting at least two target points from an input image;
a position information obtaining unit, configured to obtain spatial position information corresponding to the target point by performing feature extraction on the target point and searching a feature point data set;
and the positioning information generating unit is used for generating positioning information according to the obtained spatial position information of the at least two target points and the depth information of the corresponding target points.
The electronic device and the positioning method provided by the invention achieve electronic device positioning with higher precision, less environmental influence, and better versatility.
Drawings
Fig. 1 is a flowchart of a positioning method according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a positioning information generating method according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solution of the present invention is further elaborated below with reference to the drawings and the specific embodiments.
The positioning method provided by the embodiment of the invention is applied to electronic equipment, and mainly comprises the following steps as shown in figure 1:
step 101, extracting at least two target points from an input image.
Since the method is applied to an electronic device, the execution subject of step 101 is the electronic device, and then step 101 can also be described as: the electronic device extracts at least two target points from the input image.
And 102, extracting the features of the target point and searching a feature point data set to obtain spatial position information corresponding to the target point.
Since the method is applied to an electronic device, the execution subject of step 102 is the electronic device; step 102 can thus also be described as: the electronic device obtains the spatial position information corresponding to the target point by performing feature extraction on the target point and searching a feature point data set. The feature extraction may adopt an algorithm based on Speeded-Up Robust Features (SURF), an algorithm based on Scale-Invariant Feature Transform (SIFT), or the like.
Specifically, feature extraction is performed on the target point to obtain feature description information of the target point;
searching a feature point data set according to the feature description information of the target point, wherein the feature point data set comprises feature description information and spatial position information of known feature points; and obtaining the spatial position information of the feature point matched with the feature description information of the target point as the spatial position information of the target point by searching the feature point data set. The feature description information includes RGB color information and the like corresponding to the feature points.
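The look-up described above, matching a target point's feature description against a data set of known feature points and returning the stored spatial position, can be sketched as follows. This is a minimal illustration: the data set, the RGB-like descriptors, and the function name are hypothetical, and a real implementation would match SURF/SIFT descriptors against an indexed data set.

```python
import math

# Hypothetical feature-point data set: each entry pairs a descriptor
# (here a short RGB-like vector) with a known spatial position.
FEATURE_DATA_SET = [
    {"descriptor": (255, 0, 0), "position": (1.0, 2.0)},
    {"descriptor": (0, 255, 0), "position": (4.0, 1.0)},
    {"descriptor": (0, 0, 255), "position": (2.5, 5.0)},
]

def look_up_position(descriptor, data_set=FEATURE_DATA_SET):
    """Return the stored spatial position of the known feature point whose
    descriptor is closest (Euclidean distance) to the query descriptor."""
    best = min(data_set,
               key=lambda entry: math.dist(entry["descriptor"], descriptor))
    return best["position"]

print(look_up_position((250, 10, 5)))  # closest to the first entry -> (1.0, 2.0)
```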
And 103, generating positioning information according to the obtained spatial position information of the at least two target points and the depth information of the corresponding target points.
Since the method is applied to an electronic device, the execution subject of step 103 is the electronic device, and then step 103 can also be described as: and the electronic equipment generates positioning information according to the obtained spatial position information of the at least two target points and the depth information of the corresponding target points. The depth information of the target point may be obtained from the depth image.
Preferably, when performing two-dimensional positioning, the step 103 includes:
determining a circle which takes the first target point as a circle center and the depth information of the first target point as a radius according to the spatial position information of the first target point and the depth information corresponding to the first target point;
determining a circle with the second target point as a circle center and the depth information of the second target point as a radius according to the spatial position information of the second target point and the depth information corresponding to the second target point;
determining a circle with the third target point as a circle center and the depth information of the third target point as a radius according to the spatial position information of the third target point and the depth information corresponding to the third target point;
determining a common cross feature point among the circumference taking the first target point as the center of the circle, the circumference taking the second target point as the center of the circle, and the circumference taking the third target point as the center of the circle, and determining spatial position information corresponding to the common cross feature point as the positioning information.
Specifically referring to fig. 2, the image captured by the camera a is uploaded to the electronic device located in the cloud through the device where the camera a is located, and the electronic device extracts three target points a1, a2 and a3 from the received image; extracting the features of the three target points to obtain feature description information of the target points, and searching a feature point data set according to the feature description information of the target points, wherein the feature point data set comprises the feature description information and the spatial position information of known feature points; obtaining the spatial position information of the feature point matched with the feature description information of the target point by searching the feature point data set, wherein the spatial position information is used as the spatial position information of the target point; the electronic device further obtains depth information d1, d2 and d3 corresponding to the three target points a1, a2 and a3 according to the acquired corresponding depth images.
First, according to the spatial position information of target point a1 and its depth information d1, a circle centered at a1 with radius d1 is determined; likewise, a circle centered at a2 with radius d2 is determined from the spatial position information of a2 and its depth information d2. These two circles have two intersecting feature points (as shown in fig. 2). A circle centered at a3 with radius d3 is then determined from the spatial position information of a3 and its depth information d3; the circle centered at a3 and the circle centered at a2 likewise have two intersecting feature points (as shown in fig. 2). Of the four intersecting feature points thus determined (two between the circles centered at a1 and a2, and two between the circles centered at a2 and a3), two are necessarily very close to each other, and in the absence of error they coincide exactly; this is the common cross feature point. The electronic device therefore determines the spatial position information corresponding to the common cross feature point as the positioning information, which represents the position of camera A when the image was captured.
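The three-circle procedure above can be sketched in Python. This is an illustrative reconstruction under assumed coordinates, not code from the patent: the two candidate intersections of the circles around a1 and a2 are computed in closed form, and the candidate that also lies (approximately) on the circle around a3 is kept as the common cross feature point.

```python
import math

def circle_intersections(c1, r1, c2, r2):
    """Intersection points of two circles; returns 0, 1, or 2 points."""
    (x1, y1), (x2, y2) = c1, c2
    d = math.hypot(x2 - x1, y2 - y1)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []  # coincident centers, or circles too far apart / nested
    a = (r1**2 - r2**2 + d**2) / (2 * d)          # distance to chord midpoint
    h = math.sqrt(max(r1**2 - a**2, 0.0))          # half chord length
    mx, my = x1 + a * (x2 - x1) / d, y1 + a * (y2 - y1) / d
    ox, oy = -(y2 - y1) * h / d, (x2 - x1) * h / d  # perpendicular offset
    return [(mx + ox, my + oy), (mx - ox, my - oy)]

def locate_2d(p1, d1, p2, d2, p3, d3):
    """Pick the intersection of circles 1 and 2 that also lies
    (approximately) on circle 3: the common cross feature point."""
    candidates = circle_intersections(p1, d1, p2, d2)
    if not candidates:
        raise ValueError("circles do not intersect")
    return min(candidates,
               key=lambda p: abs(math.dist(p, p3) - d3))

# Hypothetical setup: camera at (1, 1), depths = distances to three targets.
a1, a2, a3 = (0.0, 0.0), (4.0, 0.0), (0.0, 3.0)
d1, d2, d3 = math.hypot(1, 1), math.hypot(3, 1), math.hypot(1, 2)
print(locate_2d(a1, d1, a2, d2, a3, d3))  # approximately (1.0, 1.0)
```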
Preferably, when performing two-dimensional positioning, the step 103 includes:
determining a circle which takes the first target point as a circle center and the depth information of the first target point as a radius according to the spatial position information of the first target point and the depth information corresponding to the first target point;
determining a circle with the second target point as a circle center and the depth information of the second target point as a radius according to the spatial position information of the second target point and the depth information corresponding to the second target point;
determining a circle taking the first target point as a circle center and a first common cross feature point and a second common cross feature point between circles taking the second target point as a circle center, and acquiring spatial position information of the first common cross feature point and the second common cross feature point;
and screening the spatial position information of the first common cross feature point and the second common cross feature point according to the positioning information determined in at least one previous frame, and determining the current positioning information.
Still referring to fig. 2, the image captured by the camera a is uploaded to the electronic device located in the cloud through the device where the camera a is located, and the electronic device extracts two target points a1 and a2 from the received image; extracting the features of the two target points to obtain feature description information of the target points, and searching a feature point data set according to the feature description information of the target points, wherein the feature point data set comprises the feature description information and the spatial position information of known feature points; obtaining the spatial position information of the feature point matched with the feature description information of the target point by searching the feature point data set, wherein the spatial position information is used as the spatial position information of the target point; the electronic device further obtains depth information d1 and d2 corresponding to the two target points a1 and a2, respectively, from the acquired corresponding depth images.
First, according to the spatial position information of target point a1 and its depth information d1, a circle centered at a1 with radius d1 is determined; likewise, a circle centered at a2 with radius d2 is determined from the spatial position information of a2 and its depth information d2. These two circles have two intersecting feature points (as shown in fig. 2). If the device carrying camera A has been continuously positioned over a period of time, the spatial position information of the two intersecting feature points is screened according to the positioning information determined in at least one previous frame: since the device's position in the previous frame cannot be far from its current position, the intersecting feature point closer to the previous frame's position is selected (the farther one is eliminated) as the position of camera A when the image was captured, that is, the current positioning information.
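The screening step, keeping whichever of the two intersection points lies closer to the previous frame's fix, can be sketched as follows; the candidate coordinates and previous position are hypothetical values for illustration.

```python
import math

def screen_by_previous(candidates, previous_position):
    """Of the two intersecting feature points, keep the one closest to
    the position fixed in the previous frame; the farther one is
    eliminated, yielding the current positioning information."""
    return min(candidates, key=lambda p: math.dist(p, previous_position))

# Two candidate intersections of the circles around a1 and a2,
# plus the position determined one frame earlier (hypothetical values).
candidates = [(1.0, 1.0), (1.0, -1.0)]
print(screen_by_previous(candidates, (0.9, 1.2)))  # -> (1.0, 1.0)
```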
Preferably, when performing three-dimensional positioning, the step 103 includes:
determining a circle which takes the first target point as a circle center and the depth information of the first target point as a radius according to the spatial position information of the first target point and the depth information corresponding to the first target point;
determining a circle with the second target point as a circle center and the depth information of the second target point as a radius according to the spatial position information of the second target point and the depth information corresponding to the second target point;
determining a circle with the third target point as a circle center and the depth information of the third target point as a radius according to the spatial position information of the third target point and the depth information corresponding to the third target point;
determining a circle with the fourth target point as a circle center and the depth information of the fourth target point as a radius according to the spatial position information of the fourth target point and the depth information corresponding to the fourth target point;
determining common cross feature points among a circle taking the first target point as a circle center, a circle taking the second target point as a circle center, a circle taking the third target point as a circle center, and a circle taking the fourth target point as a circle center, and determining spatial position information corresponding to the common cross feature points as the positioning information.
Preferably, when performing three-dimensional positioning, the step 103 includes:
determining a circle which takes the first target point as a circle center and the depth information of the first target point as a radius according to the spatial position information of the first target point and the depth information corresponding to the first target point;
determining a circle with the second target point as a circle center and the depth information of the second target point as a radius according to the spatial position information of the second target point and the depth information corresponding to the second target point;
determining a circle with the third target point as a circle center and the depth information of the third target point as a radius according to the spatial position information of the third target point and the depth information corresponding to the third target point;
determining a first common cross feature point and a second common cross feature point between a circumference taking the first target point as a circle center, a circumference taking the second target point as a circle center and a circumference taking the third target point as a circle center, and acquiring spatial position information of the first common cross feature point and the second common cross feature point;
and screening the spatial position information of the first common cross feature point and the second common cross feature point according to the positioning information determined in at least one previous frame, and determining the current positioning information.
It should be noted that positioning information during three-dimensional positioning can also be determined as shown in fig. 2, except that each circle becomes a spherical surface. Since the intersection of two spherical surfaces is a circle, at least three spherical surfaces are required to narrow the candidates down to the first and second common cross feature points; that is, at least three target points need to be extracted. For the specific determination of positioning information in three-dimensional space, refer to the description above; details are not repeated here.
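The three-dimensional case can be sketched as follows; this is an illustrative reconstruction with assumed target-point coordinates. Three spheres are intersected in closed form to yield two candidate points, and a fourth sphere (built from the fourth target point's depth) selects the common cross feature point.

```python
import math

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def add(a, b): return tuple(x + y for x, y in zip(a, b))
def scale(a, s): return tuple(x * s for x in a)
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def norm(a): return math.sqrt(dot(a, a))
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def trilaterate(p1, r1, p2, r2, p3, r3):
    """Intersect three spheres: returns the two candidate points."""
    ex = scale(sub(p2, p1), 1 / norm(sub(p2, p1)))   # local x axis
    i = dot(ex, sub(p3, p1))
    ey_raw = sub(sub(p3, p1), scale(ex, i))
    ey = scale(ey_raw, 1 / norm(ey_raw))             # local y axis
    ez = cross(ex, ey)                               # local z axis
    d = norm(sub(p2, p1))
    j = dot(ey, sub(p3, p1))
    x = (r1**2 - r2**2 + d**2) / (2 * d)
    y = (r1**2 - r3**2 + i**2 + j**2 - 2 * i * x) / (2 * j)
    z = math.sqrt(max(r1**2 - x**2 - y**2, 0.0))
    base = add(p1, add(scale(ex, x), scale(ey, y)))
    return add(base, scale(ez, z)), add(base, scale(ez, -z))

def locate_3d(spheres, p4, r4):
    """Use a fourth sphere to pick the common cross feature point."""
    c1, c2 = trilaterate(*spheres)
    return min((c1, c2), key=lambda p: abs(norm(sub(p, p4)) - r4))

# Hypothetical setup: camera at (1, 1, 1), four target points with depths.
pts = [(0, 0, 0), (4, 0, 0), (0, 3, 0), (0, 0, 5)]
rs = [norm(sub((1, 1, 1), p)) for p in pts]
fix = locate_3d((pts[0], rs[0], pts[1], rs[1], pts[2], rs[2]), pts[3], rs[3])
print(fix)  # approximately (1.0, 1.0, 1.0)
```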
In addition, as a preferred implementation manner of the present invention, the method according to the embodiment of the present invention further includes:
extracting different at least two target points for multiple times, and correspondingly generating multiple positioning information;
and selecting the optimal positioning information from the plurality of positioning information according to the corresponding position distribution of the plurality of positioning information as the finally determined positioning information.
That is, the electronic device may repeatedly perform the above steps 101, 102, and 103 for multiple times, and then each time the electronic device repeatedly performs the above steps, a corresponding positioning information is obtained; the electronic device may select the optimal positioning information from the region with the most dense position distribution as the finally determined positioning information according to the position distribution of the positioning information. Thus, the accuracy of the positioning information can be improved. Preferably, a voting mechanism of WTA (Winner-Take-All) can be used for final determination of the positioning information. The determined positioning information is information characterizing the spatial position of the device that captured the corresponding image.
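The winner-take-all selection over repeated fixes can be sketched as a simple density vote; the voting radius and the sample fixes below are hypothetical.

```python
import math

def densest_fix(fixes, radius=0.5):
    """WTA-style vote: each candidate fix scores the number of fixes
    (including itself) within `radius`; the fix from the densest
    region wins and becomes the final positioning information."""
    def votes(p):
        return sum(1 for q in fixes if math.dist(p, q) <= radius)
    return max(fixes, key=votes)

# Five fixes from repeated runs of steps 101-103; one is an outlier.
fixes = [(1.0, 1.0), (1.05, 0.95), (0.98, 1.02), (3.5, 2.0), (1.02, 1.01)]
print(densest_fix(fixes))  # -> (1.0, 1.0), from the dense cluster
```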
Corresponding to the above positioning method, an embodiment of the present invention further provides an electronic device, as shown in fig. 3, which mainly includes:
a target point extracting unit 10 for extracting at least two target points from an input image;
a position information obtaining unit 20, configured to obtain spatial position information corresponding to the target point by performing feature extraction on the target point and searching a feature point data set;
and a positioning information generating unit 30, configured to generate positioning information according to the obtained spatial position information of the at least two target points and the depth information of the corresponding target point.
Preferably, the position information obtaining unit 20 is further configured to perform feature extraction on the target point to obtain feature description information of the target point; searching a feature point data set according to the feature description information of the target point, wherein the feature point data set comprises feature description information and spatial position information of known feature points; and obtaining the spatial position information of the feature point matched with the feature description information of the target point as the spatial position information of the target point by searching the feature point data set.
Preferably, when performing two-dimensional positioning, the positioning information generating unit 30 is further configured to,
determining a circle which takes the first target point as a circle center and the depth information of the first target point as a radius according to the spatial position information of the first target point and the depth information corresponding to the first target point;
determining a circle with the second target point as a circle center and the depth information of the second target point as a radius according to the spatial position information of the second target point and the depth information corresponding to the second target point;
determining a circle with the third target point as a circle center and the depth information of the third target point as a radius according to the spatial position information of the third target point and the depth information corresponding to the third target point;
determining a common cross feature point among the circumference taking the first target point as the center of the circle, the circumference taking the second target point as the center of the circle, and the circumference taking the third target point as the center of the circle, and determining spatial position information corresponding to the common cross feature point as the positioning information.
Preferably, when performing two-dimensional positioning, the positioning information generating unit 30 is further configured to,
determining a circle which takes the first target point as a circle center and the depth information of the first target point as a radius according to the spatial position information of the first target point and the depth information corresponding to the first target point;
determining a circle with the second target point as a circle center and the depth information of the second target point as a radius according to the spatial position information of the second target point and the depth information corresponding to the second target point;
determining a circle taking the first target point as a circle center and a first common cross feature point and a second common cross feature point between circles taking the second target point as a circle center, and acquiring spatial position information of the first common cross feature point and the second common cross feature point;
and screening the spatial position information of the first common cross feature point and the second common cross feature point according to the positioning information determined in at least one previous frame, and determining the current positioning information.
Preferably, when performing three-dimensional positioning, the positioning information generating unit 30 is further configured to,
determining a circle which takes the first target point as a circle center and the depth information of the first target point as a radius according to the spatial position information of the first target point and the depth information corresponding to the first target point;
determining a circle with the second target point as a circle center and the depth information of the second target point as a radius according to the spatial position information of the second target point and the depth information corresponding to the second target point;
determining a circle with the third target point as a circle center and the depth information of the third target point as a radius according to the spatial position information of the third target point and the depth information corresponding to the third target point;
determining a circle with the fourth target point as a circle center and the depth information of the fourth target point as a radius according to the spatial position information of the fourth target point and the depth information corresponding to the fourth target point;
determining common cross feature points among a circle taking the first target point as a circle center, a circle taking the second target point as a circle center, a circle taking the third target point as a circle center, and a circle taking the fourth target point as a circle center, and determining spatial position information corresponding to the common cross feature points as the positioning information.
Preferably, when performing three-dimensional positioning, the positioning information generating unit 30 is further configured to,
determining a circle which takes the first target point as a circle center and the depth information of the first target point as a radius according to the spatial position information of the first target point and the depth information corresponding to the first target point;
determining a circle with the second target point as a circle center and the depth information of the second target point as a radius according to the spatial position information of the second target point and the depth information corresponding to the second target point;
determining a circle with the third target point as a circle center and the depth information of the third target point as a radius according to the spatial position information of the third target point and the depth information corresponding to the third target point;
determining a first common cross feature point and a second common cross feature point between a circumference taking the first target point as a circle center, a circumference taking the second target point as a circle center and a circumference taking the third target point as a circle center, and acquiring spatial position information of the first common cross feature point and the second common cross feature point;
and screening the spatial position information of the first common cross feature point and the second common cross feature point according to the positioning information determined in at least one previous frame, and determining the current positioning information.
Preferably, the positioning information generating unit 30 is further configured to extract different at least two target points for multiple times, and generate multiple pieces of positioning information accordingly; and selecting the optimal positioning information from the plurality of positioning information according to the corresponding position distribution of the plurality of positioning information as the finally determined positioning information.
It should be noted that the electronic device according to the embodiment of the present invention may be deployed in a cloud, and the functions of the target point extracting unit 10, the position information obtaining unit 20, and the positioning information generating unit 30 may be implemented by a Central Processing Unit (CPU), a Micro Processing Unit (MPU), or a Digital Signal Processor (DSP) chip in the electronic device.
In summary, the method for generating the positioning information based on the spatial position information and the depth information of the at least two target points in the embodiments of the present invention can achieve device positioning with higher precision, less environmental impact, and better versatility; the positioning method of the embodiment of the invention is simple and convenient to realize, accurate and effective in positioning information and higher in robustness.
In the embodiments provided in the present invention, it should be understood that the disclosed method, apparatus and electronic device may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; that is, they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may serve as a separate unit, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that all or part of the steps of the method embodiments may be implemented by program instructions executed on related hardware. The program may be stored in a computer-readable storage medium and, when executed, performs the steps of the method embodiments. The aforementioned storage medium includes a removable storage device, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or any other medium capable of storing program code.
Alternatively, if the integrated unit according to the embodiment of the present invention is implemented in the form of a software functional module and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of the present invention, or the part contributing to the prior art, may essentially be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present invention. The aforementioned storage medium includes a removable storage device, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or any other medium capable of storing program code.
The above description covers only specific embodiments of the present invention, but the scope of the present invention is not limited thereto. Any person skilled in the art can readily conceive of changes or substitutions within the technical scope disclosed by the present invention, and all such changes or substitutions shall fall within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.
Claims (12)
1. A positioning method applied to an electronic device, characterized by comprising the following steps:
extracting at least two target points from the input image;
obtaining spatial position information corresponding to the target point by performing feature extraction on the target point and searching a feature point data set;
generating positioning information according to the obtained spatial position information of the at least two target points and the depth information of the corresponding target points; wherein the method further comprises:
extracting a different set of at least two target points multiple times, and correspondingly generating multiple pieces of positioning information;
and selecting, from the multiple pieces of positioning information according to their position distribution, the optimal positioning information as the finally determined positioning information.
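Illustratively, the multi-extraction and selection step of claim 1 might be sketched as follows. This is a non-authoritative Python sketch: the claim does not specify the selection rule, so picking the estimate closest to the centroid of all estimates is an assumed example, and all names are illustrative.

```python
import math

def select_best_fix(fixes):
    """Pick the positioning estimate that best agrees with the overall
    position distribution of all estimates.  The selection rule is not
    specified in the claim; 'closest to the centroid' is assumed here."""
    cx = sum(p[0] for p in fixes) / len(fixes)
    cy = sum(p[1] for p in fixes) / len(fixes)
    return min(fixes, key=lambda p: math.hypot(p[0] - cx, p[1] - cy))

# Estimates produced from different target-point subsets; the last one
# is an outlier, e.g. caused by a bad feature match.
fixes = [(10.0, 5.0), (10.2, 5.1), (9.9, 4.8), (30.0, 40.0)]
best = select_best_fix(fixes)  # the outlier is not selected
```

A robust implementation might instead use the geometric median or a RANSAC-style consensus, which tolerate more than one outlier.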
2. The positioning method according to claim 1, wherein the obtaining spatial position information corresponding to the target point by performing feature extraction on the target point and searching a feature point data set comprises:
extracting features of the target point to obtain feature description information of the target point;
searching a feature point data set according to the feature description information of the target point, wherein the feature point data set comprises feature description information and spatial position information of known feature points; and obtaining the spatial position information of the feature point matched with the feature description information of the target point as the spatial position information of the target point by searching the feature point data set.
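The lookup described in claim 2 amounts to nearest-neighbor matching of the target point's feature description against a data set of known feature points. A minimal sketch, assuming toy three-dimensional descriptors and an illustrative matching threshold (real systems would use high-dimensional descriptors and an approximate nearest-neighbor index):

```python
import math

# Hypothetical feature point data set: descriptor -> known spatial position.
FEATURE_DATA_SET = [
    {"descriptor": (0.9, 0.1, 0.3), "position": (12.0, 7.5)},
    {"descriptor": (0.2, 0.8, 0.5), "position": (3.0, 21.0)},
    {"descriptor": (0.4, 0.4, 0.9), "position": (18.5, 2.2)},
]

def lookup_position(descriptor, data_set=FEATURE_DATA_SET, max_dist=0.5):
    """Return the spatial position of the stored feature point whose
    descriptor best matches the query, or None if no stored descriptor
    is within the (illustrative) matching threshold."""
    best, best_d = None, float("inf")
    for entry in data_set:
        d = math.dist(descriptor, entry["descriptor"])
        if d < best_d:
            best, best_d = entry, d
    return best["position"] if best_d <= max_dist else None

pos = lookup_position((0.88, 0.12, 0.32))  # close to the first entry
```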
3. The positioning method according to claim 2, wherein when performing two-dimensional positioning, the generating positioning information according to the obtained spatial position information of the at least two target points and the depth information of the corresponding target points comprises:
determining a circle which takes the first target point as a circle center and the depth information of the first target point as a radius according to the spatial position information of the first target point and the depth information corresponding to the first target point;
determining a circle with the second target point as a circle center and the depth information of the second target point as a radius according to the spatial position information of the second target point and the depth information corresponding to the second target point;
determining a circle with the third target point as a circle center and the depth information of the third target point as a radius according to the spatial position information of the third target point and the depth information corresponding to the third target point;
determining a common cross feature point among the circumference taking the first target point as the center of the circle, the circumference taking the second target point as the center of the circle, and the circumference taking the third target point as the center of the circle, and determining spatial position information corresponding to the common cross feature point as the positioning information.
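The three-circle intersection of claim 3 reduces to a small linear problem: subtracting the circle equations pairwise eliminates the quadratic terms, leaving a 2x2 linear system in the unknown position. A minimal sketch under that standard trilateration formulation (function name illustrative, not from the patent):

```python
def trilaterate_2d(p1, r1, p2, r2, p3, r3):
    """Common intersection of three circles with centers p1..p3 and
    radii r1..r3 (each radius = the depth of that target point).
    Assumes the three centers are not collinear."""
    x1, y1 = p1; x2, y2 = p2; x3, y3 = p3
    # Subtracting circle 2 (resp. 3) from circle 1 gives A [x, y]^T = b:
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21  # zero when the centers are collinear
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Device truly at (1, 2); three target points with measured depths:
x, y = trilaterate_2d((0.0, 0.0), 5**0.5,
                      (4.0, 0.0), 13**0.5,
                      (0.0, 5.0), 10**0.5)
```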
4. The positioning method according to claim 2, wherein when performing two-dimensional positioning, the generating positioning information according to the obtained spatial position information of the at least two target points and the depth information of the corresponding target points comprises:
determining a circle which takes the first target point as a circle center and the depth information of the first target point as a radius according to the spatial position information of the first target point and the depth information corresponding to the first target point;
determining a circle with the second target point as a circle center and the depth information of the second target point as a radius according to the spatial position information of the second target point and the depth information corresponding to the second target point;
determining a first common cross feature point and a second common cross feature point between the circle taking the first target point as a circle center and the circle taking the second target point as a circle center, and acquiring spatial position information of the first common cross feature point and the second common cross feature point;
and screening the spatial position information of the first common cross feature point and the second common cross feature point according to the positioning information determined in at least one previous frame, and determining the current positioning information.
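The two-circle variant of claim 4 generally produces two candidate points, which is why the claim screens them against a previously determined fix. A hedged sketch of both steps (the screening rule "nearest to the previous frame's fix" is an assumed reading of the claim; names are illustrative):

```python
import math

def circle_intersections(p1, r1, p2, r2):
    """Return the (up to two) intersection points of two circles."""
    d = math.dist(p1, p2)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []                              # no intersection
    a = (r1**2 - r2**2 + d**2) / (2 * d)       # p1 -> chord midpoint
    h = math.sqrt(max(r1**2 - a**2, 0.0))      # half chord length
    mx = p1[0] + a * (p2[0] - p1[0]) / d
    my = p1[1] + a * (p2[1] - p1[1]) / d
    ox = -h * (p2[1] - p1[1]) / d              # perpendicular offset
    oy = h * (p2[0] - p1[0]) / d
    return [(mx + ox, my + oy), (mx - ox, my - oy)]

# Two target points as circle centers; the two candidates are then
# screened against the positioning determined in a previous frame:
candidates = circle_intersections((0.0, 0.0), 5.0, (6.0, 0.0), 5.0)
previous_fix = (3.1, 3.8)
fix = min(candidates, key=lambda p: math.dist(p, previous_fix))
```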
5. The positioning method according to claim 2, wherein when performing three-dimensional positioning, the generating positioning information according to the obtained spatial position information of the at least two target points and the depth information of the corresponding target points comprises:
determining a circle which takes the first target point as a circle center and the depth information of the first target point as a radius according to the spatial position information of the first target point and the depth information corresponding to the first target point;
determining a circle with the second target point as a circle center and the depth information of the second target point as a radius according to the spatial position information of the second target point and the depth information corresponding to the second target point;
determining a circle with the third target point as a circle center and the depth information of the third target point as a radius according to the spatial position information of the third target point and the depth information corresponding to the third target point;
determining a circle with the fourth target point as a circle center and the depth information of the fourth target point as a radius according to the spatial position information of the fourth target point and the depth information corresponding to the fourth target point;
determining common cross feature points among a circle taking the first target point as a circle center, a circle taking the second target point as a circle center, a circle taking the third target point as a circle center, and a circle taking the fourth target point as a circle center, and determining spatial position information corresponding to the common cross feature points as the positioning information.
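In three dimensions the constant-depth locus around each target point is a sphere, and the four-point case of claim 5 again linearizes: subtracting the first sphere equation from the other three gives a 3x3 linear system. A sketch under that reading (Cramer's rule is used only for self-containment; the function name is illustrative):

```python
def trilaterate_3d(centers, radii):
    """Locate the point whose distance to each of four sphere centers
    equals the corresponding radius (radius = depth of that target
    point).  Assumes the four centers are not coplanar."""
    (x1, y1, z1), r1 = centers[0], radii[0]
    A, b = [], []
    for (xi, yi, zi), ri in zip(centers[1:], radii[1:]):
        A.append([2 * (xi - x1), 2 * (yi - y1), 2 * (zi - z1)])
        b.append(r1**2 - ri**2
                 + xi**2 - x1**2 + yi**2 - y1**2 + zi**2 - z1**2)

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    D = det3(A)  # zero when the four centers are coplanar

    def col_replaced(j):  # Cramer's rule: swap column j for b
        return [[b[i] if k == j else A[i][k] for k in range(3)]
                for i in range(3)]

    return tuple(det3(col_replaced(j)) / D for j in range(3))

p = trilaterate_3d(
    [(0.0, 0.0, 0.0), (4.0, 0.0, 0.0), (0.0, 5.0, 0.0), (0.0, 0.0, 6.0)],
    [14**0.5, 22**0.5, 19**0.5, 14**0.5])  # device truly at (1, 2, 3)
```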
6. The positioning method according to claim 2, wherein when performing three-dimensional positioning, the generating positioning information according to the obtained spatial position information of the at least two target points and the depth information of the corresponding target points comprises:
determining a circle which takes the first target point as a circle center and the depth information of the first target point as a radius according to the spatial position information of the first target point and the depth information corresponding to the first target point;
determining a circle with the second target point as a circle center and the depth information of the second target point as a radius according to the spatial position information of the second target point and the depth information corresponding to the second target point;
determining a circle with the third target point as a circle center and the depth information of the third target point as a radius according to the spatial position information of the third target point and the depth information corresponding to the third target point;
determining a first common cross feature point and a second common cross feature point between a circumference taking the first target point as a circle center, a circumference taking the second target point as a circle center and a circumference taking the third target point as a circle center, and acquiring spatial position information of the first common cross feature point and the second common cross feature point;
and screening the spatial position information of the first common cross feature point and the second common cross feature point according to the positioning information determined in at least one previous frame, and determining the current positioning information.
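With only three target points, as in claim 6, the three spheres generally meet in two points that mirror each other across the plane of the centers, which the claim then screens using a previous frame's fix. A sketch via standard trilateration in a local orthonormal frame (the "nearest to the previous fix" screening rule is an assumed reading; names are illustrative):

```python
import math

def _sub(a, b): return tuple(x - y for x, y in zip(a, b))
def _add(a, b): return tuple(x + y for x, y in zip(a, b))
def _mul(a, s): return tuple(x * s for x in a)
def _dot(a, b): return sum(x * y for x, y in zip(a, b))
def _cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def three_sphere_candidates(p1, r1, p2, r2, p3, r3):
    """Intersect three spheres: returns the two candidate points,
    mirror images across the plane of the three centers."""
    d = math.dist(p1, p2)
    ex = _mul(_sub(p2, p1), 1.0 / d)               # local x axis
    i = _dot(ex, _sub(p3, p1))
    tmp = _sub(_sub(p3, p1), _mul(ex, i))
    ey = _mul(tmp, 1.0 / math.sqrt(_dot(tmp, tmp)))  # local y axis
    ez = _cross(ex, ey)                              # local z axis
    j = _dot(ey, _sub(p3, p1))
    x = (r1**2 - r2**2 + d**2) / (2 * d)
    y = (r1**2 - r3**2 + i**2 + j**2 - 2 * i * x) / (2 * j)
    z = math.sqrt(max(r1**2 - x**2 - y**2, 0.0))
    base = _add(p1, _add(_mul(ex, x), _mul(ey, y)))
    return [_add(base, _mul(ez, z)), _add(base, _mul(ez, -z))]

# Device truly at (1, 2, 3); three target points and their depths:
cands = three_sphere_candidates((0.0, 0.0, 0.0), 14**0.5,
                                (4.0, 0.0, 0.0), 22**0.5,
                                (0.0, 5.0, 0.0), 19**0.5)
# Screen the two candidates against the fix of a previous frame:
fix = min(cands, key=lambda p: math.dist(p, (1.1, 2.0, 2.9)))
```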
7. An electronic device, comprising:
a target point extracting unit for extracting at least two target points from an input image;
a position information obtaining unit, configured to obtain spatial position information corresponding to the target point by performing feature extraction on the target point and searching a feature point data set;
the positioning information generating unit is used for generating positioning information according to the obtained spatial position information of the at least two target points and the depth information of the corresponding target points; wherein,
the positioning information generating unit is further configured to extract a different set of at least two target points multiple times and generate multiple pieces of positioning information accordingly; and to select, from the multiple pieces of positioning information according to their position distribution, the optimal positioning information as the finally determined positioning information.
8. The electronic device according to claim 7, wherein the location information obtaining unit is further configured to perform feature extraction on the target point to obtain feature description information of the target point; searching a feature point data set according to the feature description information of the target point, wherein the feature point data set comprises feature description information and spatial position information of known feature points; and obtaining the spatial position information of the feature point matched with the feature description information of the target point as the spatial position information of the target point by searching the feature point data set.
9. The electronic device according to claim 8, wherein when performing two-dimensional positioning, the positioning information generating unit is further configured to,
determining a circle which takes the first target point as a circle center and the depth information of the first target point as a radius according to the spatial position information of the first target point and the depth information corresponding to the first target point;
determining a circle with the second target point as a circle center and the depth information of the second target point as a radius according to the spatial position information of the second target point and the depth information corresponding to the second target point;
determining a circle with the third target point as a circle center and the depth information of the third target point as a radius according to the spatial position information of the third target point and the depth information corresponding to the third target point;
determining a common cross feature point among the circumference taking the first target point as the center of the circle, the circumference taking the second target point as the center of the circle, and the circumference taking the third target point as the center of the circle, and determining spatial position information corresponding to the common cross feature point as the positioning information.
10. The electronic device according to claim 8, wherein when performing two-dimensional positioning, the positioning information generating unit is further configured to,
determining a circle which takes the first target point as a circle center and the depth information of the first target point as a radius according to the spatial position information of the first target point and the depth information corresponding to the first target point;
determining a circle with the second target point as a circle center and the depth information of the second target point as a radius according to the spatial position information of the second target point and the depth information corresponding to the second target point;
determining a first common cross feature point and a second common cross feature point between the circle taking the first target point as a circle center and the circle taking the second target point as a circle center, and acquiring spatial position information of the first common cross feature point and the second common cross feature point;
and screening the spatial position information of the first common cross feature point and the second common cross feature point according to the positioning information determined in at least one previous frame, and determining the current positioning information.
11. The electronic device according to claim 8, wherein the positioning information generating unit is further configured to, when performing three-dimensional positioning,
determining a circle which takes the first target point as a circle center and the depth information of the first target point as a radius according to the spatial position information of the first target point and the depth information corresponding to the first target point;
determining a circle with the second target point as a circle center and the depth information of the second target point as a radius according to the spatial position information of the second target point and the depth information corresponding to the second target point;
determining a circle with the third target point as a circle center and the depth information of the third target point as a radius according to the spatial position information of the third target point and the depth information corresponding to the third target point;
determining a circle with the fourth target point as a circle center and the depth information of the fourth target point as a radius according to the spatial position information of the fourth target point and the depth information corresponding to the fourth target point;
determining common cross feature points among a circle taking the first target point as a circle center, a circle taking the second target point as a circle center, a circle taking the third target point as a circle center, and a circle taking the fourth target point as a circle center, and determining spatial position information corresponding to the common cross feature points as the positioning information.
12. The electronic device according to claim 8, wherein the positioning information generating unit is further configured to, when performing three-dimensional positioning,
determining a circle which takes the first target point as a circle center and the depth information of the first target point as a radius according to the spatial position information of the first target point and the depth information corresponding to the first target point;
determining a circle with the second target point as a circle center and the depth information of the second target point as a radius according to the spatial position information of the second target point and the depth information corresponding to the second target point;
determining a circle with the third target point as a circle center and the depth information of the third target point as a radius according to the spatial position information of the third target point and the depth information corresponding to the third target point;
determining a first common cross feature point and a second common cross feature point between a circumference taking the first target point as a circle center, a circumference taking the second target point as a circle center and a circumference taking the third target point as a circle center, and acquiring spatial position information of the first common cross feature point and the second common cross feature point;
and screening the spatial position information of the first common cross feature point and the second common cross feature point according to the positioning information determined in at least one previous frame, and determining the current positioning information.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310392868.6A CN104422441B (en) | 2013-09-02 | 2013-09-02 | A kind of electronic equipment and localization method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104422441A CN104422441A (en) | 2015-03-18 |
CN104422441B true CN104422441B (en) | 2017-12-26 |
Family
ID=52972106
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310392868.6A Active CN104422441B (en) | 2013-09-02 | 2013-09-02 | A kind of electronic equipment and localization method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104422441B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106657600B (en) * | 2016-10-31 | 2019-10-15 | 维沃移动通信有限公司 | A kind of image processing method and mobile terminal |
CN106960453B (en) * | 2017-03-22 | 2018-04-13 | 海南职业技术学院 | Photograph taking fixing by gross bearings method and device |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101922928A (en) * | 2009-06-16 | 2010-12-22 | 纬创资通股份有限公司 | Method, device and electronic device for measuring distance and identifying position of intelligent handheld device |
CN101968940A (en) * | 2009-07-28 | 2011-02-09 | 宝定科技股份有限公司 | Handheld device with positioning and picture taking capability and geographical positioning method thereof |
CN102262724A (en) * | 2010-05-31 | 2011-11-30 | 汉王科技股份有限公司 | Object image characteristic points positioning method and object image characteristic points positioning system |
CN103252778A (en) * | 2011-12-23 | 2013-08-21 | 三星电子株式会社 | Apparatus for estimating the robot pose and method thereof |
Non-Patent Citations (2)
Title |
---|
A triangle centroid localization algorithm based on RSSI correction; Lü Zhen et al.; Transducer and Microsystem Technologies; 2010-05-20; Vol. 29, No. 5; pp. 122-124 * |
Research on edge matching methods for aircraft navigation; Ding Mingyue et al.; Journal of Astronautics; 1998-07-31; Vol. 19, No. 3; pp. 72-78 * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Chen et al. | City-scale landmark identification on mobile devices | |
CN106920279B (en) | Three-dimensional map construction method and device | |
EP3100210B1 (en) | Dynamically updating a feature database that contains features corresponding to a known target object | |
EP2614487B1 (en) | Online reference generation and tracking for multi-user augmented reality | |
CN103530881B (en) | Be applicable to the Outdoor Augmented Reality no marks point Tracing Registration method of mobile terminal | |
US9560273B2 (en) | Wearable information system having at least one camera | |
CN111028358B (en) | Indoor environment augmented reality display method and device and terminal equipment | |
US20150095360A1 (en) | Multiview pruning of feature database for object recognition system | |
EP2770783A2 (en) | A wearable information system having at least one camera | |
CN105074776A (en) | In situ creation of planar natural feature targets | |
JPWO2012046671A1 (en) | Positioning system | |
CN103003843A (en) | Dataset creation for tracking targets with dynamically changing portions | |
CN111832579B (en) | Map interest point data processing method and device, electronic equipment and readable medium | |
CN112487979A (en) | Target detection method, model training method, device, electronic device and medium | |
JP2017003525A (en) | Three-dimensional measuring device | |
Ruiz-Ruiz et al. | A multisensor LBS using SIFT-based 3D models | |
CN110163914B (en) | Vision-based positioning | |
KR101586071B1 (en) | Apparatus for providing marker-less augmented reality service and photographing postion estimating method therefor | |
CN104422441B (en) | A kind of electronic equipment and localization method | |
JP5536124B2 (en) | Image processing system and image processing method | |
CN104166995A (en) | Harris-SIFT binocular vision positioning method based on horse pace measurement | |
CN111223139B (en) | Target positioning method and terminal equipment | |
US9870514B2 (en) | Hypotheses line mapping and verification for 3D maps | |
Wallbridge et al. | Qualitative review of object recognition techniques for tabletop manipulation | |
US10878278B1 (en) | Geo-localization based on remotely sensed visual features |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant |