CN107153831A - Localization method and system for an intelligent terminal, and intelligent terminal - Google Patents
Localization method and system for an intelligent terminal, and intelligent terminal
- Publication number
- CN107153831A CN107153831A CN201710190798.4A CN201710190798A CN107153831A CN 107153831 A CN107153831 A CN 107153831A CN 201710190798 A CN201710190798 A CN 201710190798A CN 107153831 A CN107153831 A CN 107153831A
- Authority
- CN
- China
- Prior art keywords
- feature point
- point set
- intelligent terminal
- image
- mean
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/22—Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
Abstract
The present invention proposes a localization method for an intelligent terminal, a localization system, and an intelligent terminal. The localization method includes: acquiring an image; extracting image feature points from the image; obtaining an image feature point set from the image feature points; obtaining a pose matrix from the image feature point set and an environment feature point set; and obtaining the camera position of the intelligent terminal from the pose matrix. Because the invention requires little data exchange with external information sources, localization is efficient; it can be used alone or combined with other localization technologies, improves the precision and speed of indoor positioning, and can provide useful assistance for the indoor positioning of intelligent terminals.
Description
Technical field
The present invention relates to the technical field of terminal positioning, and in particular to a localization method for an intelligent terminal, a localization system, and an intelligent terminal.
Background art
At present, outdoor positioning based on GPS (Global Positioning System) can satisfy the demands of LBS (Location Based Services), but indoor positioning is constrained by the environment, so a technology that can replace GPS and meet indoor positioning requirements is urgently needed. The indoor positioning technologies currently proposed by the industry fall into the following classes: Bluetooth, WLAN, cellular base stations, active RFID (Radio Frequency Identification), passive RFID, UWB (Ultra Wideband), optical tracking, magnetic positioning, and NFC (Near Field Communication).
Whether they rely on wireless sensor networks or on other techniques, all of these currently have their own unavoidable limitations. First, Bluetooth, for example, has a small positioning range, and most mobile users are not in the habit of keeping Bluetooth switched on for positioning, so its practical effect is poor. Second, base-station positioning triangulates from the terminal's own cellular signal; although this guarantees that most indoor spaces have a terminal signal, its accuracy is only tens to hundreds of meters, which clearly cannot meet indoor positioning demands. WLAN is expected to become the mainstream of future indoor positioning technology, but for now the uneven strength and distribution of public WLAN hotspot signals, together with privacy and security concerns, remain the key factors hindering its development. The other indoor positioning technologies likewise suffer from problems such as insufficient precision, excessive cost, or immature technology.
Summary of the invention
The present invention aims to solve at least one of the technical problems existing in the prior art or related art.
Accordingly, one object of the present invention is to propose a localization method for an intelligent terminal.
Another object of the present invention is to propose a localization system for an intelligent terminal.
A further object of the present invention is to propose an intelligent terminal.
In view of this, according to one object of the present invention, a localization method for an intelligent terminal is proposed, including: acquiring an image; extracting image feature points from the image; obtaining an image feature point set from the image feature points; obtaining a pose matrix from the image feature point set and an environment feature point set; and obtaining the camera position of the intelligent terminal from the pose matrix.
In the localization method provided by the present invention, the camera of the intelligent terminal captures an image of the current environment, and two-dimensional feature points are first extracted from the RGB (Red, Green, Blue) image; the Shi-Tomasi method, which balances speed and robustness to some extent, may be chosen for this extraction. An image feature point set is obtained from the image feature points, the pose matrix of the camera is obtained from the image feature point set, and the camera position is transformed into the global coordinate system according to the pose matrix, yielding the specific location of the camera. Because the invention requires little data exchange with external information sources, localization is efficient; it can be used alone or combined with other localization technologies, improves the precision and speed of indoor positioning, and can provide useful assistance for the indoor positioning of intelligent terminals.
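The five steps can be sketched as a minimal pipeline. Every function body below is an illustrative placeholder (a toy brightness-edge detector and a translation-only pose), not the patent's implementation, which uses Shi-Tomasi corners, Gaussian feature statistics, and ICP:

```python
# Minimal sketch of the five-step pipeline: acquire image -> extract
# feature points -> build feature point set -> estimate pose -> position.
# All bodies are placeholders; the patent does not prescribe these
# function names or data structures.

def acquire_image():
    # Stand-in for a camera frame: a 4x4 grayscale "image".
    return [[(r * 4 + c) % 7 for c in range(4)] for r in range(4)]

def extract_feature_points(image):
    # Placeholder detector: pick pixels brighter than their right
    # neighbor (a real system would use e.g. Shi-Tomasi corners).
    pts = []
    for r, row in enumerate(image):
        for c in range(len(row) - 1):
            if row[c] > row[c + 1]:
                pts.append((r, c))
    return pts

def build_feature_point_set(points):
    # Each feature becomes (mean_position, covariance); the covariance
    # here is a fixed placeholder scalar.
    return [((float(r), float(c)), 1.0) for r, c in points]

def estimate_pose(feature_set, environment_set):
    # Placeholder "pose matrix": a pure 2-D translation aligning the
    # first feature of each set.
    (fr, fc), _ = feature_set[0]
    (er, ec), _ = environment_set[0]
    return (er - fr, ec - fc)

def camera_position(pose):
    # With a translation-only pose, the camera sits at the translation.
    return pose

def locate(environment_set):
    image = acquire_image()
    points = extract_feature_points(image)
    feature_set = build_feature_point_set(points)
    pose = estimate_pose(feature_set, environment_set)
    return camera_position(pose)
```

The concrete realizations of each stage are discussed in the technical solutions that follow.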
The localization method of the intelligent terminal according to the present invention may also have the following technical features:
In the above technical solution, preferably, the step of obtaining the image feature point set from the image feature points includes: acquiring the two-dimensional coordinates and color information of each image feature point; converting the two-dimensional coordinates of each image feature point into three-dimensional coordinates in the camera coordinate system; computing the mean and covariance of each image feature point from the three-dimensional coordinates and color information; and obtaining the image feature point set from these means and covariances.
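The patent does not detail the two-dimensional-to-three-dimensional conversion; a standard pinhole back-projection, which requires a depth measurement and the camera intrinsics, is one common realization. The intrinsic parameters fx, fy, cx, cy below are assumed values, not figures from the patent:

```python
def pixel_to_camera(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with a depth measurement into the
    camera coordinate system using a pinhole model.

    fx, fy: focal lengths in pixels; cx, cy: principal point.
    """
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    z = depth
    return (x, y, z)
```

For example, a pixel at the principal point back-projects onto the optical axis at the measured depth.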
In this technical solution, the two-dimensional coordinates and color information of each feature point are acquired, the two-dimensional coordinates are mapped into three-dimensional space, and the mean and covariance of each feature point are obtained from the three-dimensional coordinates and color information through a Gaussian mixture model. The image feature point set is thereby obtained, realizing recognition of the image captured by the terminal camera.
In any of the above technical solutions, preferably, the step of computing the mean and covariance of each image feature point from the three-dimensional coordinates and color information includes: computing the mean and variance of the depth value of the three-dimensional coordinates through a Gaussian mixture model; obtaining the mean of the three-dimensional coordinates from the mean depth value; computing the mean and variance of the color information through the Gaussian mixture model; and obtaining the mean and covariance of the image feature point from the mean of the three-dimensional coordinates, the mean of the color information, the variance of the three-dimensional coordinates, and the variance of the color information.
In this technical solution, because extracted image feature points lie at object edges or where color changes sharply, depth and color jump discontinuously at a feature point, so taking the scene's three-dimensional coordinates and color from a single pixel would give large depth and color measurement errors. The present invention instead computes the mean depth value of the three-dimensional coordinates through a Gaussian mixture model, and from it the mean of the three-dimensional coordinates, and likewise computes the variance of the depth value and the mean and variance of the color information through the Gaussian mixture model. Each image feature point can therefore be approximated as a multivariate Gaussian distribution with that mean and covariance matrix, which reduces image recognition error and enables seamless, accurate terminal positioning.
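The patent names a Gaussian mixture model for these depth and color statistics without fixing its form. One concrete reading is a two-component one-dimensional mixture fitted by EM, which separates the foreground and background depths that mix at an edge feature; the component count, initialization, and iteration count below are illustrative assumptions:

```python
import math

def gmm2_em(samples, iters=50):
    """Fit a two-component 1-D Gaussian mixture by EM.

    Returns a list of (weight, mean, variance) tuples, one per component.
    """
    mu = [min(samples), max(samples)]   # init means at the extremes
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each sample.
        resp = []
        for x in samples:
            p = [w[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                 for k in range(2)]
            s = p[0] + p[1]
            resp.append((p[0] / s, p[1] / s))
        # M-step: re-estimate weights, means, and variances.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(samples)
            mu[k] = sum(r[k] * x for r, x in zip(resp, samples)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, samples)) / nk
            var[k] = max(var[k], 1e-6)  # guard against collapse
    return list(zip(w, mu, var))
```

The component corresponding to the feature's own surface can then supply the depth mean and variance used in the feature point's multivariate Gaussian approximation.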
In any of the above technical solutions, preferably, the step of obtaining the pose matrix from the image feature point set and the environment feature point set includes: obtaining the transformation matrix between the image feature point set and the environment feature point set; obtaining the optimal transformation matrix by sampling around the transformation matrix; and taking the optimal transformation matrix as the pose matrix.
In this technical solution, the pose matrix is the matrix formed by the camera's position and attitude, the attitude being the camera's roll, pitch, and yaw angles. Because the acquired three-dimensional coordinates are relative to the terminal camera's coordinate system, an environment feature point set relative to the global coordinate system is defined in order to obtain the camera's pose matrix, and the transformation matrix between the image feature point set and the environment feature point set is computed by the ICP (Iterative Closest Point) algorithm. The transformation matrix so obtained is only a rough estimate of the camera's pose matrix at the current time: when the feature point set has few points that can be matched with the environment model's feature point set, the error of the ICP transformation matrix is large, and the ICP algorithm may also converge to a locally optimal solution, so the transformation matrix may not be the optimal pose estimate. The transformation matrix does, however, lie in a high-probability region of the camera pose; by scattering sample points around it, the camera pose that best fits the observations can be found and taken as the camera's pose matrix at time t. Because the image feature point set and the environment feature point set are sparse extracted feature points, the data volume is small and the computation is fast.
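The sampling scheme around the ICP estimate is not specified in the patent. One simple concrete reading, shown here in 2-D for brevity with an assumed grid size, scores perturbed poses by their alignment residual against the matched points and keeps the best:

```python
import math

def apply_pose(theta, tx, ty, pts):
    """Apply a 2-D rigid transform (rotation theta, translation tx, ty)."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in pts]

def residual(pose, src, dst):
    """Sum of squared distances between transformed src and dst points."""
    moved = apply_pose(*pose, pts=src)
    return sum((mx - dx) ** 2 + (my - dy) ** 2
               for (mx, my), (dx, dy) in zip(moved, dst))

def refine_by_sampling(rough, src, dst, step=0.05, n=2):
    """Scatter sample poses on a grid around the rough (theta, tx, ty)
    estimate and keep the one with the smallest residual."""
    best, best_err = rough, residual(rough, src, dst)
    rng = [i * step for i in range(-n, n + 1)]
    for dth in rng:
        for dtx in rng:
            for dty in rng:
                cand = (rough[0] + dth, rough[1] + dtx, rough[2] + dty)
                err = residual(cand, src, dst)
                if err < best_err:
                    best, best_err = cand, err
    return best
```

Because only sparse feature points are scored, evaluating such a grid of candidate poses remains cheap, which matches the efficiency claim above.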
In any of the above technical solutions, preferably, the step of obtaining the camera position of the intelligent terminal from the pose matrix specifically includes: using the pose matrix to transform the camera position of the intelligent terminal from the camera coordinate system into the global coordinate system.
In this technical solution, the camera position is transformed into the global coordinate system according to the pose matrix, giving the specific position coordinates of the terminal camera (the user) and thus providing accurate position information for indoor positioning and navigation services.
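Applying the pose matrix can be sketched as a homogeneous-coordinate transform; the 4x4 row-major layout below is an assumption about representation, not taken from the patent:

```python
def to_global(pose, point):
    """Transform a 3-D point from the camera frame to the global frame
    using a 4x4 homogeneous pose matrix (row-major nested lists)."""
    x, y, z = point
    h = (x, y, z, 1.0)
    # Top three rows give the transformed Cartesian coordinates.
    return tuple(sum(pose[r][c] * h[c] for c in range(4)) for r in range(3))
```

In particular, transforming the camera-frame origin (0, 0, 0) yields the camera's position in the global frame, i.e. the translation part of the pose matrix.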
In any of the above technical solutions, preferably, the method also includes: updating the environment feature point set.
In this technical solution, the existing environment feature point set at time t-1 is updated to obtain the environment feature point set at time t, which is then used to compute the pose matrix at time t+1.
According to another object of the present invention, a localization system for an intelligent terminal is proposed, including: an image acquisition unit for acquiring an image; an extraction unit for extracting image feature points from the image; a feature point set acquisition unit for obtaining an image feature point set from the image feature points; a pose acquisition unit for obtaining a pose matrix from the image feature point set and an environment feature point set; and a position acquisition unit for obtaining the camera position of the intelligent terminal from the pose matrix.
In the localization system provided by the present invention, the camera of the intelligent terminal captures an image of the current environment, and the extraction unit first extracts two-dimensional feature points from the RGB image; the Shi-Tomasi method, which balances speed and robustness to some extent, may be chosen for this extraction. The feature point set acquisition unit obtains the image feature point set from the image feature points, the pose acquisition unit obtains the pose matrix of the camera from the image feature point set, and the position acquisition unit transforms the camera position into the global coordinate system according to the pose matrix, yielding the specific location of the camera. Because the invention requires little data exchange with external information sources, localization is efficient; it can be used alone or combined with other localization technologies, improves the precision and speed of indoor positioning, and can provide useful assistance for the indoor positioning of intelligent terminals.
The localization system of the intelligent terminal according to the present invention may also have the following technical features:
In the above technical solution, preferably, the feature point set acquisition unit is specifically configured to: acquire the two-dimensional coordinates and color information of each image feature point; convert the two-dimensional coordinates of each image feature point into three-dimensional coordinates in the camera coordinate system; compute the mean and covariance of each image feature point from the three-dimensional coordinates and color information; and obtain the image feature point set from these means and covariances.
In this technical solution, the feature point set acquisition unit acquires the two-dimensional coordinates and color information of each feature point, maps the two-dimensional coordinates into three-dimensional space, and obtains the mean and covariance of each feature point from the three-dimensional coordinates and color information through a Gaussian mixture model. The image feature point set is thereby obtained, realizing recognition of the image captured by the terminal camera.
In any of the above technical solutions, preferably, the feature point set acquisition unit is further configured to: compute the mean and variance of the depth value of the three-dimensional coordinates through a Gaussian mixture model; obtain the mean of the three-dimensional coordinates from the mean depth value; compute the mean and variance of the color information through the Gaussian mixture model; and obtain the mean and covariance of the image feature point from the mean of the three-dimensional coordinates, the mean of the color information, the variance of the three-dimensional coordinates, and the variance of the color information.
In this technical solution, because extracted image feature points lie at object edges or where color changes sharply, depth and color jump discontinuously at a feature point, so taking the scene's three-dimensional coordinates and color from a single pixel would give large depth and color measurement errors. The feature point set acquisition unit instead computes the mean depth value of the three-dimensional coordinates through a Gaussian mixture model, and from it the mean of the three-dimensional coordinates, and likewise computes the variance of the depth value and the mean and variance of the color information through the Gaussian mixture model. Each image feature point can therefore be approximated as a multivariate Gaussian distribution with that mean and covariance matrix, which reduces image recognition error and enables seamless, accurate terminal positioning.
In any of the above technical solutions, preferably, the pose acquisition unit is specifically configured to: obtain the transformation matrix between the image feature point set and the environment feature point set; obtain the optimal transformation matrix by sampling around the transformation matrix; and take the optimal transformation matrix as the pose matrix.
In this technical solution, the pose matrix is the matrix formed by the camera's position and attitude, the attitude being the camera's roll, pitch, and yaw angles. Because the acquired three-dimensional coordinates are relative to the terminal camera's coordinate system, an environment feature point set relative to the global coordinate system is defined in order to obtain the camera's pose matrix, and the pose acquisition unit computes the transformation matrix between the image feature point set and the environment feature point set by the ICP algorithm. The transformation matrix so obtained is only a rough estimate of the camera's pose matrix at the current time: when the feature point set has few points that can be matched with the environment model's feature point set, the error of the ICP transformation matrix is large, and the ICP algorithm may also converge to a locally optimal solution, so the transformation matrix may not be the optimal pose estimate. The transformation matrix does, however, lie in a high-probability region of the camera pose; by scattering sample points around it, the camera pose that best fits the observations can be found and taken as the camera's pose matrix at time t. Because the image feature point set and the environment feature point set are sparse extracted feature points, the data volume is small and the computation is fast.
In any of the above technical solutions, preferably, the position acquisition unit is specifically configured to use the pose matrix to transform the camera position of the intelligent terminal from the camera coordinate system into the global coordinate system.
In this technical solution, the camera position is transformed into the global coordinate system according to the pose matrix, giving the specific position coordinates of the terminal camera (the user) and thus providing accurate position information for indoor positioning and navigation services.
In any of the above technical solutions, preferably, the system also includes: an updating unit for updating the environment feature point set.
In this technical solution, the existing environment feature point set at time t-1 is updated to obtain the environment feature point set at time t, which is then used to compute the pose matrix at time t+1.
According to a further object of the present invention, an intelligent terminal is proposed, including the localization system of the intelligent terminal according to any of the above.
The intelligent terminal provided by the present invention includes the localization system of the intelligent terminal. The camera of the intelligent terminal captures an image of the current environment, and the extraction unit first extracts two-dimensional feature points from the RGB image; the Shi-Tomasi method, which balances speed and robustness to some extent, may be chosen for this extraction. The feature point set acquisition unit obtains the image feature point set from the image feature points, the pose acquisition unit obtains the pose matrix of the camera from the image feature point set, and the position acquisition unit transforms the camera position into the global coordinate system according to the pose matrix, yielding the specific location of the camera. Because the invention requires little data exchange with external information sources, localization is efficient; it can be used alone or combined with other localization technologies, improves the precision and speed of indoor positioning, and can provide useful assistance for the indoor positioning of intelligent terminals.
Additional aspects and advantages of the present invention will become apparent in the following description, or will be learned through practice of the present invention.
Brief description of the drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of the embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 shows a schematic flowchart of a localization method of an intelligent terminal according to an embodiment of the present invention;
Fig. 2 shows a schematic flowchart of the localization method of the intelligent terminal according to another embodiment of the present invention;
Fig. 3 shows a schematic flowchart of the localization method of the intelligent terminal according to yet another embodiment of the present invention;
Fig. 4 shows a schematic block diagram of the localization system of the intelligent terminal according to an embodiment of the present invention;
Fig. 5 shows a schematic flowchart of the localization method of the intelligent terminal according to a specific embodiment of the present invention;
Fig. 6 shows a schematic structural diagram of the intelligent terminal according to a specific embodiment of the present invention.
Detailed description of the embodiments
In order that the above objects, features, and advantages of the present invention can be understood more clearly, the present invention is described in further detail below with reference to the accompanying drawings and specific embodiments. It should be noted that, where no conflict arises, the embodiments of the application and the features in the embodiments may be combined with each other.
Many specific details are set forth in the following description to facilitate a thorough understanding of the present invention; however, the present invention may also be implemented in ways other than those described here. Therefore, the protection scope of the present invention is not limited by the specific embodiments disclosed below.
An embodiment of the first aspect of the present invention proposes a localization method for an intelligent terminal. Fig. 1 shows a schematic flowchart of the localization method of an intelligent terminal according to an embodiment of the present invention:
Step 102: acquire an image;
Step 104: extract image feature points from the image;
Step 106: obtain an image feature point set from the image feature points;
Step 108: obtain a pose matrix from the image feature point set and an environment feature point set;
Step 110: obtain the camera position of the intelligent terminal from the pose matrix.
In the localization method provided by the present invention, the camera of the intelligent terminal captures an image of the current environment, and two-dimensional feature points are first extracted from the RGB image; the Shi-Tomasi method, which balances speed and robustness to some extent, may be chosen for this extraction. An image feature point set is obtained from the image feature points, the pose matrix of the camera is obtained from the image feature point set, and the camera position is transformed into the global coordinate system according to the pose matrix, yielding the specific location of the camera. Because the invention requires little data exchange with external information sources, localization is efficient; it can be used alone or combined with other localization technologies, improves the precision and speed of indoor positioning, and can provide useful assistance for the indoor positioning of intelligent terminals.
Fig. 2 shows a schematic flowchart of the localization method of the intelligent terminal according to another embodiment of the present invention. The localization method according to some embodiments of the invention is described below with reference to Fig. 2.
In one embodiment of the present invention, as shown in Fig. 2, preferably, the localization method of the intelligent terminal includes:
Step 202: acquire an image;
Step 204: extract image feature points from the image;
Step 206: acquire the two-dimensional coordinates and color information of the image feature points;
Step 208: convert the two-dimensional coordinates of the image feature points into three-dimensional coordinates in the camera coordinate system;
Step 210: compute the mean and covariance of the image feature points from the three-dimensional coordinates and color information;
Step 212: obtain the image feature point set from the means and covariances of the image feature points;
Step 214: obtain a pose matrix from the image feature point set and an environment feature point set;
Step 216: obtain the camera position of the intelligent terminal from the pose matrix.
In this embodiment, the two-dimensional coordinates and color information of each feature point are acquired, the two-dimensional coordinates are mapped into three-dimensional space, and the mean and covariance of each feature point are obtained from the three-dimensional coordinates and color information through a Gaussian mixture model. The image feature point set is thereby obtained, realizing recognition of the image captured by the terminal camera.
In one embodiment of the present invention, as shown in Fig. 2, preferably, the localization method of the intelligent terminal includes:
Step 202: acquire an image;
Step 204: extract image feature points from the image;
Step 206: acquire the two-dimensional coordinates and color information of the image feature points;
Step 208: convert the two-dimensional coordinates of the image feature points into three-dimensional coordinates in the camera coordinate system;
Step 2102: compute the mean and variance of the depth value of the three-dimensional coordinates through a Gaussian mixture model;
Step 2104: obtain the mean of the three-dimensional coordinates from the mean depth value;
Step 2106: compute the mean and variance of the color information through the Gaussian mixture model;
Step 2108: obtain the mean and covariance of the image feature points from the mean of the three-dimensional coordinates, the mean of the color information, the variance of the three-dimensional coordinates, and the variance of the color information;
Step 212: obtain the image feature point set from the means and covariances of the image feature points;
Step 214: obtain a pose matrix from the image feature point set and an environment feature point set;
Step 216: obtain the camera position of the intelligent terminal from the pose matrix.
In this embodiment, because extracted image feature points lie at object edges or where color changes sharply, depth and color jump discontinuously at a feature point, so taking the scene's three-dimensional coordinates and color from a single pixel would give large depth and color measurement errors. The present invention instead computes the mean depth value of the three-dimensional coordinates through a Gaussian mixture model, and from it the mean of the three-dimensional coordinates, and likewise computes the variance of the depth value and the mean and variance of the color information through the Gaussian mixture model. Each image feature point can therefore be approximated as a multivariate Gaussian distribution with that mean and covariance matrix, which reduces image recognition error and enables seamless, accurate terminal positioning.
In one embodiment of the present invention, as shown in Fig. 2, preferably, the localization method of the intelligent terminal includes:
Step 202: acquire an image;
Step 204: extract image feature points from the image;
Step 206: acquire the two-dimensional coordinates and color information of the image feature points;
Step 208: convert the two-dimensional coordinates of the image feature points into three-dimensional coordinates in the camera coordinate system;
Step 2102: compute the mean and variance of the depth value of the three-dimensional coordinates through a Gaussian mixture model;
Step 2104: obtain the mean of the three-dimensional coordinates from the mean depth value;
Step 2106: compute the mean and variance of the color information through the Gaussian mixture model;
Step 2108: obtain the mean and covariance of the image feature points from the mean of the three-dimensional coordinates, the mean of the color information, the variance of the three-dimensional coordinates, and the variance of the color information;
Step 212: obtain the image feature point set from the means and covariances of the image feature points;
Step 2142: obtain the transformation matrix between the image feature point set and the environment feature point set;
Step 2144: obtain the optimal transformation matrix by sampling around the transformation matrix;
Step 2146: take the optimal transformation matrix as the pose matrix;
Step 216: obtain the camera position of the intelligent terminal from the pose matrix.
In this embodiment, the pose matrix is a matrix composed of the position and the attitude of the camera; the attitude comprises the roll, pitch and yaw angles of the camera. Because the three-dimensional coordinates are obtained relative to the terminal camera coordinate system, an environment feature point set relative to a global coordinate system is defined in order to obtain the camera pose matrix. The transformation matrix between the image feature point set and the environment feature point set is calculated by the ICP algorithm; the transformation matrix so obtained is a rough estimate of the camera pose matrix at the current moment. When the number of feature points that can be matched between the feature point set and the environment-model feature point set is small, the transformation matrix obtained by ICP has a large error; in addition, the ICP algorithm itself may converge to a local optimum, so the transformation matrix is not necessarily the optimal pose estimate. The transformation matrix does, however, lie in a high-probability region of the camera pose, so by scatter sampling around it the observation-optimal camera pose can be found and used as the camera pose matrix at time t. Because the image feature point set and the environment feature point set consist of sparse extracted feature points, the data volume is small and the computation is efficient.
In one embodiment of the invention, obtaining the camera position of the intelligent terminal by means of the pose matrix preferably includes: transforming the camera position of the intelligent terminal from the camera coordinate system into the global coordinate system by the pose matrix.
In this embodiment, the camera position is transformed into the global coordinate system according to the pose matrix, yielding the specific position coordinates of the terminal camera (and hence of the user), so that accurate position information can be provided to indoor positioning and navigation applications.
Fig. 3 shows a schematic flowchart of the localization method of the intelligent terminal according to yet another embodiment of the invention:
Step 302, obtaining an image;
Step 304, extracting image feature points from the image;
Step 306, obtaining an image feature point set according to the image feature points;
Step 308, obtaining a pose matrix according to the image feature point set and an environment feature point set;
Step 310, obtaining the camera position of the intelligent terminal by means of the pose matrix;
Step 312, updating the environment feature point set.
In this embodiment, the existing environment feature point set of time t-1 is updated to obtain the environment feature point set of time t, which is then used to calculate the pose matrix of time t+1.
An embodiment of the second aspect of the invention proposes a localization system 400 for an intelligent terminal. Fig. 4 shows a schematic block diagram of the localization system 400 according to one embodiment of the invention:
an image acquisition unit 402, configured to obtain an image;
an extraction unit 404, configured to extract image feature points from the image;
a feature point set acquisition unit 406, configured to obtain an image feature point set according to the image feature points;
a pose acquisition unit 408, configured to obtain a pose matrix according to the image feature point set and an environment feature point set;
a position acquisition unit 410, configured to obtain the camera position of the intelligent terminal by means of the pose matrix.
In the localization system 400 provided by the invention, the intelligent terminal camera captures an image of the current environment. The extraction unit 404 first extracts two-dimensional feature points from the RGB image; the Shi-Tomasi method may be selected for this extraction, as it balances speed and robustness to a certain extent. The feature point set acquisition unit 406 obtains an image feature point set according to the image feature points, the pose acquisition unit 408 obtains the pose matrix of the terminal camera according to the image feature point set, and the position acquisition unit 410 transforms the camera position into the global coordinate system according to the pose matrix, obtaining the specific position of the camera. Because the invention requires little data exchange with external information sources, its positioning efficiency is high; it can be used alone or combined with other positioning technologies to improve the precision and speed of indoor positioning, providing useful assistance for indoor positioning of intelligent terminals.
In one embodiment of the invention, the feature point set acquisition unit 406 is preferably configured to: obtain the two-dimensional coordinates and color information of the image feature points; convert the two-dimensional coordinates of the image feature points into three-dimensional coordinates in the camera coordinate system; calculate the mean and covariance of each image feature point according to the three-dimensional coordinates and the color information; and obtain the image feature point set according to the means and covariances of the image feature points.
In this embodiment, the feature point set acquisition unit 406 obtains the two-dimensional coordinates and color information of each image feature point, maps the two-dimensional coordinates into three-dimensional space, and obtains the mean and covariance of the image feature point from the three-dimensional coordinates and color information by a Gaussian mixture model, thereby obtaining the image feature point set and achieving recognition of the image captured by the terminal camera.
In one embodiment of the invention, the feature point set acquisition unit 406 is preferably further configured to: calculate, by a Gaussian mixture model, the mean and variance of the depth value of the three-dimensional coordinate; obtain the mean of the three-dimensional coordinate according to the mean of its depth value; calculate, by the Gaussian mixture model, the mean and variance of the color information; and obtain the mean and covariance of the image feature point according to the mean of the three-dimensional coordinate, the mean of the color information, the variance of the three-dimensional coordinate and the variance of the color information.
In this embodiment, because the extracted image feature points are located at object edges or where color changes significantly, the depth value and color at a feature point are prone to abrupt jumps, so obtaining the scene's three-dimensional coordinates and color from a single pixel would cause large depth and color measurement errors. The feature point set acquisition unit 406 calculates the mean of the depth value of the three-dimensional coordinate by a Gaussian mixture model and thereby obtains the mean of the three-dimensional coordinate, and calculates the variance of the depth value, the variance of the color information and the mean of the color information by the Gaussian mixture model. The image feature point can therefore be approximated as a multivariate Gaussian distribution with a mean and a covariance matrix, which reduces the image recognition error and enables seamless, accurate positioning of the terminal.
In one embodiment of the invention, the pose acquisition unit 408 is preferably configured to: obtain the transformation matrix between the image feature point set and the environment feature point set; sample around the transformation matrix to obtain an optimal transformation matrix; and use the optimal transformation matrix as the pose matrix.
In this embodiment, the pose matrix is a matrix composed of the position and the attitude of the camera; the attitude comprises the roll, pitch and yaw angles. Because the three-dimensional coordinates are obtained relative to the terminal camera coordinate system, an environment feature point set relative to a global coordinate system is defined in order to obtain the camera pose matrix. The pose acquisition unit 408 calculates the transformation matrix between the image feature point set and the environment feature point set by the ICP algorithm; this transformation matrix is a rough estimate of the camera pose matrix at the current moment. When the number of matchable feature points is small, the ICP result has a large error, and the ICP algorithm itself may converge to a local optimum, so the transformation matrix is not necessarily the optimal pose estimate. It does, however, lie in a high-probability region of the camera pose, so scatter sampling around it yields the observation-optimal camera pose, which is used as the camera pose matrix at time t. Because both feature point sets consist of sparse extracted feature points, the data volume is small and the computation is efficient.
In one embodiment of the invention, the position acquisition unit 410 is preferably configured to: transform the camera position of the intelligent terminal from the camera coordinate system into the global coordinate system by the pose matrix.
In this embodiment, the camera position is transformed into the global coordinate system according to the pose matrix, yielding the specific position coordinates of the terminal camera (and hence of the user), so that accurate position information can be provided to indoor positioning and navigation applications.
In one embodiment of the invention, as shown in Fig. 4, the system preferably further includes: an updating unit 412, configured to update the environment feature point set.
In this embodiment, the existing environment feature point set of time t-1 is updated to obtain the environment feature point set of time t, which is then used to calculate the pose matrix of time t+1.
An embodiment of the third aspect of the invention proposes an intelligent terminal including the localization system 400 of any of the above embodiments.
The intelligent terminal provided by the invention includes the localization system 400 of the intelligent terminal. The terminal camera captures an image of the current environment; the extraction unit 404 first extracts two-dimensional feature points from the RGB image, for which the Shi-Tomasi method may be selected, as it balances speed and robustness to a certain extent. The feature point set acquisition unit 406 obtains an image feature point set according to the image feature points, the pose acquisition unit 408 obtains the pose matrix of the terminal camera according to the image feature point set, and the position acquisition unit 410 transforms the camera position into the global coordinate system according to the pose matrix, obtaining the specific position of the camera. Because the invention requires little data exchange with external information sources, its positioning efficiency is high; it can be used alone or combined with other positioning technologies to improve the precision and speed of indoor positioning, providing useful assistance for indoor positioning of intelligent terminals.
The key techniques of the invention are image recognition and camera pose calculation. The camera (i.e. the mobile terminal device) can precisely locate the indoor position of the mobile terminal device and its user through a location-matching service provided by the indoor geographic information database of the server. A final solution can then be generated and fed back to the user according to the service request (indoor navigation, query, etc.) entered by the user in the terminal application. The specific scheme below explains the process of positioning by means of a photograph taken by the terminal camera.
I. Image recognition based on RGB images
The terminal camera can capture true-color images of very high resolution. Image recognition of such an image is divided into the following steps:
1. Two-dimensional feature point extraction
For the photograph of the current environment taken by the terminal camera, the two-dimensional feature points of its RGB image must first be extracted. Common feature extraction and description methods include SIFT (Scale Invariant Feature Transform), SURF (Speeded Up Robust Features), the ORB (Oriented FAST and Rotated BRIEF) algorithm and the Shi-Tomasi method. Because SIFT and SURF are slow to extract, and ORB is fast but not robust enough, the invention selects the Shi-Tomasi method for two-dimensional feature point extraction, as it balances speed and robustness to a certain extent. After the two-dimensional feature points of the image are extracted, the two-dimensional coordinates (u, v) of each feature point in the image and the corresponding color information (r, g, b) are obtained; the two-dimensional coordinates can then be mapped into three-dimensional space.
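The Shi-Tomasi criterion named above scores each pixel by the smaller eigenvalue of the local gradient structure tensor and keeps pixels where that score is large. The following is a minimal NumPy sketch of that scoring, not the patent's implementation; the window size and the synthetic step image are illustrative assumptions:

```python
import numpy as np

def shi_tomasi_response(img, win=3):
    """Shi-Tomasi corner response: min eigenvalue of the windowed structure tensor."""
    Iy, Ix = np.gradient(img.astype(float))
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy
    h = win // 2
    resp = np.zeros_like(img, dtype=float)
    for r in range(h, img.shape[0] - h):
        for c in range(h, img.shape[1] - h):
            a = Ixx[r - h:r + h + 1, c - h:c + h + 1].sum()
            b = Ixy[r - h:r + h + 1, c - h:c + h + 1].sum()
            d = Iyy[r - h:r + h + 1, c - h:c + h + 1].sum()
            # smaller eigenvalue of the 2x2 tensor [[a, b], [b, d]]
            resp[r, c] = (a + d) / 2 - np.sqrt(((a - d) / 2) ** 2 + b ** 2)
    return resp

# synthetic image with one bright quadrant: a single strong corner at (10, 10)
img = np.zeros((20, 20))
img[10:, 10:] = 1.0
resp = shi_tomasi_response(img)
```

The corner pixel scores higher than both flat regions (one small eigenvalue, one small) and pure edges (one large, one small), which is exactly why the criterion selects well-localized, trackable points.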
2. Three-dimensional mapping
For a feature point (u, v) in the RGB image with depth value z, its three-dimensional position in the camera coordinate system is

x = (u - cx) · z / f, y = (v - cy) · z / f, z = z (1)

where f is the camera focal length and (cx, cy) is the image center. For an image resolution of 640 × 480, for example, cx is 320 and cy is 240.
According to formula (1), the three-dimensional coordinate (x, y, z) of (u, v, z) relative to the camera coordinate system can be obtained. This position, however, carries some uncertainty: the depth measurement is actually a Gaussian random variable with mean μz and standard deviation σz = 1.45 × 10⁻³ μz², where μz is the measured depth. Likewise, the measurement of the color information (r, g, b) has some deviation.
Because the extracted feature points are usually located at object edges or where color changes significantly, the depth value and color at a feature point jump easily. Obtaining the scene's three-dimensional coordinates and color using the single pixel (u, v) alone would therefore cause large depth and color measurement errors at the feature point, and in turn large errors in x and y. The extraction position (u, v) of the feature point also has some error, so the invention calculates the depth value and RGB value of the feature point using a Gaussian mixture model.
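Formula (1) is the standard pinhole back-projection. A small sketch of it follows; the focal length and principal point values used in the example are illustrative assumptions, not taken from the patent:

```python
def backproject(u, v, z, f, cx, cy):
    """Back-project pixel (u, v) with depth z into the camera frame, per formula (1)."""
    x = (u - cx) * z / f
    y = (v - cy) * z / f
    return x, y, z

# the image center maps onto the optical axis: x = y = 0 for any depth
center_point = backproject(320, 240, 2.0, 500.0, 320, 240)
off_axis_point = backproject(420, 240, 2.0, 500.0, 320, 240)
```

A pixel 100 columns right of center at 2 m depth with a 500-pixel focal length lands 0.4 m to the side, which matches the intuition that lateral error grows linearly with depth.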
3. Gaussian mixture model
Assume the feature point position (u, v) has an error of one pixel, i.e. the standard deviations of u and v are σu = σv = 1. A 3 × 3 window around the feature point (u, v), nine pixels in total, is selected to calculate the depth value and its variance at the feature point. Assume the depth value zi of each pixel follows a Gaussian distribution with mean μzi and variance σzi². Then, according to the Gaussian mixture model, the mean μz and variance σz² of the feature point depth are given by formula (2):

μz = Σi ωi μzi, σz² = Σi ωi (σzi² + μzi²) − μz² (2)

where ωi is the weight of each of the nine pixels: the weight at the feature point itself is 1/4, the four weights above, below, left and right of it are 1/8, and the weights of the four diagonal pixels are 1/16.
Similarly to the depth value, r, g and b at the feature point are weighted sums over the nine pixels. The invention assumes the color standard deviation σc is constant; then, taking the r channel as an example:

μr = Σi ωi ri (3)

At this point, the three-dimensional coordinates and color value of the feature point, together with the depth variance and color variance, have been obtained.
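Formula (2) is the standard collapse of a mixture of Gaussians into a single Gaussian: the mixture mean is the weighted mean, and the mixture variance is the weighted second moment minus the squared mean. A minimal sketch with the weight mask stated in the text (1/4 at the center, 1/8 for the four edge-adjacent pixels, 1/16 for the diagonals):

```python
import numpy as np

# 3x3 weight mask from the text; the weights sum to 1
W = np.array([[1/16, 1/8, 1/16],
              [1/8,  1/4, 1/8 ],
              [1/16, 1/8, 1/16]])

def gmm_depth(mu_patch, var_patch):
    """Collapse nine per-pixel depth Gaussians into one (formula (2))."""
    mu_z = float((W * mu_patch).sum())
    var_z = float((W * (var_patch + mu_patch ** 2)).sum() - mu_z ** 2)
    return mu_z, var_z
```

As a sanity check, a uniform window (all means 2.0, all variances 0.01) collapses to exactly that mean and variance, since the weights sum to one and the spread-of-means term vanishes.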
4. Covariance matrix of the feature point
Substituting the μz obtained from formula (2) together with (u, v) into formula (1) gives the three-dimensional coordinate of the feature point after Gaussian mixture modelling, μxyz = [μx, μy, μz]; the color mean μrgb = [μr, μg, μb] is obtained similarly. Let the feature point mean be μ = [μxyz, μrgb]ᵀ; its error covariance matrix Σ is then block-diagonal, with an upper-left 3 × 3 block Σxyz for the coordinates and a lower-right 3 × 3 block Σrgb = σc² · I for the color (4). From formula (1) it can be derived that Σxyz = J · diag(σu², σv², σz²) · Jᵀ, where J is the Jacobian of formula (1) with respect to (u, v, z) (5), and σu, σv and σz are given by formulas (2) and (3). The feature point p = [x, y, z, r, g, b]ᵀ can therefore be approximated as a multivariate Gaussian distribution with mean μ = [μxyz, μrgb]ᵀ and covariance matrix Σ.
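The covariance Σ of a feature point combines a geometric block — first-order propagation of (σu², σv², σz²) through the Jacobian of formula (1) — with a constant color block σc² · I. A hedged sketch follows; σu = σv = 1 is taken from the text, while the σc value and parameter names are illustrative assumptions:

```python
import numpy as np

def point_covariance(u, v, mu_z, var_z, f, cx, cy, sigma_uv=1.0, sigma_c=0.05):
    """6x6 covariance of p = [x, y, z, r, g, b]: propagated geometry + color block."""
    # Jacobian of (x, y, z) = ((u-cx)z/f, (v-cy)z/f, z) with respect to (u, v, z)
    J = np.array([[mu_z / f, 0.0,      (u - cx) / f],
                  [0.0,      mu_z / f, (v - cy) / f],
                  [0.0,      0.0,      1.0         ]])
    S_uvz = np.diag([sigma_uv ** 2, sigma_uv ** 2, var_z])
    S = np.zeros((6, 6))
    S[:3, :3] = J @ S_uvz @ J.T            # geometric block, formula (5)
    S[3:, 3:] = sigma_c ** 2 * np.eye(3)   # color block, constant per the text
    return S

S = point_covariance(320, 240, 2.0, 0.01, 500.0, 320, 240)
```

At the image center the Jacobian is diagonal, so the depth variance passes straight through to S[2, 2] and the lateral variances scale by (z/f)².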
Once the feature point information usable for recognition has been obtained from the photograph, the most critical part of image recognition is complete. The key step that follows is to solve, from these feature points, the pose of the camera (i.e. the pose of the terminal, which can be further approximated or corrected to the user's position).
II. Terminal pose calculation based on SLAM (Simultaneous Localization And Mapping)
SLAM is a technique widely used in intelligent robotics. Because a robot must recognize the particularities of its environment in real time, the solved poses are all continuous in real time. For the use environment of the present invention, although real-time terminal pose calculation would in theory be unobjectionable, indoor positioning places slightly lower requirements on real-time pose accuracy than intelligent robotics does, and the terminal can also rely on its own sensors for gait correction and on other means to supplement positioning accuracy. In addition, the terminal must keep a large number of functions unrelated to positioning running normally. Therefore, for the special case of terminal indoor positioning, the invention reduces the calculation frequency of the SLAM technique: bounded by the frequency that still satisfies the terminal's indoor positioning accuracy, a terminal pose calculation is performed at regular intervals, and the terminal pose is updated and corrected.
The image feature point set at time t is defined as the data feature point set Dt = {μDi, ΣDi}, where μDi and ΣDi take the form of formula (4); (μDi, ΣDi) denotes the position-color mean and covariance matrix of the i-th feature point in the data feature point set. Because the three-dimensional coordinates are relative to the terminal camera coordinate system, every point in Dt is relative to the terminal camera coordinate system. In order to obtain the pose of the camera (terminal), an environment-model feature point set Mt = {μMi, ΣMi} relative to the global coordinate system is defined.
The camera pose at time t is xt = [tx, ty, tz, φ, θ, ψ], where txyz = [tx, ty, tz] is the camera position and (φ, θ, ψ) is the camera attitude, i.e. the roll, pitch and yaw angles. The camera pose xt can also be represented by the pose matrix Pt:

Pt = [R3×3, txyzᵀ; 0 0 0 1]

where R3×3 is the attitude matrix of the camera, determined by (φ, θ, ψ).
1. Camera pose estimation
The invention uses the ICP algorithm to calculate the transformation matrix Tt between the feature point set Dt and the environment-model feature point set Mt-1, and then samples to obtain the observation-optimal pose value, which serves as the camera pose matrix Pt at time t.
ICP, the Iterative Closest Point algorithm, is a common method for aligning two frames of point clouds. The algorithm has two key steps: finding corresponding point pairs between the two clouds, and computing from these correspondences the transformation matrix that minimizes the distance between the clouds. ICP can obtain the transformation between two point clouds fairly accurately, but the algorithm itself is sensitive to the initial value: if the initial transformation is chosen badly, it may fall into a local optimum. In addition, when the point cloud data is dense, the huge data volume makes the running time long, so real-time requirements cannot be met.
The invention uses ICP to calculate the transformation matrix Tt between the feature point set Dt and the environment-model feature point set Mt-1; because the data set and model set are sparse extracted feature points, the data volume is small and the computation is efficient. Assuming the user walks with a fixed stride and the terminal camera is opened at regular intervals for image acquisition, the user's motion increment in the interval from time t-2 to t-1 is ΔT = Pt-1/Pt-2. To calculate Pt, the invention sets the ICP initial value to T = Pt-1 · ΔT. Continued iteration finally yields the transformation matrix Tt of the camera relative to the global coordinate system.
When the invention uses ICP, because the sampling frequency is not very high and the initial value can be supplemented in accuracy by other indoor positioning technologies, the above disadvantages of ICP can be avoided.
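The two key steps of ICP named above — find correspondences, then solve the distance-minimizing rigid transform — can be sketched in NumPy with brute-force nearest neighbours and an SVD (Kabsch) solve. This is a generic illustration, not the patent's implementation; the point counts, the iteration budget and treating feature points by their means alone are simplifying assumptions:

```python
import numpy as np

def best_rigid_transform(A, B):
    """Least-squares rigid (R, t) aligning point set A onto B (Kabsch/SVD)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cb - R @ ca

def icp(src, dst, iters=20):
    """Iterative Closest Point: alternate correspondence search and rigid solve."""
    R_tot, t_tot = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        matched = dst[d2.argmin(axis=1)]          # nearest neighbour in dst
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t   # accumulate the increments
    return R_tot, t_tot

# cube corners as a well-separated sparse "feature point set"
src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1],
                [1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]], dtype=float)
ang = 0.05                                         # small yaw, as after a short stride
R_true = np.array([[np.cos(ang), -np.sin(ang), 0],
                   [np.sin(ang),  np.cos(ang), 0],
                   [0, 0, 1]])
t_true = np.array([0.05, -0.03, 0.02])
dst = src @ R_true.T + t_true
R_est, t_est = icp(src, dst)
```

Because the displacement is small relative to the point spacing, the nearest-neighbour matching is correct from the first iteration and the solve recovers the transform exactly; this mirrors the text's point that a good initial value (here, the identity) keeps ICP out of local optima.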
2. Camera pose optimization
The transformation matrix Tt obtained so far is a rough estimate of the current camera pose matrix Pt. In some cases, when the number of feature points matchable between the feature point set and the environment-model feature point set is small, the transformation matrix obtained by ICP has a large error. Moreover, as noted above, ICP itself may converge to a local optimum, so Tt is not necessarily the optimal pose estimate. Tt does, however, lie in a high-probability region of the camera pose; by scatter sampling around Tt, the observation-optimal camera pose Pt ≈ Tt can be found. The existing environment-model feature point set Mt-1 is then updated, yielding the model set Mt of time t.
Finally, the camera position is transformed into the global coordinate system according to the pose matrix Pt, yielding the specific position coordinates of the terminal camera (user), so that accurate position information can be provided to applications requiring indoor positioning and navigation services.
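The scatter sampling around Tt can be sketched as a simple random search: perturb the rough estimate, score each candidate by the alignment residual, and keep the best. The perturbation scales, the yaw-only rotation sampling and the use of the nearest-neighbour residual as the "observation" score are illustrative assumptions, not details from the patent:

```python
import numpy as np

def residual(R, t, src, dst):
    """Sum of squared nearest-neighbour distances after applying (R, t) to src."""
    p = src @ R.T + t
    d2 = ((p[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
    return d2.min(axis=1).sum()

def refine_pose(R0, t0, src, dst, n_samples=200, sig_t=0.02, sig_r=0.01, seed=1):
    """Keep the best pose among the rough estimate and random perturbations of it."""
    rng = np.random.default_rng(seed)
    best = (residual(R0, t0, src, dst), R0, t0)
    for _ in range(n_samples):
        a = rng.normal(0.0, sig_r)                 # small yaw perturbation
        Rz = np.array([[np.cos(a), -np.sin(a), 0],
                       [np.sin(a),  np.cos(a), 0],
                       [0, 0, 1]])
        R = Rz @ R0
        t = t0 + rng.normal(0.0, sig_t, size=3)
        s = residual(R, t, src, dst)
        if s < best[0]:                            # keep the observation-best candidate
            best = (s, R, t)
    return best

src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
dst = src + np.array([0.05, 0.0, 0.0])            # true pose: a pure translation
R0, t0 = np.eye(3), np.zeros(3)                   # deliberately rough initial estimate
score, R_best, t_best = refine_pose(R0, t0, src, dst)
```

Because the initial estimate is itself one of the candidates, the refined score can never be worse than the ICP result, which is the property the text relies on when it takes Pt ≈ Tt.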
Fig. 5 shows a schematic flowchart of the localization method of the intelligent terminal according to a specific embodiment of the invention:
Step 502, obtaining an image through the terminal camera;
Step 504, extracting image feature points from the image;
Step 506, obtaining the image feature point covariance matrices by a Gaussian mixture model;
Step 508, obtaining an environment feature point set;
Step 510, updating the environment feature point set;
Step 512, obtaining a pose matrix from the image feature point set and the environment feature point set by the ICP algorithm;
Step 514, optimizing the pose matrix;
Step 516, obtaining the camera position of the intelligent terminal by means of the pose matrix.
Fig. 6 shows a schematic structural diagram of an intelligent terminal 600 according to a specific embodiment of the invention. The intelligent terminal 600 includes: a processor 602, a memory 604, a bus 606, a display 608 and a camera 610; the processor 602, the memory 604, the display 608 and the camera 610 are connected through the bus 606. The memory 604 stores computer instructions, and by executing the computer instructions the processor 602 implements the following method:
obtaining an image;
extracting image feature points from the image;
obtaining an image feature point set according to the image feature points;
obtaining a pose matrix according to the image feature point set and an environment feature point set;
obtaining the camera position of the intelligent terminal by means of the pose matrix.
In the description of this specification, the terms "one embodiment", "some embodiments", "a specific embodiment" and the like mean that a specific feature, structure, material or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic uses of these terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials or characteristics described may be combined in a suitable manner in any one or more embodiments or examples.
The above are merely preferred embodiments of the invention and are not intended to limit it; for those skilled in the art, the invention may have various modifications and variations. Any modification, equivalent substitution, improvement and the like made within the spirit and principles of the invention shall be included in the scope of protection of the invention.
Claims (13)
1. a kind of localization method of intelligent terminal, it is characterised in that including:
Obtain image;
Image characteristic point is extracted in described image;
According to described image characteristic point, characteristics of image point set is obtained;
According to described image feature point set and environmental characteristic point set, position auto―control is obtained;
By the position auto―control, the camera position of the intelligent terminal is obtained.
2. the localization method of intelligent terminal according to claim 1, it is characterised in that according to described image characteristic point, obtain
The step of taking described image feature point set, including:
Obtain the two-dimensional coordinate and color information of described image characteristic point;
The two-dimensional coordinate of described image characteristic point is converted to the three-dimensional coordinate under the camera coordinate system of described image characteristic point;
The average and described image for obtaining described image characteristic point are calculated according to the three-dimensional coordinate and the color information
The covariance of characteristic point;
According to the average and covariance of described image characteristic point, described image feature point set is obtained.
3. the localization method of intelligent terminal according to claim 2, it is characterised in that according to the three-dimensional coordinate and institute
State color information calculate obtain described image characteristic point average and described image characteristic point covariance the step of, including:
By gauss hybrid models, the average and the variance of the depth value of the depth value of the three-dimensional coordinate are calculated;
According to the average of the depth value of the three-dimensional coordinate, the average of the three-dimensional coordinate is obtained;
By the gauss hybrid models, the average of the color information and the variance of the color information are calculated;
According to the average of the three-dimensional coordinate, the average of the color information, the variance of the three-dimensional coordinate, the color information
Variance, obtain described image characteristic point average and described image characteristic point covariance.
4. the localization method of intelligent terminal according to claim 1, it is characterised in that according to described image feature point set with
And the environmental characteristic point set, the step of obtaining the position auto―control, including:
According to described image feature point set and the environmental characteristic point set, described image feature point set and the environment are obtained
Transformation matrix between feature point set;
By being sampled to the transformation matrix, optimal transform matrix is obtained;
It regard the optimal transform matrix as the position auto―control.
5. the localization method of intelligent terminal according to claim 1, it is characterised in that by the position auto―control, is obtained
The camera position of the intelligent terminal, is specifically included:
By the position auto―control, the camera position of the intelligent terminal is transformed into global seat in the camera coordinate system
Mark system.
6. the localization method of intelligent terminal according to any one of claim 1 to 5, it is characterised in that also include:
Update the environmental characteristic point set.
7. a kind of alignment system of intelligent terminal, it is characterised in that including:
Image acquisition unit, for obtaining image;
Extraction unit, for extracting image characteristic point in described image;
Feature point set acquiring unit, for according to described image characteristic point, obtaining characteristics of image point set;
Pose acquiring unit, for according to described image feature point set and environmental characteristic point set, obtaining position auto―control;
Position acquisition unit, for by the position auto―control, obtaining the camera position of the intelligent terminal.
8. the alignment system of intelligent terminal according to claim 7, it is characterised in that the feature point set acquiring unit,
Specifically for:
Obtain the two-dimensional coordinate and color information of described image characteristic point;
The two-dimensional coordinate of described image characteristic point is converted to the three-dimensional coordinate under the camera coordinate system of described image characteristic point;
The average and described image for obtaining described image characteristic point are calculated according to the three-dimensional coordinate and the color information
The covariance of characteristic point;
According to the average and covariance of described image characteristic point, described image feature point set is obtained.
9. The localization system of the intelligent terminal according to claim 8, characterized in that the feature point set acquisition unit is further configured to:
calculate, by a Gaussian mixture model, the mean and the variance of the depth values of the three-dimensional coordinates;
obtain the mean of the three-dimensional coordinates according to the mean of the depth values;
calculate, by the Gaussian mixture model, the mean and the variance of the color information; and
obtain the mean of the image feature points and the covariance of the image feature points according to the mean of the three-dimensional coordinates, the mean of the color information, the variance of the three-dimensional coordinates, and the variance of the color information.
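Claim 9 only states that a Gaussian mixture model yields the mean and variance of the depth values (and of the color information); the fitting procedure is not specified. A minimal EM fit for a one-dimensional mixture, written here as an assumption about how such a step could work, would be:

```python
import numpy as np

def fit_gmm_1d(x, k=2, iters=50):
    """Minimal EM for a 1-D Gaussian mixture.
    Returns component (weights, means, variances)."""
    # Deterministic initialization: spread the means over the data range.
    mu = np.quantile(x, np.linspace(0.0, 1.0, k))
    var = np.full(k, x.var() + 1e-6)
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each sample.
        d = (x[:, None] - mu) ** 2
        p = w * np.exp(-0.5 * d / var) / np.sqrt(2 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances.
        n = r.sum(axis=0)
        w = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * d).sum(axis=0) / n + 1e-9
    return w, mu, var

# Illustrative depth samples near two surfaces at ~1.0 m and ~3.0 m.
rng = np.random.default_rng(1)
depths = np.concatenate([rng.normal(1.0, 0.05, 200),
                         rng.normal(3.0, 0.10, 200)])
weights, means, variances = fit_gmm_1d(depths)
```

On this synthetic data the two recovered component means settle near the two surface depths, which is the kind of per-component mean/variance claim 9 feeds into the feature-point covariance.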
10. The localization system of the intelligent terminal according to claim 7, characterized in that the pose acquisition unit is specifically configured to:
obtain, according to the image feature point set and the environment feature point set, the transformation matrix between the image feature point set and the environment feature point set;
obtain an optimal transformation matrix by sampling the transformation matrix; and
use the optimal transformation matrix as the pose matrix.
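The "sampling" step of claim 10 is not detailed in this publication. A common realization of the idea, shown here purely as an assumption rather than the patented procedure, is RANSAC-style sampling: repeatedly fit a rigid transform (Kabsch/SVD) to small subsets of matched 3D points and keep the candidate with the most inliers as the optimal transformation matrix.

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) with R @ p + t ~ q (Kabsch/SVD)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

def sample_best_transform(P, Q, trials=100, thresh=0.05, seed=0):
    """Sample candidate transforms from 3-point subsets of the matched
    sets and keep the one with the most inliers."""
    rng = np.random.default_rng(seed)
    best, best_inl = None, -1
    for _ in range(trials):
        idx = rng.choice(len(P), 3, replace=False)
        R, t = rigid_transform(P[idx], Q[idx])
        err = np.linalg.norm((P @ R.T + t) - Q, axis=1)
        inl = int((err < thresh).sum())
        if inl > best_inl:
            best, best_inl = (R, t), inl
    return best

# Demo: recover a known rotation/translation from exact correspondences.
rng = np.random.default_rng(2)
P = rng.normal(size=(20, 3))                    # image feature positions
c, s = np.cos(0.5), np.sin(0.5)
R_true = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
t_true = np.array([1.0, 2.0, 3.0])
Q = P @ R_true.T + t_true                       # environment feature positions
R_est, t_est = sample_best_transform(P, Q)
```

With noise-free correspondences any non-degenerate 3-point sample already recovers the exact transform; the sampling and inlier count matter when the two feature point sets contain mismatches.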
11. The localization system of the intelligent terminal according to claim 7, characterized in that the position acquisition unit is specifically configured to:
transform, by the pose matrix, the camera position of the intelligent terminal from the camera coordinate system into the global coordinate system.
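The transform of claim 11 amounts to applying a homogeneous pose matrix [R | t] to a point expressed in the camera frame. A one-line sketch with an illustrative pose matrix (the numeric values are assumptions, not taken from this publication):

```python
import numpy as np

# A 4x4 homogeneous pose matrix [R | t; 0 0 0 1]: here a 90-degree
# rotation about the z-axis plus a translation (illustrative values).
T = np.array([[0.0, -1.0, 0.0, 2.0],
              [1.0,  0.0, 0.0, 1.0],
              [0.0,  0.0, 1.0, 0.5],
              [0.0,  0.0, 0.0, 1.0]])

cam_point = np.array([1.0, 0.0, 0.0, 1.0])  # camera-frame point, homogeneous
global_point = T @ cam_point                # same point in the global frame
```

Here the camera-frame point (1, 0, 0) lands at (2, 2, 0.5) in the global frame: it is first rotated onto the y-axis, then shifted by the translation part of the pose matrix.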
12. The localization system of the intelligent terminal according to any one of claims 7 to 11, characterized by further comprising:
an updating unit, configured to update the environment feature point set.
13. An intelligent terminal, characterized by comprising:
the localization system of the intelligent terminal according to any one of claims 7 to 12.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710190798.4A CN107153831A (en) | 2017-03-28 | 2017-03-28 | Localization method, system and the intelligent terminal of intelligent terminal |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710190798.4A CN107153831A (en) | 2017-03-28 | 2017-03-28 | Localization method, system and the intelligent terminal of intelligent terminal |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107153831A true CN107153831A (en) | 2017-09-12 |
Family
ID=59792609
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710190798.4A Withdrawn CN107153831A (en) | 2017-03-28 | 2017-03-28 | Localization method, system and the intelligent terminal of intelligent terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107153831A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101441769A (en) * | 2008-12-11 | 2009-05-27 | 上海交通大学 | Real time vision positioning method of monocular camera |
CN103345751A (en) * | 2013-07-02 | 2013-10-09 | 北京邮电大学 | Visual positioning method based on robust feature tracking |
CN103900583A (en) * | 2012-12-25 | 2014-07-02 | 联想(北京)有限公司 | Device and method used for real-time positioning and map building |
2017-03-28: application filed as CN201710190798.4A, published as CN107153831A (status: withdrawn)
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019104732A1 (en) * | 2017-12-01 | 2019-06-06 | 深圳市沃特沃德股份有限公司 | Vision cleaning robot and obstacle detection method |
CN108198217A (en) * | 2017-12-29 | 2018-06-22 | 百度在线网络技术(北京)有限公司 | Indoor orientation method, device, equipment and computer-readable medium |
CN109035303A (en) * | 2018-08-03 | 2018-12-18 | 百度在线网络技术(北京)有限公司 | SLAM system camera tracking and device, computer readable storage medium |
CN109035303B (en) * | 2018-08-03 | 2021-06-08 | 百度在线网络技术(北京)有限公司 | SLAM system camera tracking method and device, and computer readable storage medium |
CN109917404A (en) * | 2019-02-01 | 2019-06-21 | 中山大学 | A kind of indoor positioning environmental characteristic point extracting method |
CN109917404B (en) * | 2019-02-01 | 2023-02-03 | 中山大学 | Indoor positioning environment feature point extraction method |
CN110176034A (en) * | 2019-05-27 | 2019-08-27 | 盎锐(上海)信息科技有限公司 | Localization method and end of scan for VSLAM |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107153831A (en) | Localization method, system and the intelligent terminal of intelligent terminal | |
CN111652179B (en) | Semantic high-precision map construction and positioning method based on point-line feature fusion laser | |
CN109074667B (en) | Predictor-corrector based pose detection | |
CN107131883B (en) | Full-automatic mobile terminal indoor positioning system based on vision | |
CN109671119A (en) | A kind of indoor orientation method and device based on SLAM | |
CN106197422A (en) | A kind of unmanned plane based on two-dimensional tag location and method for tracking target | |
CN111028358B (en) | Indoor environment augmented reality display method and device and terminal equipment | |
CN110073362A (en) | System and method for lane markings detection | |
DE112011102132T5 (en) | Method and device for image-based positioning | |
CN110470295B (en) | Indoor walking navigation system and method based on AR positioning | |
CN112799096B (en) | Map construction method based on low-cost vehicle-mounted two-dimensional laser radar | |
CN106871906A (en) | A kind of blind man navigation method, device and terminal device | |
CN106370160A (en) | Robot indoor positioning system and method | |
CN105279769A (en) | Hierarchical particle filtering tracking method combined with multiple features | |
CN109459759B (en) | Urban terrain three-dimensional reconstruction method based on quad-rotor unmanned aerial vehicle laser radar system | |
CN110361005A (en) | Positioning method, positioning device, readable storage medium and electronic equipment | |
CN110006444A (en) | A kind of anti-interference visual odometry construction method based on optimization mixed Gauss model | |
CN106709432B (en) | Human head detection counting method based on binocular stereo vision | |
CN109636897A (en) | A kind of Octomap optimization method based on improvement RGB-D SLAM | |
Shu et al. | 3d point cloud-based indoor mobile robot in 6-dof pose localization using a wi-fi-aided localization system | |
CN112405526A (en) | Robot positioning method and device, equipment and storage medium | |
CN112884841B (en) | Binocular vision positioning method based on semantic target | |
JP6580286B2 (en) | Image database construction device, position and inclination estimation device, and image database construction method | |
CN115629386A (en) | High-precision positioning system and method for automatic parking | |
Wang et al. | Indoor position algorithm based on the fusion of wifi and image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication | ||
Application publication date: 20170912 |