CN104657389A - Positioning method, system and mobile terminal - Google Patents

Positioning method, system and mobile terminal

Info

Publication number
CN104657389A
Authority
CN
China
Prior art keywords
reference object
image
recognition feature
initial position
captured image
Legal status: Pending
Application number
CN201310598195.XA
Other languages
Chinese (zh)
Inventor
郑杰
段思九
Current Assignee
Alibaba China Co Ltd
Original Assignee
Autonavi Software Co Ltd
Application filed by Autonavi Software Co Ltd
Priority to CN201310598195.XA
Publication of CN104657389A


Landscapes

  • Image Analysis (AREA)

Abstract

Embodiments of the invention disclose a positioning method, a positioning system, and a mobile terminal. Map data is stored in advance, the map data being the recognition features and positions of a number of reference objects. During positioning, the method first obtains an initial position for the current location; then, among the reference objects within the range of that initial position, it identifies the reference object whose recognition feature matches the recognition feature of an image captured by an image capture device, and takes the position of that reference object as the positioning result for the current location. Because the pre-stored position of a reference object can be accurate to an indoor floor, or to a specific spot on that floor, the positioning accuracy is comparatively high. As the positioning process shows, the method requires neither modification of the mobile terminal nor any additional calibration: accurate positioning is possible as long as the mobile terminal has an image capture function and a conventional positioning function, so positioning accuracy is improved while implementation cost is reduced.

Description

Positioning method, system and mobile terminal
Technical field
The present invention relates to the field of positioning technology, and in particular to a positioning method, a positioning system, and a mobile terminal.
Background art
With the rapid development of society and the economy, people's demand for location information grows by the day, especially in complex indoor environments such as airport halls, exhibition rooms, warehouses, supermarkets, and libraries, where the position of a mobile terminal indoors often needs to be determined.
Traditional indoor positioning mainly relies on WiFi, GPS, or base-station signals and offers low accuracy. Current solutions either modify the user's mobile device (for example, by embedding a positioning chip in the handset) or require additional calibration of the venue (for example, by installing positioning beacon transmitters indoors and having the device communicate with them over Bluetooth), so the implementation cost is high.
Summary of the invention
The object of the present invention is to provide a positioning method, so as to reduce the cost of implementing positioning.
To achieve the above object, the invention provides the following technical solutions:
A positioning method, comprising:
capturing, by an image capture device, an image of an object at the current location;
obtaining an initial position of the current location;
obtaining, from a pre-stored map database, the recognition features and positions of the reference objects within the range of the initial position;
determining, from among the reference objects within the range of the initial position, the reference object whose recognition feature matches the recognition feature of the captured image;
determining the position of the reference object whose recognition feature matches the recognition feature of the captured image as the positioning result for the current location.
In the above method, preferably, determining, from among the reference objects within the range of the initial position, the reference object whose recognition feature matches the recognition feature of the captured image comprises:
extracting a recognition feature from the captured image, the recognition feature comprising a text feature and/or an image feature;
if the extracted recognition feature is a text feature, determining, from among the reference objects within the range of the initial position, the reference object whose text feature matches the text feature of the captured image, and taking that reference object as the reference object whose recognition feature matches the recognition feature of the captured image;
if the extracted recognition feature is an image feature, determining, from among the reference objects within the range of the initial position, the reference object whose image feature matches the image feature of the captured image, and taking that reference object as the reference object whose recognition feature matches the recognition feature of the captured image;
if the extracted recognition feature comprises both a text feature and an image feature, determining, from among the reference objects within the range of the initial position, a first set of reference objects whose text features match the text feature of the captured image; determining, from among the reference objects within the range of the initial position, a second set of reference objects whose image features match the image feature of the captured image; and taking the reference objects that appear in both the first set and the second set as the reference objects whose recognition features match the recognition feature of the captured image.
In the above method, preferably, determining, from among the reference objects within the range of the initial position, the reference object whose recognition feature matches the recognition feature of the captured image comprises:
extracting a recognition feature from the captured image, the recognition feature comprising a text feature and/or an image feature;
if the extracted recognition feature is a text feature, determining, from among the reference objects within the range of the initial position, the reference object whose text feature matches the text feature of the captured image, and taking that reference object as the reference object whose recognition feature matches the recognition feature of the captured image;
if the extracted recognition feature is an image feature, determining, from among the reference objects within the range of the initial position, the reference object whose image feature matches the image feature of the captured image, and taking that reference object as the reference object whose recognition feature matches the recognition feature of the captured image;
if the extracted recognition feature comprises both a text feature and an image feature, determining, from among the reference objects within the range of the initial position, a first set of reference objects whose text features match the text feature of the captured image; and then determining, from within the first set, the reference object whose image feature matches the image feature of the captured image, and taking that reference object as the reference object whose recognition feature matches the recognition feature of the captured image.
In the above method, preferably, when no reference object whose recognition feature matches the recognition feature of the captured image is found, the initial position is determined as the positioning result for the current location.
A positioning system, comprising:
an image capture device, configured to capture an image of an object at the current location;
an initial-position acquisition module, configured to obtain an initial position of the current location;
a first acquisition module, configured to obtain, from a pre-stored map database, the recognition features and positions of the reference objects within the range of the initial position;
a first determination module, configured to determine, from among the reference objects within the range of the initial position, the reference object whose recognition feature matches the recognition feature of the captured image;
a second determination module, configured to determine the position of the reference object whose recognition feature matches the recognition feature of the captured image as the positioning result for the current location.
In the above system, preferably, the first determination module comprises:
a first extraction unit, configured to extract a recognition feature from the captured image, the recognition feature comprising a text feature and/or an image feature;
a first determining unit, configured to, when the recognition feature extracted by the first extraction unit is a text feature, determine, from among the reference objects within the range of the initial position, the reference object whose text feature matches the text feature of the captured image, and take that reference object as the reference object whose recognition feature matches the recognition feature of the captured image;
a second determining unit, configured to, when the recognition feature extracted by the first extraction unit is an image feature, determine, from among the reference objects within the range of the initial position, the reference object whose image feature matches the image feature of the captured image, and take that reference object as the reference object whose recognition feature matches the recognition feature of the captured image;
a third determining unit, configured to, when the recognition feature extracted by the first extraction unit comprises both a text feature and an image feature, determine, from among the reference objects within the range of the initial position, a first set of reference objects whose text features match the text feature of the captured image; determine, from among the reference objects within the range of the initial position, a second set of reference objects whose image features match the image feature of the captured image; and take the reference objects that appear in both the first set and the second set as the reference objects whose recognition features match the recognition feature of the captured image.
In the above system, preferably, the first determination module comprises:
a second extraction unit, configured to extract a recognition feature from the captured image, the recognition feature comprising a text feature and/or an image feature;
a fourth determining unit, configured to, when the recognition feature extracted by the second extraction unit is a text feature, determine, from among the reference objects within the range of the initial position, the reference object whose text feature matches the text feature of the captured image, and take that reference object as the reference object whose recognition feature matches the recognition feature of the captured image;
a fifth determining unit, configured to, when the recognition feature extracted by the second extraction unit is an image feature, determine, from among the reference objects within the range of the initial position, the reference object whose image feature matches the image feature of the captured image, and take that reference object as the reference object whose recognition feature matches the recognition feature of the captured image;
a sixth determining unit, configured to, when the recognition feature extracted by the second extraction unit comprises both a text feature and an image feature, determine, from among the reference objects within the range of the initial position, a first set of reference objects whose text features match the text feature of the captured image; and then determine, from within the first set, the reference object whose image feature matches the image feature of the captured image, taking that reference object as the reference object whose recognition feature matches the recognition feature of the captured image.
The above system preferably further comprises:
a third determination module, configured to determine the initial position as the positioning result for the current location when the first determination module does not determine a reference object whose recognition feature matches the recognition feature of the captured image.
A mobile terminal, comprising the positioning system described in any one of the above.
A positioning system, comprising:
a mobile terminal and a server; wherein
the mobile terminal comprises:
a first image capture device, configured to capture an image of an object at the current location;
a first initial-position acquisition module, configured to obtain an initial position of the current location;
a first sending module, configured to send the captured image and the initial position;
a first receiving module, configured to receive the positioning result sent by the server;
the server comprises:
a first receiving module, configured to receive the image and the initial position sent by the first sending module;
a second acquisition module, configured to obtain, from a pre-stored map database, the recognition features and positions of the reference objects within the range of the initial position;
a fourth determination module, configured to determine, from among the reference objects within the range of the initial position, the reference object whose recognition feature matches the recognition feature of the captured image;
a fifth determination module, configured to determine the position of the reference object whose recognition feature matches the recognition feature of the captured image as the positioning result for the current location;
a second sending module, configured to send the positioning result.
A positioning system, comprising:
a mobile terminal and a server; wherein
the mobile terminal comprises:
a second image capture device, configured to capture an image of an object at the current location;
a feature extraction module, configured to extract the recognition feature of the image captured by the second image capture device;
a second initial-position acquisition module, configured to obtain an initial position of the current location;
a third sending module, configured to send the recognition feature and the initial position;
a third receiving module, configured to receive the positioning result sent by the server;
the server comprises:
a fourth receiving module, configured to receive the recognition feature and the initial position sent by the third sending module;
a third acquisition module, configured to obtain, from a pre-stored map database, the recognition features and positions of the reference objects within the range of the initial position;
a sixth determination module, configured to determine, from among the reference objects within the range of the initial position, the reference object whose recognition feature matches the received recognition feature of the captured image;
a seventh determination module, configured to determine the position of the reference object whose recognition feature matches the recognition feature of the captured image as the positioning result for the current location;
a fourth sending module, configured to send the positioning result.
As can be seen from the above solutions, the positioning method, system, and mobile terminal provided by the present application store map data in advance, the map data being the recognition features and positions of a number of reference objects. During positioning, an initial position of the current location is first obtained; then, among the reference objects within the range of the initial position, the reference object whose recognition feature matches the recognition feature of the image captured by the image capture device is determined, and the position of that reference object is taken as the positioning result for the current location. Because the pre-stored position of a reference object can be accurate to an indoor floor, or to a specific spot on that floor, the positioning accuracy is comparatively high. As the positioning process shows, the technical solution provided by the present application matches the recognition features of the reference objects within the range of the initial position against the recognition feature of the image captured by the image capture device, and thereby finds the reference object corresponding to the photographed object; it therefore requires neither modification of the mobile terminal nor any additional calibration. Accurate positioning is possible as long as the mobile terminal has an image capture function and a conventional positioning function, so positioning accuracy is improved while implementation cost is reduced.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed in the description of the embodiments or of the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and a person of ordinary skill in the art could obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a positioning method provided by an embodiment of the present application;
Fig. 2 is a flowchart of one way, provided by an embodiment of the present application, of determining, from among the reference objects within the range of the initial position, the reference object whose recognition feature matches the recognition feature of the captured image;
Fig. 3 is a flowchart of another such way of determining the reference object whose recognition feature matches the recognition feature of the captured image;
Fig. 4 is a flowchart of yet another such way of determining the reference object whose recognition feature matches the recognition feature of the captured image;
Fig. 5 is a schematic structural diagram of a positioning system provided by an embodiment of the present application;
Fig. 6 is a schematic structural diagram of one form of the first determination module provided by an embodiment of the present application;
Fig. 7 is a schematic structural diagram of another form of the first determination module provided by an embodiment of the present application;
Fig. 8 is a schematic structural diagram of another positioning system provided by an embodiment of the present application;
Fig. 9 is a schematic structural diagram of yet another positioning system provided by an embodiment of the present application;
Fig. 10 is a schematic structural diagram of a further positioning system provided by an embodiment of the present application;
Fig. 11 is a schematic structural diagram of one form of the sixth determination module provided by an embodiment of the present application;
Fig. 12 is a schematic structural diagram of another form of the sixth determination module provided by an embodiment of the present application.
The terms "first", "second", "third", "fourth", and so on (if present) in the specification, the claims, and the above drawings are used to distinguish similar elements and are not necessarily used to describe a particular order or sequence. It should be understood that items so designated are interchangeable where appropriate, so that the embodiments of the application described herein can be implemented in orders other than those illustrated here.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings in those embodiments. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on these embodiments without creative effort shall fall within the protection scope of the present invention.
Referring to Fig. 1, which is a flowchart of a positioning method provided by an embodiment of the present application, the method may comprise:
Step S101: capturing, by an image capture device, an image of an object at the current location;
The object is an object that can serve as a landmark; for example, it may be a merchant's logo, a merchant's name, or the street name and street number of the merchant's location; the object may also be a landmark building or a particular booth indoors; of course, the object may also be the name of a residential compound together with a building number.
The image capture device may be a camera or another image-capturing component carried by the mobile terminal.
Step S102: obtaining an initial position of the current location;
The initial position of the current location can be obtained by a conventional positioning method, for example using WiFi, GPS, or base-station signals; how such positioning is performed is common knowledge in the art and is not described further here.
Step S103: obtaining, from a pre-stored map database, the recognition features and positions of the reference objects within the range of the initial position;
Here, the reference objects within the range of the initial position may be the reference objects whose distance to the initial position satisfies a preset distance condition; they may also be reference objects that satisfy other preset conditions, for example reference objects whose distance to the initial position satisfies the preset distance condition and whose type satisfies a preset type condition (for instance, the reference objects can be categorised when the map database is built, which then makes it convenient to retrieve the reference objects satisfying the type condition); this is not specifically limited in the present application.
In the embodiments of the present application, images of a number of reference objects, together with the recognition feature and precise position of each reference object, are collected in advance; the images, recognition features, and position information of the reference objects can be collected manually, so as to obtain the correspondence between the image, the recognition feature, and the position of each reference object. The recognition feature of a reference object comprises a text feature and an image feature: the text feature may be recorded manually or obtained by feature extraction from the collected image, and may include Chinese characters, digits, and/or letters; the image feature is obtained by feature extraction from the collected image. In the embodiments of the present application, the recognition feature of a reference object stored in the map database corresponds to its position information. The correspondence may be many-to-one, i.e. one reference object may have several recognition features, such as its scene text (the text feature) and its image feature, where the scene text may include Chinese characters, digits, and/or letters; the correspondence may also be one-to-one, i.e. there is only a single recognition feature, such as the scene text of the reference object, or the image feature of the reference object.
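As a rough illustration of the stored correspondence and of the "within the range of the initial position" filter described above (the record layout, field names, and radius below are assumptions made for the sketch, not details taken from the patent), a minimal Python sketch might look like this:

```python
import math
from dataclasses import dataclass

@dataclass
class ReferenceObject:
    """One pre-collected reference object (e.g. a shop front)."""
    obj_id: str
    text_features: set         # scene text: Chinese characters, digits and/or letters
    image_descriptors: object  # e.g. ORB/BRIEF descriptors extracted offline
    floor: int                 # indoor floor the object is on
    position: tuple            # precise (x, y) position on that floor

def objects_within_range(map_database, initial_position, radius=50.0):
    """Keep only the reference objects whose distance to the coarse initial
    position satisfies the preset distance condition."""
    x0, y0 = initial_position
    return [obj for obj in map_database
            if math.hypot(obj.position[0] - x0, obj.position[1] - y0) <= radius]
```

Storing the floor together with the in-floor coordinates is what allows the positioning result to be accurate to a specific spot on an indoor floor.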
Taking the collection of map data for a shopping mall as an example, an indoor structure map of the mall is obtained by manual collection; the indoor structure map may be a plan or a three-dimensional map. The indoor structure map contains the image, the recognition feature, and the detailed position information of each merchant, the detailed position information comprising the floor on which the merchant is located and the merchant's specific position on that floor. The recognition feature of each merchant is associated with the merchant's detailed position information to form the final map data, which is stored in the map database.
It should be noted that, in order to optimise the positioning method provided by the embodiments of the present application, the image-feature extraction algorithm may later be replaced, which changes the recognition features; therefore, in the embodiments of the present application, the collected image of each reference object may also be stored in an image database, so that when the extraction algorithm is later replaced, features can conveniently be re-extracted from the stored images.
The recognition feature of a merchant may include text information that identifies the merchant, for example the merchant's logo when the logo is in written form, or the merchant's name; when the merchant's logo is a figure or an image, the recognition feature of the merchant may also be an image feature obtained by collecting an image of the logo and performing feature extraction on it.
The image-processing algorithms used to extract recognition features from the collected images may specifically include: FAST (Features from Accelerated Segment Test), SIFT (Scale-Invariant Feature Transform), SURF (Speeded-Up Robust Features), ORB (Oriented FAST and Rotated BRIEF), OCR (Optical Character Recognition), wavelet transforms, and so on.
For example, ORB can be used to extract BRIEF (Binary Robust Independent Elementary Features) image features from the image of a reference object; the SIFT, FAST, or SURF algorithm can be used to extract corner features from the image of a reference object; and OCR can be used to extract text features from the image of a reference object.
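A minimal sketch of this kind of feature extraction, assuming OpenCV and the pytesseract OCR binding are available; the function name and language packs are illustrative only:

```python
import cv2          # OpenCV implementations of ORB / SIFT / FAST
import pytesseract  # OCR binding; needs a local Tesseract installation

def extract_recognition_features(image_path):
    """Extract an image feature (ORB/BRIEF descriptors) and a text feature
    (OCR result) from one collected or captured image."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)

    # Image feature: ORB detects FAST corners and computes binary BRIEF descriptors.
    orb = cv2.ORB_create()
    _keypoints, descriptors = orb.detectAndCompute(gray, None)

    # Text feature: OCR whatever scene text is visible (shop name, door number, ...).
    text = pytesseract.image_to_string(gray, lang="chi_sim+eng").strip()

    return descriptors, text
```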
Of course, besides the text information that serves to identify a merchant, the map database may also contain other text information, such as text describing the merchant.
Step S104: determining, from among the reference objects within the range of the initial position, the reference object whose recognition feature matches the recognition feature of the captured image;
After the image is captured, it needs to be processed to extract its recognition feature. Specifically, an image-processing algorithm is used to extract the recognition feature of the image, comprising an image feature and/or a text feature, where the text feature includes Chinese characters, digits, or letters. The image-processing algorithm may specifically include: FAST (Features from Accelerated Segment Test), SIFT (Scale-Invariant Feature Transform), SURF (Speeded-Up Robust Features), ORB (Oriented FAST and Rotated BRIEF), OCR (Optical Character Recognition), wavelet transforms, and so on.
The reference objects within the range of the initial position can be determined from the initial position information and the position of each reference object in the map database; the reference object whose recognition feature matches the recognition feature of the captured image can then be determined from among the reference objects within that range.
Specifically, among the reference objects within the range of the initial position, the reference object whose recognition feature contains the extracted text feature may be determined as the reference object whose recognition feature matches the recognition feature of the captured image; or, the reference object whose image feature matches the image feature of the captured image may be determined as the matching reference object; of course, the reference object whose recognition feature contains the text feature and whose image feature also matches the image feature of the captured image may likewise be determined as the matching reference object.
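For the text-feature case, the match against the candidate reference objects can be as simple as looking the recognised string up in each candidate's stored scene text. A hedged sketch, reusing the ReferenceObject record assumed in the earlier sketch:

```python
def match_by_text(candidates, recognized_text):
    """Keep the candidates whose stored scene text contains the text recognised
    in the captured image (exact containment for brevity; a real system would
    tolerate OCR errors, e.g. with an edit-distance threshold)."""
    needle = recognized_text.strip()
    return [obj for obj in candidates
            if needle and any(needle in stored for stored in obj.text_features)]
```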
Step S105: determining the position of the reference object whose recognition feature matches the recognition feature of the captured image as the positioning result for the current location.
After the detailed position of the current location has been determined, the positioning result can be displayed to the user.
In the image-recognition-based positioning method provided by the embodiments of the present application, map data is stored in advance, the map data being the recognition features and positions of a number of reference objects. During positioning, an initial position of the current location is first obtained; then, among the reference objects within the range of the initial position, the reference object whose recognition feature matches the recognition feature of the image captured by the image capture device is determined, and the position of that reference object is taken as the positioning result for the current location. Because the pre-stored position of a reference object can be accurate to an indoor floor, or to a specific spot on that floor, the positioning accuracy is comparatively high. As the positioning process shows, the method matches the recognition features of the reference objects within the range of the initial position against the recognition feature of the captured image to find the reference object corresponding to the photographed object; it therefore requires neither modification of the mobile terminal nor any additional calibration. Accurate positioning is possible as long as the mobile terminal has an image capture function and a conventional positioning function, so positioning accuracy is improved while implementation cost is reduced.
In the embodiments of the present application, the map database may store only the image of each reference object and its position, the recognition feature of the reference object's image being extracted from the image during the positioning process. In the above embodiment, preferably, when the map database stores only the images and positions of the reference objects, the process of determining, from among the reference objects within the range of the initial position, the reference object whose recognition feature matches the recognition feature of the captured image is shown in Fig. 2 and may comprise:
Step S201: extracting the image feature of the captured image and the image feature of the image of each reference object within the range of the initial position;
The feature information may include FAST features, SIFT features, SURF features, ORB features, and so on.
Step S202: applying the image-matching algorithm corresponding to the image feature to match the captured image against the image of each reference object within the range of the initial position;
Different matching algorithms can be chosen for different image features. For example, the ORB algorithm describes an image with BRIEF features, so the Hamming distance can be used to judge whether the captured image matches each image within the range of the initial position; the SIFT algorithm describes a local feature with a 128-dimensional vector, so vector similarity can be used for the same judgement; a wavelet transform processes the whole image, and a histogram of its global features yields a statistical vector, so vector similarity can again be used to judge whether the captured image matches each image within the range of the initial position.
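For ORB/BRIEF descriptors, the comparison can use a brute-force Hamming-distance matcher; the following sketch counts low-distance descriptor matches per candidate (the thresholds are assumptions chosen for illustration, not values from the patent):

```python
import cv2

def match_by_image(candidates, query_descriptors, min_good_matches=20):
    """Keep the candidates whose stored ORB/BRIEF descriptors match the captured
    image, judged by the number of low-Hamming-distance descriptor matches."""
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matched = []
    for obj in candidates:
        if query_descriptors is None or obj.image_descriptors is None:
            continue
        matches = matcher.match(query_descriptors, obj.image_descriptors)
        good = [m for m in matches if m.distance < 40]  # Hamming threshold (assumed)
        if len(good) >= min_good_matches:
            matched.append(obj)
    return matched
```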
Step S203: determining the reference object whose image feature matches the image feature of the captured image as the reference object whose recognition feature matches the recognition feature of the captured image.
If, among the reference objects within the range of the initial position, there is a reference object whose image matches the captured image, that reference object is the reference object whose recognition feature matches the recognition feature of the captured image.
In the embodiments of the present application, the map database may also store only the recognition feature of each reference object's image together with the position of the reference object; or it may store both the image of each reference object and the recognition feature of that image, together with the position of the reference object. In the above embodiment, preferably, when the map database stores only the recognition features of the reference objects' images and the positions of the reference objects, or stores both the images and the recognition features of the images together with the positions, another process of determining, from among the reference objects within the range of the initial position, the reference object whose recognition feature matches the recognition feature of the captured image is shown in Fig. 3 and may comprise:
Step S301: extracting a recognition feature from the captured image, the recognition feature comprising a text feature and/or an image feature;
In the embodiments of the present application, the text feature may be extracted by an OCR algorithm; of course, it may also be extracted by other algorithms, which is not specifically limited here.
The image feature may be obtained by the FAST, SIFT, or ORB algorithm; of course, it may also be obtained by other algorithms, such as a wavelet algorithm, which is not specifically limited here.
Step S302: if the extracted recognition feature is a text feature, determining, from among the reference objects within the range of the initial position, the reference object whose text feature matches the text feature of the captured image, and taking that reference object as the reference object whose recognition feature matches the recognition feature of the captured image;
When the extracted recognition feature is a text feature, i.e. the recognition feature of the captured image is a text feature, the recognition features of the reference objects within the range of the initial position can be searched directly for that text feature; if it is found, the reference object whose recognition feature contains the text feature is determined, among the reference objects within the range of the initial position, as the reference object whose recognition feature matches the recognition feature of the captured image;
Step S303: if the extracted recognition feature is an image feature, determining, from among the reference objects within the range of the initial position, the reference object whose image feature matches the image feature of the captured image, and taking that reference object as the reference object whose recognition feature matches the recognition feature of the captured image.
When the extracted recognition feature is an image feature, a corresponding image-matching algorithm can be selected according to the image feature; using the selected algorithm, the image feature of the captured image, and the image feature of each reference object within the range of the initial position, the captured image is matched against the image of each reference object within that range, and the reference object whose image matches the captured image is determined as the reference object whose recognition feature matches the recognition feature of the captured image. That is, in the embodiments of the present application, the reference object whose image feature matches the image feature of the captured image means the reference object, among those within the range of the initial position, whose image matches the captured image.
Step S304: if the extracted recognition feature comprises both a text feature and an image feature, determining, from among the reference objects within the range of the initial position, a first set of reference objects whose text features match the text feature of the captured image; determining, from among the reference objects within the range of the initial position, a second set of reference objects whose image features match the image feature of the captured image; and determining the reference objects that appear in both the first set and the second set as the reference objects whose recognition features match the recognition feature of the captured image.
When the extracted features include both a text feature and an image feature, one set of reference objects can be determined from each feature, and the reference objects in the intersection of the two sets are then determined as the reference objects whose recognition features match the recognition feature of the captured image.
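The intersection step of Fig. 3 then amounts to combining the two candidate sets; a sketch reusing match_by_text and match_by_image from the earlier sketches:

```python
def match_by_both_intersection(candidates, query_descriptors, recognized_text):
    """Fig. 3 variant (step S304): build one candidate set per feature type,
    then keep only the reference objects present in both sets."""
    first_set = match_by_text(candidates, recognized_text)      # text-feature set
    second_set = match_by_image(candidates, query_descriptors)  # image-feature set
    second_ids = {obj.obj_id for obj in second_set}
    return [obj for obj in first_set if obj.obj_id in second_ids]
```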
In the above embodiment, preferably, yet another process of determining, from among the reference objects within the range of the initial position, the reference object whose recognition feature matches the recognition feature of the captured image is shown in Fig. 4 and may comprise:
Step S401: extracting a recognition feature from the captured image, the recognition feature comprising a text feature and/or an image feature;
In the embodiments of the present application, the text feature may be extracted by an OCR algorithm; of course, it may also be extracted by other algorithms, which is not specifically limited here.
The image feature may be obtained by the FAST, SIFT, or ORB algorithm; of course, it may also be obtained by other algorithms, such as a wavelet algorithm, which is not specifically limited here.
Step S402: if the extracted recognition feature is a text feature, determining, from among the reference objects within the range of the initial position, the reference object whose text feature matches the text feature of the captured image, and taking that reference object as the reference object whose recognition feature matches the recognition feature of the captured image;
When the extracted recognition feature is a text feature, i.e. the recognition feature of the captured image is a text feature, the recognition features of the reference objects within the range of the initial position can be searched directly for that text feature; if it is found, the reference object whose recognition feature contains the text feature is determined, among the reference objects within the range of the initial position, as the reference object whose recognition feature matches the recognition feature of the captured image;
Step S403: if the extracted recognition feature is an image feature, determining, from among the reference objects within the range of the initial position, the reference object whose image feature matches the image feature of the captured image, and taking that reference object as the reference object whose recognition feature matches the recognition feature of the captured image.
When the extracted recognition feature is an image feature, a corresponding image-matching algorithm can be selected according to the image feature; using the selected algorithm, the image feature of the captured image, and the image feature of each reference object within the range of the initial position, the captured image is matched against the image of each reference object within that range, and the reference object whose image matches the captured image is determined as the reference object whose recognition feature matches the recognition feature of the captured image. That is, in the embodiments of the present application, the reference object whose image feature matches the image feature of the captured image means the reference object, among those within the range of the initial position, whose image matches the captured image.
Step S404: if the extracted recognition feature comprises both a text feature and an image feature, determining, from among the reference objects within the range of the initial position, a first set of reference objects whose text features match the text feature of the captured image; and then determining, from within the first set, the reference object whose image feature matches the image feature of the captured image, and taking that reference object as the reference object whose recognition feature matches the recognition feature of the captured image.
Unlike the embodiment shown in Fig. 3, in this embodiment, if the extracted recognition feature includes both a text feature and an image feature, one of the features is first used to determine a set of reference objects, and the other feature is then used to determine, from within that set, the reference object whose recognition feature matches the recognition feature of the captured image;
In this embodiment, it is equally possible first to determine a second set of reference objects whose image features match the image feature of the captured image, and then to determine, from within the second set, the reference object whose text feature matches the text feature of the captured image, taking that reference object as the reference object whose recognition feature matches the recognition feature of the captured image.
In the above embodiments, preferably, when no reference object whose recognition feature matches the recognition feature of the captured image exists among the reference objects within the range of the initial position, the initial position is determined as the positioning result for the current location.
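The cascaded variant of Fig. 4 and the fallback to the initial position can be sketched together as follows, again reusing the helpers above (all names are illustrative, and a real implementation would also handle the image-feature-first ordering mentioned above):

```python
def locate_cascaded(candidates, query_descriptors, recognized_text, initial_position):
    """Fig. 4 variant (step S404) plus the fallback: filter by text feature first,
    match images only inside that narrowed set, and keep the coarse initial
    position when nothing matches."""
    narrowed = match_by_text(candidates, recognized_text) if recognized_text else candidates
    matched = (match_by_image(narrowed, query_descriptors)
               if query_descriptors is not None else narrowed)
    if matched:
        best = matched[0]              # a real system would break ties by match score
        return best.floor, best.position
    return None, initial_position      # no match: fall back to the initial position
```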
Corresponding to the method embodiments, Fig. 5 is a schematic structural diagram of a positioning system provided by an embodiment of the present application; the system may comprise:
an image capture device 501, an initial-position acquisition module 502, a first acquisition module 503, a first determination module 504, and a second determination module 505;
the image capture device 501 is configured to capture an image of an object at the current location;
the initial-position acquisition module 502 is configured to obtain an initial position of the current location;
the first acquisition module 503 is connected to the initial-position acquisition module 502 and is configured to obtain, from a pre-stored map database, the recognition features and positions of the reference objects within the range of the initial position;
the first determination module 504 is connected to the first acquisition module 503 and to the image capture device 501 and is configured to determine, from among the reference objects within the range of the initial position, the reference object whose recognition feature matches the recognition feature of the captured image;
the second determination module 505 is connected to the first acquisition module 503 and to the first determination module 504 and is configured to determine the position of the reference object whose recognition feature matches the recognition feature of the captured image as the positioning result for the current location.
In the image-recognition-based positioning system provided by the embodiments of the present application, map data is stored in advance, the map data being the recognition features and positions of a number of reference objects. During positioning, initial position information of the current location is first obtained; then, among the reference objects within the range of the initial position, the reference object whose recognition feature matches the recognition feature of the image captured by the image capture device is determined, and the position of that reference object is taken as the positioning result for the current location. Because the pre-stored position information of a reference object is accurate to an indoor floor, or to a specific spot on that floor, the positioning accuracy is comparatively high. As the positioning process shows, the system matches the recognition features of the reference objects within the range of the initial position against the recognition feature of the captured image to find the reference object corresponding to the photographed object; it therefore requires neither modification of the mobile terminal nor any additional calibration. Accurate positioning is possible as long as the mobile terminal has an image capture function and a conventional positioning function, so positioning accuracy is improved while implementation cost is reduced.
Fig. 6 is a schematic structural diagram of one form of the first determination module provided by an embodiment of the present application; it may comprise:
a first extraction unit 601, connected to the image capture device 501 and configured to extract a recognition feature from the image captured by the image capture device 501, the recognition feature comprising a text feature and/or an image feature;
a first determining unit 602, connected to the first extraction unit 601 and to the first acquisition module 503, and configured to, when the recognition feature extracted by the first extraction unit is a text feature, determine, from among the reference objects within the range of the initial position, the reference object whose text feature matches the text feature of the captured image, and take that reference object as the reference object whose recognition feature matches the recognition feature of the captured image;
a second determining unit 603, connected to the first extraction unit 601 and to the first acquisition module 503, and configured to, when the recognition feature extracted by the first extraction unit is an image feature, determine, from among the reference objects within the range of the initial position, the reference object whose image feature matches the image feature of the captured image, and take that reference object as the reference object whose recognition feature matches the recognition feature of the captured image;
a third determining unit 604, connected to the first extraction unit 601 and to the first acquisition module 503, and configured to, when the recognition feature extracted by the first extraction unit comprises both a text feature and an image feature, determine, from among the reference objects within the range of the initial position, a first set of reference objects whose text features match the text feature of the captured image; determine, from among the reference objects within the range of the initial position, a second set of reference objects whose image features match the image feature of the captured image; and take the reference objects that appear in both the first set and the second set as the reference objects whose recognition features match the recognition feature of the captured image.
Fig. 7 is a schematic structural diagram of another form of the first determination module provided by an embodiment of the present application; it may comprise:
a second extraction unit 701, connected to the image capture device 501 and configured to extract a recognition feature from the captured image, the recognition feature comprising a text feature and/or an image feature;
a fourth determining unit 702, connected to the second extraction unit 701 and to the first acquisition module 503, and configured to, when the recognition feature extracted by the second extraction unit is a text feature, determine, from among the reference objects within the range of the initial position, the reference object whose text feature matches the text feature of the captured image, and take that reference object as the reference object whose recognition feature matches the recognition feature of the captured image;
a fifth determining unit 703, connected to the second extraction unit 701 and to the first acquisition module 503, and configured to, when the recognition feature extracted by the second extraction unit is an image feature, determine, from among the reference objects within the range of the initial position, the reference object whose image feature matches the image feature of the captured image, and take that reference object as the reference object whose recognition feature matches the recognition feature of the captured image;
a sixth determining unit 704, connected to the second extraction unit 701 and to the first acquisition module 503, and configured to, when the recognition feature extracted by the second extraction unit comprises both a text feature and an image feature, determine, from among the reference objects within the range of the initial position, a first set of reference objects whose text features match the text feature of the captured image; and then determine, from within the first set, the reference object whose image feature matches the image feature of the captured image, taking that reference object as the reference object whose recognition feature matches the recognition feature of the captured image.
On the basis of the embodiment shown in Fig. 5, Fig. 8 is a schematic structural diagram of another positioning system provided by an embodiment of the present application; it may further comprise:
a third determination module 801, configured to determine the initial position as the positioning result for the current location when the first determination module 504 does not determine a reference object whose recognition feature matches the recognition feature of the captured image.
The functions of the third determination module 801 and the second determination module 505 may also be implemented by a single functional module.
An embodiment of the present application also provides a mobile terminal comprising the positioning system described above; that is, the positioning system provided by the embodiments of the present application is a functional module or functional component of the mobile terminal.
The positioning system provided by the embodiments of the present application may be implemented entirely on the mobile terminal side, as in the embodiments shown in Fig. 7 or Fig. 8, or it may be implemented together with a server; specifically, when the positioning system provided by the embodiments of the present application is implemented jointly by a mobile terminal and a server:
Another positioning system provided by the embodiments of the present application, whose structure is shown schematically in Fig. 9, may comprise:
a mobile terminal 901 and a server 902, wherein
the mobile terminal 901 may comprise:
a first image acquisition device 9011, configured to capture an image of an object at the current location;
a first initial position acquisition module 9012, configured to obtain an initial position of the current location;
a first sending module 9013, configured to send the image captured by the first image acquisition device 9011 and the initial position obtained by the first initial position acquisition module 9012;
a first receiving module 9014, configured to receive the positioning result sent by the server;
the server 902 may comprise:
a first receiving module 9021, configured to receive the image and the initial position sent by the first sending module 9013;
a second acquisition module 9022, configured to obtain, from the pre-stored map database, the recognition features and positions of the reference objects within the range of the initial position;
a fourth determination module 9023, configured to determine, from the reference objects within the range of the initial position, the reference object whose recognition features match the recognition features of the captured image;
the fourth determination module 9023 extracts the recognition features of the image received by the first receiving module 9021, the recognition features comprising character features and/or image features, and matches the extracted recognition features against the recognition features of each reference object within the range of the initial position;
in the embodiments of the present application, the specific structure of the fourth determination module 9023 may refer to the embodiments illustrated in Fig. 6 or Fig. 7 and is not repeated here;
a fifth determination module 9024, configured to determine the position of the reference object whose recognition features match the recognition features of the captured image as the positioning result for the current location; and
a second sending module 9025, configured to send the positioning result determined by the fifth determination module 9024.
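The following is a minimal sketch, under stated assumptions, of how the server side of the Fig. 9 style embodiment could process one request. The map-database lookup and the two feature extractors are passed in as callables because the application does not prescribe concrete implementations; Reference and match_reference are the hypothetical helpers sketched earlier.

    from typing import Callable, List, Sequence

    def handle_positioning_request(
            image_bytes: bytes,
            initial_position: tuple,
            query_references_near: Callable[[tuple], List["Reference"]],
            extract_text_features: Callable[[bytes], set],
            extract_image_features: Callable[[bytes], Sequence],
    ) -> tuple:
        # Second acquisition module: reference objects within the range of the initial position.
        candidates = query_references_near(initial_position)
        # Fourth determination module: extract features server-side and match them.
        text = extract_text_features(image_bytes)
        descriptor = extract_image_features(image_bytes)
        matched = match_reference(text, descriptor, candidates)
        # Fifth determination module, with the initial position as a fallback.
        return matched.position if matched is not None else initial_position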
To reduce the network bandwidth occupied when the mobile terminal communicates with the server, another positioning system provided by the embodiments of the present application, whose structure is shown schematically in Fig. 10, may comprise:
a mobile terminal 1001 and a server 1002, wherein
the mobile terminal 1001 may comprise:
a second image acquisition device 10011, configured to capture an image of an object at the current location;
a feature extraction module 10012, configured to extract recognition features of the captured image, the recognition features comprising character features and/or image features;
a second initial position acquisition module 10013, configured to obtain an initial position of the current location;
a third sending module 10014, configured to send the features extracted by the feature extraction module 10012 and the initial position obtained by the second initial position acquisition module 10013;
a third receiving module 9015, configured to receive the positioning result sent by the server 1002;
the server 1002 may comprise:
a fourth receiving module 10021, configured to receive the recognition features and the initial position sent by the third sending module 10014;
a third acquisition module 10022, configured to obtain, from the pre-stored map database, the recognition features and positions of the reference objects within the range of the initial position;
a sixth determination module 10023, configured to determine, from the reference objects within the range of the initial position, the reference object whose recognition features match the recognition features of the captured image.
One structure of the sixth determination module 10023 provided by the embodiments of the present application is shown schematically in Fig. 11 and may comprise:
a first determining submodule 1101, connected to the fourth receiving module 10021 and the third acquisition module 10022 respectively, configured to, when the recognition features received by the fourth receiving module 10021 are character features, determine, from the reference objects within the range of the initial position, the reference object whose character features match the character features of the captured image, and determine that reference object as the reference object whose recognition features match the recognition features of the captured image;
a second determining submodule 1102, connected to the fourth receiving module 10021 and the third acquisition module 10022 respectively, configured to, when the recognition features received by the fourth receiving module 10021 are image features, determine, from the reference objects within the range of the initial position, the reference object whose image features match the image features of the captured image, and determine that reference object as the reference object whose recognition features match the recognition features of the captured image;
a third determining submodule 1103, connected to the fourth receiving module 10021 and the third acquisition module 10022 respectively, configured to, when the recognition features received by the fourth receiving module 10021 comprise both character features and image features, determine, from the reference objects within the range of the initial position, a first reference object set whose character features match the character features of the captured image; determine, from the reference objects within the range of the initial position, a second reference object set whose image features match the image features of the captured image; and determine a reference object appearing in both the first reference object set and the second reference object set as the reference object matching the recognition features of the captured image, an intersection-style match sketched below.
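A minimal sketch of that intersection-style match, assuming the hypothetical Reference structure and image_similarity helper introduced earlier:

    def match_by_intersection(captured_text, captured_descriptor, candidates, threshold=0.8):
        # First reference object set: matched by character features.
        first_set = {r.name for r in candidates if r.text_features & captured_text}
        # Second reference object set: matched by image features.
        second_set = {r.name for r in candidates
                      if image_similarity(r.image_features, captured_descriptor) >= threshold}
        # Reference objects appearing in both sets are accepted.
        common = first_set & second_set
        return [r for r in candidates if r.name in common]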
Another structure of the sixth determination module 10023 provided by the embodiments of the present application, shown schematically in Fig. 12, may comprise:
a fourth determining submodule 1201, connected to the fourth receiving module 10021 and the third acquisition module 10022 respectively, configured to, when the recognition features received by the fourth receiving module 10021 are character features, determine, from the reference objects within the range of the initial position, the reference object whose character features match the character features of the captured image, and determine that reference object as the reference object whose recognition features match the recognition features of the captured image;
a fifth determining submodule 1202, connected to the fourth receiving module 10021 and the third acquisition module 10022 respectively, configured to, when the recognition features received by the fourth receiving module 10021 are image features, determine, from the reference objects within the range of the initial position, the reference object whose image features match the image features of the captured image, and determine that reference object as the reference object whose recognition features match the recognition features of the captured image;
a sixth determining submodule 1203, connected to the fourth receiving module 10021 and the third acquisition module 10022 respectively, configured to, when the recognition features received by the fourth receiving module 10021 comprise both character features and image features, determine, from the reference objects within the range of the initial position, a first reference object set whose character features match the character features of the captured image, and then, from the first reference object set, determine the reference object whose image features match the image features of the captured image as the reference object whose recognition features match the recognition features of the captured image.
A seventh determination module 10024, configured to determine the position of the reference object whose recognition features match the recognition features of the captured image as the positioning result for the current location.
A fourth sending module 10025, configured to send the positioning result.
In this embodiment of the present application, after capturing the image of the identification marker, the mobile terminal does not send the image directly; instead, it first extracts the features of the image and sends only those features, thereby reducing the amount of transmitted data and the network bandwidth occupied when the mobile terminal communicates with the server.
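A terminal-side sketch of this variant is given below: the recognition features are extracted locally and only a compact payload is uploaded, which is typically much smaller than the photo itself. OpenCV's ORB is used purely as an example image-feature extractor and the JSON payload layout is an assumption; neither is prescribed by the present application.

    import json
    import cv2  # OpenCV, e.g. installed via opencv-python

    def build_feature_payload(image_path: str, initial_position: tuple) -> bytes:
        # Feature extraction module: compute image features on the terminal.
        img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        orb = cv2.ORB_create(nfeatures=200)
        _, descriptors = orb.detectAndCompute(img, None)
        payload = {
            "initial_position": list(initial_position),
            # Character features (e.g. OCR text of a shop sign or room number) could be added here as well.
            "image_features": descriptors.tolist() if descriptors is not None else [],
        }
        # Handed to the third sending module instead of the raw image.
        return json.dumps(payload).encode("utf-8")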
As for the apparatus disclosed in the embodiments, since it corresponds to the method disclosed in the embodiments, its description is relatively brief; for relevant details, reference may be made to the description of the method.
The above description of the disclosed embodiments enables those skilled in the art to implement or use the present invention. Various modifications to these embodiments will be apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present invention. Therefore, the present invention is not limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (11)

1. A positioning method, characterized by comprising:
capturing, by an image acquisition device, an image of an object at a current location;
obtaining an initial position of the current location;
obtaining, from a pre-stored map database, recognition features and positions of reference objects within a range of the initial position;
determining, from the reference objects within the range of the initial position, a reference object whose recognition features match recognition features of the captured image; and
determining the position of the reference object whose recognition features match the recognition features of the captured image as a positioning result for the current location.
2. The method according to claim 1, characterized in that the determining, from the reference objects within the range of the initial position, of the reference object whose recognition features match the recognition features of the captured image comprises:
extracting recognition features from the captured image, the recognition features comprising character features and/or image features;
if the extracted recognition features are character features, determining, from the reference objects within the range of the initial position, the reference object whose character features match the character features of the captured image, and determining that reference object as the reference object whose recognition features match the recognition features of the captured image;
if the extracted recognition features are image features, determining, from the reference objects within the range of the initial position, the reference object whose image features match the image features of the captured image, and determining that reference object as the reference object whose recognition features match the recognition features of the captured image; and
if the extracted recognition features comprise both character features and image features, determining, from the reference objects within the range of the initial position, a first reference object set whose character features match the character features of the captured image; determining, from the reference objects within the range of the initial position, a second reference object set whose image features match the image features of the captured image; and determining a reference object appearing in both the first reference object set and the second reference object set as the reference object whose recognition features match the recognition features of the captured image.
3. The method according to claim 1, characterized in that the determining, from the reference objects within the range of the initial position, of the reference object whose recognition features match the recognition features of the captured image comprises:
extracting recognition features from the captured image, the recognition features comprising character features and/or image features;
if the extracted recognition features are character features, determining, from the reference objects within the range of the initial position, the reference object whose character features match the character features of the captured image, and determining that reference object as the reference object whose recognition features match the recognition features of the captured image;
if the extracted recognition features are image features, determining, from the reference objects within the range of the initial position, the reference object whose image features match the image features of the captured image, and determining that reference object as the reference object whose recognition features match the recognition features of the captured image; and
if the extracted recognition features comprise both character features and image features, determining, from the reference objects within the range of the initial position, a first reference object set whose character features match the character features of the captured image, and then, from the first reference object set, determining the reference object whose image features match the image features of the captured image as the reference object whose recognition features match the recognition features of the captured image.
4. The method according to any one of claims 1 to 3, characterized in that, when no reference object whose recognition features match the recognition features of the captured image is obtained, the initial position is determined as the positioning result for the current location.
5. A positioning system, characterized by comprising:
an image acquisition device, configured to capture an image of a reference object at a current location;
an initial position acquisition module, configured to obtain an initial position of the current location;
a first acquisition module, configured to obtain, from a pre-stored map database, recognition features and positions of reference objects within a range of the initial position;
a first determination module, configured to determine, from the reference objects within the range of the initial position, a reference object whose recognition features match recognition features of the captured image; and
a second determination module, configured to determine the position of the reference object whose recognition features match the recognition features of the captured image as a positioning result for the current location.
6. The system according to claim 5, characterized in that the first determination module comprises:
a first extraction unit, configured to extract recognition features from the captured image, the recognition features comprising character features and/or image features;
a first determining unit, configured to, when the recognition features extracted by the first extraction unit are character features, determine, from the reference objects within the range of the initial position, the reference object whose character features match the character features of the captured image, and determine that reference object as the reference object whose recognition features match the recognition features of the captured image;
a second determining unit, configured to, when the recognition features extracted by the first extraction unit are image features, determine, from the reference objects within the range of the initial position, the reference object whose image features match the image features of the captured image, and determine that reference object as the reference object whose recognition features match the recognition features of the captured image; and
a third determining unit, configured to, when the recognition features extracted by the first extraction unit comprise both character features and image features, determine, from the reference objects within the range of the initial position, a first reference object set whose character features match the character features of the captured image; determine, from the reference objects within the range of the initial position, a second reference object set whose image features match the image features of the captured image; and determine a reference object appearing in both the first reference object set and the second reference object set as the reference object matching the recognition features of the captured image.
7. The system according to claim 5, characterized in that the first determination module comprises:
a second extraction unit, configured to extract recognition features from the captured image, the recognition features comprising character features and/or image features;
a fourth determining unit, configured to, when the recognition features extracted by the second extraction unit are character features, determine, from the reference objects within the range of the initial position, the reference object whose character features match the character features of the captured image, and determine that reference object as the reference object whose recognition features match the recognition features of the captured image;
a fifth determining unit, configured to, when the recognition features extracted by the second extraction unit are image features, determine, from the reference objects within the range of the initial position, the reference object whose image features match the image features of the captured image, and determine that reference object as the reference object whose recognition features match the recognition features of the captured image; and
a sixth determining unit, configured to, when the recognition features extracted by the second extraction unit comprise both character features and image features, determine, from the reference objects within the range of the initial position, a first reference object set whose character features match the character features of the captured image, and then, from the first reference object set, determine the reference object whose image features match the image features of the captured image as the reference object whose recognition features match the recognition features of the captured image.
8. The system according to claim 5, characterized by further comprising:
a third determination module, configured to determine the initial position as the positioning result for the current location when the first determination module fails to determine a reference object whose recognition features match the recognition features of the captured image.
9. A mobile terminal, characterized by comprising the positioning system according to any one of claims 5 to 8.
10. A positioning system, characterized by comprising:
a mobile terminal and a server, wherein
the mobile terminal comprises:
a first image acquisition device, configured to capture an image of an object at a current location;
a first initial position acquisition module, configured to obtain an initial position of the current location;
a first sending module, configured to send the captured image and the initial position; and
a first receiving module, configured to receive a positioning result sent by the server; and
the server comprises:
a first receiving module, configured to receive the image and the initial position sent by the first sending module;
a second acquisition module, configured to obtain, from a pre-stored map database, recognition features and positions of reference objects within a range of the initial position;
a fourth determination module, configured to determine, from the reference objects within the range of the initial position, a reference object whose recognition features match recognition features of the captured image;
a fifth determination module, configured to determine the position of the reference object whose recognition features match the recognition features of the captured image as the positioning result for the current location; and
a second sending module, configured to send the positioning result.
11. A positioning system, characterized by comprising:
a mobile terminal and a server, wherein
the mobile terminal comprises:
a second image acquisition device, configured to capture an image of an object at a current location;
a feature extraction module, configured to extract recognition features of the image captured by the second image acquisition device;
a second initial position acquisition module, configured to obtain an initial position of the current location;
a third sending module, configured to send the recognition features and the initial position; and
a third receiving module, configured to receive a positioning result sent by the server; and
the server comprises:
a fourth receiving module, configured to receive the recognition features and the initial position sent by the third sending module;
a third acquisition module, configured to obtain, from a pre-stored map database, recognition features and positions of reference objects within a range of the initial position;
a sixth determination module, configured to determine, from the reference objects within the range of the initial position, a reference object whose recognition features match the recognition features of the captured image;
a seventh determination module, configured to determine the position of the reference object whose recognition features match the recognition features of the captured image as the positioning result for the current location; and
a fourth sending module, configured to send the positioning result.
CN201310598195.XA 2013-11-22 2013-11-22 Positioning method, system and mobile terminal Pending CN104657389A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310598195.XA CN104657389A (en) 2013-11-22 2013-11-22 Positioning method, system and mobile terminal


Publications (1)

Publication Number Publication Date
CN104657389A (en) 2015-05-27

Family

ID=53248533

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310598195.XA Pending CN104657389A (en) 2013-11-22 2013-11-22 Positioning method, system and mobile terminal

Country Status (1)

Country Link
CN (1) CN104657389A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102208012A (en) * 2010-03-31 2011-10-05 爱信艾达株式会社 Scene matching reference data generation system and position measurement system
CN102231188A (en) * 2011-07-05 2011-11-02 上海合合信息科技发展有限公司 Business card identifying method combining character identification with image matching
US20130045751A1 (en) * 2011-08-19 2013-02-21 Qualcomm Incorporated Logo detection for indoor positioning
CN103067856A (en) * 2011-10-24 2013-04-24 康佳集团股份有限公司 Geographic position locating method and system based on image recognition
CN103106252A (en) * 2013-01-16 2013-05-15 浙江大学 Method for using handheld device to position plane area

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105188135B (en) * 2015-08-17 2019-06-25 京东方科技集团股份有限公司 Method of locating terminal and system, target terminal and location-server
WO2017028433A1 (en) * 2015-08-17 2017-02-23 京东方科技集团股份有限公司 Terminal positioning method and system, target terminal and positioning server
US10003917B2 (en) 2015-08-17 2018-06-19 Boe Technology Group Co., Ltd. Terminal positioning method and system, target terminal and positioning server
CN105188135A (en) * 2015-08-17 2015-12-23 京东方科技集团股份有限公司 Terminal positioning method and system, target terminal and positioning server
CN105792131A (en) * 2016-04-21 2016-07-20 北京邮电大学 Positioning method and system
CN105792131B (en) * 2016-04-21 2018-11-23 北京邮电大学 A kind of localization method and system
CN107545006A (en) * 2016-06-28 2018-01-05 百度在线网络技术(北京)有限公司 A kind of method, equipment and system for being used to establishing or updating image positional data storehouse
CN106153047A (en) * 2016-08-15 2016-11-23 广东欧珀移动通信有限公司 A kind of indoor orientation method, device and terminal
CN107144857A (en) * 2017-05-17 2017-09-08 深圳市伊特利网络科技有限公司 Assisted location method and system
WO2018228256A1 (en) * 2017-06-12 2018-12-20 炬大科技有限公司 System and method for determining indoor task target location by image recognition mode
CN109086745A (en) * 2018-08-31 2018-12-25 广东工业大学 A kind of localization method, device, equipment and computer readable storage medium
CN109376208A (en) * 2018-09-18 2019-02-22 高枫峻 A kind of localization method based on intelligent terminal, system, storage medium and equipment
CN109766914A (en) * 2018-12-14 2019-05-17 深圳壹账通智能科技有限公司 Item identification method, device, equipment and storage medium based on image recognition
CN109815356A (en) * 2018-12-14 2019-05-28 北京三快在线科技有限公司 Information acquisition method and device
CN109872360A (en) * 2019-01-31 2019-06-11 斑马网络技术有限公司 Localization method and device, storage medium, electric terminal
CN110231039A (en) * 2019-06-27 2019-09-13 维沃移动通信有限公司 A kind of location information modification method and terminal device
CN112640490A (en) * 2020-04-07 2021-04-09 华为技术有限公司 Positioning method, device and system
WO2021203241A1 (en) * 2020-04-07 2021-10-14 华为技术有限公司 Positioning method, apparatus, and system
CN112640490B (en) * 2020-04-07 2022-06-14 华为技术有限公司 Positioning method, device and system


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20200507

Address after: 310052 room 508, floor 5, building 4, No. 699, Wangshang Road, Changhe street, Binjiang District, Hangzhou City, Zhejiang Province

Applicant after: Alibaba (China) Co.,Ltd.

Address before: 1-5, No. 18, Changsheng Road, Science and Technology Park, Changping District, Beijing 100020, China

Applicant before: AUTONAVI SOFTWARE Co.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20150527