CN104661300A - Positioning method, device, system and mobile terminal


Info

Publication number
CN104661300A
Authority
CN
China
Prior art keywords
image
reference object
distance
frame
target object
Prior art date
Legal status: Granted
Application number
CN201310598348.0A
Other languages
Chinese (zh)
Other versions
CN104661300B (en)
Inventor
白耕
王晋高
Current Assignee
Alibaba China Co Ltd
Original Assignee
Autonavi Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Autonavi Software Co Ltd filed Critical Autonavi Software Co Ltd
Priority to CN201310598348.0A
Publication of CN104661300A
Application granted
Publication of CN104661300B
Legal status: Active
Anticipated expiration


Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W64/00 - Locating users or terminals or network equipment for network management purposes, e.g. mobility management
    • H04W64/003 - Locating users or terminals or network equipment for network management purposes, e.g. mobility management locating network equipment
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The embodiment of the invention discloses a positioning method comprising the following steps: acquiring an image of a target object at the current position through a user terminal, where an initial estimate of the current position can be obtained by a conventional positioning method; determining, among the reference object images within the initial position range, a frame of a reference object image that matches the target object; acquiring the size ratio of a zoomed image of the target object to the acquired image of the target object; determining the distance between the user terminal and the target object; and taking the distance between the user terminal and the target object and the acquisition angle of the determined reference object as the positioning result, or determining the position of the user terminal according to the distance between the user terminal and the target object, the coordinates of the determined reference object and the acquisition angle of the determined reference object. The user terminal does not need to be modified and no extra calibration is required, so positioning accuracy is improved while implementation cost is lowered. The invention further discloses a positioning device, a positioning system and a mobile terminal.

Description

Positioning method, device, system and mobile terminal
Technical Field
The present invention relates to the field of positioning technologies, and in particular, to a positioning method, apparatus, system and mobile terminal.
Background
With rapid social and economic development, people increasingly need to determine their own position. Beyond outdoor positioning, indoor self-positioning is required more and more, for example determining one's own position inside large buildings and shopping malls, which enables more user-friendly services such as walking guidance and position sharing.
Traditional indoor positioning techniques mainly rely on WIFI, GPS or base station signals, and their positioning accuracy is low. Current solutions either modify the user's mobile device (for example, embedding a dedicated positioning chip in the handset) or perform extra indoor calibration (for example, installing positioning transmitter stations indoors and positioning by communicating with them through a Bluetooth module or the like), so the implementation cost is high.
Disclosure of Invention
The invention aims to provide a positioning method, a positioning device, a positioning system and a mobile terminal so as to reduce the implementation cost of indoor positioning.
In order to achieve the purpose, the invention provides the following technical scheme:
a method of positioning, comprising:
acquiring an image of a target object at a current position through a user terminal;
acquiring an initial position of the current position;
acquiring information of reference objects within the initial position range from a pre-stored image database; wherein the image database stores information of a plurality of reference objects, the information at least comprising: the coordinates of the reference object, a plurality of frames of images of the reference object, the acquisition angle of each image of the reference object, the focal length of the image acquisition device used when the images of the reference object were acquired in advance, and the distance between the image acquisition device and the reference object;
matching the image of the target object with each frame of image of each reference object in the initial position range, and determining one frame of image of one reference object matched with the target object;
acquiring the size ratio of the zoom image of the target object to the image of the target object;
according to the focal length of the user terminal, the focal length of image acquisition equipment used when the determined image of the reference object is acquired in advance, the distance between the image acquisition equipment and the reference object and the size ratio, the distance between the user terminal and the target object is determined;
taking the distance between the user terminal and the target object and the determined acquisition angle of the reference object as a positioning result; or determining the positioning position of the user terminal according to the distance between the user terminal and the target object, the determined coordinates of the reference object and the determined acquisition angle of the reference object.
A positioning device, comprising:
the image acquisition module is used for acquiring an image of a target object at the current position;
an initial position obtaining module, configured to obtain an initial position of the current position;
a reference object information acquisition module, configured to acquire information of reference objects within the initial position range from a pre-stored image database; wherein the image database stores information of a plurality of reference objects, the information at least comprising: the coordinates of the reference object, a plurality of frames of images of the reference object, the acquisition angle of the reference object, the focal length of the image acquisition device used when the images of the reference object were acquired in advance, and the distance between the image acquisition device and the reference object;
the matching module is used for matching the image of the target object with each frame of image of each reference object in the initial position range and determining one frame of image of one reference object matched with the target object;
a size ratio determination module for obtaining a size ratio of the scaled image of the target object to the acquired image of the target object;
the distance determining module is used for determining the distance between the current position and the target object according to the focal length of the image acquisition module, the focal length of image acquisition equipment used when the image of the determined reference object is acquired in advance, the distance between the image acquisition equipment and the determined reference object and the size ratio;
the positioning result module is used for taking the distance between the current position and the target object and the determined acquisition angle of the reference object as a positioning result; or determining the positioning position of the current position according to the distance between the current position and the target object, the determined coordinates of the reference object and the determined acquisition angle of the reference object.
A mobile terminal comprises any one of the positioning devices provided by the invention.
A positioning system, comprising:
a mobile terminal and a server; wherein,
the mobile terminal includes:
the image acquisition module is used for acquiring an image of a target object at the current position;
an initial position obtaining module, configured to obtain an initial position of the current position;
the first sending module is used for sending the image of the target object and the initial position;
a first receiving module, configured to receive the information of the determined reference object sent by the server, where the information includes: the coordinates of the determined reference object, the image of the determined reference object, the acquisition angle of the determined reference object, the focal length of the image acquisition device used when the image of the determined reference object was acquired in advance, and the distance between the image acquisition device and the determined reference object;
a size ratio determination module for obtaining a size ratio of the scaled image of the target object to the acquired image of the target object;
the distance determining module is used for determining the distance between the mobile terminal and the target object according to the focal length of the first image acquisition module, the focal length of image acquisition equipment used when the determined image of the reference object is acquired in advance, the distance between the image acquisition equipment and the determined reference object and the size ratio;
the positioning result determining module is used for taking the distance between the mobile terminal and the target object and the determined acquisition angle of the reference object as a positioning result; or determining the positioning position of the mobile terminal according to the distance between the mobile terminal and the target object, the determined coordinate of the reference object and the determined acquisition angle of the reference object;
the server includes:
the second receiving module is used for receiving the image and the initial position of the target object sent by the mobile terminal;
a reference object information acquisition module, configured to acquire information of reference objects within the initial position range from a pre-stored image database; wherein the image database stores information of a plurality of reference objects, the information at least comprising: the coordinates of the reference object, a plurality of frames of images of the reference object, the acquisition angle of the reference object, the focal length of the image acquisition device used when the images of the reference object were acquired in advance, and the distance between the image acquisition device and the reference object;
the matching module is used for matching the image of the target object with each frame of image of each reference object in the initial position range and determining one frame of image of one reference object matched with the target object;
a second sending module, configured to send information of the determined reference object, where the information includes: the coordinate of the determined reference object, the image of the determined reference object, the acquisition angle of the determined reference object, the focal length of an image acquisition device used when the image of the determined reference object is acquired in advance, and the distance between the image acquisition device and the determined reference object.
According to the technical scheme, information of reference objects is stored in advance, the information at least comprising: the coordinates of the reference object, a plurality of frames of images of the reference object, the acquisition angle of each image of the reference object, the focal length of the image acquisition device used when the images of the reference object were acquired in advance, and the distance between the image acquisition device and the reference object. An image of a target object at the current position is acquired through a user terminal, an initial position being obtainable by a traditional positioning method; a frame of image of a reference object matching the target object is then determined through image matching; the size ratio of the zoomed image of the target object to the acquired image of the target object is acquired; the distance between the user terminal and the target object is determined according to the focal length of the user terminal, the focal length of the image acquisition device used when the determined image of the reference object was acquired in advance, the distance between the image acquisition device and the determined reference object, and the size ratio; and the distance between the user terminal and the target object and the determined acquisition angle of the reference object are taken as the positioning result, or the positioning position of the user terminal is determined according to the distance between the user terminal and the target object, the determined coordinates of the reference object and the determined acquisition angle of the reference object.
Therefore, the positioning technical scheme provided by the embodiment of the application determines the position of the target object through the position of the reference object and the position of the target object relative to the reference object, so that the mobile terminal does not need to be modified, and additional calibration is not needed.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a flowchart of a positioning method according to an embodiment of the present application;
fig. 2 is a schematic diagram of image acquisition of a reference object according to an embodiment of the present application;
fig. 3 is a schematic diagram of establishing a relative coordinate system according to an embodiment of the present application;
fig. 4 is a flowchart illustrating a specific implementation of matching the image of the target object with each frame of image of each reference object in the initial position range and determining a frame of image of a reference object matched with the target object according to the embodiment of the present application;
FIG. 5-a is a flowchart illustrating an embodiment of obtaining an image distance between an image of the target object and each frame of image of each reference object within the initial position range;
FIG. 5-b is a flow chart of a specific implementation of the embodiment shown in FIG. 5-a provided by an embodiment of the present application;
FIG. 6 is a flowchart illustrating another embodiment of the present disclosure for obtaining image distances between an image of the target object and each frame of images of each reference object within the initial position range;
fig. 7 is a flowchart illustrating a specific implementation of obtaining a distance between each of the first/second image blocks and one frame of image of a reference object according to an embodiment of the present disclosure;
FIG. 8 is a flowchart illustrating an embodiment of the present application for determining a sum of distances as an image distance between an image of the target object and the image of the reference object;
fig. 9 is a flowchart of an implementation of determining a smallest sum of two distances as an image distance between an image of the target object and the frame image of the reference object according to an embodiment of the present application;
fig. 10 is a flowchart of a specific implementation of determining, among the images of the reference objects in the initial position range, one frame of image of one reference object whose first image distance, second image distance and third image distance satisfy the preset image matching condition;
fig. 11 is a flowchart illustrating a specific implementation of obtaining a first color moment feature distance between each first image block and a frame of image of a reference object according to the embodiment of the present application;
fig. 12 is a schematic diagram illustrating a first image block continuously moving within a preset image range corresponding to the first image block on a frame of image of a reference object according to an embodiment of the present application;
fig. 13 is a flowchart illustrating a specific implementation of obtaining a first shape feature distance between each first image block and a frame of image of a reference object according to the embodiment of the present application;
fig. 14 is a flowchart illustrating a specific implementation of obtaining a first texture feature distance between each first image block and a frame of image of a reference object according to the embodiment of the present application;
fig. 15 is a flowchart illustrating a specific implementation of obtaining a size ratio of the scaled image of the target object to the acquired image of the target object according to the embodiment of the present application;
fig. 16 is a schematic structural diagram of a positioning device according to an embodiment of the present disclosure;
fig. 17 is a schematic structural diagram of a matching module according to an embodiment of the present application;
fig. 18 is a schematic structural diagram of an acquisition submodule provided in an embodiment of the present application;
fig. 19 is another schematic structural diagram of an acquisition submodule provided in the embodiment of the present application;
fig. 20 is a schematic structural diagram of a first/second obtaining unit according to an embodiment of the present disclosure;
fig. 21 is a schematic structural diagram of a first determining unit provided in an embodiment of the present application;
fig. 22 is a schematic structural diagram of a second determination unit provided in an embodiment of the present application;
fig. 23 is a schematic structural diagram of a determining submodule provided in an embodiment of the present application;
fig. 24 is a schematic structural diagram of a first obtaining subunit according to an embodiment of the present application;
fig. 25 is a schematic structural diagram of a second obtaining subunit according to an embodiment of the present application;
fig. 26 is a schematic structural diagram of a third obtaining subunit provided in the embodiment of the present application;
FIG. 27 is a schematic diagram of a size ratio determining module according to an embodiment of the present application;
fig. 28 is a schematic structural diagram of a positioning system according to an embodiment of the present application;
fig. 29 is a schematic structural diagram of another positioning system provided in the embodiment of the present application.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances such that embodiments of the application described herein may be practiced otherwise than as specifically illustrated.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a flowchart of a positioning method according to an embodiment of the present application, including:
step S11: acquiring an image of a target object at a current position through a user terminal;
in the present application, the landmark object may be selected as a target object, for example, the target object may be a trademark of a business, a door, a window, a ceiling, a pillar, a character, a pattern, or the like, or a landmark building, or a house number of a community, or the like.
Step S12: acquiring an initial position of the current position;
the initial position of the current position may be obtained by a conventional positioning method, such as positioning using WIFI or GPS or base station signals.
Step S13: acquiring information of reference objects within the initial position range from a pre-stored image database; wherein the image database stores information of a plurality of reference objects, the information at least comprising: the coordinates of the reference object, a plurality of frames of images of the reference object, the acquisition angle of the reference object, the focal length of the image acquisition device used when the images of the reference object were acquired in advance, and the distance between the image acquisition device and the reference object;
in the embodiment of the present application, during the acquisition, a plurality of frame images are acquired for each reference object, specifically, an image of a target object may be acquired in a video mode, and a solid image of the reference object may also be acquired according to an acquisition angle interval, specifically, referring to fig. 2, the method for acquiring an image of the reference object shown in fig. 2 is as follows: the acquisition personnel move on the arc taking the reference object as the center and taking the distance L as the radius, and image acquisition is carried out on the reference object once every theta angle. Fig. 2 only shows a schematic diagram of image acquisition of a reference object, that is, the acquisition angle interval is fixed, and in a specific implementation, the acquisition angle interval may also be unfixed, and similarly, the distance may also not be fixed, and in image acquisition, a certain pitch angle may also be provided, and the pitch angle may also be variable. It should be noted that, since some reference objects may only be partially visible (e.g., the reference objects are at the corners), the image of the reference object is acquired within the visible range of the reference object.
After each image acquisition of a reference object, the focal length, the acquisition angle, and the distance between the image acquisition device and the reference object are also saved. The acquisition angle is determined relative to a predefined 0° direction; for example, due north may be defined as 0°, or due south, and so on. Whichever definition is used, it cannot be changed once set; that is, when images of all reference objects are acquired, the acquisition angle is judged against the same defined 0° reference.
When reference object images are collected, one image may contain several reference objects, for example when multiple reference objects are close together or one reference object is attached to another. When there are multiple reference objects, one of them may be designated the main reference object, and the distance between the image acquisition device and the main reference object is taken as the distance between the image acquisition device and the reference object.
For the coordinates of a reference object, if the reference object is indoors, its relative position may be determined by establishing a relative coordinate system. Fig. 3 is a schematic diagram of establishing a relative coordinate system provided in the embodiment of the present application; the coordinate system may be established as follows:
a relative coordinate system is established in advance for the building, and a geographical position can be selected as an origin, for example, the origin can be selected at a floor entrance of the building (of course, the origin can be freely selected, but cannot be changed after being selected), an X-axis is selected to be perpendicular to an inward direction of the gate, a Y-axis is selected to be perpendicular to the X-axis in a horizontal plane, and a Z-axis is selected to be perpendicular to the horizontal plane.
After the relative coordinate system is established, each reference object can be regarded as a coordinate point, and its X, Y, Z coordinates can be collected in this system, where the Z axis gives the height of the reference object relative to the origin, and X and Y identify a unique position at that height. For example, the Z value may be the height of the reference object above its floor, and the X and Y values are the distances from the projections of the reference object on the X and Y axes to the origin.
The position of an outdoor reference object can be determined by positioning means such as GPS.
Step S14: matching the image of the target object with each frame of image of each reference object in the initial position range, and determining one frame of image of one reference object matched with the target object;
step S15: acquiring the size ratio of the zoom image of the target object to the acquired image of the target object;
the zoom image of the target object may be a zoom image obtained by zooming the image of the target object according to a preset zoom ratio.
Step S16: determining the distance between the user terminal and the target object according to the focal length of the user terminal, the focal length of the image acquisition device used when the determined image of the reference object was acquired in advance, the distance between the image acquisition device and the reference object, and the size ratio;
specifically, the distance between the user terminal and the target object may be determined according to a first formula, where the first formula is:
D=(λ/μ)*L*β (1)
where D is the distance between the user terminal and the target object; λ is the focal length of the user terminal's camera; μ is the focal length with which the image acquisition device acquired the determined image of the reference object; L is the distance between the image acquisition device and the reference object when that image was acquired; and β is the size ratio of the zoomed image of the target object to the acquired image of the target object.
The size of the image may refer to the number of pixels of the image.
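As a minimal sketch of formula (1), assuming both focal lengths are expressed in the same units (the function and parameter names are illustrative, not from the patent):

```python
def distance_to_target(lam, mu, L, beta):
    """Formula (1): D = (lam / mu) * L * beta.

    lam:  focal length of the user terminal's camera
    mu:   focal length used when the reference image was acquired
    L:    distance between the acquisition device and the reference object
    beta: size ratio of the zoomed target image to the acquired target image
    """
    return (lam / mu) * L * beta

# Example: equal focal lengths, reference shot from 10 m, size ratio 0.8
print(distance_to_target(4.0, 4.0, 10.0, 0.8))  # 8.0
```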
Step S17: taking the distance between the user terminal and the target object and the determined acquisition angle of the reference object as a positioning result; or determining the positioning position of the user terminal according to the distance between the user terminal and the target object, the determined coordinates of the reference object and the determined acquisition angle of the reference object.
After the distance between the user terminal and the target object has been determined, this distance and the determined acquisition angle of the reference object can be fed back directly to the user as the positioning result, from which the user can work out the specific position of the user terminal. For example, if the distance between the user terminal and the target object is L and the determined acquisition angle of the reference object is θ, the user may take the target object as the center, the distance L as the radius, and rotate by the angle θ from the predefined 0° reference to determine the positioning position.
Since the position of the reference object is known in advance, after the distance between the user terminal and the target object is determined, the position of the user terminal can be uniquely determined according to the distance between the user terminal and the target object, the determined coordinate of the reference object and the acquisition angle of the reference object. Specifically, if the coordinates of the reference object are (x, y), (x, y) may be the relative coordinates described above, or may be longitude and latitude coordinates determined by the GPS, the distance between the user terminal and the target object is D, and the determined acquisition angle of the reference object is α, then the position (u, v) of the user terminal may be determined according to a second formula:
u = sin α × D + x
v = cos α × D + y    (2)
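A companion sketch of formula (2) under the same caveats; the angle α is measured from the predefined 0° reference direction:

```python
import math

def terminal_position(x, y, D, alpha_deg):
    """Formula (2): u = sin(alpha) * D + x, v = cos(alpha) * D + y.

    (x, y):    coordinates of the determined reference object
    D:         distance between the user terminal and the target object
    alpha_deg: acquisition angle of the determined reference image,
               in degrees relative to the predefined 0 degree direction
    """
    alpha = math.radians(alpha_deg)
    return math.sin(alpha) * D + x, math.cos(alpha) * D + y

# Example: reference object at (100, 50), D = 8, acquisition angle 30 degrees
print(terminal_position(100.0, 50.0, 8.0, 30.0))
```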
In the positioning method provided by the present application, information of reference objects is stored in advance, the information at least comprising: the coordinates of the reference object, a plurality of frames of images of the reference object, the acquisition angle of each image of the reference object, the focal length of the image acquisition device used when the images of the reference object were acquired in advance, and the distance between the image acquisition device and the reference object. An image of a target object at the current position is acquired through a user terminal, an initial position being obtainable by a traditional positioning method; an image of a reference object matching the target object is then determined through image matching; the size ratio of the zoomed image of the target object to the acquired image of the target object is acquired; the distance between the user terminal and the target object is determined according to the focal length of the user terminal, the focal length of the image acquisition device used when the determined image of the reference object was acquired in advance, the distance between the image acquisition device and the reference object, and the size ratio; and the distance between the user terminal and the target object and the determined acquisition angle of the reference object are taken as the positioning result, or the positioning position of the user terminal is determined according to the distance between the user terminal and the target object, the determined coordinates of the reference object and the determined acquisition angle of the reference object.
Therefore, the positioning method provided by the embodiment of the application determines the position of the target object through the position of the reference object and the position of the target object relative to the reference object, so that the mobile terminal does not need to be modified, and additional calibration is not needed.
In the foregoing embodiment, preferably, the specific implementation flow of matching the image of the target object with each frame of image of each reference object in the initial position range and determining one frame of image of one reference object matched with the target object is shown in fig. 4, and may include:
step S41: acquiring the image distance between the image of the target object and each frame of image of each reference object in the initial position range;
specifically, the distance between the image of the target object and the image of each frame of image of each reference object in the initial position range may be obtained by using the image features of the image, for example, the distance between the color moment feature of the image of the target object and the color moment feature of each frame of image of each reference object in the initial position range may be extracted, and the distance between the color moment feature of the image of the target object and the color moment feature of each frame of image of each reference object in the initial position range may be calculated. Of course, the image distance between the image of the target object and each frame of image of each reference object in the initial position range can also be obtained through other image characteristics.
Step S42: and determining a frame of image of a reference object with the image distance meeting the preset image matching condition as a frame of image of a reference object matched with the target object.
For example, when the image distance between the image of the target object and each frame of image of each reference object in the initial position range is obtained through the color moment features, one frame of image of one reference object with the shortest image distance to the image of the target object may be determined as one frame of image of one reference object matched with the target object.
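Step S42 then reduces to an arg-min over all candidate frames. A minimal sketch, assuming a hypothetical image_distance helper is available:

```python
def best_match(target_image, candidate_frames, image_distance):
    """Return the (reference_id, frame) pair whose frame has the shortest
    image distance to the target image -- the rule of step S42 when the
    shortest-distance matching condition is used.

    candidate_frames: iterable of (reference_id, frame) pairs covering
        every frame of every reference object in the initial position range.
    image_distance:   callable scoring two images (hypothetical helper).
    """
    return min(candidate_frames,
               key=lambda rf: image_distance(target_image, rf[1]))
```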
Based on the embodiment shown in fig. 4, a specific implementation flow for acquiring the image distance between the image of the target object and each frame of image of each reference object in the initial position range provided by the embodiment of the present application is shown in fig. 5-a, and may include:
step S51: evenly dividing the image of the target object into a plurality of first image blocks according to a preset first pixel-block size;
for each frame of image of each reference object in the initial position range, the following steps are performed to obtain the image distance between the image of the target object and that frame of image:
step S52: acquiring the distance between each first image block and one frame of image of a reference object;
step S53: and determining the sum of the distances as the image distance between the image of the target object and the frame image of the reference object.
In practical applications, a specific implementation manner of the embodiment shown in fig. 5-a is shown in fig. 5-b, and may include:
step S511: acquiring the distance between a first image block and a frame image of a reference object;
step S521: judging whether the acquired first image block is the last one; if so, entering step S541, otherwise executing step S531;
Step S531: obtaining the distance between the next first image block and the frame image of the reference object, and returning to the step S521;
step S541: determining the sum of the distances as the image distance between the image of the target object and the frame image of the reference object;
Step S551: judging whether the image in the step S541 is the last frame image of the reference object, if so, entering the step S571, and if not, executing the step S561;
step S561: acquiring the distance between a first image block and another frame of image of the reference object, and returning to the step S521;
step S571: and judging whether the reference object is the last reference object in the initial position range, if so, ending the process, otherwise, acquiring the distance between one first image block and one frame image of another reference object, and returning to the step S521.
In the embodiment of the application, the sum of the distances between all the first image blocks and one frame of image of one reference object is determined as the image distance between the image of the target object and the frame of image of the reference object.
Based on the embodiment shown in fig. 4, a specific implementation flow for obtaining the image distance between the image of the target object and each frame of image of each reference object in the initial position range, which is provided by another embodiment of the present application, is shown in fig. 6, and may include:
step S61: evenly dividing the image of the target object into a plurality of first image blocks according to a preset first pixel-block size;
step S62: evenly dividing the image of the target object into a plurality of second image blocks according to a preset second pixel-block size, the second pixel-block size being smaller than the first;
specifically, the second division may reduce the block size by quadtree division: for example, if the image blocks are of size m × m in the first division, the image blocks in the second division may be of size (m/2) × (m/2).
For each frame of image of each reference object in the initial position range, the following steps are carried out to obtain the image distance between the image of the target object and each frame of image of each reference object in the initial position range:
step S63: acquiring the distance between each first image block and one frame of image of a reference object;
step S64: acquiring the distance between each second image block and the frame image of the reference object;
step S65: calculating the two distance sums respectively, and determining the smaller of them as the image distance between the image of the target object and the frame image of the reference object.
In practical applications, the flow shown in fig. 6 can be implemented according to the principles of the foregoing steps S511 to S571; to save space, the details are not repeated here.
For convenience of description, the distance between each first image block and one frame of image of a reference object may be called a first distance, and the distance between each second image block and that frame of image a second distance. In step S65, the sum of the first distances and the sum of the second distances are calculated respectively, and the smaller of the two sums is determined as the image distance between the image of the target object and the frame image of the reference object.
Unlike the embodiment shown in fig. 5-a, in this embodiment the image of the target object is divided twice; an image distance between the target image and the reference image (the sum of the per-block distances) is computed once for each division, and the smaller of the two is determined as the image distance between the image of the target object and the frame image of the reference object.
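A sketch of this two-pass computation (fig. 6), assuming grayscale images stored as 2-D NumPy arrays and a hypothetical block_distance helper that scores one block against a reference frame:

```python
import numpy as np

def grid_blocks(img, block):
    """Evenly divide img into block x block tiles; edge pixels that do
    not fill a whole tile are ignored in this sketch."""
    h, w = img.shape[:2]
    return [img[r:r + block, c:c + block]
            for r in range(0, h - block + 1, block)
            for c in range(0, w - block + 1, block)]

def image_distance_two_pass(target, ref_frame, block_distance, m):
    """Steps S61-S65: divide the target image with block size m, then
    again with the quadtree-reduced size m // 2, sum the per-block
    distances for each division, and keep the smaller sum."""
    first = sum(block_distance(b, ref_frame) for b in grid_blocks(target, m))
    second = sum(block_distance(b, ref_frame) for b in grid_blocks(target, m // 2))
    return min(first, second)
```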
In the foregoing embodiment, preferably, the distance between each first/second image block and one frame of image of a reference object may be obtained from color moment features, shape features and texture features. A specific implementation flow for obtaining the distance between each first/second image block and one frame of image of a reference object, provided in this embodiment of the application, is shown in fig. 7 and may include:
step S71: acquiring a first/second color moment characteristic distance between each first/second image block and one frame of image of a reference object by applying color moment characteristics;
specifically, when the image of the target object is divided only once, the distance between each first image block and one frame of image of one reference object is obtained by applying the color moment characteristic and is recorded as a first color moment characteristic distance;
when the image of the target object is divided twice, the distance between each second image block and the frame image of the reference object can be obtained by applying the color moment features and is recorded as a second color moment feature distance.
Step S72: obtaining a first/second shape feature distance between each first/second image block and the frame image of the reference object by applying shape features;
specifically, when the image of the target object is divided only once, the distance between each first image block and one frame of image of one reference object is obtained by applying shape features and is recorded as a first shape feature distance;
when the image of the target object is divided twice, the distance between each second image block and the frame of image of the reference object can be obtained by applying shape features and recorded as a second shape feature distance.
Step S73: and acquiring a first/second texture feature distance between each first/second image block and the frame image of the reference object by applying texture features.
Specifically, when the image of the target object is divided only once, the distance between each first image block and one frame of image of one reference object is obtained by applying texture features and is recorded as a first texture feature distance;
when the image of the target object is divided twice, the distance between each second image block and the frame image of the reference object can be obtained by applying texture features and is recorded as a second texture feature distance.
It should be noted that the execution sequence of step S71, step S72 and step S73 may not be limited to the sequence defined in the above embodiment, that is, the execution sequence of step S71, step S72 and step S73 may be arbitrarily adjusted, and is not specifically limited herein.
In the present application, "/" denotes "or", and the first/second image block means either the first image block or the second image block.
In the above embodiment, when the image of the target object is divided only once, a specific implementation flow of determining the sum of the distances as the image distance between the image of the target object and the image of the reference object is shown in fig. 8, and may include:
step S81: determining the sum of the first color moment characteristic distances as a first image distance between the image of the target object and the frame image of the reference object;
since each first image block corresponds to one first color moment feature distance, in the embodiment of the present application, a sum of the first color moment feature distances corresponding to all first image blocks of the image of the target object is determined as the first image distance between the image of the target object and the frame of image of the reference object.
Step S82: determining the sum of the first shape feature distances as a second image distance between the image of the target object and the frame image of the reference object;
since each first image block corresponds to a first color moment feature distance, in this embodiment of the present application, a sum of first shape feature distances corresponding to all first image blocks of an image of an object is determined as a second image distance between the image of the object and the frame image of the reference object.
Step S83: determining the sum of the first texture feature distances as a third image distance between the image of the target object and the frame of image of the reference object;
since each first image block corresponds to a first color moment feature distance, in the embodiment of the present application, a sum of first texture feature distances corresponding to all first image blocks of an image of a target object is determined as a third image distance between the image of the target object and the frame of image of the reference object;
in the images of the reference object in the initial position range, the step of determining that one frame of image of one reference object whose image distance satisfies a preset image matching condition is one frame of image of one reference object matched with the target object specifically includes:
and determining one frame of image of one reference object with the first image distance, the second image distance and the third image distance meeting the preset image matching condition as one frame of image of one reference object matched with the target object in the images of the reference objects in the initial position range.
It should be noted that the execution sequence of step S81, step S82 and step S83 may not be limited to the sequence defined in the above embodiment, that is, the execution sequence of step S81, step S82 and step S83 may be arbitrarily adjusted, and is not specifically limited herein.
In the above embodiment, when the image of the target object is divided twice, a specific implementation flow of determining the smaller of the two distance sums as the image distance between the image of the target object and the frame of image of the reference object is shown in fig. 9 and may include:
step S91: determining the smaller of the sum of the first color moment feature distances and the sum of the second color moment feature distances as the first image distance between the image of the target object and the frame image of the reference object;
specifically, when the image of the target object is evenly divided into a plurality of first image blocks according to the preset first pixel-block size, the first color moment feature distance between each first image block and one frame of image of a reference object is obtained using color moment features, and the sum of the first color moment feature distances is calculated;
when the image of the target object is evenly divided into a plurality of second image blocks according to the preset second pixel-block size, smaller than the first, the second color moment feature distance between each second image block and the frame of image is obtained using color moment features, and the sum of the second color moment feature distances is calculated;
the smaller of the sum of the first color moment feature distances and the sum of the second color moment feature distances is determined as the first image distance between the image of the target object and the frame image of the reference object;
step S92: determining the smaller of the sum of the first shape feature distances and the sum of the second shape feature distances as the second image distance between the image of the target object and the frame image of the reference object;
specifically, when the image of the target object is evenly divided into a plurality of first image blocks according to the preset first pixel-block size, the first shape feature distance between each first image block and one frame of image of a reference object is obtained using shape features, and the sum of the first shape feature distances is calculated;
when the image of the target object is evenly divided into a plurality of second image blocks according to the preset second pixel-block size, smaller than the first, the second shape feature distance between each second image block and the frame of image is obtained using shape features, and the sum of the second shape feature distances is calculated;
the smaller of the sum of the first shape feature distances and the sum of the second shape feature distances is determined as the second image distance between the image of the target object and the frame image of the reference object.
Step S93: determining a third image distance between the image of the object and the frame image of the reference object according to the minimum sum of the first texture feature distance and the second texture feature distance;
specifically, when the image of the target object is evenly divided into a plurality of first image blocks according to the preset first pixel-block size, the first texture feature distance between each first image block and one frame of image of a reference object is obtained using texture features, and the sum of the first texture feature distances is calculated;
when the image of the target object is evenly divided into a plurality of second image blocks according to the preset second pixel-block size, smaller than the first, the second texture feature distance between each second image block and the frame of image is obtained using texture features, and the sum of the second texture feature distances is calculated;
the smaller of the sum of the first texture feature distances and the sum of the second texture feature distances is determined as the third image distance between the image of the target object and the frame image of the reference object.
In the images of the reference object in the initial position range, the step of determining that one frame of image of one reference object whose image distance satisfies a preset image matching condition is one frame of image of one reference object matched with the target object specifically includes:
and determining one frame of image of one reference object with the first image distance, the second image distance and the third image distance meeting the preset image matching condition as one frame of image of one reference object matched with the target object in the images of the reference objects in the initial position range.
It should be noted that the execution sequence of step S91, step S92 and step S93 may not be limited to the sequence defined in the above embodiment, that is, the execution sequence of step S91, step S92 and step S93 may be arbitrarily adjusted, and is not specifically limited herein.
In the foregoing embodiment, preferably, to improve matching speed, image matching between the image blocks and the images of the reference objects can be performed in two stages: first, the image blocks are matched against each frame of image of each reference object in the initial position range using color moment features, yielding the reference images successfully matched by color moments; then the image blocks are matched against each of those color-moment-matched reference images using shape features and texture features.
Specifically, a flow for determining, among the images of the reference objects in the initial position range, one frame of image of one reference object whose first image distance, second image distance and third image distance satisfy the preset image matching condition is shown in fig. 10 and may include:
step S101: determining an image of the reference object with a first image distance smaller than a preset first distance threshold value in the images of the reference object in the initial position range;
that is, a subset of the image of the reference object is determined by the color moment features, and only the image of the reference object with successfully matched color moment features is included in the subset of the image of the reference object.
Step S102: and determining one frame image of a reference object of which the weighted sum value of the second image distance and the third image distance is smaller than a preset second distance threshold value from the determined images of the reference object as one frame image of the reference object of which the first image distance, the second image distance and the third image distance meet preset image matching conditions.
In this step, from the subset of the images of the reference object determined by the color moment features, one frame of image of the reference object whose weighted sum value of the second image distance and the third image distance of the image of the reference object is smaller than the preset second distance threshold is determined as one frame of image of the reference object whose first image distance, second image distance and third image distance satisfy the preset image matching condition.
The weight of the second image distance and the weight of the third image distance may be equal, that is, both are 0.5, or may be adjusted according to experience, which is not limited specifically here.
In this embodiment of the application, the relatively simple color moment matching is performed first, so that most reference object images, whose colors differ greatly from the image blocks, are eliminated early; this saves the time of the subsequent, more complex feature extraction and matching.
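A minimal sketch of this two-stage filter (fig. 10), assuming the three per-frame image distances have already been computed; t1 and t2 stand for the preset first and second distance thresholds, and the 0.5/0.5 weights are the example values mentioned above:

```python
def match_two_stage(frames, t1, t2, w_shape=0.5, w_texture=0.5):
    """Step S101: keep frames whose color moment (first) image distance is
    below t1.  Step S102: among those, keep frames whose weighted sum of
    shape (second) and texture (third) image distances is below t2.

    frames: iterable of dicts with keys 'id', 'd_color', 'd_shape', 'd_texture'.
    """
    subset = [f for f in frames if f['d_color'] < t1]   # color moment pre-filter
    return [f for f in subset
            if w_shape * f['d_shape'] + w_texture * f['d_texture'] < t2]
```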
In the foregoing embodiment, preferably, for each first image block, performing the process shown in fig. 11 to obtain the first color moment characteristic distance between each first image block and one frame of image of one reference object may include:
step S111: continuously moving the first image block within its preset image range on one frame of image of a reference object, and, for each position, calculating the distance between the first image block and the image block of the reference object that it covers, using color moment features;
the preset image range of the first image block on one frame image of the reference object refers to an image block and an extended edge area of the image block, which are located in the same area as the first image block in the reference object, when the reference object is divided into a plurality of image blocks in the same division manner, as shown in fig. 12, fig. 12 is a schematic diagram of the first image block continuously moving in the preset image range of the first image block on one frame image of the reference object according to the embodiment of the present application: for convenience of description, in fig. 12, corresponding portions of the image blocks of the image of the object and the image blocks of the image of the reference are labeled with the same numbers, for example, the image block labeled "1" in the image of the reference and the extended edge area of the image block labeled "1" in the image of the reference, that is, the shaded portion labeled "1" in the image of the reference is a preset image range corresponding to the first image block labeled "1" in the image of the object; similarly, the image block marked as "7" in the image of the reference and the extended edge area of the image block marked as "7" in the image of the reference, i.e., the shaded portion marked as "7" in the image of the reference, are the preset image ranges corresponding to the first image block marked as "7" in the image of the target.
Specifically, the color moment features of an image can be characterized by the first, second and third central moments of its color, where:
the first central moment is: $\mu = \frac{1}{n}\sum_i\sum_j p_{ij}$;
the second central moment is: $\sigma = \left[\frac{1}{n}\sum_i\sum_j \left(p_{ij}-\mu\right)^2\right]^{1/2}$;
the third central moment is: $s = \left[\frac{1}{n}\sum_i\sum_j \left(p_{ij}-\mu\right)^3\right]^{1/3}$;
where n is the total number of pixels in the image, and $p_{ij}$ is the synthesized pixel value at the two-dimensional coordinates (i, j) in the image space; it may be a synthesized HSI (hue, saturation, intensity) pixel value, a synthesized YUV pixel value, or a synthesized pixel value in another color space, for example a synthesized RGB pixel value.
Thus, an image can be characterized by its color moment feature vector $(\mu, \sigma, s)$, and the distance between two images (denoted A and B) can be characterized by the distance between their feature vectors, taken here as the sum of the absolute differences of the components; the distance D(A, B) between image A and image B can be expressed as:
$D(A, B) = |\mu_A - \mu_B| + |\sigma_A - \sigma_B| + |s_A - s_B|$
step S112: determining the shortest of these distances as the first color moment feature distance between the first image block and the frame of image of the reference object.
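As a hedged illustration of steps S111 and S112, the following Python sketch computes the color moment vector and the sliding minimum distance; numpy is assumed, and the block-position arguments and margin value are illustrative choices, not specified by this disclosure.

import numpy as np

def color_moments(block):
    """Color moment feature vector (mu, sigma, s) of a single-channel
    2-D block, per the three central-moment formulas above."""
    p = block.astype(np.float64)
    n = p.size
    mu = p.sum() / n
    sigma = (((p - mu) ** 2).sum() / n) ** 0.5
    s = np.cbrt(((p - mu) ** 3).sum() / n)   # cube root keeps the sign
    return np.array([mu, sigma, s])

def moment_distance(a, b):
    """D(A, B) = |mu_A - mu_B| + |sigma_A - sigma_B| + |s_A - s_B|."""
    return float(np.abs(color_moments(a) - color_moments(b)).sum())

def first_color_moment_distance(block, ref_image, top, left, margin=4):
    """Steps S111/S112: slide `block` over its extended search window in
    `ref_image` (the block's own position plus `margin` pixels of border)
    and return the shortest color-moment distance found."""
    h, w = block.shape
    best = float('inf')
    for dy in range(-margin, margin + 1):
        for dx in range(-margin, margin + 1):
            y, x = top + dy, left + dx
            if 0 <= y <= ref_image.shape[0] - h and 0 <= x <= ref_image.shape[1] - w:
                best = min(best, moment_distance(block, ref_image[y:y + h, x:x + w]))
    return best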
In the foregoing embodiment, preferably, for each first image block, the process shown in fig. 13 is performed to obtain the first shape feature distance between that first image block and one frame of image of a reference object, and may include:
step S131: calculating, using shape features, the distance between the first image block and the corresponding image block in the frame of image of the reference object;
Again taking fig. 12 as an example, the image block labeled "6" in the reference object image is the block corresponding to the image block labeled "6" in the target object image.
Specifically, the shape feature of an image can be characterized by the Fourier descriptor of its contour; that is, the distance between the Fourier descriptors of the contours of two images is determined as the distance between the two images.
Step S132: determining this distance as the first shape feature distance between the first image block and the frame of image of the reference object.
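A minimal Python sketch of such a shape distance follows; the contour representation, the number of coefficients kept, and the normalization are assumptions, since the disclosure does not fix them.

import numpy as np

def fourier_descriptor(contour, n_coeffs=16):
    """Magnitude Fourier descriptor of a closed contour given as an
    (N, 2) array of (x, y) boundary points, with N > n_coeffs."""
    z = contour[:, 0] + 1j * contour[:, 1]   # boundary as a complex signal
    coeffs = np.fft.fft(z)
    # Skip the DC term (translation); divide by the first harmonic's
    # magnitude so the descriptor is scale-invariant.
    return np.abs(coeffs[1:n_coeffs + 1]) / (np.abs(coeffs[1]) + 1e-12)

def first_shape_distance(block_contour, ref_block_contour):
    """Distance between the Fourier descriptors of the two contours,
    taken here as the Euclidean norm of their difference."""
    return float(np.linalg.norm(fourier_descriptor(block_contour)
                                - fourier_descriptor(ref_block_contour)))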
In the foregoing embodiment, preferably, for each first image block, the process shown in fig. 14 is performed to obtain the first texture feature distance between that first image block and one frame of image of a reference object, and may include:
step S141: calculating, using texture features, the distance between the first image block and the corresponding image block in the image of the reference object;
Specifically, the texture features of an image may be characterized by its gray-level co-occurrence matrix; that is, the distance between the gray-level co-occurrence matrices of two images is determined as the distance between the two images.
Step S142: determining this distance as the first texture feature distance between the first image block and the frame of image of the reference object.
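A hedged Python sketch of such a texture distance follows; the quantization level, the single pixel offset, and the Frobenius-norm comparison are illustrative choices not fixed by the disclosure.

import numpy as np

def glcm(gray, levels=8, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix of a 2-D uint8 image
    for a single pixel offset (dx, dy), quantized to `levels` levels."""
    q = (gray.astype(np.int64) * levels) // 256
    h, w = q.shape
    src = q[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    dst = q[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
    m = np.zeros((levels, levels), dtype=np.float64)
    np.add.at(m, (src.ravel(), dst.ravel()), 1.0)  # count co-occurrences
    return m / max(m.sum(), 1.0)

def first_texture_distance(block, ref_block):
    """Distance between the co-occurrence matrices of the two blocks,
    taken here as the Frobenius norm of their difference."""
    return float(np.linalg.norm(glcm(block) - glcm(ref_block)))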
In the foregoing embodiment, preferably, the flowchart for obtaining the size ratio between the scaled image of the target object and the acquired image of the target object is shown in fig. 15, and may include:
step S151: zooming the image of the target object according to each preset zoom ratio to obtain a zoomed image of the target object under each zoom ratio;
the scaling ratio may be based on a preset scaling base a (0)<a<1) Determining, in particular, that the scaling ratio is aNFor example, assuming that the original size of the image of the target object is M, the size of the image of the target object after the first scaling is M × a, and the size of the image of the target object after the second scaling is M × a2After the third scaling, the size of the image of the object becomes M a3And so on until reaching the preset zooming times or the size ratio of the zoomed image to the original image reaches the preset value.
Step S152: continuously moving the scaled image of the target object at each zoom ratio over the determined image of the reference object, and calculating the correlation between the scaled image of the target object and the portion of the reference object image it overlaps;
Step S153: determining the ratio of the size of the scaled image with the maximum correlation to the size of the acquired image of the target object as the size ratio of the scaled image of the target object to the acquired image of the target object.
In this embodiment of the application, the scaled image of the target object at the point of maximum correlation is the scaled image used in calculating the size ratio, and the ratio of its size to the size of the acquired image of the target object is determined as the size ratio of the scaled image of the target object to the acquired image of the target object.
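The following Python sketch illustrates steps S151 to S153 under stated assumptions: nearest-neighbour resizing, normalized cross-correlation as the correlation measure, and illustrative base, step-count, and minimum-ratio values; none of these specifics are fixed by the disclosure.

import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized blocks."""
    a = a.astype(np.float64) - a.mean()
    b = b.astype(np.float64) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def best_size_ratio(target, reference, base=0.9, max_steps=10, min_ratio=0.3):
    """Steps S151-S153: scale `target` by base, base**2, ..., slide each
    scaled image over `reference`, and return the size ratio of the
    scale with the highest correlation."""
    best_corr, best_ratio, ratio = -1.0, 1.0, base
    for _ in range(max_steps):
        if ratio < min_ratio:
            break
        h = max(1, int(target.shape[0] * ratio))
        w = max(1, int(target.shape[1] * ratio))
        # Nearest-neighbour resize keeps the sketch dependency-free.
        ys = np.arange(h) * target.shape[0] // h
        xs = np.arange(w) * target.shape[1] // w
        scaled = target[np.ix_(ys, xs)]
        # Exhaustive sliding search for the correlation peak.
        for y in range(reference.shape[0] - h + 1):
            for x in range(reference.shape[1] - w + 1):
                c = ncc(scaled, reference[y:y + h, x:x + w])
                if c > best_corr:
                    best_corr, best_ratio = c, ratio
        ratio *= base
    return best_ratio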
Preferably, to reduce the storage space occupied by the image database, the image features of the reference object images may be extracted and those features stored in the image database instead. The image features include color moment features, texture features and shape features, and may also include other features such as spatial relationship features. In this case, another embodiment of the present invention provides a positioning method that differs from the method shown in fig. 1 in that, before determining a frame of reference object image matching the target object, the method further includes: extracting image features of the image of the target object;
and the step of matching the image of the target object with each frame of image of each reference object in the initial position range and determining one frame of image of a reference object matched with the target object specifically includes: matching the image features of the image of the target object with the image features of each frame of image of each reference object in the initial position range, and determining one frame of image of a reference object matched with the target object. The other embodiments are the same as above, except that processing of images is replaced by processing of their image features; for brevity, refer to the related content in the foregoing, which is not repeated here.
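For illustration only, a per-frame record in such a feature database might look like the following; every field name here is a hypothetical choice, not part of the disclosure.

reference_record = {
    'frame_id': 'ref_001/0',
    'coords': (116.3975, 39.9087),   # reference object coordinates
    'angle_deg': 45.0,               # acquisition angle of this frame
    'focal_mm': 4.2,                 # capture focal length
    'capture_distance_m': 25.0,      # camera-to-reference distance
    'color_moments': [],             # per-block (mu, sigma, s) vectors
    'shape_descriptors': [],         # per-block Fourier descriptors
    'texture_matrices': [],          # per-block co-occurrence matrices
}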
Corresponding to the method embodiment, an embodiment of the present application further provides a positioning apparatus, a schematic structural diagram of the positioning apparatus provided in the embodiment of the present application is shown in fig. 16, the apparatus may be applied to a mobile terminal, and the positioning apparatus may include:
an image acquisition module 161, an initial position acquisition module 162, a reference object information acquisition module 163, a matching module 164, a size ratio determination module 165, a distance determination module 166, and a positioning result determination module 167; wherein,
the image acquisition module 161 is used for acquiring an image of the target object at the current position;
the initial position obtaining module 162 is configured to obtain an initial position of the current position;
the reference object information acquiring module 163 is connected to the initial position acquiring module 162, and is configured to acquire information of reference objects within the initial position range from a pre-stored image database, where the image database stores information of a plurality of reference objects, the information at least comprising: the coordinates of the reference object, images of the reference object, the acquisition angle of the reference object, the focal length of the image acquisition device used when the images of the reference object were acquired in advance, and the distance between that image acquisition device and the reference object;
the matching module 164 is respectively connected to the image acquiring module 161 and the reference object information acquiring module 163, and is configured to match the image of the target object with each frame of image of each reference object in the initial position range, and determine one frame of image of one reference object matched with the target object;
the size ratio determining module 165 is respectively connected to the image acquiring module 161 and the matching module 164, and is configured to obtain a size ratio between the scaled image of the target object and the acquired image of the target object;
the distance determining module 166 is respectively connected to the image acquiring module 161, the reference object information acquiring module 163 and the matching module 164, and is configured to determine, according to the focal length of the image acquiring module, a focal length of an image acquiring device used when the determined image of the reference object is acquired in advance, a distance between the image acquiring device and the determined reference object, and the size ratio, a distance between the current position and the target object;
and the distance between the current position and the target object is the distance between the mobile terminal and the target object.
The positioning result determining module 167 is respectively connected to the reference object information obtaining module 163, the matching module 164 and the distance determining module 166, and is configured to use the distance between the current position and the target object and the determined collection angle of the reference object as a positioning result; or determining the positioning position of the current position according to the distance between the current position and the target object, the determined coordinates of the reference object and the determined acquisition angle of the reference object.
And the positioning position of the current position is the positioning position of the mobile terminal.
The embodiment of the present application provides a positioning apparatus that stores information of reference objects in advance, the information at least including: the coordinates of the reference object, a plurality of frames of images of the reference object, the acquisition angle of those images, the focal length of the image acquisition device used when the images were acquired in advance, and the distance between that device and the reference object. An image of a target object at the current position is acquired through the image acquisition module, an initial position of the current position is acquired by a traditional positioning method, and an image of a reference object matching the target object is then determined by image matching; the size ratio of the scaled image of the target object to the acquired image of the target object is obtained; the distance between the mobile terminal and the target object is determined from the focal length of the image acquisition module, the focal length of the device used when the determined reference object image was acquired in advance, the distance between that device and the reference object, and the size ratio; and this distance together with the determined acquisition angle of the reference object is used as the positioning result, or the position of the mobile terminal is determined from the distance, the determined coordinates of the reference object, and the determined acquisition angle of the reference object.
Therefore, the positioning apparatus provided by this embodiment determines the position of the mobile terminal from the position of the reference object and the terminal's position relative to the reference object, so the mobile terminal does not need to be modified and no additional calibration is required.
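The distance formula itself is not restated in this passage; a minimal sketch consistent with a pinhole-camera model (apparent size proportional to focal length divided by distance, ignoring differences in sensor pixel pitch) would be the following, where both functions and the angle convention are assumptions rather than the patent's exact computation.

import math

def terminal_to_target_distance(f_user, f_ref, ref_capture_distance, size_ratio):
    """Pinhole-model estimate: apparent size scales as focal length over
    distance, and the scaled target image matches the reference image,
    so f_ref / d_ref = size_ratio * f_user / d_user; solving for d_user."""
    return size_ratio * f_user * ref_capture_distance / f_ref

def terminal_position(ref_coords, distance, acquisition_angle_deg):
    """Place the terminal at `distance` from the reference object along
    the stored acquisition angle (measured from east, an assumption)."""
    theta = math.radians(acquisition_angle_deg)
    x, y = ref_coords
    return (x + distance * math.cos(theta), y + distance * math.sin(theta))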
In the embodiment shown in fig. 16, a schematic structural diagram of the matching module 164 provided in the embodiment of the present application is shown in fig. 17, and may include:
an acquisition sub-module 171 and a determination sub-module 172; wherein,
the obtaining sub-module 171 is configured to obtain an image distance between the image of the target object and each frame of image of each reference object in the initial position range;
the determining sub-module 172 is connected to the acquiring sub-module 171, and is configured to determine, from the images of the reference objects in the initial position range, one frame of image of one reference object whose image distance satisfies a preset image matching condition as one frame of image of one reference object matching the target object.
On the basis of the embodiment shown in fig. 17, a schematic structural diagram of the obtaining sub-module 171 provided in the embodiment of the present application is shown in fig. 18, and may include:
the first dividing unit is used for averagely dividing the image of the target object into a plurality of first image blocks according to a preset first size of a pixel block;
the first acquisition unit is used for acquiring the distance between each first image block and one frame of image of one reference object aiming at each frame of image of each reference object in the initial position range;
a first determining unit, configured to determine a sum of the distances as an image distance between the image of the target object and the frame image of the reference object.
On the basis of the embodiment shown in fig. 17, another schematic structural diagram of the obtaining sub-module 171 provided in the embodiment of the present application is shown in fig. 19, and may include:
the first dividing unit is used for averagely dividing the image of the target object into a plurality of first image blocks according to a preset first size of a pixel block;
the second dividing unit is used for averagely dividing the image of the target object into a plurality of second image blocks according to a preset second size of a pixel block, wherein the second size of the pixel block is smaller than the first size of the pixel block;
the first obtaining unit is connected with the first dividing unit and used for obtaining the distance between each first image block and one frame image of one reference object aiming at each frame image of each reference object in the initial position range;
the second obtaining unit is connected with the second dividing unit and used for obtaining the distance between each second image block and each frame image of the reference object aiming at each frame image of each reference object in the initial position range;
the second determining unit is respectively connected with the first acquiring unit and the second acquiring unit and used for respectively calculating the sum of the two distances and determining the smallest sum of the two distances as the image distance between the image of the target object and the frame of image of the reference object.
On the basis of the embodiment shown in fig. 18 or fig. 19, a schematic structural diagram of the first/second obtaining unit is shown in fig. 20 and may include:
a first obtaining subunit 201, configured to obtain, by using the color moment features, first/second color moment feature distances between each of the first/second image blocks and one frame of image of one reference object;
a second obtaining subunit 202, configured to obtain, by using shape features, a first/second shape feature distance between each of the first/second image blocks and the frame image of the reference object;
a third obtaining subunit 203, configured to obtain, by applying texture features, a first/second texture feature distance between each of the first/second image blocks and the frame image of the reference object.
That is, whether in the first acquisition unit or the second acquisition unit, the distance between an image block and the image of the reference object is acquired using three features: the color moment feature, the shape feature, and the texture feature.
On the basis of the embodiment shown in fig. 20, a schematic structural diagram of the first determining unit provided in the embodiment of the present application is shown in fig. 21, and may include:
a first determining subunit 211, configured to determine a sum of the first color moment feature distances as a first image distance between the image of the target object and the frame image of the reference object;
a second determining subunit 212, configured to determine a sum of the first shape feature distances as a second image distance between the image of the target object and the frame image of the reference object;
a third determining subunit 213, configured to determine a sum of the first texture feature distances as a third image distance between the image of the target object and the frame image of the reference object;
the determining sub-module 172 is specifically configured to determine, from the images of the reference object in the initial position range, one frame of image of one reference object whose first image distance, second image distance, and third image distance satisfy a preset image matching condition.
On the basis of the embodiment shown in fig. 20, a schematic structural diagram of a second determining unit provided in the embodiment of the present application is shown in fig. 22, and may include:
a third determining subunit 221, configured to determine the smaller of the sum of the first color moment feature distances and the sum of the second color moment feature distances as the first image distance between the image of the target object and the frame image of the reference object;
a fourth determining subunit 222, configured to determine the smaller of the sum of the first shape feature distances and the sum of the second shape feature distances as the second image distance between the image of the target object and the frame image of the reference object;
a fifth determining subunit 223, configured to determine the smaller of the sum of the first texture feature distances and the sum of the second texture feature distances as the third image distance between the image of the target object and the frame image of the reference object;
the determining sub-module 172 is specifically configured to determine, in the images of the reference objects in the initial position range, that one frame of image of one reference object whose first image distance, second image distance, and third image distance satisfy a preset image matching condition is one frame of image of one reference object that matches the target object.
On the basis of the embodiment shown in fig. 21 or fig. 22, a schematic structural diagram of the determining sub-module 172 provided in the embodiment of the present application is shown in fig. 23, and may include:
a third determining unit 231 configured to determine, among the images of the reference objects in the initial position range, an image of the reference object whose first image distance is smaller than a preset first distance threshold;
a fourth determining unit 232, configured to determine, from the determined images of the reference object, that one frame of image of the reference object whose weighted sum value of the second image distance and the third image distance of the image of the reference object is smaller than the preset second distance threshold is one frame of image of the reference object whose first image distance, second image distance, and third image distance satisfy the preset image matching condition.
On the basis of the embodiment shown in fig. 20, a schematic structural diagram of the first obtaining subunit 201 provided in the embodiment of the present application is shown in fig. 24, and may include:
the first calculating unit 241 is configured to continuously move a first image block within a preset image range corresponding to the first image block on a frame of image of a reference object for each first image block, and calculate, once moving the first image block, a distance between the first image block and an image block of the reference object covered by the first image block by using a color moment feature;
a color moment feature distance determining unit 242, configured to determine, for each first image block, a shortest one of the distances as a first color moment feature distance between the first image block and the frame image of the reference object.
On the basis of the embodiment shown in fig. 20, a schematic structural diagram of the second obtaining subunit 202 provided in the embodiment of the present application is shown in fig. 25, and may include:
a second calculating unit 251, configured to calculate, for each first image block, a distance between the first image block and an image block of the first image block corresponding to the image of the reference object by using shape features;
a shape feature distance determining unit 252, configured to determine, for each first image block, the distance calculated by the second calculating unit 251 as a first shape feature distance between the first image block and the frame image of the reference object.
On the basis of the embodiment shown in fig. 20, a schematic structural diagram of the third obtaining subunit 203 provided in the embodiment of the present application is shown in fig. 26, and may include:
a third calculating unit 261, configured to calculate, for each first image block, a distance between the first image block and an image block of the first image block corresponding to the image of the reference object by using texture features;
a texture feature distance determining unit 262, configured to determine, for each first image block, the distance calculated by the third calculating unit 261 as the first texture feature distance between the first image block and the frame image of the reference object.
In the foregoing embodiment, preferably, a schematic structural diagram of the size ratio determining module 165 provided in this embodiment is shown in fig. 27, and may include:
the scaling submodule 271 is configured to scale the image of the target object according to each preset scaling ratio to obtain a scaled image of the target object at each scaling ratio;
a correlation determination sub-module 272, configured to continuously move the scaled image corresponding to each scaling ratio of the target object over the image of the reference object, and calculate the correlation between the scaled image of the target object and the portion of the determined reference object image it overlaps;
the size ratio determining sub-module 273 is configured to determine the ratio of the size of the scaled image with the maximum correlation to the size of the acquired image of the target object as the size ratio of the scaled image of the target object to the acquired image of the target object.
On the basis of the embodiment shown in fig. 16, in another embodiment of the positioning device provided in the embodiment of the present application, the positioning device may further include:
the feature extraction module is respectively connected with the image acquisition module 161 and the matching module 164 and is used for extracting image features of the image of the target object;
the matching module 164 is specifically configured to match the image features of the image of the target object with the image features of each frame of image of each reference object in the initial position range, and determine one frame of image of one reference object matched with the target object.
The embodiment of the application also provides a mobile terminal which is provided with the positioning device.
The positioning method provided by the present application may also be implemented by a mobile terminal in combination with a server, and a schematic structural diagram of a positioning system provided by the embodiment of the present application is shown in fig. 28, and may include:
mobile terminal 281 and server 282; wherein,
the mobile terminal 281 includes:
an image acquisition module 2811, configured to acquire an image of a target object at a current location;
an initial position obtaining module 2812, configured to obtain an initial position of the current position;
a first sending module 2813, configured to send the target object image and the initial position information;
a first receiving module 2814, configured to receive the information of the determined reference object sent by the server, where the information includes: the coordinates of the determined reference object, the image of the determined reference object, the acquisition angle of the determined reference object, the focal length of the image acquisition device used when the image of the determined reference object was acquired in advance, and the distance between that image acquisition device and the determined reference object;
a size ratio determination module 2815, configured to obtain a size ratio of the scaled image of the target object to the acquired image of the target object;
a distance determining module 2816, configured to determine, according to the focal length of the image capturing module 2811, a focal length of an image capturing device used when capturing an image of the determined reference object in advance, a distance between the image capturing device and the determined reference object, and the size ratio, a distance between the mobile terminal and the target object;
a positioning result determining module 2817, configured to use a distance between the mobile terminal and the target object and the determined acquisition angle of the reference object as a positioning result; or determining the positioning position of the mobile terminal according to the distance between the mobile terminal and the target object, the determined coordinate of the reference object and the determined acquisition angle of the reference object;
the server 282 includes:
a second receiving module 2821, configured to receive the image and the initial position of the target object sent by the mobile terminal;
a reference object information acquiring module 2822, configured to acquire, from a pre-stored image database, information of reference objects within the initial position range, where the image database stores information of a plurality of reference objects, the information at least comprising: the coordinates of the reference object, a plurality of frames of images of the reference object, the acquisition angle of the reference object, the focal length of the image acquisition device used when the images of the reference object were acquired in advance, and the distance between that image acquisition device and the reference object;
a matching module 2823, configured to match the image of the target object with each frame of image of each reference object in the initial position range, and determine a frame of image of one reference object that matches the target object;
a second sending module 2824, configured to send the information of the determined reference object, where the information includes: the coordinates of the determined reference object, the image of the determined reference object, the acquisition angle of the determined reference object, the focal length of the image acquisition device used when the image of the determined reference object was acquired in advance, and the distance between that image acquisition device and the determined reference object.
To reduce the consumption of client data traffic, avoid the cost of unnecessary transmission over limited network bandwidth, and extract image features more accurately, another positioning system is provided on the basis of the embodiment shown in fig. 28; its schematic structural diagram is shown in fig. 29.
The mobile terminal 281 may further include:
a feature extraction module 291, respectively connected to the image acquisition module 2811 and the first sending module 2813, for extracting image features of the image of the target object;
the first sending module 2813 is specifically configured to send the image feature of the image of the target object and the initial position;
the second receiving module 2821 is specifically configured to receive an image feature and an initial position of the image of the target object sent by the mobile terminal;
the matching module 2823 is specifically configured to match the image feature of the target object with the image feature of each frame of image of each reference object in the initial position range, and determine one frame of image of one reference object that matches the target object.
In this embodiment of the application, image features are extracted on the mobile terminal side, which reduces the required network bandwidth while ensuring the accuracy of the extracted image features: image distortion caused by network transmission is avoided, and with it the loss of positioning accuracy that inaccurate feature extraction would cause.
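As a hedged sketch of the message flow between the modules above, the following Python fragment shows one possible request/response shape; the JSON transport, the field names, and the helper callables find_nearby_refs and match_fn are all hypothetical, not specified by the disclosure.

import json

def build_locate_request(features, initial_position):
    """Mobile side (fig. 29 variant): send extracted image features
    plus the coarse initial position instead of the raw image."""
    return json.dumps({'features': features,
                       'initial_position': initial_position})

def handle_locate_request(payload, find_nearby_refs, match_fn):
    """Server side: match the features against reference frames near the
    initial position and reply with the matched frame's stored metadata."""
    req = json.loads(payload)
    candidates = find_nearby_refs(req['initial_position'])
    ref = match_fn(req['features'], candidates)
    return json.dumps({'coords': ref['coords'],
                       'angle_deg': ref['angle_deg'],
                       'focal_mm': ref['focal_mm'],
                       'capture_distance_m': ref['capture_distance_m']})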
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. For the system disclosed by the embodiment, the description is relatively simple because the system corresponds to the method disclosed by the embodiment, and the relevant points can be referred to the method part for description. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (29)

1. A method of positioning, comprising:
acquiring an image of a target object at a current position through a user terminal;
acquiring an initial position of the current position;
acquiring information of reference objects within the initial position range from a pre-stored image database; wherein the image database stores information of a plurality of reference objects, the information at least comprising: the coordinates of the reference object, a plurality of frames of images of the reference object, the acquisition angle of the images of the reference object, the focal length of the image acquisition device used when the images of the reference object were acquired in advance, and the distance between the image acquisition device and the reference object;
matching the image of the target object with each frame of image of each reference object in the initial position range, and determining one frame of image of one reference object matched with the target object;
acquiring a size ratio of the zoom image of the target object and the acquired image of the target object;
according to the focal length of the user terminal, the focal length of image acquisition equipment used when the determined image of the reference object is acquired in advance, the distance between the image acquisition equipment and the reference object and the size ratio, the distance between the user terminal and the target object is determined;
taking the distance between the user terminal and the target object and the determined acquisition angle of the reference object as a positioning result; or determining the positioning position of the user terminal according to the distance between the user terminal and the target object, the determined coordinates of the reference object and the determined acquisition angle of the reference object.
2. The method of claim 1, wherein matching the image of the target object with each frame of images of each reference object within the initial position range, determining a frame of image of a reference object matching the target object comprises:
acquiring the image distance between the image of the target object and each frame of image of each reference object in the initial position range;
and determining one frame of image of one reference object with the image distance meeting the preset image matching condition as one frame of image of one reference object matched with the target object in the images of the reference objects in the initial position range.
3. The method of claim 2, wherein the obtaining the image distance between the image of the target object and each frame of image of each reference object in the initial position range comprises:
averagely dividing the image of the target object into a plurality of first image blocks according to a preset first size of a pixel block;
for each frame of image of each reference object in the initial position range, performing the following steps to obtain the image distance between the image of the target object and each frame of image of each reference object in the initial position range:
acquiring the distance between each first image block and one frame of image of a reference object;
and determining the sum of the distances as the image distance between the image of the target object and the frame image of the reference object.
4. The method of claim 2, wherein the obtaining the image distance between the image of the target object and each frame of image of each reference object in the initial position range comprises:
averagely dividing the image of the target object into a plurality of first image blocks according to a preset first size of a pixel block;
averagely dividing the image of the target object into a plurality of second image blocks according to a preset second size of a pixel block, wherein the second size of the pixel block is smaller than the first size of the pixel block;
for each frame of image of each reference object in the initial position range, the following steps are carried out to obtain the image distance between the image of the target object and each frame of image of each reference object in the initial position range:
acquiring the distance between each first image block and one frame of image of a reference object;
acquiring the distance between each second image block and the frame image of the reference object;
and respectively calculating the sum value of two distances, and determining the smallest sum value of the two distance sum values as the image distance between the image of the target object and the frame image of the reference object.
5. The method of claim 3 or 4, wherein obtaining the distance between each first/second image block and a frame of image of a reference object comprises:
acquiring a first/second color moment characteristic distance between each first/second image block and one frame of image of a reference object by applying color moment characteristics;
obtaining a first/second shape feature distance between each first/second image block and the frame image of the reference object by applying shape features;
and acquiring a first/second texture feature distance between each first/second image block and the frame image of the reference object by applying texture features.
6. The method according to claim 5, wherein determining the sum of the distances as the image distance between the image of the object and the frame of image of the reference object specifically comprises:
determining the sum of the first color moment characteristic distances as a first image distance between the image of the target object and the frame image of the reference object;
determining the sum of the first shape feature distances as a second image distance between the image of the target object and the frame image of the reference object;
determining the sum of the first texture feature distances as a third image distance between the image of the target object and the frame of image of the reference object;
in the images of the reference object in the initial position range, the step of determining that one frame of image of one reference object whose image distance satisfies a preset image matching condition is one frame of image of one reference object matched with the target object specifically includes:
and determining one frame of image of one reference object with the first image distance, the second image distance and the third image distance meeting the preset image matching condition as one frame of image of one reference object matched with the target object in the images of the reference objects in the initial position range.
7. The method of claim 5, wherein determining the smallest sum of the two distance sums as the image distance between the image of the object and the frame image of the reference object comprises:
determining the minimum sum value of the first color moment characteristic distance sum value and the second color moment characteristic distance sum value as a first image distance between the image of the target object and the frame image of the reference object;
determining the smaller of the sum of the first shape feature distances and the sum of the second shape feature distances as a second image distance between the image of the target object and the frame image of the reference object;
determining the smaller of the sum of the first texture feature distances and the sum of the second texture feature distances as a third image distance between the image of the target object and the frame image of the reference object;
in the images of the reference object in the initial position range, the step of determining that one frame of image of one reference object whose image distance satisfies a preset image matching condition is one frame of image of one reference object matched with the target object specifically includes:
and determining one frame of image of one reference object with the first image distance, the second image distance and the third image distance meeting the preset image matching condition as one frame of image of one reference object matched with the target object in the images of the reference objects in the initial position range.
8. The method according to claim 6, wherein the determining, in the images of the reference objects in the initial position range, one frame image of one reference object for which the first image distance, the second image distance, and the third image distance satisfy a preset image matching condition comprises:
determining an image of the reference object with a first image distance smaller than a preset first distance threshold value in the images of the reference object in the initial position range;
and determining one frame image of the reference object of which the weighted sum value of the second image distance and the third image distance of the image of the reference object is smaller than a preset second distance threshold value from the determined images of the reference object.
9. The method of claim 5, wherein the applying the color moment features to obtain the first color moment feature distance between each of the first image blocks and a frame of image of a reference comprises:
aiming at each first image block, the following steps are carried out to obtain a first color moment characteristic distance between each first image block and a frame of image of a reference object:
continuously moving a first image block in a preset image range corresponding to the first image block on one frame of image of the reference object, and calculating the distance between the first image block and the image block of the reference object covered by the first image block by utilizing the color moment characteristics once when the first image block is moved once;
and determining the shortest one of the distances as a first color moment characteristic distance between the first image block and the frame image of the reference object.
10. The method of claim 5, wherein said applying shape features to obtain a first shape feature distance between each of the first image blocks and the frame image of the reference comprises:
executing the following steps for each first image block to obtain a first shape feature distance between each first image block and the frame image of the reference object:
calculating the distance between the first image block and the image block of the first image block corresponding to the frame image of the reference object by using the shape characteristics;
determining the distance as a first shape feature distance between the first image block and the frame image of the reference object.
11. The method of claim 5, wherein said obtaining a first texture distance between each of the first image blocks and the frame of image of the reference by applying texture comprises:
executing the following steps for each first image block to obtain a first texture feature distance between each first image block and the frame image of the reference object:
calculating the distance between a first image block and an image block of the first image block corresponding to the image of the reference object by using texture features;
determining the distance as a first texture feature distance between the first image block and the frame of image of the reference object.
12. The method of any one of claims 1-4, wherein said obtaining a size ratio of the scaled image of the object to the acquired image of the object comprises:
zooming the image of the target object according to each preset zoom ratio to obtain a zoomed image of the target object under each zoom ratio;
continuously moving the scaled image of the target object at each zoom ratio over the image of the reference object, and calculating the correlation between the scaled image of the target object and the portion of the determined reference object image it overlaps;
and determining the ratio of the size of the scaled image with the maximum correlation to the size of the acquired image of the target object as the size ratio of the scaled image of the target object to the acquired image of the target object.
13. The method of claim 1, further comprising:
extracting image features of the image of the target object;
the matching the image of the target object with each frame of image of each reference object in the initial position range, and the determining of one frame of image of one reference object matched with the target object specifically includes:
and matching the image characteristics of the image of the target object with the image characteristics of each frame of image of each reference object in the initial position range, and determining one frame of image of one reference object matched with the target object.
14. A positioning device, comprising:
the image acquisition module is used for acquiring an image of a target object at the current position;
an initial position obtaining module, configured to obtain an initial position of the current position;
a reference object information acquisition module, configured to acquire information of reference objects within the initial position range from a pre-stored image database; wherein the image database stores information of a plurality of reference objects, the information at least comprising: the coordinates of the reference object, a plurality of frames of images of the reference object, the acquisition angle of the reference object, the focal length of the image acquisition device used when the images of the reference object were acquired in advance, and the distance between the image acquisition device and the reference object;
the matching module is used for matching the image of the target object with each frame of image of each reference object in the initial position range and determining one frame of image of one reference object matched with the target object;
a size ratio determination module for obtaining a size ratio of the scaled image of the target object to the acquired image of the target object;
the distance determining module is used for determining the distance between the current position and the target object according to the focal length of the image acquisition module, the focal length of image acquisition equipment used when the image of the determined reference object is acquired in advance, the distance between the image acquisition equipment and the determined reference object and the size ratio;
the positioning result module is used for taking the distance between the current position and the target object and the determined acquisition angle of the reference object as a positioning result; or determining the positioning position of the current position according to the distance between the current position and the target object, the determined coordinates of the reference object and the determined acquisition angle of the reference object.
15. The apparatus of claim 14, wherein the matching module comprises:
the acquisition submodule is used for acquiring the image distance between the image of the target object and each frame of image of each reference object in the initial position range;
and the determining submodule is used for determining one frame of image of one reference object with the image distance meeting the preset image matching condition as one frame of image of one reference object matched with the target object in the images of the reference objects in the initial position range.
16. The apparatus of claim 15, wherein the acquisition sub-module comprises:
the first dividing unit is used for averagely dividing the image of the target object into a plurality of first image blocks according to a preset first size of a pixel block;
the first acquisition unit is used for acquiring the distance between each first image block and one frame of image of one reference object aiming at each frame of image of each reference object in the initial position range;
a first determining unit, configured to determine a sum of the distances as an image distance between the image of the target object and the frame image of the reference object.
17. The apparatus of claim 15, wherein the acquisition sub-module comprises:
the first dividing unit is used for averagely dividing the image of the target object into a plurality of first image blocks according to a preset first size of a pixel block;
the second dividing unit is used for averagely dividing the image of the target object into a plurality of second image blocks according to a preset second size of a pixel block, wherein the second size of the pixel block is smaller than the first size of the pixel block;
the first acquisition unit is used for acquiring the distance between each first image block and one frame of image of one reference object aiming at each frame of image of each reference object in the initial position range;
the second acquisition unit is used for acquiring the distance between each second image block and each frame image of the reference object aiming at each frame image of each reference object in the initial position range;
and the second determining unit is used for respectively calculating the sum of the two distances and determining the smallest sum of the two distances as the image distance between the image of the target object and the frame image of the reference object.
18. The apparatus according to claim 16 or 17, wherein the first/second obtaining unit comprises:
the first acquiring subunit is used for acquiring a first/second color moment characteristic distance between each first/second image block and one frame of image of one reference object by applying the color moment characteristics;
a second obtaining subunit, configured to obtain, by using shape features, a first/second shape feature distance between each of the first/second image blocks and the frame image of the reference object;
a third obtaining subunit, configured to obtain, by applying texture features, a first/second texture feature distance between each of the first/second image blocks and the frame image of the reference object.
19. The apparatus of claim 18, wherein the first determining unit comprises:
a first determining subunit, configured to determine a sum of the first color moment feature distances as a first image distance between the image of the target object and the frame image of the reference object;
a second determining subunit, configured to determine a sum of the first shape feature distances as a second image distance between the image of the target object and the frame image of the reference object;
a third determining subunit, configured to determine a sum of the first texture feature distances as a third image distance between the image of the target object and the frame image of the reference object;
the determining submodule is specifically configured to determine, in the images of the reference object in the initial position range, one frame of image of one reference object for which the first image distance, the second image distance, and the third image distance satisfy a preset image matching condition.
20. The apparatus of claim 18, wherein the second determining unit comprises:
a third determining subunit, configured to determine, as the first image distance between the image of the target object and the frame image of the reference object, a sum of the first color moment feature distances and a sum of the second color moment feature distances, which is the smallest sum;
a fourth determining subunit, configured to determine, as the second image distance between the image of the target object and the frame image of the reference object, a sum value of the first shape feature distances and a sum value of the second shape feature distances, which is the smallest of the sum values;
a fifth determining subunit, configured to determine, as a third image distance between the image of the target object and the frame image of the reference object, a sum value that is the smallest of the sum values of the first texture feature distance and the second texture feature distance;
the determining submodule is specifically configured to determine, in the images of the reference object in the initial position range, that one frame of image of one reference object whose first image distance, second image distance, and third image distance satisfy a preset image matching condition is one frame of image of one reference object that matches the target object.
21. The apparatus of claim 19, wherein the determination submodule comprises:
a third determining unit, configured to determine, from the images of the reference object in the initial position range, an image of the reference object whose first image distance is smaller than a preset first distance threshold;
and the fourth determining unit is used for determining one frame of image of one reference object of which the weighted sum value of the second image distance and the third image distance of the image of the reference object is smaller than a preset second distance threshold value from the determined images of the reference object as one frame of image of one reference object of which the first image distance, the second image distance and the third image distance meet preset image matching conditions.
22. The apparatus of claim 18, wherein the first obtaining subunit comprises:
the first calculating unit is used for continuously moving the first image block in a preset image range corresponding to the first image block on one frame of image of a reference object aiming at each first image block, and calculating the distance between the first image block and the image block of the reference object covered by the first image block by utilizing the color moment characteristics once when the first image block is moved once;
and the color moment characteristic distance determining unit is used for determining the shortest one of the distances as a first color moment characteristic distance between the first image block and the image of the reference object aiming at each first image block.
23. The apparatus of claim 18, wherein the second obtaining subunit comprises:
the second calculating unit is used for calculating the distance between the first image block and the image block of the first image block corresponding to the image of the reference object by utilizing the shape characteristics aiming at each first image block;
and the shape characteristic distance determining unit is used for determining the distance calculated by the second calculating unit as a first shape characteristic distance between the first image block and the frame image of the reference object aiming at each first image block.
24. The apparatus of claim 18, wherein the third obtaining subunit comprises:
the third calculating unit is used for calculating the distance between the first image block and the image block of the first image block corresponding to the image of the reference object by utilizing the texture features aiming at each first image block;
and the texture feature distance determining unit is used for determining the distance calculated by the third calculating unit as a first texture feature distance between the first image block and the frame image of the reference object aiming at each first image block.
25. The apparatus of any one of claims 14-17, wherein the size ratio determination module comprises:
the scaling submodule is used for scaling the image of the target object according to each preset scaling ratio to obtain a scaled image of the target object under each scaling ratio;
a correlation determination sub-module, configured to continuously move the scaled image corresponding to each scaling ratio of the target object on the image of the reference object, and calculate a correlation between the scaled image of the target object and the determined overlapped image of the reference object;
and the size ratio determining submodule is used for determining the ratio of the size of the zoom image with the maximum correlation degree to the size of the acquired image of the target as the size ratio of the zoom image of the target to the acquired image of the target.
26. The apparatus of claim 14, further comprising:
the characteristic extraction module is used for extracting the image characteristics of the image of the target object;
the matching module is specifically configured to match the image features of the image of the target object with the image features of each frame of image of each reference object in the initial position range, and determine one frame of image of one reference object matched with the target object.
27. A mobile terminal, characterized in that it comprises a positioning device according to any one of claims 14-26.
28. A positioning system, comprising:
a mobile terminal and a server; wherein,
the mobile terminal includes:
the image acquisition module is used for acquiring an image of a target object at the current position;
an initial position obtaining module, configured to obtain an initial position of the current position;
the first sending module is used for sending the image of the target object and the initial position;
a first receiving module, configured to receive the information of the determined reference object sent by the server, where the information includes: the coordinates of the determined reference object, the image of the determined reference object, the acquisition angle of the determined reference object, the focal length of the image acquisition device used when the image of the determined reference object was acquired in advance, and the distance between that image acquisition device and the determined reference object;
a size ratio determination module for obtaining a size ratio of the scaled image of the target object to the acquired image of the target object;
the distance determining module is used for determining the distance between the mobile terminal and the target object according to the focal length of the image acquisition module, the focal length of the image acquisition device used when the determined image of the reference object was acquired in advance, the distance between that image acquisition device and the determined reference object, and the size ratio (a worked sketch of this computation follows claim 28 below);
the positioning result determining module is used for taking the distance between the mobile terminal and the target object and the determined acquisition angle of the reference object as the positioning result, or for determining the position of the mobile terminal according to the distance between the mobile terminal and the target object, the determined coordinates of the reference object, and the determined acquisition angle of the reference object;
the server includes:
the second receiving module is used for receiving the image of the target object and the initial position sent by the mobile terminal;
a reference object information acquisition module, configured to acquire information of the reference objects within the initial position range from a pre-stored image database, wherein the image database stores information of a plurality of reference objects, the information at least comprising: the coordinates of the reference object, a plurality of frame images of the reference object, the acquisition angle of the reference object, the focal length of the image acquisition device used when the images of the reference object were acquired in advance, and the distance between the image acquisition device and the reference object;
the matching module is used for matching the image of the target object with each frame image of each reference object within the initial position range, and determining a frame image of a reference object matched with the target object;
a second sending module, configured to send the information of the determined reference object, where the information includes: the coordinates of the determined reference object, the image of the determined reference object, the acquisition angle of the determined reference object, the focal length of the image acquisition device used when the image of the determined reference object was acquired in advance, and the distance between that image acquisition device and the determined reference object.
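Under a pinhole-camera model, the distance computation in claim 28 follows from the fact that an object of physical width W imaged at distance d with focal length f spans w = f·W/d pixels. If scaling the terminal's target image by the size ratio r makes it the same size as in the reference image, then r·f_user/d_user = f_ref/d_ref, hence d_user = r·(f_user/f_ref)·d_ref. A minimal sketch of the distance and position steps (variable names and the bearing convention are assumptions, not taken from the claims):

    import math

    def distance_to_target(f_user: float, f_ref: float,
                           d_ref: float, size_ratio: float) -> float:
        """Terminal-to-target distance from both focal lengths, the stored
        capture distance, and the size ratio of claim 28."""
        # r * (f_user * W / d_user) = f_ref * W / d_ref
        #   =>  d_user = r * (f_user / f_ref) * d_ref
        return size_ratio * (f_user / f_ref) * d_ref

    def terminal_position(ref_x: float, ref_y: float,
                          capture_angle_deg: float, d_user: float):
        """Place the terminal d_user away from the reference object's coordinates,
        opposite the stored acquisition bearing (an assumed convention)."""
        theta = math.radians(capture_angle_deg)
        return ref_x - d_user * math.cos(theta), ref_y - d_user * math.sin(theta)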
29. The system according to claim 28, wherein said mobile terminal further comprises:
the feature extraction module is used for extracting image features of the image of the target object;
the first sending module is specifically configured to send the image features of the image of the target object and the initial position;
the second receiving module is specifically configured to receive the image features of the image of the target object and the initial position sent by the mobile terminal;
the matching module is specifically configured to match the image features of the target object with the image features of each frame image of each reference object within the initial position range, and to determine a frame image of a reference object matched with the target object.
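The difference between claims 28 and 29 lies only in what the terminal uploads. A minimal sketch of the two request payloads, with illustrative field names (the claims specify the information exchanged, not its encoding); sending extracted descriptors instead of the full image shrinks the upload and moves feature extraction onto the terminal:

    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class ImageRequest:                        # claim 28: terminal uploads the raw capture
        image_bytes: bytes
        initial_position: Tuple[float, float]  # from a conventional positioning method

    @dataclass
    class FeatureRequest:                      # claim 29: terminal uploads extracted features only
        descriptors: List[bytes]               # e.g. binary ORB descriptors, one per keypoint
        initial_position: Tuple[float, float]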
CN201310598348.0A 2013-11-22 2013-11-22 Localization method, device, system and mobile terminal Active CN104661300B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310598348.0A CN104661300B (en) 2013-11-22 2013-11-22 Localization method, device, system and mobile terminal

Publications (2)

Publication Number Publication Date
CN104661300A true CN104661300A (en) 2015-05-27
CN104661300B CN104661300B (en) 2018-07-10

Family

ID=53251874

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310598348.0A Active CN104661300B (en) 2013-11-22 2013-11-22 Localization method, device, system and mobile terminal

Country Status (1)

Country Link
CN (1) CN104661300B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101299269A (en) * 2008-06-13 2008-11-05 北京中星微电子有限公司 Method and device for calibration of static scene
US20120127276A1 (en) * 2010-11-22 2012-05-24 Chi-Hung Tsai Image retrieval system and method and computer product thereof
CN102253995A (en) * 2011-07-08 2011-11-23 盛乐信息技术(上海)有限公司 Method and system for realizing image search by using position information
CN103067856A (en) * 2011-10-24 2013-04-24 康佳集团股份有限公司 Geographic position locating method and system based on image recognition
CN103245337A (en) * 2012-02-14 2013-08-14 联想(北京)有限公司 Method for acquiring position of mobile terminal, mobile terminal and position detection system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhang Peng et al.: "Research on Target Positioning Technology Based on Image Matching", Machine Vision *
Yan Jie et al.: "Positioning Analysis Based on Image Matching", Information Transmission and Access Technology *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105890597A (en) * 2016-04-07 2016-08-24 浙江漫思网络科技有限公司 Auxiliary positioning method based on image analysis
CN105890597B (en) * 2016-04-07 2019-01-01 浙江漫思网络科技有限公司 A kind of assisted location method based on image analysis
CN107144857A (en) * 2017-05-17 2017-09-08 深圳市伊特利网络科技有限公司 Assisted location method and system
CN107816983A (en) * 2017-08-28 2018-03-20 深圳市赛亿科技开发有限公司 A kind of shopping guide method and system based on AR glasses
CN111862146A (en) * 2019-04-30 2020-10-30 北京初速度科技有限公司 Target object positioning method and device
CN111862146B (en) * 2019-04-30 2023-08-29 北京魔门塔科技有限公司 Target object positioning method and device
CN110095752A (en) * 2019-05-07 2019-08-06 百度在线网络技术(北京)有限公司 Localization method, device, equipment and medium
CN110645986A (en) * 2019-09-27 2020-01-03 Oppo广东移动通信有限公司 Positioning method and device, terminal and storage medium
WO2021057797A1 (en) * 2019-09-27 2021-04-01 Oppo广东移动通信有限公司 Positioning method and apparatus, terminal and storage medium
CN113389202A (en) * 2021-07-01 2021-09-14 山东省鲁南地质工程勘察院(山东省地勘局第二地质大队) Device and method for preventing aligning deviation of pile foundation engineering reinforcement cage
CN113389202B (en) * 2021-07-01 2022-07-05 山东省鲁南地质工程勘察院(山东省地勘局第二地质大队) Device and method for preventing aligning deviation of pile foundation engineering reinforcement cage
CN115131583A (en) * 2022-06-24 2022-09-30 佛山市天劲新能源科技有限公司 X-Ray detection system and detection method for lithium battery core package structure

Also Published As

Publication number Publication date
CN104661300B (en) 2018-07-10

Similar Documents

Publication Publication Date Title
CN104661300B (en) Localization method, device, system and mobile terminal
CN106793086B (en) Indoor positioning method
CN110856112B (en) Crowd-sourcing perception multi-source information fusion indoor positioning method and system
KR100906974B1 (en) Apparatus and method for reconizing a position using a camera
CN109540144A (en) A kind of indoor orientation method and device
CN111028358B (en) Indoor environment augmented reality display method and device and terminal equipment
CN103761539B (en) Indoor locating method based on environment characteristic objects
CN104936283A (en) Indoor positioning method, server and system
CN109068272B (en) Similar user identification method, device, equipment and readable storage medium
KR101868125B1 (en) Method and server for Correcting GPS Position in downtown environment using street view service
CN104378735A (en) Indoor positioning method, client side and server
CN111782980B (en) Mining method, device, equipment and storage medium for map interest points
Feng et al. Visual Map Construction Using RGB‐D Sensors for Image‐Based Localization in Indoor Environments
CN103491631A (en) Indoor positioning system and method based on two-dimension code and wifi signals
EP3860163B1 (en) Matching location-related information with name information of points of interest
CN112422653A (en) Scene information pushing method, system, storage medium and equipment based on location service
CN113011445A (en) Calibration method, identification method, device and equipment
Jiao et al. A hybrid fusion of wireless signals and RGB image for indoor positioning
CN111652338B (en) Method and device for identifying and positioning based on two-dimensional code
KR100981588B1 (en) A system for generating geographical information of city facilities based on vector transformation which uses magnitude and direction information of feature point
Jiao et al. A hybrid of smartphone camera and basestation wide-area indoor positioning method
CN116363185A (en) Geographic registration method, geographic registration device, electronic equipment and readable storage medium
CN110796706A (en) Visual positioning method and system
CN114513746B (en) Indoor positioning method integrating triple vision matching model and multi-base station regression model
CN109212464B (en) Method and equipment for estimating terminal distance and position planning

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20200514

Address after: Room 508, Floor 5, Building 4, No. 699 Wangshang Road, Changhe Street, Binjiang District, Hangzhou, Zhejiang 310052

Patentee after: Alibaba (China) Co.,Ltd.

Address before: Floors 1-5, No. 18 Changsheng Road, Science and Technology Park, Changping District, Beijing 100020, China

Patentee before: AUTONAVI SOFTWARE Co.,Ltd.