CN113390413B - Positioning method, device, equipment and storage medium - Google Patents
Positioning method, device, equipment and storage medium
- Publication number
- CN113390413B CN113390413B CN202010177560.XA CN202010177560A CN113390413B CN 113390413 B CN113390413 B CN 113390413B CN 202010177560 A CN202010177560 A CN 202010177560A CN 113390413 B CN113390413 B CN 113390413B
- Authority
- CN
- China
- Prior art keywords
- user equipment
- information
- positioning
- determining
- screen
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
- G01C21/005—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
Abstract
The application discloses a positioning method, apparatus, device and storage medium, and relates to the technical field of mobile positioning. The specific implementation is as follows: extracting identification information from an image acquired by user equipment; determining at least two reference interest points associated with the identification information; for each reference interest point, determining direction information of the user equipment according to the screen position information of the reference interest point and the attitude data of the user equipment; and determining the positioning information of the user equipment according to the positioning information of the reference interest points and the direction information. According to the embodiment of the application, the user equipment is assisted in positioning through the identification information, breaking through existing positioning modes. In addition, because the auxiliary positioning of the user equipment is based on the positioning information of the reference interest points, no new hardware needs to be deployed, which reduces the positioning cost. Meanwhile, the direction information of the user equipment is introduced to constrain the positioning information of the reference interest points, improving the positioning accuracy of the user equipment, so that both positioning accuracy and investment cost are taken into account.
Description
Technical Field
The present application relates to computer technologies, specifically to the field of mobile positioning technologies, and in particular to a positioning method, apparatus, device, and storage medium.
Background
With the continuous development of internet technology and the improvement of people's living standards, mobile terminals such as mobile phones and tablets have become increasingly widespread. As the applications installed on mobile terminals multiply and their functions improve, using a mobile terminal to locate the user's current position has gradually become a standard feature.
In the prior art, in order to reduce the positioning cost, an existing positioning facility is usually used to position the user, such as satellite positioning or base station positioning. However, the positioning accuracy of these methods is low, the positioning process is easily affected by factors such as weather, and the positioning result is unstable.
Based on the above, how to take both positioning accuracy and investment cost into account has become an urgent problem to be solved.
Disclosure of Invention
The embodiment of the application provides a positioning method, apparatus, device and storage medium, so as to take both positioning accuracy and investment cost into account.
In a first aspect, an embodiment of the present application provides a positioning method, including:
extracting identification information in an image acquired by user equipment;
determining at least two reference points of interest associated with the identification information;
for each reference interest point, determining direction information of the user equipment according to the screen position information of the reference interest point and the attitude data of the user equipment;
and determining the positioning information of the user equipment according to the positioning information of the reference interest point and the direction information.
In the embodiment of the application, identification information is extracted from an image acquired by user equipment; at least two reference interest points associated with the identification information are determined; for each reference interest point, direction information of the user equipment is determined according to the screen position information of the reference interest point and the attitude data of the user equipment; and the positioning information of the user equipment is determined according to the positioning information of the reference interest points and the direction information. In this technical scheme, the user equipment is assisted in positioning through identification information collected in an image, breaking through existing positioning modes. In addition, the auxiliary positioning based on the positioning information of the reference interest points reuses existing hardware, so no new hardware needs to be deployed and the positioning cost is reduced. Meanwhile, the direction information of the user equipment is introduced to constrain the positioning information of the reference interest points, improving the positioning accuracy of the user equipment, so that both positioning accuracy and investment cost are taken into account.
Optionally, determining the direction information of the user equipment according to the screen position information of the reference interest point and the attitude data of the user equipment includes:
determining a reference direction according to the attitude data of the user equipment;
and determining the direction information of the user equipment relative to the reference direction according to the lens opening angle of the user equipment and the screen position information of the reference interest point.
In an optional implementation manner of the above application, the determination of the direction information is refined into: first determining a reference direction according to the attitude data of the user equipment, and then determining the direction information of the user equipment relative to that reference direction according to the lens opening angle of the user equipment and the screen position information of the reference interest point, which improves the determination mechanism of the direction information.
Optionally, determining a reference direction according to the attitude data of the user equipment includes:
determining the position of a screen center line of the user equipment according to the attitude data of the user equipment;
and taking the direction perpendicular to the screen center line as the reference direction.
In an optional implementation manner of the above application, the screen center line is determined according to the attitude data of the user equipment, and the direction perpendicular to the screen center line is taken as the reference direction, which perfects the determination mechanism of the reference direction.
Optionally, determining, according to the lens opening angle of the user equipment and the screen position information of the reference interest point, direction information of the user equipment relative to the reference direction includes:
and determining, based on a trigonometric relationship, the direction information of the user equipment relative to the reference direction according to the screen length of the user equipment along a first direction, the screen position information of the reference interest point along the first direction, and the lens opening angle of the user equipment.
In an optional implementation manner of the above application, based on a trigonometric relationship, the direction information of the user equipment relative to the reference direction is determined according to the screen length of the user equipment along the first direction, the screen position information of the reference interest point along the first direction, and the lens opening angle of the user equipment, which improves the determination mechanism of the direction information.
Optionally, the first direction is a direction parallel to or perpendicular to the reference direction in a plane where a screen of the user equipment is located.
In an optional implementation manner in the above application, the first direction is refined into a direction parallel or perpendicular to the reference direction in a plane where a screen of the user equipment is located, so that a determination manner of the direction information is enriched.
Optionally, determining at least two reference points of interest associated with the identification information includes:
acquiring initial positioning information of the user equipment;
determining similarity between the identification information and candidate identification information of each candidate interest point in the region to which the initial positioning information belongs;
and determining at least two candidate interest points associated with the identification information as reference interest points according to the similarity.
In an optional implementation manner of the foregoing application, initial positioning information of the user equipment based on satellite or base station positioning is obtained, a candidate interest point is defined through an area to which the initial positioning information belongs, and then a similarity between identification information and identification information of the candidate interest point is determined, a reference interest point is determined based on the similarity, data support is provided for subsequent positioning of the user equipment, and meanwhile, the similarity determines the reference interest point, which lays a foundation for accuracy of positioning information of the user equipment.
Optionally, before determining the direction information of the user equipment according to the screen position information of the at least two reference interest points and the attitude data of the user equipment, the method further includes:
and determining the screen position information of the reference interest points according to the screen position information of at least two reference points in the reference interest points.
In an optional embodiment of the foregoing application, the screen location information of the reference interest point is determined by referring to the screen location information of at least two reference points in the interest points, so that a determination mechanism of the screen location information of the reference interest point is perfected.
Optionally, the identification information includes a text identification and/or a pattern identification.
In an optional implementation manner in the above application, the identification information is refined into the text identifier and/or the pattern identifier, so that the positioning manner when the user equipment is positioned is further enriched.
In a second aspect, an embodiment of the present application further provides a positioning apparatus, including:
the identification information extraction module is used for extracting identification information in an image acquired by user equipment;
a reference interest point determination module for determining at least two reference interest points associated with the identification information;
a direction information determining module, configured to determine, for each reference interest point, direction information of the user equipment according to the screen position information of the reference interest point and the attitude data of the user equipment;
and the user equipment positioning module is used for determining the positioning information of the user equipment according to the positioning information of the reference interest point and the direction information.
In a third aspect, an embodiment of the present application further provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform a positioning method as provided in embodiments of the first aspect.
In a fourth aspect, embodiments of the present application further provide a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform a positioning method provided in the first aspect.
Other effects of the above alternative implementations will be described below in conjunction with the specific embodiments.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
fig. 1 is a flowchart of a positioning method in a first embodiment of the present application;
fig. 2A is a flowchart of a positioning method in the second embodiment of the present application;
fig. 2B is a schematic diagram illustrating a principle of determining direction information in the second embodiment of the present application;
fig. 2C is a schematic diagram of the positioning principle for positioning user equipment in the second embodiment of the present application;
FIG. 3 is a block diagram of a positioning device in an embodiment of the present application;
fig. 4 is a block diagram of an electronic device for implementing the positioning method according to the embodiment of the present application.
Detailed Description
The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments to aid understanding, and these details are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present application. Likewise, descriptions of well-known functions and constructions are omitted below for clarity and conciseness.
Example one
Fig. 1 is a flowchart of a positioning method in a first embodiment of the present application. The embodiment of the application is suitable for positioning user equipment while the user equipment is in use. The method is executed by a positioning apparatus, which is implemented in software and/or hardware and configured in an electronic device; the electronic device may be a terminal device or a server.
A method of positioning as shown in fig. 1, comprising:
s101, extracting identification information in an image acquired by user equipment.
The user equipment may be an acquisition device equipped with a camera for capturing images. Optionally, the user equipment is a smart phone, a camera, a tablet computer, or the like.
The identification information includes a text identifier and/or a pattern identifier. The text identifier may be characters associated with an interest point in the image, such as "Kentucky" or "KFC" on a "Kentucky" store sign. The pattern identifier may be a trademark or a logo, for example, the founder image designed on a "Kentucky" shop signboard.
Optionally, at least one text identifier in the captured image may be extracted using OCR (Optical Character Recognition).
Alternatively, a text identifier library may be constructed in advance, the collected image matched against the reference text identifiers in the text identifier library, and the matched reference text identifier used as the identification information; or a pattern identifier library may be constructed in advance, the acquired image matched against the reference pattern identifiers in the pattern identifier library, and the matched reference pattern identifier used as the identification information.
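As an illustrative sketch (not part of the patent text), matching an OCR-extracted text identifier against a pre-built identifier library could look like the following; the function names, the string-similarity measure, and the 0.8 threshold are assumptions for illustration:

```python
from difflib import SequenceMatcher

def match_identifier(ocr_text, identifier_library, threshold=0.8):
    """Return the best-matching reference identifier, or None below threshold."""
    best_ref, best_score = None, 0.0
    for ref in identifier_library:
        # Simple character-level similarity; a production system might
        # use embeddings or a dedicated fuzzy-matching index instead.
        score = SequenceMatcher(None, ocr_text.lower(), ref.lower()).ratio()
        if score > best_score:
            best_ref, best_score = ref, score
    return best_ref if best_score >= threshold else None

library = ["KFC", "Starbucks", "McDonald's"]
print(match_identifier("KFC", library))  # → KFC
```

The matched reference identifier would then serve as the identification information for the subsequent steps.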
S102, determining at least two reference interest points associated with the identification information.
The interest point may be a house, a shop, a mailbox, or a bus station.
Illustratively, initial positioning information of the user equipment is obtained; determining similarity between the identification information and candidate identification information of each candidate interest point in the region to which the initial positioning information belongs; and determining at least two candidate interest points associated with the identification information as reference interest points according to the similarity.
Optionally, the obtained initial positioning information of the user equipment may be positioning information obtained by positioning based on existing positioning equipment such as a satellite or a base station.
It can be understood that delimiting the candidate interest points associated with the identification information by the initial positioning information avoids the situation in which identical interest points distributed in different areas are confused with one another, which would ultimately make the positioning result inaccurate.
It should be noted that, by determining the similarity between the identification information and each candidate interest point, at least two candidate interest points with a higher similarity to the identification information, that is, a higher matching degree, can be screened out from the candidate interest points as the reference interest points.
Exemplarily, determining at least two candidate interest points associated with the identification information as the reference interest points according to the similarity may be: ranking the candidate interest points according to the similarity and selecting a set number of top-ranked candidate interest points as the reference interest points; and/or screening out the candidate interest points whose similarity is greater than a set similarity threshold as the reference interest points.
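The screening step can be sketched as follows — ranking candidates by similarity, then taking the top-ranked ones and/or those above a threshold; the function name, data layout, and the concrete threshold and top-k values are illustrative assumptions:

```python
def select_reference_pois(candidates, threshold=0.6, top_k=2):
    """Select reference interest points from (name, similarity) pairs by
    ranking (top_k) and/or thresholding, as described in the embodiment."""
    ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
    picked = ranked[:top_k] + [c for c in ranked if c[1] > threshold]
    # Deduplicate while preserving rank order.
    seen, refs = set(), []
    for name, sim in picked:
        if name not in seen:
            seen.add(name)
            refs.append((name, sim))
    return refs

refs = select_reference_pois([("POI-A", 0.92), ("POI-B", 0.71), ("POI-C", 0.35)])
print([name for name, _ in refs])  # → ['POI-A', 'POI-B']
```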
S103, for each reference interest point, determining direction information of the user equipment according to the screen position information of the reference interest point and the attitude data of the user equipment.
The screen position information can be understood as the relative coordinates, with respect to the screen of the user equipment, of the reference interest point in an image acquired of that reference interest point in advance.
For example, the screen position information of any one of the reference points of interest may be used as the screen position information of the reference point of interest.
In order to improve the accuracy of the determined reference interest points and lay a foundation for the accuracy of the positioning result of the subsequent user equipment, for example, the screen position information of the reference interest points can be determined according to the screen position information of at least two reference points in the reference interest points.
Optionally, the screen position information of at least two reference points included in the reference interest point is averaged, and the average result is used as the screen position information of the reference interest point.
Specifically, if (W1, H1), (W2, H2), (W3, H3) and (W4, H4) are the screen position information of four reference points in the reference interest point, then ((W1 + W2 + W3 + W4)/4, (H1 + H2 + H3 + H4)/4) is taken as the screen position information of the reference interest point.
Alternatively, the maximum and minimum screen positions along each screen reference direction may be identified from the screen position information of the at least two reference points included in the reference interest point, the screen position along each screen reference direction determined as the average of that direction's maximum and minimum screen positions, and the screen positions of the screen reference directions combined to obtain the screen position information of the reference interest point.
Specifically, if (W1, H1), (W2, H2), (W3, H3) and (W4, H4) are the screen position information of four reference points in the reference interest point, the maximum screen positions in the W and H directions are determined as Wmax = max{W1, W2, W3, W4} and Hmax = max{H1, H2, H3, H4}, and the minimum screen positions as Wmin = min{W1, W2, W3, W4} and Hmin = min{H1, H2, H3, H4}; ((Wmax + Wmin)/2, (Hmax + Hmin)/2) is then taken as the screen position information of the reference interest point.
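Both strategies above can be sketched in a few lines; this is an illustrative reading, with the four-point rectangle reused from the numeric example:

```python
def centroid(points):
    """Average the reference points' screen positions (first strategy)."""
    ws, hs = zip(*points)
    return (sum(ws) / len(ws), sum(hs) / len(hs))

def bbox_center(points):
    """Midpoint of the min/max screen positions per direction (second strategy)."""
    ws, hs = zip(*points)
    return ((max(ws) + min(ws)) / 2, (max(hs) + min(hs)) / 2)

pts = [(10, 20), (30, 20), (10, 60), (30, 60)]
print(centroid(pts), bbox_center(pts))  # → (20.0, 40.0) (20.0, 40.0)
```

For axis-aligned rectangles the two strategies agree; they differ when the reference points are unevenly distributed.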
It should be noted that the reference points and their screen position information may be determined and recognized automatically during image acquisition of the reference interest point.
The attitude data of the user equipment is used to characterize the acquisition orientation of the user equipment when it captures the image.
For example, the acquisition direction, pitch angle, roll angle, and yaw angle of the user equipment may be acquired, and the attitude data of the user equipment determined from them.
Illustratively, according to the screen position information of the reference interest point and the posture data of the user equipment, the angular offset between the user equipment and the reference interest point when image acquisition is carried out can be determined, and the angular offset is used as the direction information of the user equipment relative to the reference interest point.
S104, determining the positioning information of the user equipment according to the positioning information of the reference interest point and the direction information.
Generally, when the user equipment is positioned, the positioning result at least comprises two dimensions, so that the positioning information of the user equipment in one dimension can be determined according to the positioning information and the direction information of the reference interest point; and combining the positioning information under each dimension to obtain the positioning information of the user equipment.
The positioning information of the reference interest point may be obtained based on positioning by existing positioning equipment, for example, by using satellite positioning or base station positioning, and the positioning information of the reference interest point is an absolute positioning coordinate of the reference interest point in a world coordinate system.
In the embodiment of the application, identification information is extracted from an image acquired by user equipment; at least two reference interest points associated with the identification information are determined; for each reference interest point, direction information of the user equipment is determined according to the screen position information of the reference interest point and the attitude data of the user equipment; and the positioning information of the user equipment is determined according to the positioning information of the reference interest points and the direction information. In this technical scheme, the user equipment is assisted in positioning through identification information collected in an image, breaking through existing positioning modes. In addition, the auxiliary positioning based on the positioning information of the reference interest points reuses existing hardware, so no new hardware needs to be deployed and the positioning cost is reduced. Meanwhile, the direction information of the user equipment is introduced to constrain the positioning information of the reference interest points, improving the positioning accuracy of the user equipment, so that both positioning accuracy and investment cost are taken into account.
Example two
Fig. 2A is a flowchart of a positioning method in the second embodiment of the present application, and the second embodiment of the present application performs optimization and improvement on the basis of the technical solutions of the foregoing embodiments.
Further, the operation of determining the direction information of the user equipment according to the screen position information of the reference interest point and the attitude data of the user equipment is refined into: determining a reference direction according to the attitude data of the user equipment; and determining the direction information of the user equipment relative to the reference direction according to the lens opening angle of the user equipment and the screen position information of the reference interest point, so as to perfect the determination mechanism of the direction information.
A positioning method as shown in fig. 2A, comprising:
s201, extracting identification information in an image collected by user equipment.
S202, determining at least two reference interest points associated with the identification information.
Illustratively, initial positioning information of the user equipment is obtained; determining similarity between the identification information and candidate identification information of each candidate interest point in the region to which the initial positioning information belongs; and determining at least two candidate interest points associated with the identification information as reference interest points according to the similarity.
For example, image acquisition may be performed in advance for each candidate interest point, so as to obtain a candidate image, and screen position information of candidate identification information in the candidate interest point in the candidate image and location information of the candidate interest point may be determined.
In order to ensure the accuracy of the positioning result of the user equipment, the same screen shooting state is kept when the candidate images are collected for the candidate interest points, the screen shooting state being either landscape shooting or portrait shooting. Accordingly, when the user equipment is being positioned and is used for image acquisition, the same screen shooting state as that of the candidate interest points needs to be maintained.
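The screen-shooting-state consistency check described above amounts to comparing orientations; a trivial illustrative sketch (function names and the pixel-size convention are assumptions):

```python
def shooting_state(width_px, height_px):
    """Classify an image's screen shooting state from its pixel dimensions."""
    return "landscape" if width_px > height_px else "portrait"

def states_match(candidate_size, capture_size):
    """Candidate images and the live capture must share the same state."""
    return shooting_state(*candidate_size) == shooting_state(*capture_size)

print(states_match((1920, 1080), (1280, 720)))  # → True
```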
S203, aiming at each reference interest point, determining a reference direction according to the attitude data of the user equipment.
In an optional implementation manner of the embodiment of the present application, determining the reference direction according to the attitude data of the user equipment may be: determining the position of a screen center line of the user equipment according to the attitude data of the user equipment; and taking the direction perpendicular to the screen center line as the reference direction.
Referring to the schematic diagram of the principle of determining direction information shown in fig. 2B, for the reference interest point A, a screen center line 21 is determined according to the attitude data of the user equipment 20, and the direction perpendicular to the screen center line is taken as the reference direction 22.
S204, determining direction information of the user equipment relative to the reference direction according to the lens opening angle of the user equipment and the screen position information of the reference interest point.
In an optional implementation manner of the embodiment of the present application, based on a trigonometric relationship, the direction information of the user equipment relative to the reference direction is determined according to the screen length of the user equipment along the first direction, the screen position information of the reference interest point along the first direction, and the lens opening angle of the user equipment.
The first direction is a direction parallel to or perpendicular to the reference direction in a plane where a screen of the user equipment is located.
Specifically, referring to fig. 2B, when the first direction 23 is a direction parallel to the reference direction, the direction information of the user equipment is determined according to the following formula, using the property that the right triangle ADB and the right triangle CDB share the common leg BD:
where c is a right angle, r is the lens opening angle, sw is the screen width of the user equipment, w_i is the width in the screen position information of the i-th reference interest point, and a_i is the direction information.
Specifically, when the first direction is a direction perpendicular to the reference direction, the direction information of the user equipment is determined according to the following formula:
where c is a right angle, r is the lens opening angle, sh is the screen height of the user equipment, w_i is the height in the screen position information of the i-th reference interest point, and a_i is the direction information.
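The formulas referenced above appear as images in the original publication and are not reproduced in this text. The underlying geometry can nevertheless be sketched: point B is the field-angle aggregation point, BD is the common leg of the two right triangles, and the half-opening-angle triangle gives the distance from B to the screen plane. The function below is an illustrative reconstruction under these assumptions, not the patent's exact formula.

```python
import math

def direction_from_screen_position(r_deg, sw, w_i):
    """Estimate the direction a_i of a reference interest point relative to
    the reference direction, from the lens opening angle and the point's
    horizontal screen position.

    Hypothetical reconstruction of the geometry around fig. 2B; the
    patent's own formula is not reproduced in the text.

    r_deg : lens opening angle r, in degrees
    sw    : screen width of the user equipment, in pixels
    w_i   : width coordinate of the i-th reference interest point, in pixels
    """
    c = 90.0  # c is a right angle
    # Distance BD from the aggregation point B to the screen plane, from
    # the half-opening-angle triangle: tan(r/2) = (sw/2) / BD.
    bd = (sw / 2) / math.tan(math.radians(r_deg / 2))
    # Offset of the point from the screen center line.
    offset = w_i - sw / 2
    # Angle between the line of sight to the point and the reference
    # direction: the right angle minus the deflection from the axis.
    a_i = c - math.degrees(math.atan2(offset, bd))
    return a_i

# A point on the screen center line lies on the optical axis: a_i = 90 degrees.
print(direction_from_screen_position(60.0, 1080, 540))  # prints 90.0
```

A point at the screen edge (w_i = sw) yields a_i = c − r/2, consistent with the edge of the field of view.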
It should be noted that point B in fig. 2B is the field-angle aggregation point, and this aggregation point is used as the actual positioning point of the user equipment, so that when positioning the user equipment, the matching degree between the positioning result and the actual position of the user can be improved.
It can be appreciated that, since the screen position information of different reference interest points is different, the direction information of the user equipment determined from different reference interest points is also different.
S205, determining the positioning information of the user equipment according to the positioning information of the reference interest point and the direction information.
See fig. 2C for a schematic diagram of the positioning principle used for positioning the user equipment.
Specifically, the positioning information of the user equipment is determined according to the following formula:
where (X, Y) is the positioning information of the user equipment, (x_i, y_i) is the positioning information of the i-th reference interest point POI_i, and a_i is the direction information.
The value range of i is determined by the dimensionality of the positioning information: if the positioning information of the user equipment is two-dimensional data, i = 1, 2; if the positioning information of the user equipment is three-dimensional data, i = 1, 2, 3.
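The positioning formula itself is likewise only an image in the original publication. One standard way to combine the positioning information (x_i, y_i) of two reference interest points with the direction information a_i is to intersect the two bearing lines through the points; the sketch below is a hypothetical reconstruction under that assumption, not the patent's exact formula.

```python
import math

def locate_from_two_pois(p1, a1_deg, p2, a2_deg):
    """Intersect two direction lines through reference interest points.

    Each line passes through (x_i, y_i) with direction angle a_i measured
    from the x-axis. Intersecting the two bearing lines is an assumed
    reading of combining (x_i, y_i) with a_i, shown for illustration.
    """
    x1, y1 = p1
    x2, y2 = p2
    t1 = math.tan(math.radians(a1_deg))
    t2 = math.tan(math.radians(a2_deg))
    # Solve y - y1 = t1 * (x - x1) and y - y2 = t2 * (x - x2).
    x = (y2 - y1 + t1 * x1 - t2 * x2) / (t1 - t2)
    y = y1 + t1 * (x - x1)
    return x, y

# Two hypothetical reference interest points with bearings 45 and 135
# degrees intersect at approximately (5, 5).
print(locate_from_two_pois((0.0, 0.0), 45.0, (10.0, 0.0), 135.0))
```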
It should be noted that, because the accuracy achievable from the positioning information of the reference interest points alone is limited, that information is corrected by the direction information of the user equipment so as to determine the positioning information of the user equipment.
It can be understood that, in order to reduce the influence of random errors, when positioning the user equipment, the positioning information of the user equipment can be determined separately from the positioning information and direction information of a plurality of reference interest points, and the final positioning result of the user equipment is obtained by averaging the resulting plurality of positioning estimates.
In the embodiment of the present application, the determination of the direction information of the user equipment is carried out for each reference interest point: the reference direction is determined according to the attitude data of the user equipment, and the direction information of the user equipment relative to the reference direction is then determined according to the lens opening angle of the user equipment and the screen position information of the reference interest point. This technical scheme completes the mechanism for determining the direction information and provides data support for the subsequent positioning of the user equipment.
EXAMPLE III
Fig. 3 is a structural diagram of a positioning apparatus in an embodiment of the present application. The embodiment of the application is suitable for positioning user equipment while the user equipment is in use. The method is executed by a positioning apparatus, which is implemented in software and/or hardware and is configured in an electronic device; the electronic device can be a terminal device or a server.
A positioning device 300, as shown in fig. 3, comprising: an identification information extraction module 301, a reference point of interest determination module 302, a direction information determination module 303 and a user equipment location module 304.
Wherein,
an identification information extraction module 301, configured to extract identification information in an image acquired by a user equipment;
a reference point of interest determination module 302, configured to determine at least two reference points of interest associated with the identification information;
a direction information determining module 303, configured to determine direction information of the user equipment according to the screen position information of the at least two reference interest points and the attitude data of the user equipment;
a user equipment location module 304, configured to determine location information of the user equipment according to the location information of the reference interest point and the direction information.
In the embodiment of the application, the identification information extraction module extracts the identification information in the image acquired by the user equipment; the reference interest point determination module determines at least two reference interest points associated with the identification information; the direction information determination module determines, for each reference interest point, the direction information of the user equipment according to the screen position information of the reference interest point and the attitude data of the user equipment; and the user equipment positioning module determines the positioning information of the user equipment according to the positioning information and the direction information of the reference interest points. In this technical scheme, auxiliary positioning of the user equipment is carried out by means of the identification information extracted from the acquired image, breaking through existing positioning modes. In addition, the auxiliary positioning of the user equipment is based on the positioning information of the reference interest points, which reuses existing hardware equipment, requires no new hardware deployment, and thereby reduces the positioning cost. Meanwhile, the direction information of the user equipment is introduced to constrain the positioning information of the reference interest points, so that the positioning precision of the user equipment is improved, and both positioning precision and investment cost are taken into account.
Further, the direction information determining module 303 is specifically configured to:
determining a reference direction according to the attitude data of the user equipment;
and determining the direction information of the user equipment relative to the reference direction according to the lens opening angle of the user equipment and the screen position information of the reference interest point.
Further, the direction information determining module 303, when determining the reference direction according to the attitude data of the user equipment, is specifically configured to:
determining the position of a screen center line of the user equipment according to the attitude data of the user equipment;
and taking the direction perpendicular to the central line of the screen as the reference direction.
Further, the direction information determining module 303 is specifically configured to:
and determining, based on the trigonometric relationship, direction information of the user equipment relative to the reference direction according to the screen length of the user equipment along a first direction, the screen position information of the reference interest point along the first direction, and the lens opening angle of the user equipment.
Further, the first direction is a direction parallel or perpendicular to the reference direction in a plane of a screen of the user equipment.
Further, the reference point of interest determination module 302 is specifically configured to:
acquiring initial positioning information of the user equipment;
determining similarity between the identification information and candidate identification information of each candidate interest point in the region to which the initial positioning information belongs;
and determining at least two candidate interest points associated with the identification information as reference interest points according to the similarity.
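The similarity measure between the extracted identification information and each candidate interest point's identification information is not specified in this text; a plain string-similarity ratio such as Python's difflib SequenceMatcher is used below purely as an illustrative stand-in, with hypothetical candidate names.

```python
from difflib import SequenceMatcher

def pick_reference_pois(identification, candidates, top_k=2):
    """Rank the candidate interest points in the region of the initial
    positioning information by their similarity to the extracted
    identification text, and keep the top_k as reference interest points.

    SequenceMatcher.ratio() is an illustrative stand-in; the patent does
    not name a similarity measure.
    """
    scored = sorted(
        candidates,
        key=lambda c: SequenceMatcher(None, identification, c["name"]).ratio(),
        reverse=True,
    )
    return scored[:top_k]

# Hypothetical candidate interest points near the initial position.
candidates = [
    {"name": "Starlight Cafe"},
    {"name": "Starlight Cafeteria"},
    {"name": "City Museum"},
]
refs = pick_reference_pois("Starlight Cafe", candidates)
print([c["name"] for c in refs])  # → ['Starlight Cafe', 'Starlight Cafeteria']
```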
Further, the apparatus further comprises a screen position information determining module configured to:
before determining the direction information of the user equipment according to the screen position information of the at least two reference interest points and the attitude data of the user equipment, determine the screen position information of each reference interest point according to the screen position information of at least two reference points in that reference interest point.
Further, the identification information includes a text identification and/or a pattern identification.
The positioning apparatus can execute the positioning method provided by any embodiment of the application, and has the corresponding functional modules and beneficial effects of executing the method.
Example four
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
Fig. 4 is a block diagram of an electronic device implementing the positioning method according to the embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the application described and/or claimed herein.
As shown in fig. 4, the electronic apparatus includes: one or more processors 401, memory 402, and interfaces for connecting the various components, including high-speed interfaces and low-speed interfaces. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple electronic devices may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). In fig. 4, one processor 401 is taken as an example.
The memory 402, which is a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the positioning method in the embodiment of the present application (for example, the identification information extraction module 301, the reference interest point determination module 302, the direction information determination module 303, and the user equipment positioning module 304 shown in fig. 3). The processor 401 executes various functional applications of the server and data processing by executing non-transitory software programs, instructions and modules stored in the memory 402, so as to implement the positioning method in the above-described method embodiment.
The memory 402 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created by use of the electronic device implementing the positioning method, and the like. Further, the memory 402 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 402 may optionally include memory located remotely from the processor 401, which may be connected via a network to an electronic device implementing the positioning method. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device implementing the positioning method may further include: an input device 403 and an output device 404. The processor 401, the memory 402, the input device 403 and the output device 404 may be connected by a bus or other means, and fig. 4 illustrates an example of a connection by a bus.
The input device 403 may receive input numeric or character information and generate key signal inputs related to user settings and function control of an electronic apparatus implementing the positioning method, such as a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a track ball, a joystick, or other input devices. The output devices 404 may include a display device, auxiliary lighting devices (e.g., LEDs), and haptic feedback devices (e.g., vibrating motors), among others. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, ASICs (application-specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), the Internet, and blockchain networks.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
The embodiment of the application extracts identification information in an image acquired by user equipment; determines at least two reference interest points associated with the identification information; determines, for each reference interest point, direction information of the user equipment according to the screen position information of the reference interest point and the attitude data of the user equipment; and determines the positioning information of the user equipment according to the positioning information and the direction information of the reference interest points. In this technical scheme, auxiliary positioning of the user equipment is carried out by means of the identification information extracted from the acquired image, breaking through existing positioning modes. In addition, the auxiliary positioning of the user equipment is based on the positioning information of the reference interest points, which reuses existing hardware equipment, requires no new hardware deployment, and thereby reduces the positioning cost. Meanwhile, the direction information of the user equipment is introduced to constrain the positioning information of the reference interest points, so that the positioning precision of the user equipment is improved, and both positioning precision and investment cost are taken into account. It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present application can be achieved, and no limitation is imposed herein.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.
Claims (9)
1. A method of positioning, comprising:
extracting identification information in an image acquired by user equipment;
determining at least two reference interest points associated with the identification information;
for each reference interest point, determining direction information of the user equipment according to the screen position information of the reference interest point and the attitude data of the user equipment;
determining the positioning information of the user equipment according to the positioning information of the reference interest point and the direction information;
determining direction information of the user equipment according to the screen position information of the reference interest point and the attitude data of the user equipment, wherein the determining comprises the following steps:
determining a reference direction according to the attitude data of the user equipment;
and determining, based on the trigonometric relationship, direction information of the user equipment relative to the reference direction according to the screen length of the user equipment along a first direction, the screen position information of the reference interest point along the first direction, and the lens opening angle of the user equipment.
2. The method of claim 1, wherein determining a reference direction from the attitude data of the user equipment comprises:
determining the position of a screen center line of the user equipment according to the attitude data of the user equipment;
and taking the direction perpendicular to the central line of the screen as the reference direction.
3. The method according to claim 1, wherein the first direction is a direction parallel or perpendicular to the reference direction in a plane of a screen of the user equipment.
4. The method of claim 1, wherein determining at least two reference interest points associated with the identification information comprises:
acquiring initial positioning information of the user equipment;
determining similarity between the identification information and candidate identification information of each candidate interest point in the region to which the initial positioning information belongs;
and determining at least two candidate interest points associated with the identification information as reference interest points according to the similarity.
5. The method of claim 1, wherein prior to determining the direction information of the user equipment according to the screen position information of the reference interest point and the attitude data of the user equipment, the method further comprises:
and determining the screen position information of the reference interest points according to the screen position information of at least two reference points in the reference interest points.
6. The method according to any of claims 1-5, wherein the identification information comprises a text identification and/or a pattern identification.
7. A positioning device, comprising:
the identification information extraction module is used for extracting identification information in an image acquired by user equipment;
a reference interest point determination module for determining at least two reference interest points associated with the identification information;
the direction information determining module is used for determining, for each reference interest point, direction information of the user equipment according to the screen position information of the reference interest point and the attitude data of the user equipment;
the user equipment positioning module is used for determining the positioning information of the user equipment according to the positioning information of the reference interest point and the direction information;
the direction information determining module is specifically configured to:
determining a reference direction according to the attitude data of the user equipment;
and determining, based on the trigonometric relationship, direction information of the user equipment relative to the reference direction according to the screen length of the user equipment along a first direction, the screen position information of the reference interest point along the first direction, and the lens opening angle of the user equipment.
8. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a positioning method according to any one of claims 1-6.
9. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform a positioning method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010177560.XA CN113390413B (en) | 2020-03-13 | 2020-03-13 | Positioning method, device, equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113390413A CN113390413A (en) | 2021-09-14 |
CN113390413B true CN113390413B (en) | 2022-11-29 |
Family
ID=77616294
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010177560.XA Active CN113390413B (en) | 2020-03-13 | 2020-03-13 | Positioning method, device, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113390413B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104573735A (en) * | 2015-01-05 | 2015-04-29 | 广东小天才科技有限公司 | Method for optimizing positioning based on image shooting, intelligent terminal and server |
CN104881625A (en) * | 2015-05-13 | 2015-09-02 | 上海人智信息科技有限公司 | Method and system for positioning and navigation based on two-dimension code recognition |
CN105890597A (en) * | 2016-04-07 | 2016-08-24 | 浙江漫思网络科技有限公司 | Auxiliary positioning method based on image analysis |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110153198A1 (en) * | 2009-12-21 | 2011-06-23 | Navisus LLC | Method for the display of navigation instructions using an augmented-reality concept |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111695628B (en) | Key point labeling method and device, electronic equipment and storage medium | |
CN111523468B (en) | Human body key point identification method and device | |
CN111722245B (en) | Positioning method, positioning device and electronic equipment | |
CN111782977B (en) | Point-of-interest processing method, device, equipment and computer readable storage medium | |
CN111275827B (en) | Edge-based augmented reality three-dimensional tracking registration method and device and electronic equipment | |
CN111862199B (en) | Positioning method, positioning device, electronic equipment and storage medium | |
CN110532415B (en) | Image search processing method, device, equipment and storage medium | |
CN110675635B (en) | Method and device for acquiring external parameters of camera, electronic equipment and storage medium | |
CN111507354B (en) | Information extraction method, device, equipment and storage medium | |
KR102566300B1 (en) | Method for indoor localization and electronic device | |
CN112241716B (en) | Training sample generation method and device | |
CN111462174A (en) | Multi-target tracking method and device and electronic equipment | |
CN111784757A (en) | Training method of depth estimation model, depth estimation method, device and equipment | |
CN111191619A (en) | Method, device and equipment for detecting virtual line segment of lane line and readable storage medium | |
CN111698422A (en) | Panoramic image acquisition method and device, electronic equipment and storage medium | |
CN112000901B (en) | Method and device for extracting spatial relationship of geographic position points | |
CN111400537B (en) | Road element information acquisition method and device and electronic equipment | |
CN113390413B (en) | Positioning method, device, equipment and storage medium | |
CN112488126A (en) | Feature map processing method, device, equipment and storage medium | |
CN112017304A (en) | Method, apparatus, electronic device, and medium for presenting augmented reality data | |
CN111696134A (en) | Target detection method and device and electronic equipment | |
CN112200190B (en) | Method and device for determining position of interest point, electronic equipment and storage medium | |
CN112381877B (en) | Positioning fusion and indoor positioning method, device, equipment and medium | |
US11736795B2 (en) | Shooting method, apparatus, and electronic device | |
US11488384B2 (en) | Method and device for recognizing product |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||