CN112802097A - Positioning method, positioning device, electronic equipment and storage medium - Google Patents

Positioning method, positioning device, electronic equipment and storage medium

Info

Publication number
CN112802097A
Authority
CN
China
Prior art keywords: information, dimensional pose, user side, positioning result, positioning
Legal status: Pending
Application number: CN202011604841.5A
Other languages: Chinese (zh)
Inventors: 李宇飞, 张建博
Current Assignee: Shenzhen TetrasAI Technology Co Ltd
Original Assignee: Shenzhen TetrasAI Technology Co Ltd
Priority date: 2020-12-30
Filing date: 2020-12-30
Application filed by Shenzhen TetrasAI Technology Co Ltd
Priority to CN202011604841.5A
Publication of CN112802097A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • G06T7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75: Determining position or orientation of objects or cameras using feature-based methods involving models
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20: Information retrieval of structured data, e.g. relational data
    • G06F16/29: Geographical information databases
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10024: Color image

Landscapes

  • Engineering & Computer Science
  • Theoretical Computer Science
  • Databases & Information Systems
  • Physics & Mathematics
  • General Physics & Mathematics
  • Remote Sensing
  • Data Mining & Analysis
  • General Engineering & Computer Science
  • Computer Vision & Pattern Recognition
  • Navigation

Abstract

The disclosure provides a positioning method, a positioning apparatus, an electronic device, and a storage medium. The method includes: acquiring a current scene image shot by a user side; determining three-dimensional pose information corresponding to the user side based on the current scene image; when the height information in the three-dimensional pose information is within a target height range, using the determined three-dimensional pose information as the positioning result for the user side; and when the height information in the three-dimensional pose information is not within the target height range, determining that positioning for the user side has failed.

Description

Positioning method, positioning device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer vision technologies, and in particular, to a positioning method, an apparatus, an electronic device, and a storage medium.
Background
A positioning method can acquire a scene image with a device and determine the pose information of the device from the scene image and a pre-constructed visual high-precision map.
Generally, when a visual high-precision map and a scene image are used for positioning, a scene image acquired by the device that contains few visual features yields a positioning result with a large error, which reduces positioning accuracy.
Disclosure of Invention
In view of the above, the present disclosure provides at least a positioning method, an apparatus, an electronic device and a storage medium.
In a first aspect, the present disclosure provides a positioning method, including:
acquiring a current scene image shot by a user side;
determining three-dimensional pose information corresponding to the user side based on the current scene image;
under the condition that the height information in the three-dimensional pose information is within a target height range, taking the determined three-dimensional pose information as a positioning result of the user side;
and determining that positioning for the user side has failed under the condition that the height information in the three-dimensional pose information is not within the target height range.
By adopting the method, the current scene image shot by the user side is used to determine the three-dimensional pose information corresponding to the user side, and it is judged whether the height information in the three-dimensional pose information lies within the target height range; if so, the determined three-dimensional pose information is used as the positioning result for the user side; if not, positioning for the user side is determined to have failed. Since the height information in the three-dimensional pose information corresponding to the user side generally corresponds to the height of the user, screening the three-dimensional pose information against the set target height range filters out pose information that falls outside that range, avoiding application errors caused by inaccurate positioning results, such as a navigation route that deviates because of an inaccurate position.
In a possible implementation manner, after it is determined that positioning for the user side has failed, the method further includes:
displaying positioning result abnormality prompt information through the user side, where the positioning result abnormality prompt information is used to prompt the user to adjust the shooting angle of view.
In the foregoing embodiment, after the positioning fails, the user may be prompted to adjust the shooting angle of view to a suitable position, for example, to shoot a scene image with more visual features, so that a positioning result may be obtained based on the scene image after the shooting angle of view is adjusted.
In a possible implementation manner, the acquiring a current scene image captured by a user side includes:
responding to a user navigation request, and acquiring a current scene image shot by a user side;
the step of using the determined three-dimensional pose information as a positioning result for the user side includes:
and taking the determined three-dimensional pose information as a positioning result of the user side in navigation.
In the above embodiment, the three-dimensional pose information obtained for the user side during navigation can be screened, which ensures the accuracy of the positioning result and thereby improves the accuracy of the navigation process.
In a possible implementation manner, after determining the three-dimensional pose information corresponding to the user side, the method further includes:
under the condition that a positioning result corresponding to a previous frame of scene image exists, judging whether height information in the three-dimensional pose information is in a set target height range or not, and judging whether a difference value between position information in a horizontal direction in the three-dimensional pose information and position information in the horizontal direction indicated by the positioning result corresponding to the previous frame of scene image is in a set difference value range or not;
the taking the determined three-dimensional pose information as the positioning result for the user side when the height information in the three-dimensional pose information is within the target height range includes:
taking the determined three-dimensional pose information as the positioning result corresponding to the current scene image of the user side when the height information in the three-dimensional pose information is within the target height range and the difference between the position information in the horizontal direction in the three-dimensional pose information and the position information in the horizontal direction indicated by the positioning result corresponding to the previous frame of scene image is within the set difference range.
In the above embodiment, it is considered that within a certain time the user side should move only a limited distance in the horizontal direction; for example, within 20 seconds the user side generally does not move more than 50 meters horizontally. Therefore, whether the three-dimensional pose information is adopted as the positioning result for the user side can be decided jointly from the height information and the position information in the horizontal direction: the determined three-dimensional pose information is used as the positioning result corresponding to the current scene image of the user side only when the height information is within the target height range and the difference between the position information in the horizontal direction and that indicated by the positioning result corresponding to the previous frame of scene image is within the set difference range. This improves the accuracy of the positioning result.
In a possible embodiment, the using the determined three-dimensional pose information as the positioning result for the user side in navigation includes:
taking the current positioning result as navigation starting point information under the condition that the current positioning result is the positioning result at the beginning of navigation;
and generating a navigation path based on the navigation starting point information and the navigation end point information determined by the user side, and displaying the navigation path at the user side.
Here, when the determined three-dimensional pose information is used as the positioning result for the user side in navigation, the positioning result can serve as the navigation starting point information; provided the positioning result is accurate, the navigation process is correspondingly more accurate.
In a possible implementation manner, the using the determined three-dimensional pose information as the positioning result for the user side in navigation further includes:
judging whether the current positioning result is on the navigation path; and
if the current positioning result is not on the navigation path, displaying path deviation prompt information through the user side, and/or regenerating the navigation path according to the current positioning result and the navigation end point information and displaying the regenerated navigation path on the user side.
In a possible implementation manner, determining three-dimensional pose information corresponding to the user side based on the current scene image includes:
and determining three-dimensional pose information corresponding to the user side based on the current scene image and a pre-constructed three-dimensional map.
For the effects of the apparatus, the electronic device, and the like described below, reference may be made to the description of the method above; details are not repeated here.
In a second aspect, the present disclosure provides a positioning device comprising:
the acquisition module is used for acquiring a current scene image shot by a user side;
the determining module is used for determining three-dimensional pose information corresponding to the user side based on the current scene image;
the first judgment module is used for taking the determined three-dimensional pose information as a positioning result of the user side under the condition that the height information in the three-dimensional pose information is within a target height range;
and the second judgment module is used for determining that positioning for the user side has failed under the condition that the height information in the three-dimensional pose information is not within the target height range.
In a possible implementation manner, the apparatus further includes:
a display module, used for displaying positioning result abnormality prompt information through the user side, where the positioning result abnormality prompt information is used to prompt the user to adjust the shooting angle of view.
In a possible implementation manner, the obtaining module, when obtaining the current scene image captured by the user terminal, is configured to:
responding to a user navigation request, and acquiring a current scene image shot by a user side;
the first judging module, when the determined three-dimensional pose information is used as a positioning result for the user side, is configured to:
and taking the determined three-dimensional pose information as a positioning result of the user side in navigation.
In a possible implementation manner, the apparatus further includes:
a third judging module, configured to, in the presence of a positioning result corresponding to a previous frame of scene image, judge whether height information in the three-dimensional pose information is within a set target height range, and judge whether a difference between position information in a horizontal direction in the three-dimensional pose information and position information in a horizontal direction indicated by the positioning result corresponding to the previous frame of scene image is within a set difference range;
the first judging module, when taking the determined three-dimensional pose information as a positioning result for the user side under the condition that the height information in the three-dimensional pose information is within the target height range, is configured to:
taking the determined three-dimensional pose information as the positioning result corresponding to the current scene image of the user side when the height information in the three-dimensional pose information is within the target height range and the difference between the position information in the horizontal direction in the three-dimensional pose information and the position information in the horizontal direction indicated by the positioning result corresponding to the previous frame of scene image is within the set difference range.
In a possible implementation manner, the first determining module, when the determined three-dimensional pose information is used as a positioning result for the user terminal in navigation, is configured to:
taking the current positioning result as navigation starting point information under the condition that the current positioning result is the positioning result at the beginning of navigation;
and generating a navigation path based on the navigation starting point information and the navigation end point information determined by the user side, and displaying the navigation path at the user side.
In a possible implementation manner, the first determining module, when using the determined three-dimensional pose information as a positioning result for the user terminal in navigation, is further configured to:
judging whether the current positioning result is on the navigation path; and
if it is not on the navigation path, displaying path deviation prompt information through the user side, and/or regenerating the navigation path according to the current positioning result and the navigation end point information and displaying the regenerated navigation path on the user side.
In a possible implementation manner, the determining module, when determining the three-dimensional pose information corresponding to the user terminal based on the current scene image, is configured to:
and determining three-dimensional pose information corresponding to the user side based on the current scene image and a pre-constructed three-dimensional map.
In a third aspect, the present disclosure provides an electronic device comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the steps of the positioning method according to the first aspect or any of the embodiments.
In a fourth aspect, the present disclosure provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the positioning method according to the first aspect or any one of the embodiments.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
To illustrate the technical solutions of the embodiments of the present disclosure more clearly, the drawings needed in the embodiments are briefly described below. The drawings here are incorporated in and form a part of the specification; they illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the present disclosure. The following drawings depict only certain embodiments of the present disclosure and should not be regarded as limiting its scope; those skilled in the art can derive other related drawings from them without creative effort.
Fig. 1 is a schematic flow chart illustrating a positioning method provided by an embodiment of the present disclosure;
fig. 2 is a schematic diagram illustrating a display interface of a user side in a positioning method according to an embodiment of the disclosure;
fig. 3 is a schematic diagram illustrating an architecture of a positioning apparatus provided in an embodiment of the present disclosure;
fig. 4 shows a schematic structural diagram of an electronic device provided in an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions of the embodiments are described below clearly and completely with reference to the drawings of the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. The components of the embodiments of the present disclosure, as generally described and illustrated in the figures here, can be arranged and designed in a wide variety of configurations. Therefore, the following detailed description of the embodiments of the present disclosure is not intended to limit the scope of the claimed disclosure, but merely represents selected embodiments of the disclosure. All other embodiments obtained by those skilled in the art from the embodiments of the present disclosure without creative effort fall within the protection scope of the present disclosure.
Computer-vision-based positioning can acquire a scene image with a device and determine the pose information of the device from the scene image and a pre-constructed visual high-precision map. Generally, when a visual high-precision map and a scene image are used for positioning, a scene image that contains few visual features yields a positioning result with a large error, which reduces positioning accuracy. To solve this problem, an embodiment of the present disclosure provides a positioning method.
To facilitate understanding of the embodiment of the present disclosure, the positioning method disclosed in the embodiment of the present disclosure is first described in detail. The execution subject of the positioning method provided in the embodiment of the present disclosure may be a user side, for example, an electronic device such as a mobile phone, a tablet, or AR glasses; it may also be a server, such as a local server or a cloud server. This is not limited in the embodiment of the present disclosure.
Referring to fig. 1, a schematic flow chart of a positioning method provided in the embodiment of the present disclosure is shown, the method includes S101-S104, where:
and S101, acquiring a current scene image shot by a user side.
And S102, determining three-dimensional pose information corresponding to the user side based on the current scene image.
And S103, under the condition that the height information in the three-dimensional pose information is within the target height range, taking the determined three-dimensional pose information as a positioning result for the user side.
And S104, under the condition that the height information in the three-dimensional pose information is not within the target height range, determining that positioning for the user side has failed.
In the method, the current scene image shot by the user side is used to determine the three-dimensional pose information corresponding to the user side, and it is judged whether the height information in the three-dimensional pose information lies within the target height range; if so, the determined three-dimensional pose information is used as the positioning result for the user side; if not, positioning for the user side is determined to have failed. Since the height information in the three-dimensional pose information corresponding to the user side generally corresponds to the height of the user, screening the three-dimensional pose information against the set target height range filters out pose information that falls outside that range, avoiding application errors caused by inaccurate positioning results, such as a navigation route that deviates because of an inaccurate position.
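The screening step can be illustrated with a short sketch. The following Python snippet is illustrative only and not part of the disclosure: the `Pose3D` type, the `screen_pose` helper, and the 0.5 to 3 meter range are assumptions drawn from the examples given later in this description, with the pose translation expressed in a world coordinate system whose z axis is the height direction.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Pose3D:
    x: float      # position information in the horizontal direction (meters)
    y: float      # position information in the depth direction (meters)
    z: float      # position information in the height direction (meters)
    yaw: float    # orientation information (radians)
    pitch: float
    roll: float

# Example target height range taken from the description (0.5 m to 3 m).
TARGET_HEIGHT_RANGE: Tuple[float, float] = (0.5, 3.0)

def screen_pose(pose: Pose3D,
                height_range: Tuple[float, float] = TARGET_HEIGHT_RANGE) -> Optional[Pose3D]:
    """Return the pose as the positioning result when its height is
    within the target height range; return None to signal that
    positioning has failed."""
    low, high = height_range
    if low <= pose.z <= high:
        return pose   # accept: use as the positioning result
    return None       # reject: positioning failure
```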
For S101:
In a specific implementation, the user side can shoot, through its camera, a current scene image of the position where the user side is located, so that the execution subject can obtain the current scene image shot by the user side. When the execution subject is the user side, the user side can use the shot current scene image directly. When the execution subject is a server, the user side can send the shot current scene image to the server; alternatively, the server can obtain the current scene image from the user side.
The current scene image may be a color image or a grayscale image, where the grayscale image may be obtained by converting the pixel values of a shot color image. When the current scene image is a grayscale image, the time needed to send it to the server can be reduced, further improving positioning efficiency.
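As a minimal sketch of this pre-upload conversion, assuming OpenCV is available on the user side (the function name below is illustrative, not part of the disclosure):

```python
import cv2

def prepare_scene_image(color_frame):
    """Convert a shot BGR color frame to grayscale before upload.

    A single-channel image is roughly one third the size of the BGR
    frame, which shortens the transfer to the server and thus the
    end-to-end positioning latency.
    """
    return cv2.cvtColor(color_frame, cv2.COLOR_BGR2GRAY)
```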
For S102:
here, after the current scene image is acquired, the current scene image may be identified, and the three-dimensional pose information corresponding to the user terminal may be determined.
In an optional embodiment, determining three-dimensional pose information corresponding to a user side based on a current scene image may include: and determining three-dimensional pose information corresponding to the user side based on the current scene image and a pre-constructed three-dimensional map.
Here, the current scene image may be identified, feature information of feature points included in the current scene image may be determined, and the three-dimensional pose information corresponding to the user terminal may be determined based on matching between the determined feature information of the feature points corresponding to the current scene image and a pre-constructed three-dimensional map.
Here, the user side may determine three-dimensional pose information corresponding to the user side based on the current scene image and the three-dimensional map acquired from the server; and the server can also determine the three-dimensional pose information corresponding to the user side based on the current scene image and a pre-constructed three-dimensional map.
The three-dimensional pose information comprises position information and orientation information of the user side. The location information of the user terminal may be a three-dimensional coordinate value of the user terminal in a world map coordinate system, that is, the location information of the user terminal includes horizontal information (location information in a horizontal direction), depth information (location information in a depth direction), and height information (location information in a height direction).
Illustratively, a three-dimensional map may be constructed according to the following steps: a scene video corresponding to a real scene can be obtained first, and a plurality of frames of sample images are obtained by sampling from the scene video; or acquiring a multi-frame sample image corresponding to the real scene; and extracting the characteristic information of the sample characteristic points included in each frame of sample image from the acquired multi-frame sample images, and constructing a three-dimensional map based on the characteristic information of the sample characteristic points respectively corresponding to the multi-frame sample images.
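The disclosure does not prescribe a particular matching algorithm; one common realization, shown here purely as an assumed example, matches 2D feature descriptors of the current image against the 3D sample feature points of the pre-constructed map and solves a Perspective-n-Point problem. The `map_descriptors` (uint8 ORB descriptors) and `map_points_3d` arrays below are hypothetical stand-ins for the three-dimensional map:

```python
import cv2
import numpy as np

def estimate_pose(gray_image, map_descriptors, map_points_3d, camera_matrix):
    """Estimate the 6-DoF camera pose from one scene image.

    map_descriptors: NxD uint8 descriptor array of the map's sample feature points
    map_points_3d:   Nx3 world coordinates of those feature points
    camera_matrix:   3x3 intrinsic matrix of the user side's camera
    Returns (rvec, tvec) on success, or None when too few matches are found.
    """
    orb = cv2.ORB_create()
    keypoints, descriptors = orb.detectAndCompute(gray_image, None)
    if descriptors is None:
        return None

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(descriptors, map_descriptors)
    if len(matches) < 6:   # PnP needs enough 2D-3D correspondences
        return None

    image_pts = np.float32([keypoints[m.queryIdx].pt for m in matches])
    world_pts = np.float32([map_points_3d[m.trainIdx] for m in matches])

    ok, rvec, tvec, _ = cv2.solvePnPRansac(world_pts, image_pts,
                                           camera_matrix, None)
    return (rvec, tvec) if ok else None
```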
For S103 and S104:
here, the positioning result corresponding to the current scene image may be determined based on the determined height information in the three-dimensional pose information corresponding to the user terminal. Namely, when the height information in the three-dimensional pose information is within the target height range, the determined three-dimensional pose information is used as a positioning result for the user side, and when the execution main body is the server, the three-dimensional pose information can be sent to the user side. When the height information in the three-dimensional pose information is not within the target height range any more, the positioning result of the user side is determined to be positioning failure, and when the execution main body is a server, feedback information indicating the positioning failure can be sent to the user side.
For example, if the target height range is 0.5 meters to 3 meters and the height information in the three-dimensional pose information is 2 meters, the determined three-dimensional pose information is used as the positioning result for the user side; if the height information in the three-dimensional pose information is 3.5 meters, positioning for the user side is determined to have failed.
The target height range may be determined based on the heights of users; for example, the maximum and minimum user heights may be surveyed and the target height range determined from them. For example, the target height range may be 0.5 meters to 3 meters.
The target height range may also be set according to the actual situation. For example, when the user initiates positioning, the user's height information may be obtained and the target height range determined based on it. As another example, when the user initiates positioning, an image of the user may be obtained, the user's height information determined from that image, and the target height range set accordingly.
In an optional implementation manner, after it is determined that positioning for the user side has failed, positioning result abnormality prompt information may be displayed through the user side, where the positioning result abnormality prompt information is used to prompt the user to adjust the shooting angle of view.
Here, when it is determined that positioning for the user side has failed, positioning result abnormality prompt information can be generated and displayed through the user side, so that after receiving it the user can adjust the shooting angle of view and an accurate positioning result can be obtained at the next positioning attempt.
The positioning result abnormality prompt information is used to prompt the user to adjust the shooting angle of view and may include one or more of text information, image information, voice information, video information, and AR data. For example, the prompt may read: "Positioning failed, please adjust the user side", as in the schematic diagram of the user side display interface shown in fig. 2. The positioning result abnormality prompt information may be set as needed.
In the foregoing embodiment, after the positioning fails, the user may be prompted to adjust the shooting angle of view to a suitable position, for example, to shoot a scene image with more visual features, so that a positioning result may be obtained based on the scene image after the shooting angle of view is adjusted.
In specific implementation, the positioning method provided by the present disclosure may be applied in an AR navigation scene.
As an optional embodiment, acquiring a current scene image captured by a user side includes: and responding to the user navigation request, and acquiring the current scene image shot by the user side.
Using the determined three-dimensional pose information as the positioning result for the user side includes: using the determined three-dimensional pose information as the positioning result for the user side in navigation.
Here, the user can trigger a navigation button displayed on the user side to generate the user navigation request; for example, the navigation button may be a photographing button on the navigation interface. In response to the user navigation request, the current scene image shot by the user side is acquired, and the three-dimensional pose information corresponding to the user side is determined from it. Finally, the positioning result of the user side during navigation can be determined from the height information in the determined three-dimensional pose information: when the height information in the three-dimensional pose information is within the target height range, the determined three-dimensional pose information is used as the positioning result of the user side in navigation; when it is not, positioning of the user side in navigation is determined to have failed.
In the above embodiment, the three-dimensional pose information obtained for the user side during navigation can be screened, which ensures the accuracy of the positioning result and thereby improves the accuracy of the navigation process.
In an optional embodiment, in the case of using the determined three-dimensional pose information as a positioning result for the user terminal in navigation, the following navigation presentation process may be performed based on the positioning result:
firstly, when the current positioning result is the positioning result at the beginning of navigation, the current positioning result is used as navigation starting point information.
And secondly, generating a navigation path based on the navigation starting point information and the navigation end point information determined by the user side, and displaying the navigation path at the user side.
Here, in the navigation scene, if the current positioning result is the positioning result at the start of navigation, the current positioning result is used as navigation start point information. For example, the scene object corresponding to the determined three-dimensional pose information may be determined as a navigation starting point. Further, a navigation path is generated based on the navigation start point information and the navigation end point information determined by the user terminal, and the generated navigation path is displayed on the user terminal, so that the user can move to a destination position (i.e. a navigation end point position) according to the navigation path.
Here, when the determined three-dimensional pose information is used as the positioning result for the user side in navigation, the positioning result can serve as the navigation starting point information; provided the positioning result is accurate, the navigation process is correspondingly more accurate.
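A compact sketch of this start-point flow, assuming a hypothetical `plan_route` routing service and a hypothetical `display` renderer, and reusing the `Pose3D` type sketched earlier:

```python
def start_navigation(current_pose, destination, plan_route, display):
    """Use the first accepted positioning result as the navigation
    start point, plan a path to the user-chosen destination, and
    show it on the user side.

    plan_route(start_xy, end_xy) -> list of (x, y) waypoints is assumed
    to be provided by the map backend; display renders on the user side.
    """
    start_xy = (current_pose.x, current_pose.y)   # navigation starting point info
    path = plan_route(start_xy, destination)      # destination = navigation end point info
    display.show_path(path)
    return path
```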
In an optional embodiment, in the case that the determined three-dimensional pose information is used as a positioning result for the user terminal in navigation, the following navigation display process may be further performed based on the positioning result:
firstly, judging whether the current positioning result is on a navigation path.
And if the navigation path is not located on the navigation path, displaying the path deviation indication information through the user side, and/or regenerating the navigation path according to the current positioning result and the navigation end point information, and displaying the navigation path at the user side.
Here, during navigation, a positioning result can be determined from the current scene image at the start of navigation (i.e., the current position of the user is determined) and used as the navigation starting point information. While the user moves along the navigation path, a current scene image can be acquired periodically, once per target distance traveled or per target time elapsed, and a positioning result determined from it. For example, the current scene image may be acquired every 10 seconds, or each time the user side moves 20 meters, and a positioning result determined based on it.
When the current positioning result is produced during navigation, it can be judged whether the current positioning result is on the navigation path, that is, whether it deviates from the navigation path. If it is not on the navigation path (i.e., it deviates from the path), path deviation prompt information can be displayed through the user side, for example: "Off course, please reposition". Alternatively, a navigation path can be regenerated for the user according to the current positioning result and the set navigation end point information, and the regenerated navigation path displayed on the user side.
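One plausible way to implement the on-path judgment (an assumption for illustration; the disclosure fixes no tolerance) is to measure the distance from the current position to the nearest segment of the navigation path:

```python
import math

def off_path(position_xy, path, tolerance_m=5.0):
    """Return True when position_xy is farther than tolerance_m from
    every segment of the polyline path (a list of (x, y) waypoints)."""
    px, py = position_xy
    best = float("inf")
    for (ax, ay), (bx, by) in zip(path, path[1:]):
        abx, aby = bx - ax, by - ay
        seg_len_sq = abx * abx + aby * aby
        # Projection parameter of the point onto the segment, clamped to [0, 1]
        t = 0.0 if seg_len_sq == 0 else max(
            0.0, min(1.0, ((px - ax) * abx + (py - ay) * aby) / seg_len_sq))
        cx, cy = ax + t * abx, ay + t * aby   # closest point on this segment
        best = min(best, math.hypot(px - cx, py - cy))
    return best > tolerance_m
```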
In a practical scene, the image feature information at two different positions may be identical. For example, in a venue A, an image of celebrity a is placed at position a, and the same image of celebrity a is also placed at position b, one kilometer away from position a. If the current scene image acquired by the user contains the image of celebrity a, then, when the three-dimensional pose information (including position information and orientation information) corresponding to the user side is determined from that image, the position information of position b may be taken as the position information in the three-dimensional pose information, causing a large error in the determined three-dimensional pose information.
In view of the above situation, after the three-dimensional pose information corresponding to the user side is determined, the position information in the horizontal direction in the three-dimensional pose information can also be checked.
That is, after determining the three-dimensional pose information corresponding to the user side, the method may further include: under the condition that a positioning result corresponding to the previous frame of scene image exists, judging whether the height information in the three-dimensional pose information is within the set target height range, and judging whether the difference between the position information in the horizontal direction in the three-dimensional pose information and the position information in the horizontal direction indicated by the positioning result corresponding to the previous frame of scene image is within the set difference range.
When the height information in the three-dimensional pose information is within the target height range, using the determined three-dimensional pose information as the positioning result for the user side may include: using the determined three-dimensional pose information as the positioning result corresponding to the current scene image of the user side when the height information in the three-dimensional pose information is within the target height range and the difference between the position information in the horizontal direction in the three-dimensional pose information and the position information in the horizontal direction indicated by the positioning result corresponding to the previous frame of scene image is within the set difference range.
When a positioning result corresponding to the previous frame of scene image exists, after the three-dimensional pose information corresponding to the user side is determined, it is judged whether the height information and the position information in the horizontal direction in the three-dimensional pose information corresponding to the current scene image meet the preset requirements. When both meet the preset requirements, the determined three-dimensional pose information is used as the positioning result corresponding to the current scene image of the user side; when the height information and/or the position information in the horizontal direction do not meet the preset requirements, positioning for the user side is determined to have failed.
In a specific implementation, it can be judged whether the height information in the three-dimensional pose information corresponding to the current scene image is within the set target height range; if so, the height information is determined to meet the preset requirement. Likewise, it can be judged whether the difference between the position information in the horizontal direction in the three-dimensional pose information corresponding to the current scene image and the position information in the horizontal direction indicated by the positioning result corresponding to the previous frame of scene image is within the set difference range; if so, the position information in the horizontal direction is determined to meet the preset requirement. The set difference range may be chosen according to actual needs, for example, 50 meters or 100 meters.
For example, suppose the set difference range is 50 meters and the position information in the horizontal direction indicated by the positioning result corresponding to the previous frame of scene image is 2 meters. If the position information in the horizontal direction in the three-dimensional pose information corresponding to the current scene image is 10 meters, it meets the preset requirement; if it is 100 meters, it does not meet the preset requirement.
In the above embodiment, it is considered that within a certain time the user side should move only a limited distance in the horizontal direction; for example, within 20 seconds the user side generally does not move more than 50 meters horizontally. Therefore, whether the three-dimensional pose information is adopted as the positioning result for the user side can be decided jointly from the height information and the position information in the horizontal direction: the determined three-dimensional pose information is used as the positioning result corresponding to the current scene image of the user side only when the height information is within the target height range and the difference between the position information in the horizontal direction and that indicated by the positioning result corresponding to the previous frame of scene image is within the set difference range. This improves the accuracy of the positioning result.
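A sketch combining the two checks, reusing the hypothetical `Pose3D` type from the earlier snippet; the 0.5 to 3 meter height range and the 50 meter difference range are the example values from this description:

```python
import math
from typing import Optional

def screen_pose_with_history(pose, previous_result,
                             height_range=(0.5, 3.0),
                             max_horizontal_jump_m=50.0):
    """Accept the pose only if its height is plausible and, when a
    previous positioning result exists, the horizontal displacement
    from it stays within the set difference range."""
    low, high = height_range
    if not (low <= pose.z <= high):
        return None                      # height check failed
    if previous_result is not None:
        jump = math.hypot(pose.x - previous_result.x,
                          pose.y - previous_result.y)
        if jump > max_horizontal_jump_m:
            return None                  # horizontal jump too large
    return pose                          # positioning result for this frame
```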
It will be understood by those skilled in the art that, in the methods of the present disclosure, the order in which the steps are written does not imply a strict execution order or impose any limitation on the implementation; the specific execution order of the steps should be determined by their functions and possible internal logic.
Based on the same concept, an embodiment of the present disclosure further provides a positioning apparatus, as shown in fig. 3, which is a schematic structural diagram of the positioning apparatus provided in the embodiment of the present disclosure, and includes an obtaining module 301, a determining module 302, a first determining module 303, a second determining module 304, a displaying module 305, and a third determining module 306, specifically:
an obtaining module 301, configured to obtain a current scene image captured by a user side;
a determining module 302, configured to determine, based on the current scene image, three-dimensional pose information corresponding to the user side;
a first judging module 303, configured to, when height information in the three-dimensional pose information is within a target height range, use the determined three-dimensional pose information as a positioning result for the user side;
a second determining module 304, configured to determine that positioning for the user side has failed when the height information in the three-dimensional pose information is not within the target height range.
In a possible implementation manner, the apparatus further includes:
a displaying module 305, configured to display positioning result abnormality prompt information through the user side, where the positioning result abnormality prompt information is used to prompt the user to adjust the shooting angle of view.
In a possible implementation manner, the obtaining module 301, when obtaining the current scene image captured by the user terminal, is configured to:
responding to a user navigation request, and acquiring a current scene image shot by a user side;
the first determining module 303, when the determined three-dimensional pose information is used as a positioning result for the user side, is configured to:
and taking the determined three-dimensional pose information as a positioning result of the user side in navigation.
In a possible implementation manner, the apparatus further includes:
a third determining module 306, configured to determine whether height information in the three-dimensional pose information is within a set target height range and determine whether a difference between position information in a horizontal direction in the three-dimensional pose information and position information in a horizontal direction indicated by a positioning result corresponding to a previous frame of scene image is within a set difference range in the presence of a positioning result corresponding to a previous frame of scene image;
the first determining module 303, when the determined three-dimensional pose information is used as the positioning result for the user side when the height information in the three-dimensional pose information is within the target height range, is configured to:
taking the determined three-dimensional pose information as the positioning result corresponding to the current scene image of the user side when the height information in the three-dimensional pose information is within the target height range and the difference between the position information in the horizontal direction in the three-dimensional pose information and the position information in the horizontal direction indicated by the positioning result corresponding to the previous frame of scene image is within the set difference range.
In a possible implementation manner, the first determining module 303, when the determined three-dimensional pose information is used as a positioning result for the user terminal in navigation, is configured to:
taking the current positioning result as navigation starting point information under the condition that the current positioning result is the positioning result at the beginning of navigation;
and generating a navigation path based on the navigation starting point information and the navigation end point information determined by the user side, and displaying the navigation path at the user side.
In a possible implementation manner, the first determining module 303, when the determined three-dimensional pose information is used as a positioning result for the user terminal in navigation, is further configured to:
judging whether the current positioning result is on the navigation path; and
if it is not on the navigation path, displaying path deviation prompt information through the user side, and/or regenerating the navigation path according to the current positioning result and the navigation end point information and displaying the regenerated navigation path on the user side.
In a possible implementation manner, the determining module 302, when determining the three-dimensional pose information corresponding to the user terminal based on the current scene image, is configured to:
and determining three-dimensional pose information corresponding to the user side based on the current scene image and a pre-constructed three-dimensional map.
In some embodiments, the functions of, or the modules included in, the apparatus provided in the embodiments of the present disclosure may be used to execute the methods described in the above method embodiments; for their specific implementation, refer to the descriptions of the method embodiments above, which are not repeated here for brevity.
Based on the same technical concept, an embodiment of the present disclosure further provides an electronic device. Referring to fig. 4, a schematic structural diagram of an electronic device 400 provided in the embodiment of the present disclosure includes a processor 401, a memory 402, and a bus 403. The memory 402 is used for storing execution instructions and includes an internal memory 4021 and an external memory 4022. The internal memory 4021 temporarily stores operation data for the processor 401 and data exchanged with the external memory 4022, such as a hard disk; the processor 401 exchanges data with the external memory 4022 through the internal memory 4021. When the electronic device 400 runs, the processor 401 communicates with the memory 402 through the bus 403, causing the processor 401 to execute the following instructions:
acquiring a current scene image shot by a user side;
determining three-dimensional pose information corresponding to the user side based on the current scene image;
under the condition that the height information in the three-dimensional pose information is within a target height range, taking the determined three-dimensional pose information as a positioning result of the user side;
and determining that positioning for the user side has failed under the condition that the height information in the three-dimensional pose information is not within the target height range.
Furthermore, the embodiments of the present disclosure also provide a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program performs the steps of the positioning method described in the above method embodiments.
The computer program product of the positioning method provided in the embodiments of the present disclosure includes a computer-readable storage medium storing a program code, where instructions included in the program code may be used to execute the steps of the positioning method described in the above method embodiments, which may be referred to specifically for the above method embodiments, and are not described herein again.
It is clear to those skilled in the art that, for convenience and brevity of description, for the specific working processes of the system and apparatus described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.

In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division of the units is only a logical division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed.

In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be indirect coupling or communication connection through some communication interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above are only specific embodiments of the present disclosure, but the scope of the present disclosure is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the present disclosure, and shall be covered by the scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (10)

1. A positioning method, comprising:
acquiring a current scene image shot by a user side;
determining three-dimensional pose information corresponding to the user side based on the current scene image;
under the condition that the height information in the three-dimensional pose information is within a target height range, taking the determined three-dimensional pose information as a positioning result of the user side;
and determining that positioning for the user side has failed under the condition that the height information in the three-dimensional pose information is not within the target height range.
2. The positioning method according to claim 1, wherein after it is determined that positioning for the user side has failed, the method further comprises:
displaying positioning result abnormality prompt information through the user side, wherein the positioning result abnormality prompt information is used to prompt a user to adjust a shooting angle of view.
3. The positioning method according to claim 1 or 2, wherein the acquiring a current scene image shot by a user side includes:
responding to a user navigation request, and acquiring a current scene image shot by a user side;
the step of using the determined three-dimensional pose information as a positioning result for the user side includes:
and taking the determined three-dimensional pose information as a positioning result of the user side in navigation.
4. The positioning method according to claim 1, wherein after determining the three-dimensional pose information corresponding to the user side, the method further comprises:
under the condition that a positioning result corresponding to a previous frame of scene image exists, judging whether height information in the three-dimensional pose information is in a set target height range or not, and judging whether a difference value between position information in a horizontal direction in the three-dimensional pose information and position information in the horizontal direction indicated by the positioning result corresponding to the previous frame of scene image is in a set difference value range or not;
the taking the determined three-dimensional pose information as the positioning result for the user side when the height information in the three-dimensional pose information is within the target height range includes:
taking the determined three-dimensional pose information as the positioning result corresponding to the current scene image of the user side when the height information in the three-dimensional pose information is within the target height range and the difference between the position information in the horizontal direction in the three-dimensional pose information and the position information in the horizontal direction indicated by the positioning result corresponding to the previous frame of scene image is within the set difference range.
5. The positioning method according to claim 3, wherein the taking the determined three-dimensional pose information as a positioning result of the user side in navigation comprises:
taking the current positioning result as navigation starting point information under the condition that the current positioning result is the positioning result at the beginning of navigation;
and generating a navigation path based on the navigation starting point information and the navigation end point information determined by the user side, and displaying the navigation path at the user side.
6. The positioning method according to claim 5, wherein the taking the determined three-dimensional pose information as a positioning result of the user side in navigation further comprises:
judging whether the current positioning result is on the navigation path;
and if the current positioning result is not on the navigation path, displaying path deviation indication information through the user side, and/or regenerating the navigation path based on the current positioning result and the navigation end point information and displaying the regenerated navigation path at the user side.
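Claims 5 and 6 describe the navigation flow built around the positioning result. Below is a hedged sketch: plan_path, distance_to_path, and show_deviation_hint are hypothetical helpers, locate comes from the claim-1 sketch, and the 2 m deviation tolerance is an assumption.

```python
def start_navigation(scene_image, destination):
    """Claim 5: the first accepted positioning result becomes the navigation start point."""
    start = locate(scene_image)
    if start is None:
        raise RuntimeError("positioning failed; prompt the user to adjust the view angle")
    return plan_path(start, destination)  # hypothetical path planner

def on_new_frame(scene_image, path, destination, tolerance_m: float = 2.0):
    """Claim 6: warn and/or re-plan when the current result leaves the path."""
    current = locate(scene_image)
    if current is None:
        return path  # keep the old path while positioning fails
    if distance_to_path(current, path) > tolerance_m:  # hypothetical helper
        show_deviation_hint()                          # hypothetical UI call
        path = plan_path(current, destination)         # re-plan from the current result
    return path
```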
7. The positioning method according to any one of claims 1 to 6, wherein determining three-dimensional pose information corresponding to the user side based on the current scene image comprises:
and determining three-dimensional pose information corresponding to the user side based on the current scene image and a pre-constructed three-dimensional map.
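Claim 7's matching of the current image against a pre-constructed three-dimensional map is commonly realized with 2D-3D correspondences and a PnP solver; the sketch below uses OpenCV's solvePnP and assumes feature matching against the map has already produced the corresponding point sets.

```python
import numpy as np
import cv2

def pose_from_map(points_3d: np.ndarray, points_2d: np.ndarray,
                  camera_matrix: np.ndarray):
    """Recover the camera pose from matched map points.

    points_3d: (N, 3) float array of coordinates in the pre-built map frame.
    points_2d: (N, 2) float array of pixel coordinates in the current scene image.
    """
    dist_coeffs = np.zeros(5)  # assume an already-undistorted image
    ok, rvec, tvec = cv2.solvePnP(points_3d, points_2d, camera_matrix, dist_coeffs)
    if not ok:
        return None  # solver failed: treat as positioning failure upstream
    rotation, _ = cv2.Rodrigues(rvec)           # rotation: map frame -> camera frame
    position = (-rotation.T @ tvec).ravel()     # camera centre in map coordinates
    return rotation, position                   # position[2] is the height in a z-up map
```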
8. A positioning device, comprising:
the acquisition module is used for acquiring a current scene image shot by a user side;
the determining module is used for determining three-dimensional pose information corresponding to the user side based on the current scene image;
the first judgment module is used for taking the determined three-dimensional pose information as a positioning result of the user side under the condition that the height information in the three-dimensional pose information is within a target height range;
and the second judgment module is used for determining that the positioning result of the user side is positioning failure under the condition that the height information in the three-dimensional pose information is not in the target height range.
9. An electronic device, comprising: a processor, a memory, and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the electronic device is operating, wherein the machine-readable instructions, when executed by the processor, perform the steps of the positioning method according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, and the computer program, when executed by a processor, performs the steps of the positioning method according to any one of claims 1 to 7.
CN202011604841.5A 2020-12-30 2020-12-30 Positioning method, positioning device, electronic equipment and storage medium Pending CN112802097A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011604841.5A CN112802097A (en) 2020-12-30 2020-12-30 Positioning method, positioning device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011604841.5A CN112802097A (en) 2020-12-30 2020-12-30 Positioning method, positioning device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112802097A true CN112802097A (en) 2021-05-14

Family

ID=75805769

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011604841.5A Pending CN112802097A (en) 2020-12-30 2020-12-30 Positioning method, positioning device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112802097A (en)

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104833370A (en) * 2014-02-08 2015-08-12 本田技研工业株式会社 System and method for mapping, localization and pose correction
US20160147230A1 (en) * 2014-11-26 2016-05-26 Irobot Corporation Systems and Methods for Performing Simultaneous Localization and Mapping using Machine Vision Systems
WO2016185637A1 (en) * 2015-05-20 2016-11-24 三菱電機株式会社 Point-cloud-image generation device and display system
CN110617821A (en) * 2018-06-19 2019-12-27 北京嘀嘀无限科技发展有限公司 Positioning method, positioning device and storage medium
CN109284681A (en) * 2018-08-20 2019-01-29 北京市商汤科技开发有限公司 Position and posture detection method and device, electronic equipment and storage medium
CN112105890A (en) * 2019-01-30 2020-12-18 百度时代网络技术(北京)有限公司 RGB point cloud based map generation system for autonomous vehicles
WO2020199564A1 (en) * 2019-03-29 2020-10-08 魔门塔(苏州)科技有限公司 Method and apparatus for correcting vehicle position and posture during initialization of navigation map
CN110310333A (en) * 2019-06-27 2019-10-08 Oppo广东移动通信有限公司 Localization method and electronic equipment, readable storage medium storing program for executing
CN110716646A (en) * 2019-10-15 2020-01-21 北京市商汤科技开发有限公司 Augmented reality data presentation method, device, equipment and storage medium
CN111105454A (en) * 2019-11-22 2020-05-05 北京小米移动软件有限公司 Method, device and medium for acquiring positioning information
CN111366139A (en) * 2020-04-03 2020-07-03 深圳市赛为智能股份有限公司 Indoor mapping point positioning method and device, computer equipment and storage medium
CN111694430A (en) * 2020-06-10 2020-09-22 浙江商汤科技开发有限公司 AR scene picture presentation method and device, electronic equipment and storage medium
CN111862199A (en) * 2020-06-17 2020-10-30 北京百度网讯科技有限公司 Positioning method, positioning device, electronic equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
张梁; 徐锦法; 夏青元; 于永军: "Ground target feature recognition and pose estimation for unmanned aerial vehicles", Journal of National University of Defense Technology, no. 01, 28 February 2015 (2015-02-28) *
徐敏; 陈州尧: "Visual control and positioning of a mobile palletizing robot", Modular Machine Tool & Automatic Manufacturing Technique, no. 08, 25 August 2016 (2016-08-25) *

Similar Documents

Publication Publication Date Title
EP2975555B1 (en) Method and apparatus for displaying a point of interest
JP5783885B2 (en) Information presentation apparatus, method and program thereof
EP2727332B1 (en) Mobile augmented reality system
US9154742B2 (en) Terminal location specifying system, mobile terminal and terminal location specifying method
JP2012103789A (en) Object display device and object display method
CN107562189B (en) Space positioning method based on binocular camera and service equipment
CN113029128B (en) Visual navigation method and related device, mobile terminal and storage medium
US8996577B2 (en) Object information provision device, object information provision system, terminal, and object information provision method
CN112116655A (en) Method and device for determining position information of image of target object
CN110647603B (en) Image annotation information processing method, device and system
CN112489136A (en) Calibration method, position determination method, device, electronic equipment and storage medium
CN112181141A (en) AR positioning method, AR positioning device, electronic equipment and storage medium
CN110232676B (en) Method, device, equipment and system for detecting installation state of aircraft cable bracket
CN110650284B (en) Image shooting control method, device, equipment and storage medium
CN113240806B (en) Information processing method, information processing device, electronic equipment and storage medium
CN113469378B (en) Maintenance method and maintenance equipment
CN114638885A (en) Intelligent space labeling method and system, electronic equipment and storage medium
CN109034214B (en) Method and apparatus for generating a mark
CN113178017A (en) AR data display method and device, electronic equipment and storage medium
CN111914739A (en) Intelligent following method and device, terminal equipment and readable storage medium
CN112802097A (en) Positioning method, positioning device, electronic equipment and storage medium
CN112699884A (en) Positioning method, positioning device, electronic equipment and storage medium
CN113938674B (en) Video quality detection method, device, electronic equipment and readable storage medium
US20230316460A1 (en) Binocular image quick processing method and apparatus and corresponding storage medium
CN111988732A (en) Multi-user set method and device applied to multi-user set

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination