CN108871314B - Positioning and attitude determining method and device - Google Patents

Positioning and attitude determining method and device

Info

Publication number
CN108871314B
Authority
CN
China
Prior art keywords
pose information
target image
virtual
determining
virtual image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810792893.6A
Other languages
Chinese (zh)
Other versions
CN108871314A (en)
Inventor
樊自伟 (Fan Ziwei)
田春亮 (Tian Chunliang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Shijing Information Technology Co ltd
Original Assignee
Jiangsu Shijing Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Shijing Information Technology Co ltd filed Critical Jiangsu Shijing Information Technology Co ltd
Priority to CN201810792893.6A priority Critical patent/CN108871314B/en
Publication of CN108871314A publication Critical patent/CN108871314A/en
Application granted granted Critical
Publication of CN108871314B publication Critical patent/CN108871314B/en

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01B: MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00: Measuring arrangements characterised by the use of optical techniques
    • G01B11/002: Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates
    • G01C1/00: Measuring angles

Abstract

The application provides a positioning and attitude determination method, which comprises the following steps: a server obtains a target image and the traditional positioning pose information of a camera at the time the camera shoots the target image; determines a virtual image matched with the target image according to the traditional positioning pose information and pre-stored point cloud data; acquires the pose information of the camera when it shoots the real scene corresponding to the virtual image; and determines the pose information of the camera when it shoots the target image from that pose information. The method and device improve the accuracy of the camera's pose information at the moment the target image is shot.

Description

Positioning and attitude determining method and device
Technical Field
The application relates to the field of positioning and navigation, in particular to a positioning and attitude determining method and device.
Background
With the rapid development of science and technology, users' requirements on the accuracy of map positioning and navigation keep increasing. A high-accuracy positioning and attitude determination system can accurately determine the position of a user and the position of a destination, give an optimal navigation route based on the two, and guide the user to the destination along the selected route; an autonomous vehicle can likewise drive to its destination more safely and conveniently with accurate positioning.
The current common approach is to perform positioning and attitude determination with a Global Positioning System (GPS) or an Inertial Navigation System (INS), and to obtain the position information of ground objects through an image sensor. However, GPS works by having a receiver receive satellite signals and calculate its own position; once the receiver is located in an area where the satellite signals are weak, or is affected by atmospheric propagation delay, the resulting positioning accuracy is poor. The INS determines the user's position from the user's initial position and the acceleration measured by an accelerometer: the user's displacement is computed by continuous mathematical integration of the acceleration over time, and combined with the initial position to give the current position. Because the measured acceleration contains a certain error, and this error accumulates through the integration, the positioning accuracy degrades over time.
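To make the INS drift concrete, consider a toy calculation (a sketch, not from the patent; the bias value is purely illustrative): even a small constant accelerometer bias, once integrated twice, produces a position error that grows quadratically with time.

```python
# Sketch: quadratic growth of INS position error from a constant
# accelerometer bias (hypothetical numbers for illustration only).
import numpy as np

dt = 0.01                                 # integration step [s]
t = np.arange(0.0, 60.0 + dt, dt)         # one minute of dead reckoning
bias = 0.05                               # assumed accelerometer bias [m/s^2]

velocity_error = np.cumsum(np.full_like(t, bias) * dt)  # first integration
position_error = np.cumsum(velocity_error * dt)         # second integration

# Closed form: 0.5 * bias * t^2 = 0.5 * 0.05 * 60^2 = 90 m after 60 s
print(f"position error after {t[-1]:.0f} s: {position_error[-1]:.1f} m")
```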
Disclosure of Invention
In view of the above, an object of the embodiments of the present application is to provide a method and an apparatus for positioning and determining a pose, so as to improve positioning accuracy of a user and a target object.
In a first aspect, an embodiment of the present application provides a positioning and attitude determination method, where the method includes:
the method comprises the steps that a server obtains a target image and traditional positioning pose information of a camera when the camera shoots the target image;
determining a virtual image matched with the target image according to the traditional positioning pose information and prestored point cloud data;
acquiring pose information when a camera shoots a real scene corresponding to the virtual image;
and determining the pose information of the camera shooting the target image according to the pose information of the camera shooting the real scene corresponding to the virtual image.
With reference to the first aspect, an embodiment of the present application provides a first possible implementation manner of the first aspect, where determining a virtual image matched with the target image according to the traditional positioning pose information and pre-stored point cloud data includes:
acquiring a plurality of virtual images according to the traditional positioning pose information and pre-stored point cloud data; for each virtual image, the difference between the pose at which the corresponding real scene was shot (as obtained by traditional positioning) and the pose indicated by the traditional positioning pose information meets a preset condition;
extracting a plurality of target image feature points from the target image, and extracting a plurality of virtual image feature points from each obtained virtual image;
determining at least one virtual image matching the target image from the plurality of virtual images by matching the target image feature points with the virtual image feature points.
With reference to the first possible implementation manner of the first aspect, an embodiment of the present application provides a second possible implementation manner of the first aspect, where acquiring a plurality of virtual images according to the traditional positioning pose information and pre-stored point cloud data includes:
searching, according to the traditional positioning pose information, the pre-stored point cloud data for point cloud data matched with the traditional positioning pose information; the position corresponding to the matched point cloud data falls within an area centered on the position coordinate corresponding to the traditional positioning pose information, with a first preset threshold as the radius, and the difference between the attitude corresponding to the matched point cloud data and the attitude corresponding to the traditional positioning pose information is smaller than a second preset threshold;
and based on the found point cloud data matched with the traditional positioning pose information, performing image rendering to obtain a plurality of virtual images.
With reference to the first possible implementation manner of the first aspect, an embodiment of the present application provides a third possible implementation manner of the first aspect, where determining, according to pose information of a camera shooting a real scene corresponding to the virtual image, pose information of the camera shooting the target image includes:
for each virtual image feature point successfully matched with the target image feature point, determining a spatial coordinate of the virtual image feature point according to the pixel coordinate of the virtual image feature point and pose information of a camera shooting a real scene corresponding to the virtual image feature point, and taking the spatial coordinate as the spatial coordinate of the target image feature point matched with the virtual image feature point;
and determining pose information when the camera shoots the target image according to the pixel coordinates and the space coordinates of the plurality of target image feature points of the target image and the shooting parameter information when the camera shoots the target image.
With reference to the first aspect, an embodiment of the present application provides a fourth possible implementation manner of the first aspect, where the method further includes:
and determining the spatial position information of the target object according to the pose information of the camera shooting the target image and the pixel coordinate information of the target object in the target image.
In a second aspect, an embodiment of the present application further provides a positioning and attitude determining device, where the device includes:
the first acquisition module is used for acquiring a target image and traditional positioning pose information of the camera when the camera shoots the target image;
the second acquisition module is used for determining a virtual image matched with the target image according to the traditional positioning pose information and pre-stored point cloud data;
the third acquisition module is used for acquiring pose information when the camera shoots a real scene corresponding to the virtual image;
and the first determining module is used for determining the pose information when the camera shoots the target image according to the pose information when the camera shoots the real scene corresponding to the virtual image.
With reference to the second aspect, an embodiment of the present application provides a first possible implementation manner of the second aspect, where the second obtaining module includes:
the searching unit is used for acquiring a plurality of virtual images according to the traditional positioning pose information and pre-stored point cloud data; for each virtual image, the difference between the pose at which the corresponding real scene was shot (as obtained by traditional positioning) and the pose indicated by the traditional positioning pose information meets a preset condition;
the extraction unit is used for extracting a plurality of target image feature points from the target image and extracting a plurality of virtual image feature points from each acquired virtual image;
a determining unit, configured to determine at least one virtual image matching the target image from the plurality of virtual images by matching the target image feature points with the virtual image feature points.
With reference to the first possible implementation manner of the second aspect, an embodiment of the present application provides a second possible implementation manner of the second aspect, where the searching unit includes:
the data searching subunit is used for searching, according to the traditional positioning pose information, the pre-stored point cloud data for point cloud data matched with the traditional positioning pose information; the position corresponding to the matched point cloud data falls within an area centered on the position coordinate corresponding to the traditional positioning pose information, with a first preset threshold as the radius, and the difference between the attitude corresponding to the matched point cloud data and the attitude corresponding to the traditional positioning pose information is smaller than a second preset threshold;
and the image determining subunit is used for performing image rendering to obtain a plurality of virtual images based on the searched point cloud data matched with the traditional positioning pose information.
With reference to the first possible implementation manner of the second aspect, an embodiment of the present application provides a third possible implementation manner of the second aspect, where the first determining module includes:
the coordinate determination unit is used for determining the space coordinate of each virtual image feature point successfully matched with the target image feature point according to the pixel coordinate of the virtual image feature point and the pose information of the camera shooting the real scene corresponding to the virtual image feature point, and taking the space coordinate as the space coordinate of the target image feature point matched with the virtual image feature point;
and the pose determining unit is used for determining pose information when the target image is shot by the camera according to the pixel coordinates and the space coordinates of the plurality of target image feature points of the target image and the shooting parameter information when the target image is shot by the camera.
With reference to the second aspect, the present application provides a fourth possible implementation manner of the second aspect, where the apparatus further includes:
and the second determining module is used for determining the spatial position information of the target object according to the pose information when the camera shoots the target image and the pixel coordinate information of the target object in the target image.
According to the positioning and attitude determination method and device provided by the embodiments of the application, a server first acquires a target image and the traditional positioning pose information of the camera when the camera shoots the target image, and acquires a virtual image matched with the target image from a three-dimensional live scene according to the traditional positioning pose information; it then acquires the pose information of the camera when it shoots the real scene corresponding to the virtual image; finally, it determines the pose information of the camera when it shoots the target image from that pose information. This way of obtaining the camera's pose information from the shot target image is affected neither by satellite signals and atmospheric propagation delay nor by acceleration measurement errors, and therefore achieves higher positioning accuracy.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and should therefore not be regarded as limiting the scope; for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a flow chart illustrating a positioning and attitude determination method provided by an embodiment of the present application;
FIG. 2 is a flow chart illustrating another positioning and attitude determination method provided by an embodiment of the present application;
FIG. 3 is a flow chart illustrating another positioning and attitude determination method provided by an embodiment of the present application;
FIG. 4 is a flow chart illustrating another positioning and attitude determination method provided by an embodiment of the present application;
FIG. 5 is a schematic structural diagram illustrating a positioning and attitude determination apparatus according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of another positioning and attitude determination device provided in an embodiment of the present application;
FIG. 7 is a schematic structural diagram of another positioning and attitude determination device provided in an embodiment of the present application;
FIG. 8 is a schematic structural diagram of another positioning and attitude determination device provided in an embodiment of the present application;
fig. 9 shows a schematic structural diagram of a server provided in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
In view of the relatively low positioning accuracy of the GPS or INS positioning generally adopted in the prior art, embodiments of the present application provide a positioning and attitude determination method and apparatus that do not depend on satellite signal transmission or acceleration measurement, described in detail in the following embodiments.
As shown in fig. 1, an executing body of the positioning and pose determination method provided in the embodiment of the present application may be a server, and the method specifically includes the following steps:
s101, the server acquires a target image and traditional positioning pose information of the camera when the camera shoots the target image.
Here, the target image may be an image captured by any imaging system; the embodiments of the present application are described with a camera capturing the target image. When the camera shoots target images, a traditional positioning and attitude determination system, such as a GPS together with MEMS sensors, can be mounted on the camera to record the traditional positioning pose information for each target image it shoots. Pose information comprises position information and attitude information: the position information is the spatial coordinates of the camera, and the attitude information is the angles through which the camera is rotated about the X, Y and Z axes when shooting the target image.
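For illustration, the pose information just described can be represented as follows (a minimal sketch; the patent does not prescribe a data layout, and the field names, units and example values here are assumptions):

```python
# Sketch of pose information: position as spatial coordinates plus attitude
# as rotation angles about the X, Y and Z axes (layout assumed, not
# specified by the patent).
from dataclasses import dataclass

@dataclass
class Pose:
    x: float       # camera position, spatial coordinates
    y: float
    z: float
    roll: float    # rotation about the X axis
    pitch: float   # rotation about the Y axis
    yaw: float     # rotation about the Z axis

# e.g. a coarse GPS/MEMS reading recorded when the target image was shot
coarse_pose = Pose(x=120.5, y=36.1, z=12.0, roll=1.2, pitch=-0.4, yaw=87.0)
```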
And S102, determining a virtual image matched with the target image according to the traditional positioning pose information and the pre-stored point cloud data.
In a specific implementation, the three-dimensional live scene is obtained by recording the scene completely with a professional camera, and the point cloud data are obtained either by acquiring images of the real scene or by scanning the real scene with a laser radar. The point cloud data may include the pose information of every discrete point and the RGB values of every discrete point. A three-dimensional engine can render a corresponding virtual image from the point cloud data.
Here, when searching for the plurality of virtual images corresponding to the traditional positioning pose information, a position coordinate range may be determined with the position coordinate corresponding to the traditional positioning pose information as the center and a preset distance as the radius, and an attitude range may be determined from the attitude corresponding to the traditional positioning pose information. The point cloud data whose corresponding positions fall within the position coordinate range and whose attitudes fall within the attitude range can then be searched out of the stored point cloud data and rendered into a plurality of virtual images; these are the virtual images corresponding to the traditional positioning pose information.
After the plurality of virtual images are found, a plurality of target image feature points are extracted from the target image by digital image processing, a plurality of virtual image feature points are extracted from the virtual images, the target image feature points are matched against the virtual image feature points, and at least one virtual image matching the target image is determined from the plurality of virtual images, as sketched below.
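The patent does not name a specific feature detector or matching strategy; the following sketch uses ORB features and brute-force matching from OpenCV as one plausible stand-in:

```python
# Sketch: extract feature points from the target image and each virtual
# image, then keep the virtual images that share enough matched features.
# ORB + brute-force Hamming matching are assumptions, not the patent's
# prescribed method.
import cv2

def matching_virtual_images(target_img, virtual_imgs, min_matches=10):
    orb = cv2.ORB_create()
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    _, target_desc = orb.detectAndCompute(target_img, None)
    results = []
    for virtual_img in virtual_imgs:
        _, virtual_desc = orb.detectAndCompute(virtual_img, None)
        if target_desc is None or virtual_desc is None:
            continue
        matches = matcher.match(target_desc, virtual_desc)
        if len(matches) >= min_matches:      # enough shared feature points
            results.append((virtual_img, matches))
    return results  # one target image can match at least one virtual image
```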
S103, acquiring pose information of the camera shooting the real scene corresponding to the virtual image.
In step S102, the three-dimensional engine renders the plurality of virtual images from the point cloud data. Specifically, within the range determined by the traditional positioning pose information, any pose is selected, the point cloud data corresponding to that pose are acquired, and the three-dimensional engine renders the acquired point cloud data into a virtual image, i.e., the virtual image corresponding to the selected pose. Because there are multiple poses within the range determined by the traditional positioning pose information, the traditional positioning pose information corresponds to multiple virtual images, each with its own corresponding pose.
Here, since the point cloud data come from image acquisition of the real scene or from laser radar scanning of the real scene, a virtual image rendered from the point cloud data is equivalent to an image the camera could have shot of the real scene, and the pose corresponding to the virtual image is the pose of the camera when shooting the real scene corresponding to that virtual image.
And S104, determining the pose information when the camera shoots the target image according to the pose information when the camera shoots the real scene corresponding to the virtual image.
Here, the position information of the camera when shooting the target image may be determined from the pose information of the camera when shooting the real scene corresponding to the virtual image, and so may the attitude information. In actual implementation, depending on the application, only the position information of the camera when shooting the target image may be determined, or the position information and the attitude information may be determined at the same time.
In a specific implementation, the space coordinates of the virtual image feature points may be determined from the pose information of the camera when shooting the real scene corresponding to the virtual image and from the pixel coordinates of the virtual image feature points; since the target image feature points and the virtual image feature points are in one-to-one correspondence, the space coordinates of a virtual image feature point can be used as the space coordinates of the target image feature point matched with it. The pose information of the camera when shooting the target image is then determined from the space coordinates of the target image feature points, their pixel coordinates, and the shooting parameter information of the camera when shooting the target image, as sketched below.
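In effect this is a perspective-n-point (PnP) problem: given the pixel coordinates of the target image feature points, their recovered space coordinates, and the camera's shooting parameters, solve for the camera pose. A minimal sketch using OpenCV's generic PnP solver, an assumed stand-in for the collinearity-equation solution described later:

```python
# Sketch: solve for the camera pose from matched feature points.
# cv2.solvePnP is used as a stand-in solver; the patent itself describes a
# collinearity-equation solution.
import cv2
import numpy as np

def solve_camera_pose(space_coords, pixel_coords, camera_matrix):
    """space_coords: (N, 3) space coordinates of target image feature points;
    pixel_coords: (N, 2) their pixel coordinates;
    camera_matrix: 3x3 intrinsic matrix from the shooting parameters."""
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(space_coords, dtype=np.float64),
        np.asarray(pixel_coords, dtype=np.float64),
        camera_matrix,
        None,                        # assume no lens distortion
    )
    if not ok:
        raise RuntimeError("pose could not be solved")
    R, _ = cv2.Rodrigues(rvec)               # attitude as a rotation matrix
    position = (-R.T @ tvec).ravel()          # camera position in space
    return position, R
```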
In addition, from the accurate position information and attitude information, the spatial position information of any object in the target image can be back-calculated, namely:
and S105, determining the spatial position information of the target object according to the pose information when the camera shoots the target image and the pixel coordinate information of the target object in the target image.
As an example application of the positioning and attitude determination method provided by the embodiments of the application: when a user in distress asks for help, the person seeking rescue shoots an image with a camera and transmits the target image to the server; the server solves for the pose information of that person using the method provided by the embodiments of the application and transmits it to the rescue workers, so that the rescue can be carried out smoothly. The positioning and attitude determination method can also be used in the military field, in surveying and mapping of geographic information, and so on.
As shown in fig. 2, in step S102, a virtual image matched with the target image is determined according to the traditional positioning pose information and the pre-stored point cloud data, and the specific method is as follows:
S201, acquiring a plurality of virtual images according to the traditional positioning pose information and pre-stored point cloud data; for each virtual image, the difference between the pose at which the corresponding real scene was shot (as obtained by traditional positioning) and the pose indicated by the traditional positioning pose information meets a preset condition;
S202, extracting a plurality of target image feature points from the target image, and extracting a plurality of virtual image feature points from each obtained virtual image;
and S203, matching the target image feature points with the virtual image feature points to determine at least one virtual image matched with the target image from the plurality of virtual images.
Here, from the point cloud data matched with the traditional positioning pose information, the three-dimensional engine may render a plurality of virtual images corresponding to the traditional positioning pose information. The server extracts a plurality of target image feature points from the target image and a plurality of virtual image feature points from each virtual image by digital image processing; after the feature points are obtained, it searches the virtual image feature points for at least one that matches a target image feature point, and determines the virtual images matching the target image through this feature-point matching. As long as a virtual image has at least one virtual image feature point identical to a feature point of the target image, it is a virtual image matching the target image, so one target image corresponds to at least one virtual image.
A feature point is a pixel region formed by a plurality of pixel points; it reflects the essential characteristics of an image and can identify a target object in the image, so image matching can be completed by matching feature points, and each image can have more than one feature point. The virtual image feature points may already have been extracted while the three-dimensional engine rendered the virtual images, and stored as one type of parameter information of the three-dimensional live scene, so that each virtual image is in one-to-one correspondence with its feature point information; alternatively, feature point extraction may be performed on the virtual images after they have been acquired through the traditional positioning pose information and the point cloud data. The embodiments of the present application are described using the latter approach.
As shown in fig. 3, acquiring a plurality of virtual images according to the traditional positioning pose information and the pre-stored point cloud data includes:
S301, searching, according to the traditional positioning pose information, the pre-stored point cloud data for point cloud data matched with the traditional positioning pose information; the position corresponding to the matched point cloud data falls within an area centered on the position coordinate corresponding to the traditional positioning pose information, with a first preset threshold as the radius, and the difference between the attitude corresponding to the matched point cloud data and the attitude corresponding to the traditional positioning pose information is smaller than a second preset threshold;
and S302, based on the point cloud data matched with the traditional positioning pose information, performing image rendering to obtain a plurality of virtual images.
The pre-stored point cloud data have corresponding poses, so point cloud data matched with the traditional positioning pose information can be searched out of the pre-stored point cloud data according to the traditional positioning pose information. Specifically, if the position corresponding to some point cloud data falls within the area centered on the position coordinate corresponding to the traditional positioning pose information with the first preset threshold as the radius, and the difference between the attitude corresponding to those point cloud data and the attitude (e.g., the shooting angle) corresponding to the traditional positioning pose information is smaller than the second preset threshold, then those point cloud data match the traditional positioning pose information, as sketched below.
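A sketch of the matching rule just described (the threshold values, and a per-point pose record in the point cloud, are illustrative assumptions):

```python
# Sketch: keep the point cloud data whose position lies within a radius
# (first preset threshold) of the coarse position and whose attitude differs
# from the coarse attitude by less than the second preset threshold.
import numpy as np

def match_point_cloud(positions, attitudes, coarse_pos, coarse_att,
                      first_threshold=30.0, second_threshold=15.0):
    """positions: (N, 3) point positions; attitudes: (N, 3) point attitudes."""
    pos_ok = np.linalg.norm(positions - coarse_pos, axis=1) <= first_threshold
    att_ok = np.abs(attitudes - coarse_att).max(axis=1) < second_threshold
    keep = pos_ok & att_ok
    return positions[keep], attitudes[keep]
```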
According to the point cloud data matched with the traditional positioning pose information, the three-dimensional engine can render a plurality of virtual images corresponding to the traditional positioning pose information.
As shown in fig. 4, in step S104, the pose information of the camera shooting the target image is determined according to the pose information of the camera shooting the real scene corresponding to the virtual image, and the specific method is as follows:
s401, aiming at each virtual image feature point successfully matched with the target image feature point, determining a spatial coordinate of the virtual image feature point according to a pixel coordinate of the virtual image feature point and pose information when a camera shoots a real scene corresponding to the virtual image feature point, and taking the spatial coordinate as the spatial coordinate of the target image feature point matched with the virtual image feature point;
s402, determining pose information when the camera shoots the target image according to the pixel coordinates and the space coordinates of the plurality of target image feature points of the target image and the shooting parameter information when the camera shoots the target image.
The relation among the shooting center point, an image point and its object point is given by the collinearity equations. Here the object point is determined from the image point and the shooting center point; that is, the space coordinates of a virtual image feature point can be determined from the pixel coordinates of that feature point and the pose information of the camera when shooting the real scene corresponding to it. Since the virtual image feature points and the target image feature points used here correspond one to one, the space coordinates of a virtual image feature point are the space coordinates of the target image feature point matched with it.
The relation among the shooting center point, image points and object points is again given by the collinearity equations, through which the pose information of the camera shooting the target image can be determined: first the pixel coordinates of the target image feature points are extracted from the target image, and then, from these pixel coordinates, the space coordinates of the feature points, and the parameters of the camera that shot the target image, the pose information of the camera is calculated through the collinearity equations.
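For reference, the standard collinearity equations of photogrammetry relate an image point (x, y) to its object point (X, Y, Z) through the camera position (X_S, Y_S, Z_S), the interior orientation (principal point (x_0, y_0) and focal length f), and the elements a_i, b_i, c_i of the rotation matrix built from the attitude angles:

$$
x - x_0 = -f\,\frac{a_1 (X - X_S) + b_1 (Y - Y_S) + c_1 (Z - Z_S)}{a_3 (X - X_S) + b_3 (Y - Y_S) + c_3 (Z - Z_S)}, \qquad
y - y_0 = -f\,\frac{a_2 (X - X_S) + b_2 (Y - Y_S) + c_2 (Z - Z_S)}{a_3 (X - X_S) + b_3 (Y - Y_S) + c_3 (Z - Z_S)}
$$

Each matched feature point contributes one such pair of equations; with several feature points, the camera position and the attitude angles are the unknowns solved for.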
It should be noted that if the target image has more than one feature point, more than one set of camera pose information can be obtained, and the most accurate set can be calculated by least-squares adjustment of the collinearity equations.
The positioning and attitude determination method provided by the embodiments of the application determines pose information with higher precision than traditional positioning and attitude determination. On the basis of this higher-precision pose information, a high-precision map can be created, so that a driverless car runs safely and conveniently; people in distress beyond the line of sight can be rescued; and landforms and geographic positions can be accurately reconstructed.
Based on the same inventive concept, an embodiment of the present application further provides a positioning and attitude determination device corresponding to the above positioning and attitude determination method. Because the principle by which the device solves the problem is similar to that of the method, the implementation of the device can refer to the implementation of the method, and repeated details are not repeated. As shown in fig. 5, which is a schematic structural diagram of a positioning and attitude determination device provided in an embodiment of the present application, the device includes:
the first acquisition module 11 is configured to acquire a target image and the traditional positioning pose information of the camera when the camera captures the target image;
the second acquisition module 12 is configured to determine a virtual image matched with the target image according to the traditional positioning pose information and pre-stored point cloud data;
the third obtaining module 13 is configured to acquire the pose information of the camera when it captures the real scene corresponding to the virtual image;
the first determining module 14 is configured to determine pose information when the camera shoots a target image according to the pose information when the camera shoots a real scene corresponding to the virtual image;
and the second determining module 15 is configured to determine spatial position information of the target object according to the pose information of the target image captured by the camera and the pixel coordinate information of the target object in the target image.
In a specific implementation, as shown in fig. 6, the second obtaining module 12 includes:
the searching unit 21 is configured to obtain a plurality of virtual images according to the traditional positioning pose information and pre-stored point cloud data; for each virtual image, the difference between the pose at which the corresponding real scene was shot (as obtained by traditional positioning) and the pose indicated by the traditional positioning pose information meets a preset condition;
an extracting unit 22, configured to extract a plurality of target image feature points from the target image, and extract a plurality of virtual image feature points from each acquired virtual image;
a determining unit 23, configured to determine at least one virtual image matching the target image from the plurality of virtual images by matching the target image feature points with the virtual image feature points.
In a specific implementation, as shown in fig. 7, the search unit 21 includes:
a data searching subunit 31, configured to search, according to the traditional positioning pose information, the pre-stored point cloud data for point cloud data matched with the traditional positioning pose information; the position corresponding to the matched point cloud data falls within an area centered on the position coordinate corresponding to the traditional positioning pose information, with a first preset threshold as the radius, and the difference between the attitude corresponding to the matched point cloud data and the attitude corresponding to the traditional positioning pose information is smaller than a second preset threshold;
and an image determining subunit 32, configured to perform image rendering to obtain a plurality of virtual images based on the found point cloud data matched with the traditional positioning pose information.
In a specific implementation, as shown in fig. 8, the first determining module 14 includes:
a coordinate determining unit 41, configured to determine, for each virtual image feature point successfully matched with the target image feature point, a spatial coordinate of the virtual image feature point according to a pixel coordinate of the virtual image feature point and pose information of a camera capturing a real scene corresponding to the virtual image feature point, and use the spatial coordinate as the spatial coordinate of the target image feature point matched with the virtual image feature point;
and a pose determining unit 42, configured to determine the pose information of the camera when it captures the target image according to the pixel coordinates and space coordinates of the plurality of target image feature points of the target image and the shooting parameter information of the camera when capturing the target image.
As shown in fig. 9, a schematic structural diagram of a server provided in the embodiment of the present application includes: a processor 901, a memory 902 and a bus 903, the memory 902 storing machine readable instructions executable by the processor 901, the processor 901 and the memory 902 communicating via the bus 903 when the server is running, the machine readable instructions when executed by the processor 901 performing the following:
the method comprises the steps that a server obtains a target image and traditional positioning pose information of a camera when the camera shoots the target image;
determining a virtual image matched with the target image according to the traditional positioning pose information and prestored point cloud data;
acquiring pose information when the camera shoots the real scene corresponding to the virtual image;
and determining the pose information when the camera shoots the target image according to the pose information when the camera shoots the real scene corresponding to the virtual image.
In the processing executed by the processor 901, determining a virtual image matched with the target image according to the traditional positioning pose information and the pre-stored point cloud data includes:
acquiring a plurality of virtual images according to the traditional positioning pose information and pre-stored point cloud data; for each virtual image, the difference between the pose at which the corresponding real scene was shot (as obtained by traditional positioning) and the pose indicated by the traditional positioning pose information meets a preset condition;
extracting a plurality of target image feature points from the target image, and extracting a plurality of virtual image feature points from each obtained virtual image;
and determining at least one virtual image matched with the target image from the plurality of virtual images by matching the target image feature points with the virtual image feature points.
In the processing executed by the processor 901, a plurality of virtual images are acquired according to the traditional positioning pose information and the pre-stored point cloud data, including:
searching, according to the traditional positioning pose information, the pre-stored point cloud data for point cloud data matched with the traditional positioning pose information; the position corresponding to the matched point cloud data falls within an area centered on the position coordinate corresponding to the traditional positioning pose information, with a first preset threshold as the radius, and the difference between the attitude corresponding to the matched point cloud data and the attitude corresponding to the traditional positioning pose information is smaller than a second preset threshold;
and based on the found point cloud data matched with the traditional positioning pose information, performing image rendering to obtain a plurality of virtual images.
In the processing executed by the processor 901, determining the pose information when the camera shoots the target image according to the pose information when the camera shoots the real scene corresponding to the virtual image includes:
for each virtual image feature point successfully matched with the target image feature point, determining the spatial coordinate of the virtual image feature point according to the pixel coordinate of the virtual image feature point and the pose information of the camera shooting the real scene corresponding to the virtual image feature point, and taking the spatial coordinate as the spatial coordinate of the target image feature point matched with the virtual image feature point;
and determining pose information when the camera shoots the target image according to the pixel coordinates and the space coordinates of the plurality of target image feature points of the target image and the shooting parameter information when the camera shoots the target image.
In a specific implementation, the processing executed by the processor 901 further includes:
and determining the spatial position information of the target object according to the pose information when the camera shoots the target image and the pixel coordinate information of the target object in the target image.
Embodiments of the present application also provide a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, performs the steps of the positioning and attitude determination method.
Specifically, the storage medium may be a general storage medium, such as a removable disk or a hard disk. When the computer program on the storage medium is run, the above positioning and attitude determination method can be executed, so that pose information is obtained without being affected by satellite signals, atmospheric propagation delay or acceleration measurement errors, improving positioning accuracy.
The computer program product of the positioning and attitude determination method provided in the embodiments of the present disclosure includes a computer-readable storage medium storing program code, where the instructions included in the program code may be used to execute the method described in the foregoing method embodiments; for specific implementation, refer to the method embodiments, which are not described herein again.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above is only a specific embodiment of the present disclosure, but the scope of the present disclosure is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the present disclosure, and shall be covered by the scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (6)

1. A positioning and attitude determination method, the method comprising:
the method comprises the steps that a server obtains a target image and traditional positioning pose information of a camera when the camera shoots the target image;
determining a virtual image matched with the target image according to the traditional positioning pose information and prestored point cloud data;
acquiring pose information when a camera shoots a real scene corresponding to the virtual image;
determining pose information when the camera shoots the target image according to the pose information when the camera shoots the real scene corresponding to the virtual image;
the determining a virtual image matched with the target image according to the traditional positioning pose information and pre-stored point cloud data comprises the following steps:
acquiring a plurality of virtual images according to the traditional positioning pose information and pre-stored point cloud data; for each virtual image, the difference between the pose at which the corresponding real scene was shot (as obtained by traditional positioning) and the pose indicated by the traditional positioning pose information meets a preset condition;
extracting a plurality of target image feature points from the target image, and extracting a plurality of virtual image feature points from each obtained virtual image;
determining at least one virtual image matching the target image from the plurality of virtual images by matching the target image feature points with the virtual image feature points;
the determining the pose information of the camera shooting the target image according to the pose information of the camera shooting the real scene corresponding to the virtual image comprises:
for each virtual image feature point successfully matched with the target image feature point, determining a spatial coordinate of the virtual image feature point according to the pixel coordinate of the virtual image feature point and pose information of a camera shooting a real scene corresponding to the virtual image feature point, and taking the spatial coordinate as the spatial coordinate of the target image feature point matched with the virtual image feature point;
and determining pose information when the camera shoots the target image according to the pixel coordinates and the space coordinates of the plurality of target image feature points of the target image and the shooting parameter information when the camera shoots the target image.
2. The method of claim 1, wherein acquiring a plurality of virtual images according to the traditional positioning pose information and pre-stored point cloud data comprises:
searching, according to the traditional positioning pose information, the pre-stored point cloud data for point cloud data matched with the traditional positioning pose information; the position corresponding to the matched point cloud data falls within an area centered on the position coordinate corresponding to the traditional positioning pose information, with a first preset threshold as the radius, and the difference between the attitude corresponding to the matched point cloud data and the attitude corresponding to the traditional positioning pose information is smaller than a second preset threshold;
and based on the found point cloud data matched with the traditional positioning pose information, performing image rendering to obtain a plurality of virtual images.
3. The method of claim 1, further comprising:
and determining the spatial position information of the target object according to the pose information of the camera shooting the target image and the pixel coordinate information of the target object in the target image.
4. A positioning and attitude determination apparatus, characterized in that the apparatus comprises:
the first acquisition module is used for acquiring a target image and traditional positioning pose information of the camera when the camera shoots the target image;
the second acquisition module is used for determining a virtual image matched with the target image according to the traditional positioning pose information and pre-stored point cloud data;
the third acquisition module is used for acquiring pose information when the camera shoots a real scene corresponding to the virtual image;
the first determining module is used for determining the pose information when the camera shoots the target image according to the pose information when the camera shoots the real scene corresponding to the virtual image;
the second acquisition module includes:
the searching unit is configured to acquire a plurality of virtual images according to the traditional positioning pose information and pre-stored point cloud data; for each virtual image, the difference between the pose at which the corresponding real scene was shot (as obtained by traditional positioning) and the pose indicated by the traditional positioning pose information meets a preset condition;
the extraction unit is configured to extract a plurality of target image feature points from the target image and extract a plurality of virtual image feature points from each acquired virtual image;
a determining unit configured to determine at least one virtual image matching the target image from the plurality of virtual images by matching the target image feature points with the virtual image feature points;
the first determining module includes:
the coordinate determination unit is used for determining the space coordinate of each virtual image feature point successfully matched with the target image feature point according to the pixel coordinate of the virtual image feature point and the pose information of the camera shooting the real scene corresponding to the virtual image feature point, and taking the space coordinate as the space coordinate of the target image feature point matched with the virtual image feature point;
and the pose determining unit is used for determining pose information when the target image is shot by the camera according to the pixel coordinates and the space coordinates of the plurality of target image feature points of the target image and the shooting parameter information when the target image is shot by the camera.
5. The apparatus of claim 4, wherein the lookup unit comprises:
the data searching subunit is configured to search, according to the traditional positioning pose information, the pre-stored point cloud data for point cloud data matched with the traditional positioning pose information; the position corresponding to the matched point cloud data falls within an area centered on the position coordinate corresponding to the traditional positioning pose information, with a first preset threshold as the radius, and the difference between the attitude corresponding to the matched point cloud data and the attitude corresponding to the traditional positioning pose information is smaller than a second preset threshold;
and the image determining subunit is used for performing image rendering to obtain a plurality of virtual images based on the searched point cloud data matched with the traditional positioning pose information.
6. The apparatus of claim 4, further comprising:
and the second determining module is used for determining the spatial position information of the target object according to the pose information when the camera shoots the target image and the pixel coordinate information of the target object in the target image.
CN201810792893.6A 2018-07-18 2018-07-18 Positioning and attitude determining method and device Active CN108871314B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810792893.6A CN108871314B (en) 2018-07-18 2018-07-18 Positioning and attitude determining method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810792893.6A CN108871314B (en) 2018-07-18 2018-07-18 Positioning and attitude determining method and device

Publications (2)

Publication Number Publication Date
CN108871314A CN108871314A (en) 2018-11-23
CN108871314B true CN108871314B (en) 2021-08-17

Family

ID=64303198

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810792893.6A Active CN108871314B (en) 2018-07-18 2018-07-18 Positioning and attitude determining method and device

Country Status (1)

Country Link
CN (1) CN108871314B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109903337B (en) * 2019-02-28 2022-06-14 北京百度网讯科技有限公司 Method and apparatus for determining pose of bucket of excavator
JP7307908B2 (en) 2019-03-29 2023-07-13 成典 田中 Feature management system
CN112348887A (en) * 2019-08-09 2021-02-09 华为技术有限公司 Terminal pose determining method and related device
CN110634161B (en) * 2019-08-30 2023-05-05 哈尔滨工业大学(深圳) Rapid high-precision estimation method and device for workpiece pose based on point cloud data
CN113155097B (en) * 2020-01-22 2024-01-26 台达电子工业股份有限公司 Dynamic tracking system with pose compensation function and pose compensation method thereof
CN111415388B (en) * 2020-03-17 2023-10-24 Oppo广东移动通信有限公司 Visual positioning method and terminal
CN111127551A (en) * 2020-03-26 2020-05-08 北京三快在线科技有限公司 Target detection method and device
CN111964673A (en) * 2020-08-25 2020-11-20 一汽解放汽车有限公司 Unmanned vehicle positioning system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103528568A (en) * 2013-10-08 2014-01-22 北京理工大学 Wireless channel based target pose image measuring method
CN104204726A (en) * 2012-03-06 2014-12-10 日产自动车株式会社 Moving-object position/attitude estimation apparatus and method for estimating position/attitude of moving object
CN108076280A (en) * 2016-11-11 2018-05-25 北京佳艺徕经贸有限责任公司 A kind of image sharing method and device based on image identification

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103226838A (en) * 2013-04-10 2013-07-31 福州林景行信息技术有限公司 Real-time spatial positioning method for mobile monitoring target in geographical scene
CN104748738B (en) * 2013-12-31 2018-06-15 深圳先进技术研究院 Indoor positioning air navigation aid and system
CN103955948B (en) * 2014-04-03 2016-10-05 西北工业大学 A kind of space movement target detection method under dynamic environment
US9955059B2 (en) * 2014-10-29 2018-04-24 Kabushiki Kaisha Toshiba Electronic device, method, and computer program product
CN104596502B (en) * 2015-01-23 2017-05-17 浙江大学 Object posture measuring method based on CAD model and monocular vision
CN105225240B (en) * 2015-09-25 2017-10-03 哈尔滨工业大学 The indoor orientation method that a kind of view-based access control model characteristic matching is estimated with shooting angle
CN105354296B (en) * 2015-10-31 2018-06-29 广东欧珀移动通信有限公司 A kind of method of locating terminal and user terminal
US10127685B2 (en) * 2015-12-16 2018-11-13 Objectvideo Labs, Llc Profile matching of buildings and urban structures
WO2017127711A1 (en) * 2016-01-20 2017-07-27 Ez3D, Llc System and method for structural inspection and construction estimation using an unmanned aerial vehicle
CN105869216A (en) * 2016-03-29 2016-08-17 腾讯科技(深圳)有限公司 Method and apparatus for presenting object target
CN106780601B (en) * 2016-12-01 2020-03-27 北京未动科技有限公司 Spatial position tracking method and device and intelligent equipment
CN107015642A (en) * 2017-03-13 2017-08-04 武汉秀宝软件有限公司 A kind of method of data synchronization and system based on augmented reality

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104204726A (en) * 2012-03-06 2014-12-10 日产自动车株式会社 Moving-object position/attitude estimation apparatus and method for estimating position/attitude of moving object
CN103528568A (en) * 2013-10-08 2014-01-22 北京理工大学 Wireless channel based target pose image measuring method
CN108076280A (en) * 2016-11-11 2018-05-25 北京佳艺徕经贸有限责任公司 A kind of image sharing method and device based on image identification

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"机载激光雷达点云与定位定姿系统数据辅助的航空影像自动匹配方法";张永军等;《测绘学报》;20140430;第43卷(第4期);380-388 *

Also Published As

Publication number Publication date
CN108871314A (en) 2018-11-23

Similar Documents

Publication Publication Date Title
CN108871314B (en) Positioning and attitude determining method and device
CN109003305B (en) Positioning and attitude determining method and device
CN111436216B (en) Method and system for color point cloud generation
CN107727079B (en) Target positioning method of full-strapdown downward-looking camera of micro unmanned aerial vehicle
EP2133662B1 (en) Methods and system of navigation using terrain features
CN109211103B (en) Estimation system
CN107980138B (en) False alarm obstacle detection method and device
CN110146096B (en) Vehicle positioning method and device based on image perception
US20150234055A1 (en) Aerial and close-range photogrammetry
KR20170115778A (en) Method and apparatus for generating road surface, method and apparatus for processing point cloud data, computer program and computer readable recording medium
KR101442703B1 (en) GPS terminal and method for modifying location position
KR101942288B1 (en) Apparatus and method for correcting information of position
WO2020039937A1 (en) Position coordinates estimation device, position coordinates estimation method, and program
KR20190045220A (en) Magnetic position estimation method and magnetic position estimation apparatus
KR102239562B1 (en) Fusion system between airborne and terrestrial observation data
WO2011024116A2 (en) System and method for virtual range estimation
US20220113139A1 (en) Object recognition device, object recognition method and program
CN113340272B (en) Ground target real-time positioning method based on micro-group of unmanned aerial vehicle
JP6698430B2 (en) Measuring device, measuring method and program
CN112689234B (en) Indoor vehicle positioning method, device, computer equipment and storage medium
CN108981700B (en) Positioning and attitude determining method and device
CN107323677B (en) Unmanned aerial vehicle auxiliary landing method, device, equipment and storage medium
KR101459522B1 (en) Location Correction Method Using Additional Information of Mobile Instrument
CN113375679A (en) Lane level positioning method, device and system and related equipment
CN113252066A (en) Method and device for calibrating parameters of odometer equipment, storage medium and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant