WO2020107931A1 - Pose information determination method and apparatus, visual point cloud construction method and apparatus - Google Patents
- Publication number
- WO2020107931A1 (PCT/CN2019/099207)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- pose information
- translation parameters
- relative
- acquisition device
- acquiring
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C11/00—Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
- G01C11/04—Interpretation of pictures
- G01C11/06—Interpretation of pictures by comparison of two or more pictures of the same area
- G01C11/12—Interpretation of pictures by comparison of two or more pictures of the same area the pictures being supported in the same relative position as when they were taken
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30244—Camera pose
Definitions
- the present application relates to the field of computer vision. Specifically, the present application relates to a pose information determination method, a pose information determination apparatus, a visual point cloud construction method and a visual point cloud construction apparatus, an electronic device, and a computer-readable storage medium.
- the map is the foundation of the unmanned driving field.
- for monocular camera SLAM, the scale ambiguity of the monocular camera makes it impossible to construct a vector map with a uniform global scale; moreover, when monocular SLAM tracks images across multiple frames, scale drift easily accumulates errors in the tracking results, eventually leading to tracking failure.
- in existing approaches, the real three-dimensional scale of each map point is obtained directly at every moment, or a high-precision inertial measurement unit (IMU) is integrated, and the real-scale linear acceleration is directly measured by the IMU module and integrated to obtain real-scale pose information between frames.
- although the real scale of inter-frame images can be obtained using binocular vision technology or an IMU module, the high sensor cost, high computational cost, high production cost, complex calibration, and complicated algorithms, together with the cost and large size of the IMU itself, pose a huge obstacle to the use of visual point clouds.
- the embodiments of the present application provide a pose information determination method, a pose information determination apparatus, a visual point cloud construction method and apparatus, an electronic device, and a computer-readable storage medium, which can determine the pose information of an image acquisition device at low cost, with high accuracy, and in a wide range of settings.
- a method for determining pose information includes: determining relative pose information of an image acquisition device when acquiring a current frame image relative to when acquiring a previous frame image; determining, by a sensor with an absolute scale, a first set of translation parameters of the motion of the image acquisition device between acquiring the current frame and the previous frame image; adjusting the relative pose information based on the relative pose information and the first set of translation parameters; and determining, based on the adjusted relative pose information, the pose information of the image acquisition device when acquiring the current frame image.
- a method for constructing a visual point cloud includes: acquiring the pose information of the image acquisition device through the above pose information determination method; and constructing a visual point cloud based on the pose information of the image acquisition device.
- a pose information determination apparatus includes: a relative pose information determination unit for determining relative pose information of an image acquisition device when acquiring a current frame image relative to when acquiring a previous frame image; a relative displacement parameter acquisition unit for determining, by a sensor with an absolute scale, a first set of translation parameters of the image acquisition device during the acquisition of the current frame and the previous frame image; a relative pose information adjustment unit for adjusting the relative pose information based on the relative pose information and the first set of translation parameters; and a pose information determination unit for determining, based on the adjusted relative pose information, the pose information of the image acquisition device when acquiring the current frame image.
- a visual point cloud construction device includes: a relative pose information determination unit for determining relative pose information of an image acquisition device when acquiring a current frame image relative to when acquiring a previous frame image; a relative displacement parameter acquisition unit for determining, by a sensor with an absolute scale, a first set of translation parameters of the image acquisition device during the acquisition of the current frame and the previous frame image; a relative pose information adjustment unit for adjusting the relative pose information based on the relative pose information and the first set of translation parameters; a pose information determination unit for determining, based on the adjusted relative pose information, the pose information of the image acquisition device when acquiring the current frame image; and a visual point cloud construction unit for constructing a visual point cloud based on the pose information of the image acquisition device when acquiring the current frame image.
- an electronic device includes a processor and a memory in which computer program instructions are stored; the computer program instructions, when executed by the processor, cause the processor to execute the above-mentioned pose information determination method or the above-mentioned visual point cloud construction method.
- a computer-readable storage medium on which instructions for executing the above-mentioned pose information determination method or the above-mentioned visual point cloud construction method are stored.
- with the pose information determination method and apparatus, the visual point cloud construction method and apparatus, the electronic device, and the computer-readable storage medium described above, more accurate pose information of the image acquisition device when acquiring the current frame image can be obtained by determining the relative pose information, the first set of translation parameters, and the adjustment coefficient between acquiring the current frame and the previous frame image. By directly using the scale of an external sensor to scale the translation vector of the pose information of the image acquisition device, more accurate pose information is obtained; the algorithm framework is not affected by changes in sensor configuration, sensor and computation costs are reduced, and the difficulty of deploying a monocular vision system is further reduced.
- FIG. 1 illustrates a schematic diagram of an application scenario of a method for determining pose information according to an embodiment of the present application
- FIG. 2 illustrates a flowchart of a method for determining pose information according to an embodiment of the present application.
- FIG. 3 illustrates a schematic diagram of an apparatus for determining pose information according to an embodiment of the present application.
- FIG. 4 illustrates a block diagram of an electronic device according to an embodiment of the application.
- the pose information of the camera and thus the current position of the camera is very important.
- the cost of obtaining accurate pose information of the camera and the current position is relatively high. Therefore, an improved method for determining pose information is needed to reduce the cost of obtaining accurate pose information of the camera.
- the basic idea of this application is to propose a pose information determination method, a pose information determination apparatus, a visual point cloud construction method and apparatus, an electronic device, and a computer-readable storage medium which, by directly using the scale of an external sensor (especially a scalar scale) to scale the translation vector of the image acquisition device, reduce cost and make the deployment of a monocular vision system less difficult.
- the pose information determination method and apparatus of the present application can obtain more accurate pose information without using high-precision sensors or excessive manual intervention, and then obtain a globally consistent visual point cloud and thereby establish a high-precision vector map, thus reducing the production cost of high-precision maps.
- FIG. 1 illustrates a schematic diagram of an application scenario of a pose information determination method according to an embodiment of the present application.
- the vehicle 10 may include an image acquisition device, such as an on-board camera 12, which may be a commonly used monocular camera, a binocular camera, or a camera with more lenses.
- although FIG. 1 shows the in-vehicle camera 12 installed on the top of the vehicle 10, it should be understood that the in-vehicle camera may also be installed in other positions of the vehicle 10, such as the front portion, the front windshield, and so on.
- the coordinate system shown in FIG. 1 is the local coordinate system of the vehicle camera (X_c, Y_c, Z_c), where the Z_c axis points along the optical axis of the vehicle camera, the Y_c axis points downward perpendicular to the Z_c axis, and the X_c axis is perpendicular to both the Y_c and Z_c axes.
- the vehicle 10 may include a pose information determination device 14 that can communicate with the image acquisition device and be used to perform the pose information determination method provided by the present application.
- the vehicle-mounted camera 12 continuously captures video images while the vehicle 10 is running; the pose information determination device 14 obtains the images captured by the vehicle-mounted camera 12, and determines, from the relative pose information, the first set of translation parameters, and the adjustment coefficient of the on-board camera 12 between acquiring the current frame and the previous frame images, the pose information of the on-board camera 12 when acquiring the current frame image.
- the pose information determination method proposed by the present application is executed by the pose information determination device 14 to determine the pose of the in-vehicle camera 12, and then locate the in-vehicle camera 12.
- the method 100 for determining pose information according to the present application includes the following steps:
- Step S110 Determine the relative pose information of the image acquisition device when acquiring the current frame image relative to when acquiring the previous frame image.
- the image acquisition device may be, for example, a camera, a video camera, or the like.
- the camera may be a commonly used monocular camera, a binocular camera, or a camera with more lenses.
- any other type of camera known in the art and likely to appear in the future can be applied to the present application, and the method of capturing images is not particularly limited in this application, as long as a clear image can be obtained.
- the image data collected by the camera may be, for example, a sequence of continuous image frames (ie, a video stream) or a sequence of discrete image frames (ie, an image data group sampled at a predetermined sampling time point).
- the previous frame image acquired by the image acquisition device refers to the frame immediately before the current frame image, the penultimate frame before the current frame image, or any frame before the current frame image.
- the previous frame image and the current frame image may be separated by one frame, two frames, or any number of frames.
- in some embodiments, the previous frame image refers to the frame immediately before the current frame image; selecting the immediately preceding frame as the previous frame can reduce calculation errors.
- the pose information of the image acquisition device when acquiring the current frame image includes the rotation matrix R and the translation vector t, where the translation vector t is a 3×1 vector representing the position of the image acquisition device relative to the origin, and the rotation matrix R is a 3×3 matrix representing the attitude of the image acquisition device at that time. The rotation matrix R can also be expressed in the form of Euler angles (ψ, θ, φ), where ψ represents the yaw angle of rotation around the Y axis, θ represents the pitch angle of rotation around the X axis, and φ represents the roll angle of rotation around the Z axis.
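The rotation-matrix and Euler-angle representations above can be sketched in plain Python. Note the composition order R = R_Y(ψ)·R_X(θ)·R_Z(φ) and the function names are assumptions for illustration; the text does not fix a convention:

```python
import math

def mat_mul(A, B):
    """Multiply two 3x3 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def euler_to_matrix(yaw, pitch, roll):
    """Build R from yaw psi (about Y), pitch theta (about X), roll phi (about Z).

    The order R = Ry @ Rx @ Rz is one common choice, assumed here.
    """
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cr, sr = math.cos(roll), math.sin(roll)
    Ry = [[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]]   # yaw about Y
    Rx = [[1.0, 0.0, 0.0], [0.0, cp, -sp], [0.0, sp, cp]]   # pitch about X
    Rz = [[cr, -sr, 0.0], [sr, cr, 0.0], [0.0, 0.0, 1.0]]   # roll about Z
    return mat_mul(Ry, mat_mul(Rx, Rz))

R = euler_to_matrix(0.2, 0.0, 0.0)
# for a pure yaw, the angle is recoverable as atan2(R[0][2], R[0][0])
```

Any valid rotation matrix produced this way is orthonormal with determinant +1, which is what makes it a pure attitude (no scaling), in contrast to the translation vector t that carries the scale this application corrects.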
- the relative pose information of the image acquisition device when acquiring the current frame image relative to when acquiring the previous frame image refers to the relative change of the pose information of the image acquisition device when acquiring the current frame image with respect to its pose information when acquiring the previous frame image.
- the relative pose information of the image acquisition device when acquiring the current frame image relative to the previous frame image may be obtained through a visual odometer or a visual SLAM system, or calculated by relative pose calculation methods known in the art; for example, the relative pose information can also be obtained through an IMU.
- step S120 a first group of translation parameters of the movement of the image acquisition device during acquiring the current frame and the previous frame images is determined by a sensor with an absolute scale.
- the sensor with an absolute scale may be, for example, a wheel speed encoder, a speedometer, an odometer, or the like.
- the absolute scale is also called absolute position, and the sensor with absolute scale can measure the positional relationship relative to the real physical world.
- the first set of translation parameters of the image acquisition device, determined by the sensor with an absolute scale during the acquisition of the current frame and the previous frame image, is the displacement vector obtained by the sensor between the acquisition of the previous frame image and the current frame image.
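As an illustration of how an absolute-scale sensor such as a wheel speed encoder could yield the inter-frame displacement, the following is a minimal sketch that integrates speed samples between the two frame timestamps. The trapezoidal integration and the name `displacement_between_frames` are illustrative assumptions, not from the application; a real encoder pipeline would also handle tick counts and per-wheel averaging:

```python
def displacement_between_frames(timestamps, speeds, t_prev, t_curr):
    """Integrate wheel-speed samples (m/s) over [t_prev, t_curr] with the
    trapezoidal rule to estimate the distance travelled between two frames."""
    dist = 0.0
    for (t0, v0), (t1, v1) in zip(zip(timestamps, speeds),
                                  zip(timestamps[1:], speeds[1:])):
        if t1 <= t_prev or t0 >= t_curr:
            continue  # sample interval lies outside the frame interval
        # clip the sample interval to the frame interval
        a, b = max(t0, t_prev), min(t1, t_curr)
        dist += 0.5 * (v0 + v1) * (b - a)
    return dist

# e.g. 10 m/s held for 0.5 s between two frames -> 5.0 m of travel
```

This gives only the magnitude (translation distance); as the later sections explain, that magnitude is exactly what is needed to fix the scale of the visually estimated translation vector.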
- Step S130 Adjust the relative pose information based on the relative pose information and the first set of translation parameters.
- the relative pose information includes a second set of translation parameters, and the second set of translation parameters is a translation vector t in the relative pose information.
- adjusting the second set of translation parameters (that is, the translation vector in the relative pose information) based on the first set of translation parameters and the second set of translation parameters constitutes the adjustment of the relative pose information.
- the rotation matrix of the relative pose information can also be adjusted based on the rotation matrix of the relative pose information and the first set of translation parameters.
- Step S140 based on the adjusted relative pose information, determine the pose information of the image acquisition device when acquiring the current frame image.
- determining the pose information of the image acquisition device for the current frame based on the adjusted relative pose information includes: determining the pose information of the image acquisition device when acquiring the current frame image based on the adjusted relative pose information and the pose information of the image acquisition device when acquiring the previous frame image. For example, after the adjusted relative pose information is obtained, it is combined with the pose information of the image acquisition device when acquiring the previous frame image (e.g., by vector addition of the translation components) to obtain the pose information of the image acquisition device when acquiring the current frame image.
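A sketch of this composition step in plain Python. The application only states that the adjusted relative pose and the previous pose are combined, so the specific convention used here (right-multiplication, R_curr = R_prev·R_rel and t_curr = R_prev·t_rel + t_prev) is an assumption for illustration:

```python
def compose_pose(R_prev, t_prev, R_rel, t_rel_adjusted):
    """Combine the previous-frame pose (R_prev, t_prev) with the
    scale-adjusted relative pose (R_rel, t_rel_adjusted).

    Matrices are 3x3 nested lists, translations are length-3 lists.
    """
    # R_curr = R_prev @ R_rel
    R_curr = [[sum(R_prev[i][k] * R_rel[k][j] for k in range(3))
               for j in range(3)] for i in range(3)]
    # t_curr = R_prev @ t_rel + t_prev
    t_curr = [sum(R_prev[i][k] * t_rel_adjusted[k] for k in range(3)) + t_prev[i]
              for i in range(3)]
    return R_curr, t_curr
```

When the previous rotation is the identity, this reduces to the plain vector addition of translations mentioned above.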
- based on the scale of the translation parameter (that is, the translation distance) of the motion of the image acquisition device between the current frame image and the previous frame image, obtained from the sensor with an absolute scale, the scale of the translation parameter of the relative pose information of the image acquisition device is adjusted to eliminate, or at least reduce, the scale drift that the image acquisition device may produce, and the accuracy of the adjusted pose information is improved. In this way, more accurate pose information of the camera can be obtained at low cost.
- step S130 includes: determining an adjustment coefficient based on the first set of translation parameters and a second set of translation parameters; adjusting the second set of translation parameters based on the adjustment coefficient and the second set of translation parameters .
- the adjustment coefficient refers to a coefficient for adjusting the second set of translation parameters according to the first set of translation parameters and the second set of translation parameters. That is, the adjustment coefficient is a factor related to both sets of translation parameters; for example: determine the 2-norm of the first set of translation parameters and the 2-norm of the second set of translation parameters, and determine the adjustment coefficient based on the ratio of the 2-norm of the first set of translation parameters to the 2-norm of the second set of translation parameters.
- for the second set of translation parameters, that is, the translation vector t = (x, y, z) in the relative pose information of the image acquisition device when acquiring the current frame image, the 2-norms of the first set of translation parameters t_s = (x_s, y_s, z_s) and of the translation vector t = (x, y, z) are computed respectively: ||t_s||_2 = sqrt(x_s^2 + y_s^2 + z_s^2) and ||t||_2 = sqrt(x^2 + y^2 + z^2). The ratio of the 2-norm of the first set of translation parameters to the 2-norm of the second set of translation parameters then gives the ratio of the scale of the first set of translation parameters to the scale of the second set of translation parameters.
- determining the adjustment coefficient based on the first set of translation parameters and the second set of translation parameters may further include: adding an offset to the ratio of the 2-norm of the first set of translation parameters to the 2-norm of the second set of translation parameters to determine the adjustment coefficient, that is, fine-tuning the ratio itself to determine a more accurate adjustment coefficient.
- adjusting the second set of translation parameters based on the adjustment coefficient and the second set of translation parameters includes adjusting the second set of translation parameters based on the product of the adjustment coefficient and the second set of translation parameters.
- for example, when the adjustment coefficient is s = ||t_s||_2 / ||t||_2, the translation vector of the relative pose information of the image acquisition device when acquiring the current frame image is adjusted to obtain the adjusted translation vector t_updated = t * (||t_s||_2 / ||t||_2). The second set of translation parameters is thus adjusted by the product of this ratio and the second set of translation parameters, so that the scale of the adjusted second set of translation parameters is consistent with the scale of the first set of translation parameters; that is, the scale obtained by the absolute-scale sensor is used to adjust the scale of the relative pose.
- adjusting the second set of translation parameters based on the adjustment coefficient and the second set of translation parameters may further include: adding an offset to the product, that is, fine-tuning the product itself, so that the second set of translation parameters can be adjusted more accurately.
- a method for constructing a visual point cloud includes: acquiring the pose information of an image acquisition device by the pose information determination method according to the present application; and constructing a visual point cloud based on the pose information of the image acquisition device.
- a sensor with an absolute scale is used to measure the scale of the motion of the image acquisition device between acquiring the current frame image and the previous frame image, and this scale is used to correct the translation vector in the relative pose information of the image acquisition device, so that the image acquisition device can obtain more accurate pose information and, in turn, a more accurate visual point cloud.
- target detection is performed on an image acquired by the image acquisition device to acquire the pixel targets and their attribute information in the image; based on the pose information of the image acquisition device, the three-dimensional coordinates of each pixel target in the world coordinate system are determined; and the three-dimensional coordinates of the pixel targets in the world coordinate system are combined to generate a visual point cloud.
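The "pixel target to world coordinates" step can be sketched as a back-projection. The pinhole camera model with intrinsics (fx, fy, cx, cy), the availability of a depth for each pixel target, and the world-from-camera convention X_w = R·X_c + t are all assumptions for illustration, since the application does not spell out a projection model:

```python
def pixel_to_world(u, v, depth, fx, fy, cx, cy, R, t):
    """Back-project pixel (u, v) at a known depth into the world frame
    using pinhole intrinsics and the camera pose (R, t)."""
    # pixel -> camera-frame 3D point (pinhole model)
    x_c = (u - cx) / fx * depth
    y_c = (v - cy) / fy * depth
    X_c = [x_c, y_c, depth]
    # camera frame -> world frame: X_w = R @ X_c + t
    return [sum(R[i][k] * X_c[k] for k in range(3)) + t[i] for i in range(3)]
```

Running this for every detected pixel target in every frame, with the scale-corrected pose of that frame, yields the set of world-frame points that are merged into the visual point cloud.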
- FIG. 3 shows a schematic diagram of a specific example of a pose information determination apparatus according to an embodiment of the present application.
- the pose information determination apparatus 200 includes: a relative pose information determination unit 210 for determining the relative pose information of the image acquisition device when acquiring the current frame image relative to when acquiring the previous frame image; a relative displacement parameter acquisition unit 220 for determining, by a sensor with an absolute scale, the first set of translation parameters of the image acquisition device during the acquisition of the current frame and the previous frame image; a relative pose information adjustment unit 230 for adjusting the relative pose information based on the relative pose information and the first set of translation parameters; and a pose information determination unit 240 for determining, based on the adjusted relative pose information, the pose information of the image acquisition device when acquiring the current frame image.
- the relative pose information includes a second set of translation parameters
- the relative pose information adjustment unit 230 is further configured to determine an adjustment coefficient based on the first set of translation parameters and the second set of translation parameters, and to adjust the second set of translation parameters based on the adjustment coefficient and the second set of translation parameters.
- the relative pose information adjustment unit 230 is further used to determine the 2-norm of the first set of translation parameters and the 2-norm of the second set of translation parameters, and to determine the adjustment coefficient based on the ratio of the 2-norm of the first set of translation parameters to the 2-norm of the second set of translation parameters.
- the relative pose information adjustment unit 230 is further configured to adjust the second set of translation parameters based on the product of the adjustment coefficient and the second set of translation parameters.
- the relative pose information determination unit 210 determines the relative pose information of the image acquisition device when acquiring the current frame image relative to when acquiring the previous frame image based on the visual odometer.
- the pose information determination unit 240 is used to determine the pose information of the image acquisition device when acquiring the current frame image based on the adjusted relative pose information and the pose information of the image acquisition device when acquiring the previous frame image.
- a visual point cloud construction device not only includes all the units of the pose information determination apparatus of the present application, but also includes a visual point cloud construction unit configured to construct a visual point cloud based on the pose information of the image acquisition device when acquiring the current frame image.
- FIG. 4 illustrates a structural block diagram of an electronic device 300 according to an embodiment of the present application.
- an electronic device 300 according to an embodiment of the present application will be described with reference to FIG. 4.
- the electronic device 300 may be implemented as the pose information determination device 14 in the vehicle 10 shown in FIG. 1, which may communicate with the vehicle-mounted camera 12 to receive its output signals.
- the electronic device 300 may include a processor 310 and a memory 320.
- the processor 310 may be a central processing unit (CPU) or other forms of processing units having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 300 to perform desired functions.
- the memory 320 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory.
- the volatile memory may include, for example, random access memory (RAM) and/or cache memory.
- the non-volatile memory may include, for example, read-only memory (ROM), hard disk, flash memory, and the like.
- one or more computer program instructions may be stored on the computer-readable storage medium, and the processor 310 may execute the program instructions to implement the pose information determination method and the visual point cloud construction method of the various embodiments of the present application described above and/or other desired functions.
- Various contents such as camera-related information, sensor-related information, and driver programs can also be stored in the computer-readable storage medium.
- the electronic device 300 may further include an interface 330, an input device 340, and an output device 350, and these components are interconnected by a bus system and/or other forms of connection mechanisms (not shown).
- the interface 330 may be used to connect to a camera, such as a video camera.
- the interface 330 may be a commonly used USB interface of a camera, and of course, it may be other interfaces such as a Type-C interface.
- the electronic device 300 may include one or more interfaces 330 to connect to corresponding cameras, and receive images captured by the cameras for performing the pose information determination method and visual point cloud construction method described above.
- the input device 340 may be used to receive external input, such as receiving physical point coordinate values input by a user.
- the input device 340 may be, for example, a keyboard, a mouse, a tablet, a touch screen, and so on.
- the output device 350 can output the calculated camera external parameters.
- the output device 350 may include a display, a speaker, a printer, and a communication network and its connected remote output device.
- the input device 340 and the output device 350 may be an integrated touch display screen.
- FIG. 4 only shows some components of the electronic device 300 related to the present application, and omits some related peripheral or auxiliary components.
- the electronic device 300 may further include any other suitable components.
- embodiments of the present application may also be computer program products, which include computer program instructions that, when executed by a processor, cause the processor to perform the steps of the pose information determination method and the visual point cloud construction method according to the various embodiments of the present application described in the "exemplary method" part of this specification.
- the computer program product may write program code for performing the operations of the embodiments of the present application in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
- the program code may be executed entirely on the user's computing device, partly on the user's device as an independent software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
- an embodiment of the present application may also be a computer-readable storage medium having computer program instructions stored thereon which, when executed by a processor, cause the processor to perform the steps of the pose information determination method and the visual point cloud construction method according to the various embodiments of the present application described in the "exemplary method" part of this specification.
- the computer-readable storage medium may employ any combination of one or more readable media.
- the readable medium may be a readable signal medium or a readable storage medium.
- the readable storage medium may include, but is not limited to, electrical, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any combination of the above. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
- each component or each step may be decomposed and/or recombined.
- such decompositions and/or recombinations shall be regarded as equivalent solutions of the present application.
Abstract
Description
Claims (11)
- A pose information determination method, comprising: determining relative pose information of an image acquisition device when acquiring a current frame image relative to when acquiring a previous frame image; determining, by a sensor with an absolute scale, a first set of translation parameters describing the motion of the image acquisition device between acquiring the previous frame image and the current frame image; adjusting the relative pose information based on the relative pose information and the first set of translation parameters; and determining, based on the adjusted relative pose information, pose information of the image acquisition device when acquiring the current frame image.
- The pose information determination method according to claim 1, wherein the relative pose information comprises a second set of translation parameters; and the adjusting the relative pose information based on the relative pose information and the first set of translation parameters comprises: determining an adjustment coefficient based on the first set of translation parameters and the second set of translation parameters; and adjusting the second set of translation parameters based on the adjustment coefficient and the second set of translation parameters.
- The pose information determination method according to claim 2, wherein the determining an adjustment coefficient based on the first set of translation parameters and the second set of translation parameters comprises: determining the 2-norm of the first set of translation parameters and the 2-norm of the second set of translation parameters; and determining the adjustment coefficient based on the ratio of the 2-norm of the first set of translation parameters to the 2-norm of the second set of translation parameters.
- The pose information determination method according to claim 3, wherein the adjusting the second set of translation parameters based on the adjustment coefficient and the second set of translation parameters comprises: adjusting the second set of translation parameters based on the product of the adjustment coefficient and the second set of translation parameters.
- The pose information determination method according to claim 1, wherein the determining relative pose information of the image acquisition device when acquiring the current frame image relative to when acquiring the previous frame image comprises: determining, based on visual odometry, the relative pose information of the image acquisition device when acquiring the current frame image relative to when acquiring the previous frame image.
- The pose information determination method according to claim 1, wherein the determining, based on the adjusted relative pose information, pose information of the image acquisition device for the current frame comprises: determining pose information of the image acquisition device when acquiring the current frame image based on the adjusted relative pose information and pose information of the image acquisition device when acquiring the previous frame image.
- A visual point cloud construction method, comprising: acquiring pose information of the image acquisition device by the method according to any one of claims 1-6; and constructing a visual point cloud based on the pose information of the image acquisition device.
- A pose information determination apparatus, comprising: a relative pose information determination unit configured to determine relative pose information of an image acquisition device when acquiring a current frame image relative to when acquiring a previous frame image; a relative displacement parameter acquisition unit configured to determine, by a sensor with an absolute scale, a first set of translation parameters describing the motion of the image acquisition device between acquiring the previous frame image and the current frame image; a relative pose information adjustment unit configured to adjust the relative pose information based on the relative pose information and the first set of translation parameters; and a pose information determination unit configured to determine, based on the adjusted relative pose information, pose information of the image acquisition device when acquiring the current frame image.
- A visual point cloud construction apparatus, comprising: a relative pose information determination unit configured to determine relative pose information of an image acquisition device when acquiring a current frame image relative to when acquiring a previous frame image; a relative displacement parameter acquisition unit configured to determine, by a sensor with an absolute scale, a first set of translation parameters describing the motion of the image acquisition device between acquiring the previous frame image and the current frame image; a relative pose information adjustment unit configured to adjust the relative pose information based on the relative pose information and the first set of translation parameters; a pose information determination unit configured to determine, based on the adjusted relative pose information, pose information of the image acquisition device when acquiring the current frame image; and a visual point cloud construction unit configured to construct a visual point cloud based on the pose information of the image acquisition device when acquiring the current frame image.
- An electronic device, comprising: a processor; and a memory storing computer program instructions that, when executed by the processor, cause the processor to perform the pose information determination method according to any one of claims 1-6 or the visual point cloud construction method according to claim 7.
- A computer-readable storage medium having stored thereon instructions for performing the pose information determination method according to any one of claims 1-6 or the visual point cloud construction method according to claim 7.
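The core of claims 1-6 is a norm-ratio scale correction of a visual-odometry translation, followed by composition with the previous absolute pose. The sketch below is a minimal Python illustration under assumed conventions (NumPy arrays, rotation matrix R and translation t with x_prev = R x_curr + t); the function names and argument layout are assumptions for exposition, not the patented implementation:

```python
import numpy as np

def adjust_relative_translation(t_vo, t_sensor):
    """Claims 2-4: scale the visual-odometry translation (the 'second set of
    translation parameters') using the absolute-scale sensor translation
    (the 'first set of translation parameters')."""
    t_vo = np.asarray(t_vo, dtype=float)
    # Claim 3: adjustment coefficient = ratio of the two 2-norms.
    coeff = np.linalg.norm(t_sensor, 2) / np.linalg.norm(t_vo, 2)
    # Claim 4: adjusted translation = coefficient * VO translation.
    return coeff * t_vo

def compose_pose(R_prev, t_prev, R_rel, t_rel):
    """Claim 6: current absolute pose from the previous absolute pose and
    the (adjusted) relative pose: T_curr = T_prev @ T_rel."""
    return R_prev @ R_rel, R_prev @ t_rel + t_prev
```

For example, if visual odometry reports a unit-length translation while the absolute-scale sensor (e.g. a wheel odometer) measures a 0.5 m displacement over the same two frames, the adjusted translation has length 0.5, and composing it with the previous frame's pose gives the current pose.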
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811459199.9A CN109544630B (zh) | 2018-11-30 | 2018-11-30 | Pose information determination method and apparatus, and visual point cloud construction method and apparatus |
CN201811459199.9 | 2018-11-30 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020107931A1 true WO2020107931A1 (zh) | 2020-06-04 |
Family
ID=65851743
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/099207 WO2020107931A1 (zh) | 2019-08-05 | Pose information determination method and apparatus, and visual point cloud construction method and apparatus |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN109544630B (zh) |
WO (1) | WO2020107931A1 (zh) |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109544630B (zh) * | 2018-11-30 | 2021-02-02 | 南京人工智能高等研究院有限公司 | Pose information determination method and apparatus, and visual point cloud construction method and apparatus |
CN111829489B (zh) * | 2019-04-16 | 2022-05-13 | 杭州海康机器人技术有限公司 | Visual positioning method and apparatus |
CN112097742B (zh) * | 2019-06-17 | 2022-08-30 | 北京地平线机器人技术研发有限公司 | Pose determination method and apparatus |
CN112444242B (zh) * | 2019-08-31 | 2023-11-10 | 北京地平线机器人技术研发有限公司 | Pose optimization method and apparatus |
CN110738699A (zh) * | 2019-10-12 | 2020-01-31 | 浙江省北大信息技术高等研究院 | Unsupervised absolute scale calculation method and system |
CN110820447A (zh) * | 2019-11-22 | 2020-02-21 | 武汉纵横天地空间信息技术有限公司 | Track geometry state measurement system based on binocular vision and measurement method thereof |
CN110889871B (zh) * | 2019-12-03 | 2021-03-23 | 广东利元亨智能装备股份有限公司 | Robot traveling method and apparatus, and robot |
CN113748693B (зh) * | 2020-03-27 | 2023-09-15 | 深圳市速腾聚创科技有限公司 | Pose correction method and apparatus for roadbed sensor, and roadbed sensor |
CN113034594A (zh) * | 2021-03-16 | 2021-06-25 | 浙江商汤科技开发有限公司 | Pose optimization method and apparatus, electronic device, and storage medium |
CN113793381A (zh) * | 2021-07-27 | 2021-12-14 | 武汉中海庭数据技术有限公司 | Positioning method and system fusing monocular vision information and wheel speed information |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103139446A (zh) * | 2011-10-14 | 2013-06-05 | 斯凯普公司 | Received video stabilisation |
CN106873619A (zh) * | 2017-01-23 | 2017-06-20 | 上海交通大学 | Method for processing the flight path of an unmanned aerial vehicle |
CN107481292A (zh) * | 2017-09-05 | 2017-12-15 | 百度在线网络技术(北京)有限公司 | Attitude error estimation method and apparatus for vehicle-mounted camera |
US20180199039A1 (en) * | 2017-01-11 | 2018-07-12 | Microsoft Technology Licensing, Llc | Reprojecting Holographic Video to Enhance Streaming Bandwidth/Quality |
CN108648240A (zh) * | 2018-05-11 | 2018-10-12 | 东南大学 | Pose calibration method for cameras with non-overlapping fields of view based on point cloud feature map registration |
CN108827306A (zh) * | 2018-05-31 | 2018-11-16 | 北京林业大学 | UAV SLAM navigation method and system based on multi-sensor fusion |
CN109544630A (zh) * | 2018-11-30 | 2019-03-29 | 南京人工智能高等研究院有限公司 | Pose information determination method and apparatus, and visual point cloud construction method and apparatus |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9984301B2 (en) * | 2015-04-20 | 2018-05-29 | Qualcomm Incorporated | Non-matching feature-based visual motion estimation for pose determination |
US10282859B2 (en) * | 2016-12-12 | 2019-05-07 | The Boeing Company | Intra-sensor relative positioning |
CN108225345A (zh) * | 2016-12-22 | 2018-06-29 | 乐视汽车(北京)有限公司 | Pose determination method for movable device, environment modeling method, and apparatus |
CN106997614B (zh) * | 2017-03-17 | 2021-07-20 | 浙江光珀智能科技有限公司 | Large-scale scene 3D modeling method based on depth camera, and apparatus thereof |
CN107945265B (zh) * | 2017-11-29 | 2019-09-20 | 华中科技大学 | Real-time dense monocular SLAM method and system based on an online-learned depth prediction network |
CN108171728B (zh) * | 2017-12-25 | 2020-06-19 | 清华大学 | Markerless moving object pose recovery method and apparatus based on a hybrid camera system |
CN108364319B (zh) * | 2018-02-12 | 2022-02-01 | 腾讯科技(深圳)有限公司 | Scale determination method and apparatus, storage medium, and device |
CN108629793B (zh) * | 2018-03-22 | 2020-11-10 | 中国科学院自动化研究所 | Visual-inertial odometry using online temporal calibration, and device |
- 2018
- 2018-11-30 CN CN201811459199.9A patent/CN109544630B/zh active Active
- 2019
- 2019-08-05 WO PCT/CN2019/099207 patent/WO2020107931A1/zh active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN109544630B (zh) | 2021-02-02 |
CN109544630A (zh) | 2019-03-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020107931A1 (zh) | Pose information determination method and apparatus, and visual point cloud construction method and apparatus | |
WO2020140431A1 (zh) | Camera pose determination method and apparatus, electronic device, and storage medium | |
US20230360260A1 (en) | Method and device to determine the camera position and angle | |
CN110243358B (zh) | Multi-source fusion indoor and outdoor positioning method and system for unmanned vehicle | |
CN109887057B (zh) | Method and apparatus for generating a high-precision map | |
WO2021232470A1 (zh) | SLAM mapping method and system based on multi-sensor fusion | |
CN109544629B (zh) | Camera pose determination method and apparatus, and electronic device | |
CN110880189B (zh) | Joint calibration method, joint calibration apparatus, and electronic device | |
CN108932737B (zh) | Vehicle-mounted camera pitch angle calibration method and apparatus, electronic device, and vehicle | |
US8698875B2 (en) | Estimation of panoramic camera orientation relative to a vehicle coordinate frame | |
JP2019526101A (ja) | System and method for identifying the pose of a camera in a scene | |
US20180075614A1 (en) | Method of Depth Estimation Using a Camera and Inertial Sensor | |
CN113551665B (zh) | High-dynamic motion state sensing system and sensing method for a moving carrier | |
CN112050806B (zh) | Positioning method and apparatus for a moving vehicle | |
CN113516692A (zh) | Multi-sensor fusion SLAM method and apparatus | |
CN113112413A (zh) | Image generation method, image generation apparatus, and vehicle head-up display system | |
CN110793526A (zh) | Pedestrian navigation method and system based on fusion of wearable monocular vision and inertial sensors | |
CN110728716B (zh) | Calibration method and apparatus, and aircraft | |
CN114777768A (zh) | High-precision positioning method and system for satellite-denied environments, and electronic device | |
CN114662587A (zh) | LiDAR-based three-dimensional target perception method, apparatus, and system | |
CN114022561A (zh) | Urban monocular mapping method and system based on GPS constraints and dynamic correction | |
CN108961337B (zh) | Vehicle-mounted camera heading angle calibration method and apparatus, electronic device, and vehicle | |
CN116429098A (zh) | Visual navigation and positioning method and system for low-speed unmanned aerial vehicles | |
CN112652018B (zh) | Extrinsic parameter determination method, extrinsic parameter determination apparatus, and electronic device | |
CN110836656B (zh) | Anti-shake ranging method and apparatus for monocular ADAS, and electronic device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19888820 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19888820 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 16.12.2021) |
|