CN114429515A - Point cloud map construction method, device and equipment

Info

Publication number
CN114429515A
Authority
CN
China
Prior art keywords: information, vehicle, speed, point cloud, feature point
Legal status: Pending
Application number
CN202111573064.7A
Other languages
Chinese (zh)
Inventor
邱舒翰
隋伟
张骞
黄畅
Current Assignee
Beijing Horizon Robotics Technology Research and Development Co Ltd
Original Assignee
Beijing Horizon Robotics Technology Research and Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Horizon Robotics Technology Research and Development Co Ltd filed Critical Beijing Horizon Robotics Technology Research and Development Co Ltd
Priority to CN202111573064.7A
Publication of CN114429515A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects


Abstract

Disclosed are a point cloud map construction method, device and equipment. The method comprises the following steps: acquiring feature point information of multiple frames of images, first speed information of a vehicle and positioning information of the vehicle, wherein the feature point information comprises the feature points of each frame of image and the matching relationship between inter-frame feature points, the first speed information of the vehicle comprises a first linear velocity and/or a first angular velocity of the vehicle, and the first speed information is measured by a wheel speed sensor; determining first pose information of the vehicle according to the feature point information and the first speed information; determining second pose information of the vehicle according to the first pose information and the positioning information of the vehicle; and performing point cloud reconstruction using the feature point information of the multiple frames of images and the second pose information to obtain a point cloud map. In this method, the linear velocity and/or angular velocity of the vehicle is obtained through wheel speed sensor measurement, replacing an IMU (inertial measurement unit) for measuring the speed information of the vehicle. This avoids the noise introduced by IMU measurement and the installation and deployment of an IMU, thereby saving production cost.

Description

Point cloud map construction method, device and equipment
Technical Field
The application relates to the field of computer vision and point cloud map construction, in particular to a point cloud map construction method, device and equipment.
Background
For automatic driving, accurate positioning information is crucial: precisely locating the current position of the vehicle and the objects around it, and modeling the roads and surrounding environment on both sides of the vehicle, can improve the accuracy of automatic driving. When mapping the roads on both sides of a vehicle, sensors on the vehicle, such as a camera and an Inertial Measurement Unit (IMU), collect information about the vehicle's surroundings in real time, such as the speed of the vehicle and nearby moving objects, pedestrians, trees and grass, and then send the collected information to a server.
The server maps the surrounding environment of the vehicle according to the measurement data reported by each sensor. However, the reported raw data is easily affected by the surroundings; for example, noise interference and occlusion by buildings and natural objects make the estimated vehicle pose inaccurate, and when a map is built from an inaccurate vehicle pose, the matching information lacks robustness.
Disclosure of Invention
The present application is proposed to solve the above problems of vision sensor mapping. The embodiments of the present application provide a point cloud map construction method and device to overcome the defects of pure visual mapping and multi-sensor mapping.
Specifically, the following technical solutions are provided in the embodiments of the present application:
In a first aspect, the present disclosure provides a point cloud map construction method, which may be implemented by a network device, such as a server, or by a terminal device, such as a PC, and which includes:
acquiring feature point information of multiple frames of images, first speed information of a vehicle, and positioning information of the vehicle; determining first pose information of the vehicle according to the feature point information and the first speed information; determining second pose information of the vehicle according to the first pose information and the positioning information of the vehicle; and performing point cloud reconstruction using the feature point information of the multiple frames of images and the second pose information, to obtain a point cloud map.
Wherein the feature point information includes the feature points of each frame of image and the matching relationship between inter-frame feature points; the first speed information of the vehicle includes a first linear velocity and/or a first angular velocity of the vehicle, and the first speed information is measured by a wheel speed sensor. The positioning information of the vehicle includes GPS position information of the vehicle, which carries physical meaning in the vertical direction.
According to the method provided in this aspect, the linear velocity and/or angular velocity of the vehicle is obtained through wheel speed sensor measurement, replacing an IMU (inertial measurement unit) for measuring the speed information of the vehicle. This avoids the noise introduced by IMU measurement and the installation and deployment of different types of IMUs, thereby saving production cost.
In addition, the linear velocity and/or angular velocity of the vehicle measured by the wheel speed sensor or wheel speed meter is combined with the feature point matching relationships to obtain accurate first pose information of the vehicle, and the first pose information is then fused with the positioning information of the vehicle to obtain second pose information of the vehicle. This adds vertical-direction scale meaning on the basis of the already accurate camera pose of the vehicle, yielding second pose information with three-dimensional features, which can provide robust input for other computations and for the development of a Ground Truth map.
In a second aspect of the disclosure, a point cloud map building apparatus is further provided, the apparatus includes: an acquisition module, a calculation module, and a composition module, wherein,
the acquisition module is configured to acquire feature point information of multiple frames of images, first speed information of a vehicle, and positioning information of the vehicle, where the feature point information includes the feature points of each frame of image and the matching relationship between inter-frame feature points, the first speed information of the vehicle includes a first linear velocity and/or a first angular velocity of the vehicle, and the first speed information is measured by a wheel speed sensor;
the calculation module is configured to determine first pose information of the vehicle according to the feature point information and the first speed information obtained from the acquisition module, and to determine second pose information of the vehicle according to the first pose information and the positioning information of the vehicle obtained by the acquisition module;
the composition module is configured to perform point cloud reconstruction using the feature point information of the multiple frames of images and the second pose information obtained by the calculation module, to obtain a point cloud map.
In a third aspect of the present disclosure, a computer-readable storage medium is further provided, where the storage medium stores a computer program, and when the computer program is executed, the point cloud map construction method according to the first aspect is implemented.
In a fourth aspect of the disclosure, an electronic device is further provided, including: a processor and a memory, the memory to store executable instructions; the processor is configured to read the executable instructions from the memory and execute the instructions to implement the point cloud mapping method according to the first aspect.
Optionally, the electronic device is a network device, such as a server or a controller.
In a fifth aspect of the present disclosure, a point cloud map construction system is further provided. The system includes a server and at least one client, where the at least one client may be any terminal device, such as user equipment (UE), or the client may be a car machine. Taking the car machine as an example, it is provided with multiple sensors, such as but not limited to a camera/video camera, a wheel speed sensor/wheel speed meter, and a GPS sensor.
The car machine is configured to acquire the multiple frames of images captured by the camera, the speed information of the vehicle measured by the wheel speed sensor, and the positioning information of the vehicle measured by the GPS sensor, such as the GPS position information of the vehicle, and to send this information to the server.
The server is configured to receive the various information from the car machine and determine first pose information of the vehicle according to the feature point information and the first speed information; determine second pose information of the vehicle according to the first pose information and the positioning information of the vehicle; and perform point cloud reconstruction using the feature point information of the multiple frames of images and the second pose information, to obtain a point cloud map.
Optionally, the server includes a display or a display screen for displaying the point cloud map.
Drawings
The above and other objects, features and advantages of the present application will become more apparent by describing in more detail embodiments of the present application with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the principles of the application. In the drawings, like reference numbers generally represent like parts or steps.
Fig. 1 is a schematic view of a scene structure provided in an embodiment of the present application.
Fig. 2 is a schematic structural diagram of a car machine provided in an embodiment of the present application.
Fig. 3 is a schematic structural diagram of a server according to an embodiment of the present application.
Fig. 4 is a schematic flow chart of a point cloud map construction method provided in the embodiment of the present application.
Fig. 5 is a schematic diagram for acquiring information reported by a sensor according to an embodiment of the present disclosure.
Fig. 6 is a schematic diagram of a method for determining first pose information of a vehicle according to an embodiment of the present application.
Fig. 7 is a flowchart of a method for determining second pose information of a vehicle according to an embodiment of the present application.
Fig. 8 is a schematic structural diagram of a map building apparatus according to an embodiment of the present application.
Fig. 9 is a schematic structural diagram of another map building apparatus according to an embodiment of the present application.
Fig. 10 is a schematic structural diagram of another car machine provided in an embodiment of the present application.
Detailed Description
Hereinafter, example embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be understood that the described embodiments are only some embodiments of the present application and not all embodiments of the present application, and that the present application is not limited by the example embodiments described herein.
Summary of the application
Currently, mapping methods based on a visual sensor mainly include two kinds: pure visual mapping and multi-sensor mapping. In the pure visual mapping method, the vehicle pose is optimized by extracting visual information, and a point cloud map is constructed using information such as visual feature points and lines. In the multi-sensor mapping method, IMU and visual information are fused: IMU pre-integration is used to obtain the pose information of the vehicle, and the pose information and the feature points are optimized based on the reprojection error to obtain accurate pose information of the vehicle.
However, both methods have certain defects. In the pure visual mapping method, because pure visual information extraction lacks scale information, the mapping result has no physical meaning and cannot serve as a Ground Truth map to provide robust input for subsequent computer vision research and development. In the multi-sensor mapping method, using an IMU for sensor fusion introduces noise interference, which reduces the accuracy of the vehicle pose. Moreover, different strategies need to be deployed in the composition process for IMUs of different models, which further increases development cost. To solve the above technical problems, the embodiments of the present application first introduce the technical scenario involved in the technical solution of the present application.
The method is applied to point cloud map construction scenarios, and in particular to overcoming the defects of pure visual mapping and multi-sensor mapping. For example, in pure visual mapping, the lack of scale information in pure visual information extraction means that the mapping result has no physical meaning and cannot be used as a Ground Truth map to provide robust input for subsequent computer vision research and development; when multiple sensors are used for mapping, using an IMU for sensor fusion introduces noise, reduces the accuracy of the vehicle pose, and increases the IMU deployment cost.
"Ground Truth" refers to the reference labels against which the accuracy of a supervised learning technique is measured on its training set in machine learning, and is used in statistical models to prove or disprove research hypotheses. In this embodiment, "Ground Truth" refers to the process of collecting appropriate target data for point cloud map construction.
In the conventional vehicle positioning and mapping process, an IMU is adopted to determine the speed information of the vehicle, including linear velocity and/or angular velocity. In the present application, a wheel speed sensor or wheel speed meter is used instead of the IMU to measure the speed information of the vehicle, which solves the problems of high IMU deployment cost, large error and low positioning accuracy.
Exemplary System
Referring to fig. 1, a scene structure diagram provided in an embodiment of the present application is shown. The system comprises a server 100 and at least one mobile terminal, where the at least one mobile terminal may be a vehicle-mounted terminal or a car machine, such as the car machine 200, which is connected to the server 100 through a wireless network, for example via a base station 300, for communication with the server 100.
The server 100 is a network device. Optionally, the network device may also be a controller, a data center, a cloud platform, or the like.
The wireless network may be any wireless communication system, such as a Long Term Evolution (LTE) system or a fifth generation mobile communication system (5G), and may also be applied to subsequent communication systems, such as a sixth generation mobile communication system and a seventh generation mobile communication system.
The base station 300 includes, but is not limited to, a base station (BS) or a base transceiver station (BTS). Specifically, it may be a base transceiver station (BTS) in a Global System for Mobile communications (GSM) or Code Division Multiple Access (CDMA) network, an evolved NodeB (eNB/e-NodeB) in LTE, a next generation evolved NodeB (ng-eNB) in next-generation LTE, or a gNB in NR.
The mobile terminal includes any type of mobile device or mobile means, such as a vehicle, an aircraft or a spacecraft, or a computing device carried on such a mobile means. As shown in fig. 2, in one example, the car machine 200 includes a communication interface 201, a processor 202, and a memory 203.
In addition, various types of sensors are also provided in the car machine 200 for acquiring sample data of a current moving environment in which the mobile terminal is moving. Such as an image sensor 204, a wheel speed sensor 205, a GPS sensor 206, etc.
Further, the image sensor 204 is used for capturing image data, such as acquiring a plurality of frames of images taken by the vehicle during driving, and the image sensor 204 may be a camera or a camera array. The wheel speed sensor 205 may also be referred to as a "wheel speed meter" which is a sensor used to measure the rotational speed of the wheels of the vehicle. Commonly used wheel speed sensors are mainly: a magnetoelectric wheel speed sensor and a Hall wheel speed sensor. The GPS sensor 206 is configured to acquire real-time positioning information (including GPS location information) of the vehicle, such as longitude and latitude, a driving track, and the like of the vehicle-mounted terminal or the vehicle, and report the positioning information to the server 100. It should be understood that the various types of sensors described above may also include laser sensors and other sensors for capturing scan data, and the present embodiment is not limited in this regard.
The communication interface 201 is used for realizing data transmission between the car machine 200 and the server 100.
The memory 203 is used to store relevant data such as image data, GPS position data, vehicle speed information, and the like.
In addition, as shown in fig. 3, a schematic structural diagram of the server 100 is also provided, which includes a communication interface 301, a processor 302, a memory 303, an input/output device 304, and the like.
The communication interface 301 is used for implementing communication transmission with the car machine 200, the processor 302 is used for implementing the method of the present disclosure, the memory 303 is used for storing relevant data and computer program instructions, and the like, and the input/output devices include a display, a mouse, a keyboard, a speaker, and the like.
In addition, the server 100 may further include other devices, such as various sensors, communication interfaces, transceiver devices, and the like, and the embodiments of the present application do not limit the specific technologies and the specific device forms adopted by the server and the client.
In addition, the technical solution provided by this embodiment can be implemented in software, in hardware, or in a combination of both. The hardware may provide the image input, the wheel speed measurements from a wheel speed meter/sensor, and the GNSS (Global Navigation Satellite System) input; the software may be implemented in programming languages such as C++ or Java, the point cloud map construction function can be developed in a Python-based programming language, or other software and hardware may be used. The present application does not limit the specific hardware, software structure and functions used.
Exemplary method
Fig. 4 is a schematic flow chart of a point cloud map building method according to an exemplary embodiment of the present disclosure. The present embodiment can be applied to the server 100, and the method includes the following steps:
step 110: and acquiring feature point information of the multi-frame image, first speed information of the vehicle and positioning information of the vehicle.
Wherein the feature point information includes: and matching relation between the characteristic points of each frame image and the characteristic points between frames. The first speed information of the vehicle includes a first linear speed and/or a first angular speed of the vehicle, and the first speed information is measured by a wheel speed sensor. The positioning information of the vehicle comprises GPS information of the vehicle, and the GPS position information comprises a coordinate position of the vehicle in a GPS coordinate system and longitude and latitude information.
In addition, when each frame of image is shot by the camera, each frame of image also corresponds to a time stamp, for example, the time stamp of shooting corresponding to the first frame of image is the first time "t 1", the time stamp corresponding to the second frame of image is the second time "t 2", and so on. The time interval of shooting can be set by the camera, and the camera can report the image to the server every time when shooting one frame of image, or report the image to the server after shooting a group of image photos. The content shot in the picture is the surrounding environment scene collected by the camera in the driving process of the vehicle.
Optionally, in an implementation manner, the camera only reports an image (photo) to the server, and the server analyzes the acquired image (photo) to obtain the feature point information; or, in another implementation, the camera or the vehicle machine analyzes the captured image (photo), including performing relevant preprocessing on the image, detecting feature information such as points, lines, sections, and the like, and performing matching processing on previous and subsequent key frames to obtain feature point information of each frame of image, and sending the feature point information of all the images to the server.
In step 110, the speed information of the vehicle includes a first linear speed and/or a first angular speed, the speed information of the vehicle can be obtained by measurement and calculation of a wheel speed sensor or a wheel speed meter, and the speed information of the vehicle corresponds to a timestamp, i.e., the first linear speed and the first angular speed are instantaneous speeds, and the timestamp is consistent with the timestamps of the first time t1, the second time t2, and the like of the previous shooting. The wheel speed meter/wheel speed sensor may determine a first linear speed, a first angular speed of the vehicle based on parameters such as a wheel speed, a wheel diameter, and/or a hub diameter of the vehicle.
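As an illustration of how such a conversion might look, the following sketch derives the vehicle's linear and angular velocity from per-wheel rotational speeds under a simple differential-drive assumption; the wheel radius, track width, function name and sample values are illustrative assumptions, not parameters specified by this application.

import math

def wheel_speeds_to_velocity(rpm_left: float, rpm_right: float,
                             wheel_radius_m: float = 0.3,
                             track_width_m: float = 1.6):
    """Convert left/right wheel rotational speeds (RPM) into the vehicle's
    instantaneous linear velocity (m/s) and angular velocity (rad/s),
    assuming a differential-drive model."""
    # Rim speed of each wheel: v = omega_wheel * r, with omega in rad/s.
    v_left = (rpm_left * 2.0 * math.pi / 60.0) * wheel_radius_m
    v_right = (rpm_right * 2.0 * math.pi / 60.0) * wheel_radius_m
    linear = 0.5 * (v_left + v_right)             # body-frame forward speed
    angular = (v_right - v_left) / track_width_m  # yaw rate
    return linear, angular

# Example: a slightly faster right wheel indicates a gentle left turn.
v, w = wheel_speeds_to_velocity(rpm_left=300.0, rpm_right=310.0)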
It should be understood that the first linear velocity and/or first angular velocity of the vehicle may be calculated by the car machine 200, or may be calculated and determined by the wheel speed meter or wheel speed sensor and then reported to the server 100.
In addition, the car machine 200 may report the feature point information of the images, the speed information of the vehicle and the positioning information at the same time, or report them to the server 100 in different periods or batches; this embodiment does not limit the reporting time or order.
Step 120: determine first pose information of the vehicle according to the feature point information and the first speed information.
The first pose information describes a set of feature points and pose information on a two-dimensional image. Here "pose" denotes position and attitude: in a spatial coordinate system (OXYZ), any rigid body can accurately and uniquely represent its state by its position and attitude, where the position is given by the x, y, z coordinates, and the attitude is given by the angle rx between the rigid body and the OX axis, the angle ry to the OY axis, and the angle rz to the OZ axis.
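To make this representation concrete, a minimal sketch of such a six-degree-of-freedom pose is shown below; the class and field names are illustrative assumptions rather than identifiers from this application.

from dataclasses import dataclass

@dataclass
class Pose:
    """Six-degree-of-freedom pose: position (x, y, z) in the OXYZ frame
    plus attitude angles (rx, ry, rz) about the OX, OY and OZ axes."""
    x: float
    y: float
    z: float
    rx: float  # angle to the OX axis, radians
    ry: float  # angle to the OY axis, radians
    rz: float  # angle to the OZ axis, radians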
Step 130: determine second pose information of the vehicle according to the first pose information and the positioning information of the vehicle.
This specifically includes: first, fitting (aligning) the first pose information and the positioning information of the vehicle to obtain the first pose information and the position information of the vehicle in the same coordinate system (such as a GPS coordinate system); then, optimizing the first pose information and the position information of the vehicle in that coordinate system to obtain the second pose information, where the second pose information is fitted to the GPS position information of the vehicle in the GPS coordinate system.
Alternatively, the fitting may employ a loose-coupling method.
The second pose information of the vehicle has geographic meaning in the plane and scale meaning in the vertical direction; it is a set of feature points and pose information on a three-dimensional image.
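One common way to realize the step of bringing both inputs into the same coordinate system is a similarity (Umeyama) alignment between the visually estimated trajectory and the GPS track. The sketch below estimates the scale, rotation and translation from matched position pairs; it is an illustration under that assumption, not the alignment prescribed by this application.

import numpy as np

def align_trajectory_to_gps(traj_xyz, gps_xyz):
    """Umeyama similarity alignment: find scale s, rotation R and
    translation t minimizing ||s * R @ x + t - y||^2 over matched pairs.
    traj_xyz, gps_xyz: (N, 3) arrays with row-wise correspondence."""
    mu_t, mu_g = traj_xyz.mean(axis=0), gps_xyz.mean(axis=0)
    X, Y = traj_xyz - mu_t, gps_xyz - mu_g
    cov = Y.T @ X / len(traj_xyz)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0  # enforce a proper rotation (det R = +1)
    R = U @ S @ Vt
    var_t = (X ** 2).sum() / len(traj_xyz)
    s = np.trace(np.diag(D) @ S) / var_t
    t = mu_g - s * R @ mu_t
    return s, R, t

# Usage: map every visually estimated position into the GPS frame,
# e.g. aligned = (s * (R @ traj_xyz.T)).T + t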
Step 140: perform point cloud reconstruction using the feature point information of the multiple frames of images and the second pose information, to obtain a point cloud map.
One possible implementation is to perform dense point cloud map reconstruction from the obtained second pose information and the feature point information of the images obtained in step 110 (i.e., the sparse point cloud information), followed by aggregation and iterative processing of the sparse point clouds within the effective area, to obtain a single-vision point cloud map describing the environment around the driving vehicle.
Further, the point cloud map reconstruction can adopt a multi-view stereo (MVS) dense reconstruction method, fusing the point clouds of the scene to obtain a single-vision point cloud map and thereby improving the reconstruction accuracy.
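As a simplified illustration of the aggregation step, the sketch below transforms per-frame triangulated feature points into a common world frame using each frame's pose and concatenates them into one cloud; the data layout is an assumption, and a real MVS pipeline would densify the result far beyond this.

import numpy as np

def build_point_cloud(frames):
    """frames: iterable of (R_wc, t_wc, pts_cam), where R_wc (3x3) and
    t_wc (3,) map camera-frame coordinates into the world frame and
    pts_cam is an (N, 3) array of triangulated feature points."""
    world_points = []
    for R_wc, t_wc, pts_cam in frames:
        world_points.append(pts_cam @ R_wc.T + t_wc)  # p_w = R p_c + t
    return np.vstack(world_points)  # aggregated sparse point cloud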
In the method provided by this embodiment, the linear velocity and/or angular velocity of the vehicle is obtained through wheel speed sensor measurement, replacing an IMU for measuring the vehicle's speed information. This avoids the noise introduced by IMU measurement as well as the installation and deployment of different types of IMUs; measuring the speed of the mobile terminal with a wheel speed sensor or wheel speed meter can save production cost.
In addition, the linear velocity and/or angular velocity measured by the wheel speed sensor or wheel speed meter is combined with the feature point matching relationships to obtain accurate first pose information of the vehicle, which is then fused with the positioning information of the vehicle to obtain second pose information. This adds vertical-direction scale meaning on the basis of the already accurate camera pose information of the vehicle, yielding second pose information with three-dimensional features and providing robust input for other visual computations and for the development of a Ground Truth map.
On the basis of the embodiment shown in fig. 4, as shown in fig. 5, acquiring the feature point information of the multiple frames of images in step 110 may include the following steps:
Step 110-1: extract feature points from the multiple frames of images captured while the vehicle is driving, to obtain at least one feature point of each frame of image.
Step 110-2: match the feature points according to the at least one feature point of each frame of image, to obtain the matching relationship of the feature points between frame images.
In the feature point extraction process, one or more of the following algorithms can be adopted: deep learning, the ORB (Oriented FAST and Rotated BRIEF) feature point extraction algorithm, or Shi-Tomasi. When matching the extracted feature points, an optical flow method or an ORB matching algorithm may be adopted to obtain the matching relationship of the feature points between frame images.
Taking the ORB algorithm as an example, ORB feature detection and extraction acquires the key objects in the images or video delivered by the visual sensor, and the extracted features are combined to perform feature-point-based target matching, yielding the mapping relationship of feature points between frame images, as in the sketch below.
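As a hedged illustration of ORB extraction and matching, the following OpenCV-based sketch detects ORB features in two consecutive frames and matches their binary descriptors by brute-force Hamming distance; the file names are placeholders.

import cv2

# Two consecutive frames (placeholder file names).
img1 = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_t2.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)  # keypoints + descriptors
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force Hamming matcher; cross-check rejects asymmetric matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# Each match links a feature point in frame t1 to one in frame t2;
# this is the inter-frame matching relationship used downstream.
pairs = [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in matches[:500]]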
The embodiment does not limit the specific implementation process of feature point extraction and feature point matching of the image.
In step 110, to further improve the accuracy of the first pose information, the reported speed information of the vehicle is optimized in advance to obtain the first speed information; that is, the first speed information is the optimized linear velocity/angular velocity. Referring to fig. 5, this specifically includes the following steps:
Step 110-3: acquire second speed information of the vehicle reported by the wheel speed sensor, where the second speed information includes a second linear velocity and/or a second angular velocity of the vehicle, i.e., the speed information measured directly by the wheel speed sensor (the raw measurement data).
Step 110-4: optimize the second speed information to obtain the first speed information.
Step 110-4 specifically comprises: performing pre-integration on the second speed information to obtain the position and attitude information of the vehicle, and then processing the second linear velocity and/or second angular velocity of the vehicle according to the position and attitude information of the vehicle to obtain the first speed information.
Further, the pre-integration is derived as follows:
Let the second linear velocity of the vehicle at time t measured by the wheel speed meter or wheel speed sensor be $\tilde{v}(t)$, and the second angular velocity be $\tilde{\omega}(t)$, where $\tilde{\omega}(t)$ is affected by the white noise $\eta$ and the sensor bias $b$, and $\tilde{v}(t)$ is subject to the scale information $s$. The relationship between the measured values and the true values can be expressed by relations (1) and (2):

$$\tilde{\omega}(t) = \omega(t) + b(t) + \eta(t) \qquad (1)$$

$$\tilde{v}(t) = s\,v(t) \qquad (2)$$

where $\omega(t)$ and $v(t)$ represent the theoretical values, and $\tilde{\omega}(t)$ and $\tilde{v}(t)$ represent the actual measurements.
A simple transformation of equations (1) and (2) yields relations (3) and (4):

$$\omega(t) = \tilde{\omega}(t) - b(t) - \eta(t) \qquad (3)$$

$$v(t) = \tilde{v}(t)/s \qquad (4)$$

The rotation matrix $R(t)$ is obtained from the kinematic equation of the CAN bus; for example, relation (5) gives the rotational kinematic model:

$$\dot{R}(t) = R(t)\,\omega(t)^{\wedge} \qquad (5)$$

where the rotation angle $\theta(t)$ can be obtained by conversion through a Jacobian matrix (denoted "$J_r(\cdot)$"); for example, formula (6) expresses the rotation angle by the angle-axis method:

$$\dot{\theta}(t) = J_r^{-1}(\theta(t))\,\omega(t) \qquad (6)$$

In addition, the kinematic equation of the vehicle position $p(t)$ can be expressed as relation (7):

$$\dot{p}(t) = R(t)\,v(t) \qquad (7)$$

In step 110-4, the position and attitude information of the vehicle is obtained by pre-integrating the second speed information on the basis of equations (5) to (7). The frequency of the CAN measurements is synchronized with the camera frequency and timestamps. Let the indices of the two frame images be i and j; pre-integrating the finitely many CAN measurements between the two frames yields equations (8), (9) and (10), written here in the standard discrete pre-integration form:

$$\theta_j = \theta_i + \sum_{k=i}^{j-1} J_r^{-1}(\theta_k)\,\big(\tilde{\omega}_k - b_k - \eta_k\big)\,\Delta t \qquad (8)$$

$$R_j = R_i \prod_{k=i}^{j-1} \mathrm{Exp}\big((\tilde{\omega}_k - b_k - \eta_k)\,\Delta t\big) \qquad (9)$$

$$p_j = p_i + \sum_{k=i}^{j-1} R_k\,\frac{\tilde{v}_k}{s}\,\Delta t \qquad (10)$$

After pre-integration, the position of the vehicle is $p_j$; this position is related to the position $p_i$ of the i-th frame and the linear velocities $v_i$ and $v_j$, the rotation of the vehicle is represented by the rotation angle $\theta_j$, and $v_j$ denotes the linear velocity of the frame expressed in the rotating (body) frame.
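To make the discrete pre-integration of equations (8) to (10) concrete, here is a small sketch under the same measurement model; noise is dropped and the bias and scale are assumed known, so it illustrates the integration loop rather than the full estimator.

import numpy as np
from scipy.spatial.transform import Rotation

def preintegrate(omega_meas, v_meas, dt, bias, scale, R0=None, p0=None):
    """Integrate the CAN wheel-speed measurements between frames i and j.
    omega_meas: (K, 3) measured angular velocities (rad/s)
    v_meas:     (K, 3) measured linear velocities (m/s, body frame)
    dt: sample period; bias: (3,) angular-rate bias; scale: scalar s."""
    R = np.eye(3) if R0 is None else R0.copy()
    p = np.zeros(3) if p0 is None else p0.copy()
    for w, v in zip(omega_meas, v_meas):
        p = p + R @ (v / scale) * dt  # eq. (10): position update with R_k
        R = R @ Rotation.from_rotvec((w - bias) * dt).as_matrix()  # eq. (9)
    return R, p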
In this embodiment, pose measurement based on multi-feature-point matching is performed. In this implementation, random sampling is applied to the 2D-3D matching relationships of the feature points, and erroneous matching point pairs are repeatedly tested and eliminated; the pose parameters of the target are then re-solved using all the correct matching point pairs to obtain an initial pose of the vehicle. Finally, the initial pose of the vehicle is used to optimize the originally measured linear velocity/angular velocity, yielding the state-updated parameters, including the Jacobian matrix, the covariance matrix and so on, as well as the optimized linear velocity and angular velocity, i.e., the first speed information.
Alternatively, one optimization method is to use the LM (Levenberg-Marquardt) algorithm, obtaining the first speed information. The LM algorithm is a classic optimization algorithm whose purpose is to find the parameter vector that minimizes the function value, thereby obtaining an accurate linear and/or angular velocity of the vehicle.
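A minimal sketch of such an LM refinement is given below, assuming the goal is to adjust the bias and scale of the wheel-speed model so that the pre-integrated motion matches the visually estimated relative pose between two frames; it reuses the preintegrate function from the sketch above, and the residual construction is an illustrative assumption.

import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def speed_residuals(params, omega_meas, v_meas, dt, R_rel_vis, p_rel_vis):
    """params = [bias_x, bias_y, bias_z, scale]; compares the pre-integrated
    relative motion against the visually estimated one (6 residuals)."""
    bias, scale = params[:3], params[3]
    R, p = preintegrate(omega_meas, v_meas, dt, bias, scale)
    rot_err = Rotation.from_matrix(R_rel_vis.T @ R).as_rotvec()
    return np.concatenate([p - p_rel_vis, rot_err])

# Levenberg-Marquardt refinement from zero bias and unit scale:
# x0 = np.array([0.0, 0.0, 0.0, 1.0])
# sol = least_squares(speed_residuals, x0, method="lm",
#                     args=(omega_meas, v_meas, dt, R_rel_vis, p_rel_vis))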
In vector analysis, the Jacobian matrix is the matrix of first-order partial derivatives arranged in a particular way; its importance lies in representing the best linear approximation of a differentiable function at a given point, making it the multivariate analogue of the derivative. Let F: R^n → R^m be a function from Euclidean n-space to Euclidean m-space, consisting of m real functions y_1(x_1, ..., x_n), ..., y_m(x_1, ..., x_n). The partial derivatives of these functions form a matrix with m rows and n columns, the Jacobian matrix. The Jacobian matrix is used for the mapping operations between frame images during coordinate system transformation and is a key parameter in determining the pose information.
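Written out, the matrix just described is the m-by-n array of first-order partial derivatives, and its role as the best linear approximation of F at a point x is captured by:

$$J_F(x) = \begin{pmatrix} \dfrac{\partial y_1}{\partial x_1} & \cdots & \dfrac{\partial y_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial y_m}{\partial x_1} & \cdots & \dfrac{\partial y_m}{\partial x_n} \end{pmatrix}, \qquad F(x+\delta) \approx F(x) + J_F(x)\,\delta .$$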
In another embodiment, step 110-4 further obtains a first Jacobian matrix when processing the second linear velocity and/or second angular velocity of the vehicle according to the position and attitude information of the vehicle. Step 120 then specifically includes: performing feature point reprojection error optimization according to the feature points of the images and the matching relationships of inter-frame feature points, together with the first linear velocity and/or first angular velocity, to obtain optimized first pose information, where the first pose information carries a physical scale meaning.
Further, as shown in fig. 6, the method includes:
Step 120-1: perform a first reprojection error calculation on the feature point information to obtain a second Jacobian matrix.
Specifically, the first reprojection error calculation based on each feature point is performed on the feature point information to obtain the second Jacobian matrix.
Step 120-2: update the second Jacobian matrix according to the first Jacobian matrix to obtain a third Jacobian matrix; that is, the second Jacobian matrix is optimized into the third Jacobian matrix.
Step 120-3: perform a second reprojection error calculation using the third Jacobian matrix to obtain the first pose information.
Specifically, similar to step 120-1, a second reprojection error calculation based on each feature point is performed using the third Jacobian matrix, yielding the first pose information.
In this embodiment, the first, second and third Jacobian matrices are mainly used for differentiating the position and rotation of the vehicle, and the bias and noise with respect to position and rotation; they carry practical physical meanings such as linear velocity and angular velocity.
In addition, the first reprojection error and the second reprojection error can optimize the pose and the three-dimensional feature points simultaneously. The (3D-2D) reprojection error is used in SLAM (Simultaneous Localization And Mapping) when optimally solving the camera motion pose: it is the error between the observed pixel coordinates (the observed projected position) and the position obtained by projecting the 3D point according to the currently estimated pose.
Reprojection errors are widely used in computer vision. For example, when computing a planar homography matrix or a projection matrix, the reprojection error is often used to construct a cost function, which is then minimized to optimize the homography or projection matrix. In this embodiment, the Jacobian matrix is calculated from the reprojection error, which optimizes the camera pose information, reduces the calculation error of the homography matrix and the measurement error of the image feature points, and achieves higher measurement precision, as sketched below.
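A minimal sketch of the 3D-2D reprojection error just described, assuming a pinhole camera with intrinsic matrix K and a pose (R, t) mapping world points into the camera frame; the function name and data layout are illustrative.

import numpy as np

def reprojection_error(K, R, t, pts_world, pts_observed):
    """K: (3, 3) intrinsics; R, t: world-to-camera pose;
    pts_world: (N, 3) estimated 3D points; pts_observed: (N, 2) pixels.
    Returns the per-point residuals minimized during pose optimization."""
    pts_cam = pts_world @ R.T + t        # transform into the camera frame
    proj = pts_cam @ K.T                 # apply the intrinsics
    pixels = proj[:, :2] / proj[:, 2:3]  # perspective division
    return pixels - pts_observed         # (N, 2) reprojection residuals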
In addition, in this embodiment, when the speed information of the vehicle reported by the wheel speed sensor/wheel speed meter is optimized, the feature point information of the images is optimized as well. Specifically, step 110-4 further includes:
The server judges, according to the position and attitude information of the vehicle, whether the first reprojection error (error1) is within a preset threshold. If so, the foregoing step 120-3 is executed: the second reprojection error (error2) calculation based on the feature points is performed using the third Jacobian matrix, so as to optimize the visual information of the images and obtain the first pose information and the visual optimization parameters. If not, the current frame is skipped and the next frame of image is queried; if the next frame is still inaccurate, a relocalization method is attempted to supplement the attitude information of the vehicle and obtain accurate pose information.
If the effects of the visual optimization and the wheel-speed-meter optimization (steps 110-3 and 110-4) are good, that is, error1 is within the preset threshold, the estimation result is good, and the visual optimization parameters (including, for example, error1) and the wheel-speed-meter optimization parameters (including, for example, the scale factor and the sensor bias parameter) are jointly optimized to obtain the optimized trajectory information of the current frame, i.e., the first pose information.
Step 130 in the foregoing method embodiment is described in detail below.
The aforementioned step 130, determining the second pose information of the vehicle according to the first pose information and the positioning information of the vehicle, where the positioning information of the vehicle includes GPS position information, specifically includes, as shown in fig. 7:
Step 130-1: fit the first pose information and the GPS position information to obtain the first pose information and the GPS position information of the vehicle in a GPS coordinate system.
Before the fitting, the first pose information is the position and attitude of the vehicle expressed in the camera coordinate system; after the fitting, it is converted into the position and attitude expressed in the GPS coordinate system, in preparation for fitting the GPS position information.
Step 130-2: perform pose graph optimization on the first pose information and the GPS position information of the vehicle in the GPS coordinate system to obtain the second pose information, where the second pose information is fitted to the GPS position information of the vehicle in the GPS coordinate system.
The measurements of the pose graph are the per-frame poses obtained through local optimization, from which the relative poses between adjacent key frames are calculated. In other words, pose graph optimization takes the pose of each key frame as a node and the calculated relative poses between nearby key frames (separated by short time intervals) as measurements, and performs maximum likelihood estimation. This method reduces the computation scale, yielding second pose information of the camera that is the globally optimized vehicle pose with a robust scale for standalone mapping, and provides accurate data for the point cloud map construction in step 140.
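As a toy illustration of the node/edge structure just described (restricted to planar poses (x, y, yaw) for brevity), the sketch below builds residuals from relative-pose measurements between key frames and GPS position priors, and solves the maximum likelihood problem by nonlinear least squares; every identifier is an assumption for illustration, not from this application.

import numpy as np
from scipy.optimize import least_squares

def pose_graph_residuals(flat, rel_meas, gps_meas):
    """flat: stacked (x, y, yaw) for each key-frame node.
    rel_meas: list of (i, j, dx, dy, dyaw) relative-pose constraints
    expressed in frame i; gps_meas: list of (i, gx, gy) position priors."""
    poses = flat.reshape(-1, 3)
    res = []
    for i, j, dx, dy, dyaw in rel_meas:
        xi, yi, ti = poses[i]
        xj, yj, tj = poses[j]
        c, s = np.cos(ti), np.sin(ti)
        # Predicted motion of node j as seen from node i.
        pred = np.array([c * (xj - xi) + s * (yj - yi),
                         -s * (xj - xi) + c * (yj - yi),
                         tj - ti])
        res.extend(pred - np.array([dx, dy, dyaw]))
    for i, gx, gy in gps_meas:
        res.extend(poses[i, :2] - np.array([gx, gy]))  # GPS prior
    return np.array(res)

# x0: initial guesses from the first pose information, flattened.
# sol = least_squares(pose_graph_residuals, x0, args=(rel_meas, gps_meas))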
According to the method provided by this embodiment, graph optimization is performed on the sensor-fused and optimized pose (the first pose information) together with the GPS position information carrying geographic information, yielding vehicle pose information with an accurate scale (the second pose information) and the point cloud set of each key frame; the point cloud map is finally constructed from these point cloud sets, resulting in a point cloud map with higher precision and more robust input.
Exemplary devices
Referring to fig. 8, a map construction apparatus provided in an embodiment of the present application may be a server or a functional module disposed on a server, and is configured to implement all or part of the functions of the foregoing method embodiments. Specifically, the map construction apparatus includes an acquisition module 801, a calculation module 802 and a composition module 803; the apparatus may further include other modules and units, such as a storage module and a sending module, which this embodiment does not limit.
Specifically, the acquisition module 801 is configured to acquire feature point information of multiple frames of images, first speed information of a vehicle, and positioning information of the vehicle.
Wherein the feature point information includes the feature points of each frame of image and the matching relationship between inter-frame feature points; the first speed information of the vehicle includes a first linear velocity and/or a first angular velocity of the vehicle and is measured by a wheel speed sensor. The positioning information includes the GPS position information of the vehicle.
The calculation module 802 is configured to determine first pose information of the vehicle according to the feature point information and the first speed information obtained from the acquisition module 801, and to determine second pose information of the vehicle according to the first pose information and the positioning information of the vehicle obtained by the acquisition module.
The composition module 803 is configured to perform point cloud reconstruction using the feature point information of the multiple frames of images and the second pose information obtained by the calculation module 802, to obtain a point cloud map.
Optionally, as shown in fig. 9, the calculation module 802 includes an optimization module 8021 and an image processing module 8022.
Optionally, in an implementation of this embodiment, when the acquisition module 801 obtains the feature point information of the images, the feature point information may be obtained through an image acquisition module 901 of a camera, as shown in fig. 10.
Further, the image acquisition module 901 is configured to collect the multiple frames of images captured/scanned while the vehicle is driving and send them to the acquisition module 801.
The image processing module 8022 is configured to extract feature points from the multiple frames of images acquired by the acquisition module 801, to obtain at least one feature point of each frame of image, and to match the feature points according to the at least one feature point of each frame of image, to obtain the matching relationship of inter-frame feature points. For example, this includes performing relevant preprocessing on the images, detecting feature information such as points, lines and segments, and matching preceding and following key frames.
The feature point extraction algorithm includes deep learning or the ORB feature point extraction algorithm.
Optionally, the camera is an image sensor, such as a front camera sensor of a mobile terminal or a vehicle-mounted terminal.
In addition, the acquisition module 801 is further configured to acquire the positioning information of the vehicle through a GPS acquisition module 904 of a GPS sensor. The GPS acquisition module 904 is configured to acquire the positioning information of the vehicle and report it to the server.
Optionally, in another implementation of this embodiment, the acquisition module 801 acquires the first speed information of the vehicle as follows: a speed acquisition module 902 in the wheel speed meter or wheel speed sensor acquires second speed information of the vehicle, where the second speed information includes a second linear velocity and/or a second angular velocity of the vehicle; the optimization module 903 optimizes the second speed information acquired by the speed acquisition module 902 to obtain the first speed information and finally sends the first speed information to the acquisition module 801.
Or, another implementation manner is that the optimization module 8021 in the calculation module 802 optimizes the second speed information to obtain the first speed information.
Further, when the optimization module 8021, or the optimization module 903 in the wheel speed meter, optimizes the second speed information, it is configured to perform pre-integration on the second speed information to obtain the position and attitude information of the vehicle, and to process the second linear velocity and/or second angular velocity of the vehicle according to the position and attitude information of the vehicle to obtain the first speed information.
Optionally, in another implementation of this embodiment, the calculation module 802 is further configured to perform a first reprojection error calculation on the feature point information to obtain a second Jacobian matrix; update the second Jacobian matrix according to the first Jacobian matrix to obtain a third Jacobian matrix; and perform a second reprojection error calculation based on the feature points using the third Jacobian matrix to obtain the first pose information.
Optionally, in another implementation of this embodiment, the calculation module 802 is further configured to judge, according to the position and attitude information of the vehicle, whether the first reprojection error is within a preset threshold; if so, perform the second reprojection error calculation based on the feature points using the third Jacobian matrix to obtain the first pose information.
Optionally, in another implementation of this embodiment, the calculation module 802 is further configured to fit the first pose information it has calculated and the positioning information of the vehicle reported by the GPS acquisition module 904, to obtain the first pose information and the GPS position information of the vehicle in a GPS coordinate system; and to perform pose graph optimization on the first pose information and the GPS position information of the vehicle in the GPS coordinate system, to obtain the second pose information, where the second pose information is fitted to the GPS position information of the vehicle in the GPS coordinate system.
The composition module 803 is further configured to perform optimization processing on the optimized second pose information determined by the calculation module 802 together with the GPS position information of the vehicle carrying geographic information, to obtain second pose information of the vehicle with an accurate scale and the point cloud set of each key frame, and finally to form a 3D reconstructed point cloud map, thereby realizing global optimization within the area and construction of a dense point cloud map on the basis of the sparse point clouds.
In addition, in this embodiment of the apparatus, functions of the respective modules shown in fig. 10 correspond to those of the foregoing embodiment of the method shown in fig. 5, for example, the image acquisition module 901 in the camera is used to execute the foregoing step 110-1 and/or step 110-2 of the method, or the image processing module 8022 executes the step 110-2. The speed acquisition module 902 of the wheel speed meter/wheel speed sensor can be used to realize the aforementioned step 110-3, and the optimization module 903 or the optimization module 8021 can be used to realize the aforementioned step 110-4. The calculation module 802 and the patterning module 803 may then be used to implement all or part of the functionality of steps 120 to 140 described above.
Exemplary hardware device
The hardware device of the embodiment of the present application is described below. The hardware device may be an electronic device, and the electronic device may be a server or a mobile terminal, such as a car machine, a vehicle-mounted device/a vehicle-mounted terminal, and the like.
When the hardware device is a server, as shown in fig. 3, a block diagram of a server according to an embodiment of the present application is provided.
The processor 302 includes one or more processors or processing units, each of which may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the server, such as the communication interface 301, input output devices 304, etc., to perform corresponding functions.
Memory 303 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc.
One or more computer program instructions may be stored on the computer-readable storage medium and executed by processor 302 to implement the point cloud mapping methods of the various embodiments of the present application described above and/or other desired functions. The computer readable storage medium may further store sampling data reported by various sensors, such as image data, speed information, positioning information, and the like.
In one example, input/output devices 304, communication interface 301, and the like may be interconnected by a bus system and/or other form of connection mechanism (not shown).
For example, at least one communication interface 301 on the server is connected to a transceiver for receiving the various information reported by the car machine or mobile terminal device, including the data and information collected by the various sensors, and stores the information in the memory 303. The transceiver may also output various information, including the constructed point cloud map, through the communication interface 301, so that a receiving end, user equipment (UE) or mobile terminal can receive and display the reconstructed 3D point cloud map.
In addition, the input/output devices 304 may include, for example, a display, speakers, a printer, and a communication network and its connected remote output devices, among others.
Of course, for simplicity, only some of the components of the server relevant to the present application are shown in fig. 3, omitting components such as speakers, displays, transceivers, etc. In addition, the hardware device may further include any other suitable components according to specific application, and the embodiment is not limited thereto.
Also, the functions of the acquisition module 801, the calculation module 802, and the patterning module 803 in fig. 8 or 9 may be implemented by the processor 302 and/or the memory 303, and the communication interface 301.
In addition, as shown in fig. 2, the car machine 200 may also serve as an electronic device to implement all or part of the functions of the above-mentioned sensors, such as the image sensor 204, the wheel speed sensor 205 and the GPS sensor 206; correspondingly, the functions of the image acquisition module 901, the speed acquisition module 902, the optimization module 903 and the GPS acquisition module 904 shown in fig. 10 may be implemented by these sensors.
Exemplary computer program product and computer-readable storage Medium
In addition to the above-described methods and apparatus, embodiments of the present application may also be a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform some or all of the steps in a point cloud mapping method according to various embodiments of the present application described in the "exemplary methods" section of this specification, supra.
The computer program product may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, and conventional procedural programming languages such as the "C" language or similar, for performing the operations of the embodiments of the present application. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present application may also be a computer-readable storage medium having stored thereon computer program instructions that, when executed by a processor, cause the processor to perform some or all of the steps in a point cloud map construction method according to various embodiments of the present application described in the "exemplary methods" section above in this specification.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing describes the general principles of the present application in conjunction with specific embodiments. However, it is noted that the advantages, effects, and the like mentioned in the present application are merely examples and are not limiting; they should not be considered essential to the various embodiments of the present application. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description only and is not intended to be limiting, since the disclosure is not intended to be exhaustive or to limit the application to the precise details disclosed.
The block diagrams of devices, apparatuses, and systems referred to in this application are given only as illustrative examples and are not intended to require or imply that the connections, arrangements, or configurations must be made in the manner shown in the block diagrams. These devices, apparatuses, and systems may be connected, arranged, or configured in any manner, as will be appreciated by those skilled in the art. Words such as "including," "comprising," and "having" are open-ended words that mean "including, but not limited to," and are used interchangeably therewith. The words "or" and "and" as used herein mean, and are used interchangeably with, the term "and/or," unless the context clearly dictates otherwise. The word "such as" is used herein to mean, and is used interchangeably with, the phrase "such as but not limited to."
It should also be noted that in the devices, apparatuses, and methods of the present application, the components or steps may be decomposed and/or recombined. These decompositions and/or recombinations are to be considered as equivalents of the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, the description is not intended to limit embodiments of the application to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (10)

1. A point cloud map construction method, the method comprising:
acquiring feature point information of a plurality of frames of images, first speed information of a vehicle, and positioning information of the vehicle, wherein the feature point information comprises: the feature points of each frame of image and the matching relations of feature points between frames, the first speed information of the vehicle comprises a first linear speed and/or a first angular speed of the vehicle, and the first speed information is measured by a wheel speed sensor;
determining first pose information of the vehicle according to the feature point information and the first speed information;
determining second pose information of the vehicle according to the first pose information and the positioning information of the vehicle;
and performing point cloud reconstruction by using the feature point information of the plurality of frames of images and the second pose information to obtain a point cloud map.
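By way of illustration only, and not as part of the claim language, the following Python sketch shows the final reconstruction step of claim 1: triangulating matched feature points from two frames whose second pose information is already known. The intrinsic matrix K, the 3x4 pose matrices, and the pixel arrays are assumed example inputs, not values from the disclosure.

    import numpy as np
    import cv2

    def reconstruct_points(K, pose_a, pose_b, pts_a, pts_b):
        """Triangulate matched pixels (2xN arrays) from two frames whose 3x4
        world-to-camera poses [R|t] are known; returns an Nx3 point cloud."""
        P_a, P_b = K @ pose_a, K @ pose_b            # projection matrices
        pts_h = cv2.triangulatePoints(P_a, P_b, pts_a, pts_b)
        return (pts_h[:3] / pts_h[3]).T              # dehomogenize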
2. The method of claim 1, wherein obtaining first speed information of the vehicle comprises:
acquiring second speed information of the vehicle reported by the wheel speed sensor, wherein the second speed information comprises a second linear speed and/or a second angular speed of the vehicle;
and optimizing the second speed information to obtain the first speed information.
3. The method of claim 2, wherein optimizing the second speed information to obtain the first speed information comprises:
performing pre-integration processing on the second speed information to obtain pose information of the vehicle;
and processing the second linear speed and/or the second angular speed of the vehicle according to the pose information of the vehicle to obtain the first speed information.
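A minimal sketch of the pre-integration step of claim 3, assuming a planar unicycle motion model and uniformly sampled wheel-speed readings; the claim itself does not prescribe a particular motion model, so this is one possible reading.

    import numpy as np

    def preintegrate_wheel_speed(v, w, dt):
        """Integrate linear speed v (m/s) and angular speed w (rad/s) samples,
        taken at a fixed interval dt, into 2D poses (x, y, yaw)."""
        x, y, yaw = 0.0, 0.0, 0.0
        poses = [(x, y, yaw)]
        for v_k, w_k in zip(v, w):
            x += v_k * np.cos(yaw) * dt   # advance along the current heading
            y += v_k * np.sin(yaw) * dt
            yaw += w_k * dt               # accumulate heading change
            poses.append((x, y, yaw))
        return np.array(poses)

    # Example: 1 m/s forward while turning at 0.1 rad/s, sampled at 100 Hz.
    poses = preintegrate_wheel_speed(np.full(100, 1.0), np.full(100, 0.1), 0.01)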
4. The method according to claim 3, wherein processing the second linear speed and/or the second angular speed of the vehicle according to the pose information of the vehicle further yields a first Jacobian matrix; and
wherein determining first pose information of the vehicle according to the feature point information and the first speed information comprises:
performing a first reprojection error calculation on the feature point information to obtain a second Jacobian matrix;
updating the second Jacobian matrix according to the first Jacobian matrix to obtain a third Jacobian matrix;
and performing a second reprojection error calculation by using the third Jacobian matrix to obtain the first pose information.
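A hedged illustration of the reprojection error machinery in claim 4, under an assumed pinhole camera model. Only the landmark-position block of the Jacobian is shown; the claim's first and third Jacobian matrices additionally couple in the wheel-speed-derived states, which is omitted here.

    import numpy as np

    def reprojection_error_and_jacobian(p_cam, uv_obs, fx, fy, cx, cy):
        """Pinhole reprojection error for one landmark and its Jacobian with
        respect to the landmark position (X, Y, Z) in the camera frame."""
        X, Y, Z = p_cam
        uv_pred = np.array([fx * X / Z + cx, fy * Y / Z + cy])
        err = uv_pred - uv_obs
        J = np.array([[fx / Z, 0.0,    -fx * X / Z**2],
                      [0.0,    fy / Z, -fy * Y / Z**2]])
        return err, J

    # One damped Gauss-Newton step on the landmark position (normal equations).
    err, J = reprojection_error_and_jacobian(np.array([0.5, -0.2, 4.0]),
                                             np.array([330.0, 230.0]),
                                             fx=500.0, fy=500.0, cx=320.0, cy=240.0)
    delta = np.linalg.solve(J.T @ J + 1e-6 * np.eye(3), -J.T @ err)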
5. The method of claim 4, wherein performing the second reprojection error calculation by using the third Jacobian matrix to obtain the first pose information comprises:
when it is determined, from the pose information of the vehicle, that the first reprojection error is within a preset threshold, performing the second reprojection error calculation by using the third Jacobian matrix to obtain the first pose information.
6. The method of claim 4, wherein the positioning information comprises GPS location information;
determining second pose information of the vehicle according to the first pose information and the positioning information of the vehicle comprises:
fitting the first pose information and the GPS location information to obtain first pose information and GPS position information of the vehicle in a GPS coordinate system;
and performing pose graph optimization on the first pose information and the GPS position information of the vehicle in the GPS coordinate system to obtain the second pose information, wherein the second pose information is fitted to the GPS position information of the vehicle in the GPS coordinate system.
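One plausible reading of the fitting step in claim 6 is a least-squares similarity alignment (Umeyama-style) of the estimated trajectory onto GPS positions; the claim does not name a specific fitting algorithm, so the choice below is an assumption for illustration.

    import numpy as np

    def fit_to_gps(traj_xyz, gps_xyz):
        """Least-squares similarity transform (scale s, rotation R, translation t)
        minimizing ||s * R @ traj + t - gps|| over all trajectory points (Nx3)."""
        mu_t, mu_g = traj_xyz.mean(axis=0), gps_xyz.mean(axis=0)
        A, B = traj_xyz - mu_t, gps_xyz - mu_g
        U, S, Vt = np.linalg.svd(B.T @ A / len(A))   # cross-covariance SVD
        D = np.eye(3)
        if np.linalg.det(U @ Vt) < 0:                # guard against a reflection
            D[2, 2] = -1.0
        R = U @ D @ Vt
        s = np.trace(np.diag(S) @ D) / A.var(axis=0).sum()
        t = mu_g - s * R @ mu_t
        return s, R, t

    # Aligned trajectory in the GPS frame: (s * (R @ traj_xyz.T)).T + t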
7. The method according to any one of claims 1 to 6, wherein acquiring feature point information of a plurality of frames of images comprises:
extracting feature points from the plurality of frames of images captured by the vehicle during driving to obtain at least one feature point of each frame of image;
and matching feature points according to the at least one feature point of each frame of image to obtain the inter-frame feature point matching relations.
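A conventional way to realize claim 7, sketched with OpenCV's ORB detector and brute-force Hamming matching; the detector and matcher are assumed choices, not prescribed by the claim.

    import cv2

    def match_consecutive_frames(img_a, img_b, n_features=1000):
        """Detect ORB feature points in two grayscale frames and return the
        keypoints of each frame plus their inter-frame matching relation."""
        orb = cv2.ORB_create(nfeatures=n_features)
        kp_a, des_a = orb.detectAndCompute(img_a, None)
        kp_b, des_b = orb.detectAndCompute(img_b, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
        return kp_a, kp_b, matches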
8. A point cloud map construction device, the device comprising: an acquisition module, a calculation module, and a composition module, wherein
the acquisition module is used for acquiring feature point information of a plurality of frames of images, first speed information of a vehicle, and positioning information of the vehicle, wherein the feature point information comprises: the feature points of each frame of image and the matching relations of feature points between frames, the first speed information of the vehicle comprises a first linear speed and/or a first angular speed of the vehicle, and the first speed information is measured by a wheel speed sensor;
the calculation module is used for determining first pose information of the vehicle according to the feature point information and the first speed information obtained from the acquisition module, and determining second pose information of the vehicle according to the first pose information and the positioning information of the vehicle obtained by the acquisition module;
and the composition module is used for performing point cloud reconstruction by using the feature point information of the plurality of frames of images and the second pose information obtained by the calculation module to obtain a point cloud map.
9. A computer-readable storage medium storing a computer program,
the computer program, when executed, implements the point cloud mapping method of any of claims 1-7 above.
10. An electronic device, the electronic device comprising: a processor and a memory, wherein
the memory is used for storing executable instructions;
and the processor is used for reading the executable instructions from the memory and executing the instructions to implement the point cloud map construction method of any one of claims 1 to 7.
CN202111573064.7A 2021-12-21 2021-12-21 Point cloud map construction method, device and equipment Pending CN114429515A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111573064.7A CN114429515A (en) 2021-12-21 2021-12-21 Point cloud map construction method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111573064.7A CN114429515A (en) 2021-12-21 2021-12-21 Point cloud map construction method, device and equipment

Publications (1)

Publication Number Publication Date
CN114429515A true CN114429515A (en) 2022-05-03

Family

ID=81310478

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111573064.7A Pending CN114429515A (en) 2021-12-21 2021-12-21 Point cloud map construction method, device and equipment

Country Status (1)

Country Link
CN (1) CN114429515A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114780666A (en) * 2022-06-23 2022-07-22 四川见山科技有限责任公司 Road label optimization method and system in digital twin city
CN114780666B (en) * 2022-06-23 2022-09-27 四川见山科技有限责任公司 Road label optimization method and system in digital twin city

Similar Documents

Publication Publication Date Title
CN112894832B (en) Three-dimensional modeling method, three-dimensional modeling device, electronic equipment and storage medium
US10789771B2 (en) Method and apparatus for fusing point cloud data
CN108344397B (en) Automatic modeling method and system based on oblique photography technology and auxiliary device thereof
US20190371003A1 (en) Monocular vision tracking method, apparatus and non-volatile computer-readable storage medium
US9185289B2 (en) Generating a composite field of view using a plurality of oblique panoramic images of a geographic area
JP2020528134A (en) Calibration of integrated sensor in natural scene
CN104268935A (en) Feature-based airborne laser point cloud and image data fusion system and method
JP6830140B2 (en) Motion vector field determination method, motion vector field determination device, equipment, computer readable storage medium and vehicle
JP6950832B2 (en) Position coordinate estimation device, position coordinate estimation method and program
US10996337B2 (en) Systems and methods for constructing a high-definition map based on landmarks
KR20200110120A (en) A system implementing management solution of road facility based on 3D-VR multi-sensor system and a method thereof
US11959749B2 (en) Mobile mapping system
KR20180076441A (en) Method and apparatus for detecting object using adaptive roi and classifier
CN113820735A (en) Method for determining position information, position measuring device, terminal, and storage medium
CN110515110B (en) Method, device, equipment and computer readable storage medium for data evaluation
CN108253942B (en) Method for improving oblique photography measurement space-three quality
CN116625354A (en) High-precision topographic map generation method and system based on multi-source mapping data
CN110986888A (en) Aerial photography integrated method
CN114429515A (en) Point cloud map construction method, device and equipment
KR102130687B1 (en) System for information fusion among multiple sensor platforms
JP5230354B2 (en) POSITIONING DEVICE AND CHANGED BUILDING DETECTION DEVICE
Bukin et al. A computer vision system for navigation of ground vehicles: Hardware and software
CN113433566A (en) Map construction system and map construction method
CN114155290B (en) System and method for large-field-of-view high-speed motion measurement
JP7117408B1 (en) POSITION CALCULATION DEVICE, PROGRAM AND POSITION CALCULATION METHOD

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination