WO2023092865A1 - Area reconstruction method and system - Google Patents

Area reconstruction method and system

Info

Publication number
WO2023092865A1
WO2023092865A1 (PCT/CN2022/076070)
Authority
WO
WIPO (PCT)
Prior art keywords
image sequence
target area
image
reconstruction
unit
Prior art date
Application number
PCT/CN2022/076070
Other languages
English (en)
French (fr)
Inventor
邓海峰
计洁
李忠超
任高月
蔡盛
耿浩轩
赵斐斐
Original Assignee
南京天辰礼达电子科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 南京天辰礼达电子科技有限公司
Publication of WO2023092865A1 publication Critical patent/WO2023092865A1/zh

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C 21/10 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C 21/12 - Navigation by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C 21/16 - Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C 21/165 - Inertial navigation combined with non-inertial navigation instruments
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/70 - Determining position or orientation of objects or cameras
    • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2200/00 - Indexing scheme for image data processing or generation, in general
    • G06T 2200/08 - Indexing scheme involving all processing steps from image acquisition to 3D model generation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10028 - Range image; Depth image; 3D point clouds
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10032 - Satellite or aerial image; Remote sensing

Definitions

  • the present application relates to the technical field of surveying and mapping, and in particular, relates to a method and system for area reconstruction.
  • GNSS: Global Navigation Satellite System
  • a GNSS receiver computes the position coordinates of its antenna phase center, whereas in practical surveying and mapping work the points to be measured and the points to be staked out are mostly ground feature points.
  • the GNSS receiver therefore has to be used with a survey pole to transfer the computed antenna-phase-center coordinates to the ground feature points, and the tip of the pole must be placed on the point to be measured before the measurement and stakeout of the point coordinates can be completed.
  • when the GNSS electromagnetic signals are blocked, the accuracy and reliability of the GNSS solution are reduced or the solution even becomes unavailable, and a total station has to be used to complete the measurement and stakeout of the occluded points. Traditional measurement methods are labor-intensive and inefficient.
  • the purpose of the present application is to provide an area reconstruction method and system, which can solve the problem of unreliable occlusion environment measurement in the field of surveying and mapping and improve work efficiency.
  • an area reconstruction method, including: collecting an image sequence of a target area, the target area containing one or more objects to be measured; and acquiring prior pose data of each image in the image sequence;
  • the target area is reconstructed according to the image sequence and the prior pose data to obtain reconstructed poses of the image sequence and a point cloud model of the target area.
  • the prior pose data includes: spatial position parameters and attitude parameters corresponding to each image in the image sequence;
  • Prior pose data including:
  • the spatial position parameters and posture parameters of each image in the image sequence are determined by a positioning system.
  • the spatial position parameters and attitude parameters corresponding to each image in the image sequence can be determined, so that a prior pose of the image sequence is first obtained in multiple dimensions; this provides data support for the subsequent reconstruction of the target area, improves the efficiency of building the point cloud model of the target area, and also makes the constructed point cloud model more accurate.
  • the determining the spatial position parameters and posture parameters of each image in the image sequence through the positioning system includes:
  • a combined device formed by a global navigation satellite system and a visual odometry is used to determine prior pose data corresponding to each image in the image sequence.
  • different structures can be used to determine the prior pose data of the target area, so that the flexibility of obtaining the prior pose data is higher.
  • the target area is reconstructed according to the image sequence and the prior pose data to obtain the reconstructed poses of the image sequence and the point cloud model of the target area, including:
  • performing dense reconstruction according to the first spatial relationship and the poses of the image sequence after sparse reconstruction, to determine the second spatial relationship of each feature point in the image sequence, so as to obtain the poses of the image sequence after dense reconstruction and the point cloud model of the target area.
  • the two-stage construction of sparse reconstruction followed by dense reconstruction can be used to make the constructed point cloud model finer.
  • performing sparse reconstruction of the target area based on the image sequence and the prior pose data, to determine the first spatial relationship of each feature point in the image sequence and the poses of the image sequence after sparse reconstruction, includes:
  • the target area is sparsely reconstructed to determine the first spatial relationship of each feature point in the image sequence and the poses of the image sequence after sparse reconstruction.
  • performing dense reconstruction according to the first spatial relationship and the poses of the image sequence after sparse reconstruction, to determine the second spatial relationship of each feature point in the image sequence and obtain the point cloud model of the target area, includes:
  • the method also includes:
  • the spatial data of the object to be measured in the target area is determined.
  • the determining the spatial data of the object to be measured in the target area according to the point cloud model includes:
  • the spatial data of the object to be measured is determined according to the coordinates of the plurality of key points.
  • the spatial data of each object in the target area can be determined, and the relevant positional relationship or size data of the object can be measured without touching the object.
  • the method also includes:
  • an area reconstruction device including:
  • An acquisition module configured to acquire an image sequence of a target area, the target area including one or more objects to be measured
  • An acquisition module configured to acquire prior pose data of each image in the image sequence
  • a reconstruction module configured to reconstruct the target area according to the image sequence and the prior pose data, and obtain a reconstructed pose of the image sequence and a point cloud model of the target area.
  • an area reconstruction system including:
  • an acquisition unit configured to acquire an image sequence of the target area
  • Satellite navigation and positioning unit and inertial measurement unit for collecting pose data of image sequences
  • a processing unit configured to process the pose data and the image sequence to obtain a point cloud model of the target area.
  • it also includes:
  • a display unit configured to display the data determined by the processing unit.
  • it also includes: a first housing;
  • the satellite navigation and positioning unit, the inertial measurement unit, the acquisition unit, the processing unit and the display unit are installed in the first housing.
  • it also includes: a second housing and a third housing;
  • the satellite navigation and positioning unit, the inertial measurement unit, the acquisition unit and the processing unit are installed in the second housing;
  • the display unit is installed in the third housing
  • the processing unit is communicatively connected with the display unit.
  • in an optional embodiment, it also includes: a fourth housing and a fifth housing;
  • the satellite navigation and positioning unit, the inertial measurement unit and the acquisition unit are installed in the fourth housing;
  • the processing unit and the display unit are installed in the fifth housing;
  • the processing unit is respectively connected in communication with the satellite navigation and positioning unit, the inertial measurement unit and the acquisition unit.
  • the embodiment of the present application provides an area reconstruction system, including: a terminal device and a server;
  • the terminal equipment includes: a satellite navigation and positioning unit, an inertial measurement unit and an acquisition unit;
  • the acquisition unit is configured to acquire an image sequence of the target area
  • the satellite navigation and positioning unit and the inertial measurement unit are used to collect pose data of an image sequence
  • the server is configured to receive the pose data and the image sequence sent by the terminal device, and process the pose data and the image sequence to obtain a point cloud model of the target area.
  • the beneficial effect of the embodiments of the present application is that, by first determining the prior pose data and then reconstructing the target area based on the images and the prior pose data, the objects to be measured can be measured without contact.
  • besides simplifying the operations required for reconstructing the target area, the prior poses also make the reconstructed point cloud model of the target area more accurate.
  • FIG. 1 is a schematic block diagram of an area reconstruction system provided by an embodiment of the present application.
  • FIG. 2 is a schematic structural diagram of the regional reconstruction system provided by the embodiment of the present application.
  • FIG. 3 is a flow chart of an area reconstruction method provided in an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a method for determining feature points used in the region reconstruction method provided by the embodiment of the present application.
  • Fig. 5 is a schematic diagram of functional modules of an apparatus for area reconstruction provided by an embodiment of the present application.
  • GNSS receivers are usually used for surveying and mapping based on satellite navigation and positioning technology.
  • this GNSS receiver surveying and mapping technology is an active measurement and needs to receive the electromagnetic signals broadcast by the satellites; if those signals are blocked, the positioning quality is reduced or positioning becomes impossible.
  • the common solution is to use a total station as a supplementary measurement method, which requires the user to carry an additional total station and tripod; the total station is relatively cumbersome to operate, often requiring several people to cooperate, and the GNSS receiver with its survey pole can only measure one point at a time.
  • this way of surveying and mapping is inefficient.
  • the above problems are the main factors that limit the application of GNSS receivers in the field of surveying and mapping.
  • the embodiments of the present application provide an area reconstruction method and an area reconstruction system, which realize positioning and reconstruction of a target area in a non-contact manner through images.
  • the present application is described below through some embodiments.
  • as shown in FIG. 1, a schematic diagram of the structure of the area reconstruction system provided by an embodiment of the present application is illustrated.
  • the area reconstruction system 10 in this embodiment may include: a satellite navigation and positioning unit 110 , an inertial measurement unit 120 , an acquisition unit 130 and a processing unit 140 .
  • the satellite navigation positioning unit in this embodiment may be a global navigation satellite positioning system (Global Navigation Satellite System, GNSS).
  • GNSS Global Navigation Satellite System
  • the position data of the object to be measured can be determined through the satellite navigation and positioning unit.
  • the satellite navigation and positioning unit may be a chip integrated with satellite navigation and positioning functions, and units such as a power supply module, a communication module, an antenna, a memory, and a processor may also be included in the chip.
  • the power module is used to provide required power for each module in the chip.
  • the communication module is used to communicate with other units of the regional reconstruction system.
  • the processor is used to execute instructions required for positioning using the satellite navigation and positioning unit.
  • the inertial measurement unit is a device for measuring the three-axis attitude angle and acceleration of the object to be measured.
  • the inertial measurement unit may include three single-axis accelerometers and three single-axis gyroscopes.
  • the accelerometer is used to measure the acceleration signal of the object to be measured, and the gyroscope can be used to measure the angular velocity signal of the object to be measured. By measuring the angular velocity and acceleration of the object to be measured in three-dimensional space, the attitude of the object to be measured can be calculated.
  • the inertial measurement unit may be a chip integrating an inertial measurement function.
  • the chip may also include units such as a power module, a communication module, an antenna, a memory, and a processor.
  • the power supply module is used to provide the required power for each component on the chip.
  • the communication module and antenna are used to communicate with other units of the area reconstruction system.
  • the processor is used to execute the instructions required for measuring the attitude data of the object to be measured using the inertial measurement function.
  • the acquisition unit is used for acquiring image data of the object to be measured.
  • the acquisition unit may be a camera.
  • the acquisition unit may also include multiple cameras.
  • the above-mentioned processing unit may be an integrated circuit chip with signal processing capabilities.
  • the above-mentioned processing unit can be a general-purpose processor, including a central processing unit (Central Processing Unit, referred to as CPU), a network processor (Network Processor, referred to as NP), etc.; it can also be a digital signal processor (digital signal processor, referred to as DSP) , Application Specific Integrated Circuit (ASIC for short), Field Programmable Gate Array (Field Programmable Gate Array, FPGA for short) or other programmable logic devices, discrete gate or transistor logic devices, and discrete hardware components.
  • DSP digital signal processor
  • ASIC Application Specific Integrated Circuit
  • FPGA Field Programmable Gate Array
  • a general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
  • the area reconstruction system can be used at a set distance from the target area to determine the image sequence of the target area at different angles together with its position data and attitude data.
  • the area reconstruction system may further include a display unit 150, which is used for displaying the collected image of the target area, and may also be used for displaying the model of the target area constructed based on the image sequence.
  • the above-mentioned display unit provides an interactive interface (such as a user operation interface) between the area reconstruction system and the user or is used for displaying image data for the user's reference.
  • the display unit may be a liquid crystal display or a touch display. If it is a touch display, it can be a capacitive touch screen or a resistive touch screen supporting single-point and multi-touch operations. Supporting single-point and multi-touch operations means that the touch display can sense simultaneous touch operations from one or more locations on the touch display.
  • the layout of the satellite navigation and positioning unit, the inertial measurement unit, the acquisition unit and the processing unit of the area reconstruction system in this embodiment can differ based on different requirements.
  • several optional implementations of the layout of the units, such as the satellite navigation and positioning unit, the inertial measurement unit, the acquisition unit and the processing unit, are described below.
  • the above-mentioned satellite navigation and positioning unit, inertial measurement unit, acquisition unit and processing unit are integrated into one device.
  • the area reconstruction system may further include: a first housing.
  • a satellite navigation and positioning unit, an inertial measurement unit, an acquisition unit, a processing unit and a display unit can be installed in the first housing.
  • the components of the satellite navigation and positioning unit, the inertial measurement unit, the acquisition unit, the processing unit and the display unit are electrically connected to each other directly or indirectly, so as to realize data transmission or interaction.
  • these components can be electrically connected to each other through one or more communication buses or signal lines.
  • the above-mentioned satellite navigation and positioning unit, inertial measurement unit, acquisition unit and processing unit are arranged in two devices.
  • the area reconstruction system may include a second housing and a third housing.
  • the satellite navigation and positioning unit, the inertial measurement unit, the acquisition unit and the processing unit are installed in the second shell.
  • the second housing and the units installed therein may form a device.
  • the display unit is installed in the third housing.
  • the third housing and the display unit mounted therein may form a device.
  • the satellite navigation and positioning unit, the inertial measurement unit, the acquisition unit and the processing unit installed in the second housing are directly or indirectly electrically connected to each other to realize data transmission or interaction.
  • these components can be electrically connected to each other through one or more communication buses or signal lines.
  • a communication connection can be made between the two devices.
  • the processing unit is communicatively connected with the display unit.
  • the processing unit and the display unit may be connected through wireless communication, for example, the wireless connection may be realized through near-field communication such as Bluetooth and WiFi.
  • the processing unit and the display unit may also be connected through wired communication.
  • the area reconstruction system may include a fourth housing and a fifth housing.
  • the satellite navigation and positioning unit, the inertial measurement unit and the acquisition unit are installed in the fourth casing; the processing unit and the display unit are installed in the fifth casing.
  • the satellite navigation and positioning unit, the inertial measurement unit and the acquisition unit inside the fourth housing are electrically connected to each other directly or indirectly to realize data transmission or interaction.
  • these components can be electrically connected to each other through one or more communication buses or signal lines.
  • the processing unit and the display unit in the fifth housing are directly or indirectly electrically connected to realize data transmission or interaction.
  • these components can be electrically connected to each other through one or more communication buses or signal lines.
  • a communication connection can be made between the two devices.
  • the processing unit inside the fifth housing may be respectively connected in communication with the satellite navigation and positioning unit, the inertial measurement unit and the acquisition unit in the fourth housing.
  • the above-mentioned display unit may also be a display unit of a mobile terminal, and the mobile terminal may be communicatively connected to a measuring device including the satellite navigation and positioning unit, the inertial measurement unit, the acquisition unit and the processing unit.
  • the mobile terminal may be a mobile intelligent terminal such as a personal computer (PC), a tablet computer, a smartphone, a personal digital assistant (PDA), or a surveying controller (field book).
  • the area reconstruction system may include a terminal device 200 and a server 300 .
  • the terminal device 200 may include a satellite navigation and positioning unit, an inertial measurement unit, and an acquisition unit.
  • the terminal device 200 may further include a display unit.
  • the server 300 may include a processing unit.
  • the terminal device 200 and the server 300 can be communicatively connected, for example, can communicate through mobile data networks such as 4G and 5G.
  • the area reconstruction system may further include a terminal device, a display terminal, and a server.
  • the satellite navigation positioning unit, inertial measurement unit and acquisition unit of the area reconstruction system can be integrated in the terminal device
  • the display unit of the area reconstruction system can be integrated in the display terminal
  • the processing unit of the area reconstruction system can be integrated in the server.
  • the satellite navigation and positioning unit, inertial measurement unit, acquisition unit, processing unit, and display unit of the area reconstruction system can also be arranged in implementation manners different from the above several implementation manners.
  • each unit of the satellite navigation and positioning unit, the inertial measurement unit, the acquisition unit, the processing unit and the display unit of the area reconstruction system may be arranged in an independent device.
  • each independent device may include units such as a power module, a communication module, a memory, and a processor, so that each independent device can independently implement required functions.
  • the satellite navigation and positioning unit and the inertial measurement unit of the area reconstruction system are arranged in one device, and each unit of the acquisition unit, processing unit and display unit can be arranged in another device.
  • the data of the satellite navigation and positioning unit, the inertial measurement unit, and the acquisition unit of the area reconstruction system can be transmitted to the processing unit, and the data of the processing unit can be transmitted to the display unit.
  • the area reconstruction system 10 in this embodiment can be used to execute each step in each area reconstruction method provided in the embodiment of the present application.
  • the implementation process of the area reconstruction method is described in detail below through several embodiments.
  • FIG. 3 is a flow chart of the area reconstruction method provided by the embodiment of the present application.
  • the method in this embodiment can be applied to an area reconstruction system.
  • the specific process shown in FIG. 3 will be described in detail below.
  • Step 410 collecting an image sequence of the target area.
  • the target area includes one or more objects to be tested.
  • the objects to be tested may be different.
  • the objects to be tested may be various houses in the urban residential area.
  • the target area is a mining area
  • the objects to be tested may be various facilities in the mining area.
  • the object to be measured may be earthwork, boundaries, etc. in the construction site.
  • Step 420 acquiring prior pose data of each image in the image sequence.
  • the prior pose data may include spatial position parameters and/or attitude parameters corresponding to each image in the image sequence.
  • the area reconstruction system may include a positioning system, and then the spatial position parameters corresponding to each image in the image sequence may be determined through the positioning system.
  • the area reconstruction system may include an inertial measurement unit, and then the spatial attitude parameters corresponding to each image in the image sequence may be determined through the inertial measurement unit.
  • the area reconstruction system may include a positioning system, and the positioning system may include a positioning function and an inertial measurement function, and then the spatial position parameters and pose parameters corresponding to each image in the image sequence can be determined through the positioning system.
  • step 410 and step 420 do not necessarily need to be performed sequentially.
  • the coordinates of the acquisition device are taken as the spatial position parameters corresponding to the image, and the attitude of the acquisition device is taken as the attitude parameters corresponding to the image.
  • step 410 may also be performed before step 420, for example, after each image is captured, the coordinates and posture of the current collection device are collected under the condition that the posture of the collection device does not change.
  • step 410 may also be performed after step 420.
  • the coordinates and attitude of the acquisition device are determined first, and after they have been determined, the acquisition device then captures an image of the target area.
  • Step 430 Reconstruct the target area according to the image sequence and the prior pose data, and obtain the reconstructed pose of the image sequence and a point cloud model of the target area.
  • the three-dimensional coordinates of each feature point in the image can be determined through the image sequence. Then, according to the prior pose data and the three-dimensional coordinates of each feature point, the point cloud model of the target area is constructed, and the reconstructed pose of the image sequence is obtained.
  • the acquisition device acquires images of the target feature point at least at two different positions. Then the three-dimensional coordinates of each feature point can be determined by binocular vision positioning method.
  • P1, P2 and P3 represent the spatial position parameters and attitude parameters of the three images, respectively.
  • the spatial position parameters and attitude parameters of the three images can be determined using a positioning system when the three images are collected.
  • in the illustration, xj denotes the target feature point; u1j, u2j and u3j denote the pixel coordinates of the feature point in the three images.
  • the three-dimensional coordinates of the target feature point xj can be calculated based on P1, P2, P3 and u1j, u2j, u3j.
  • the images in the above image sequence may be depth images, and the three-dimensional coordinates of each feature point are determined according to the position of the acquisition device and the depth image of each feature point when each feature point is collected.
  • the object to be measured can also be reconstructed from the image sequence and the prior pose data through a SLAM (simultaneous localization and mapping) algorithm to obtain a point cloud model of the target area.
  • SLAM simultaneous localization and mapping
  • the interaction data can also be obtained first, and the object to be measured can be reconstructed based on the interaction data, the image sequence and the prior pose data to obtain the point cloud model of the target area.
  • the interaction data can be obtained through interaction with the user.
  • the interaction data may include parameters preset by the user, for example, may include the expected size of the object to be measured and the like.
  • the interaction data may also include operations on the images in the sequence of images.
  • operations on images may include image operations such as image zoom-in, image zoom-out, image rotation, image translation, point selection in images, marking of images, and attribute entry of objects in images.
  • the interaction data may also include selecting an image containing the object to be measured among the images in the image sequence.
  • the prior pose data of each feature point and each object to be measured in the target area can be determined, and then, combined with the image sequence, the three-dimensional coordinates of each feature point can be determined, so as to determine the point cloud model of the target area.
  • because the target area can be reconstructed from the image sequence, the cases where the signal is blocked during GNSS receiver measurement can be compensated for, and the resulting point cloud model is better; further, because the point cloud model based on the image sequence is determined with the support of the prior pose data, the determined point cloud model can also be more accurate.
  • Region reconstruction systems based on different structures can adopt different methods for determining prior pose data. The following describes the determination of prior pose data through several implementation methods.
  • an integrated navigation system formed by a global navigation satellite system and an inertial navigation system may be used to determine the prior pose data of each image in the image sequence.
  • the integrated navigation system formed by the global navigation satellite system and the inertial navigation system can also be understood as a GNSS/INS integrated navigation system.
  • a global navigation satellite system may be used to determine spatial position parameters corresponding to each image in the image sequence
  • an attitude reference system may be used to determine attitude parameters corresponding to each image in the image sequence.
  • the attitude reference system may be an AHRS (Attitude and heading reference system) system composed of an accelerometer, a gyroscope, and a magnetometer.
  • AHRS: Attitude and Heading Reference System
  • a combined device formed by a global navigation satellite system and a visual odometry may be used to determine the prior pose data corresponding to each image in the image sequence.
  • the visual odometer can be VIO (Visual-Inertial Odometry, visual-inertial odometer).
  • the visual-inertial odometry may include a visual sensor and an inertial measurement unit.
  • alternatively, the visual odometry may be a pure visual odometer (VO, Visual Odometry).
  • step 430 may include: step 431 and step 432 .
  • Step 431 based on the image sequence and the prior pose data, perform sparse reconstruction on the target area to determine the first spatial relationship of each feature point in the image sequence and the pose of the image sequence after sparse reconstruction .
  • the first spatial relationship may include the three-dimensional coordinates of each feature point and the common-view relationship of each feature point.
  • the common-view relationship may represent a positional relationship of two visual positioning feature points in images collected in the same environment.
  • motion structure recovery technology can be used to perform sparse reconstruction of the target area on the basis of the prior pose data and the image sequence, so as to determine the first spatial relationship of each feature point in the image sequence.
  • motion structure recovery technology refers to SFM (structure from motion) technology.
  • SFM structure from motion
  • the SFM technology can restore the three-dimensional information of each object to be measured in the target area from the above image sequence.
  • the aerotriangulation technique can be used to perform sparse reconstruction of the target area on the basis of the prior pose data and the image sequence, so as to determine the first spatial relationship of each feature point in the image sequence and the poses of the image sequence after sparse reconstruction.
  • Step 432 perform dense reconstruction according to the first spatial relationship to determine the second spatial relationship of each feature point in the image sequence, so as to obtain the pose of the image sequence after dense reconstruction and the point cloud of the target area Model.
  • each feature point in the image sequence can be densely matched to densely construct the target area and obtain a point cloud model of the target area.
  • each feature point in the image sequence can also be densely matched using a triangulation technique, so as to densely construct the target area and obtain the point cloud model of the target area.
  • the point cloud model of the target area can also be obtained only based on the sparse construction in step 431 .
  • the area reconstruction method may further include: step 440, generating a digital surface model and/or a digital elevation model of the target area according to the point cloud model.
  • the size of each object to be measured in the target area can also be determined, and the relative distance, relative positional relationship and the like of any two objects to be measured in the target area can also be determined through the point cloud model.
  • the area reconstruction method may further include: step 450, determining the spatial data of the object to be measured in the target area according to the point cloud model.
  • step 450 may include step 451 and step 452 .
  • Step 451 determine the coordinates of a plurality of key points of the object to be measured in the target area from the point cloud model.
  • the key point may be a point that can represent the object to be measured.
  • the key point of the facility may be the highest point of the facility, a point on the base of the facility, and the like.
  • the key points of a house may be the vertices of its external structure, a point on the edge of its foundation, and the like. Different objects to be measured may have different corresponding key points.
  • the object to be measured may be determined according to the received information input by the user, and then the coordinates of the key points of the object to be measured are screened out from the point cloud model.
  • the target area is a mining area
  • the information input by the user is the target facilities existing in the mining area.
  • based on the target facility, it can be determined that the object to be measured is the target facility in the target area, and points that characterize the target facility can be selected from the point cloud model of the target area as key points.
  • the object to be measured selected by the user in the point cloud model and the data required to be measured for the object to be measured can be obtained through the human-computer interaction interface.
  • the user selects the key point of the object to be measured on the point cloud model or the image sequence, the distance between two points in the object to be measured, and the surface corresponding to the object to be measured.
  • Step 452 determine the spatial data of the object to be measured according to the coordinates of the plurality of key points.
  • the spatial data may include the volume of the object to be measured, the height of the object to be measured, the area of one side of the object to be measured, the distances between multiple objects to be measured, and the like.
  • the surveying and mapping of occluded environments can thus be realized.
  • measurement can be performed by selecting the corresponding ground feature points to be measured on the images, which solves the occlusion problem of GNSS receiver measurement.
  • one data acquisition can measure all the objects to be measured within the photographed target area, which can greatly improve work efficiency.
  • in addition, based on the point cloud model, information such as the volume and area of structures can be measured, which expands the usage scenarios of the system.
  • the embodiments of this application also provide an area reconstruction apparatus corresponding to the area reconstruction method; since the principle by which the apparatus in the embodiments solves the problem is similar to that of the foregoing area reconstruction method embodiments, reference may be made to the description in the above method embodiments for the implementation of the apparatus, and repeated descriptions are omitted.
  • FIG. 5 is a schematic diagram of functional modules of an area reconstruction device provided in an embodiment of the present application.
  • Each module in the area reconstruction apparatus in this embodiment is used to execute each step in the above method embodiment.
  • the area reconstruction device includes: a collection module 510 , an acquisition module 520 and a reconstruction module 530 , where the content of each module is as follows.
  • An acquisition module 510 configured to acquire an image sequence of a target area, where the target area includes one or more objects to be measured;
  • An acquisition module 520 configured to acquire prior pose data of each image in the image sequence
  • the reconstruction module 530 is configured to reconstruct the target area according to the image sequence and the prior pose data, and obtain the reconstructed pose of the image sequence and a point cloud model of the target area.
  • an embodiment of the present application also provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is run by a processor, the steps of the region reconstruction method in the above-mentioned method embodiment are executed .
  • the computer program product of the area reconstruction method provided by the embodiments of the present application includes a computer-readable storage medium storing program code, and the instructions contained in the program code can be used to execute the steps of the area reconstruction method in the above method embodiments.
  • each block in a flowchart or block diagram may represent a module, a program segment, or a part of code that contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in a block may occur out of the order noted in the figures.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations can be implemented by a dedicated hardware-based system that performs the specified function or action , or may be implemented by a combination of dedicated hardware and computer instructions.
  • each functional module in each embodiment of the present application may be integrated to form an independent part, each module may exist independently, or two or more modules may be integrated to form an independent part.
  • this function is realized in the form of a software function module and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • the technical solution of the present application is essentially or the part that contributes to the prior art or the part of the technical solution can be embodied in the form of a software product, and the computer software product is stored in a storage medium, including Several instructions are used to make a computer device (which may be a personal computer, a server, or a network device, etc.) execute all or part of the steps of the method in each embodiment of the present application.
  • the aforementioned storage media include: U disk, mobile hard disk, read-only memory (ROM, Read-Only Memory), random access memory (RAM, Random Access Memory), magnetic disk or optical disc, etc., which can store program codes.
  • relational terms such as first and second are only used to distinguish one entity or operation from another entity or operation, and do not necessarily require or imply any such actual relationship or order between these entities or operations.
  • the terms "comprise", "include" or any other variant thereof are intended to cover a non-exclusive inclusion, such that a process, method, article or device comprising a set of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or device. Without further limitation, an element defined by the phrase "comprising ..." does not exclude the presence of additional identical elements in the process, method, article or device comprising that element.

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Automation & Control Theory (AREA)
  • Processing Or Creating Images (AREA)
  • Position Fixing By Use Of Radio Waves (AREA)

Abstract

The present application provides an area reconstruction method and system. The method includes: collecting an image sequence of a target area, the target area containing one or more objects to be measured; acquiring prior pose data of each image in the image sequence; reconstructing the target area according to the image sequence and the prior pose data to obtain the reconstructed poses of the image sequence and a point cloud model of the target area; and determining, from the reconstruction result, spatial data of the objects to be measured in the target area.

Description

Area reconstruction method and system
Cross-reference to related applications
This application claims priority to Chinese patent application No. 202111428587.2, entitled "Area reconstruction method and system", filed with the China National Intellectual Property Administration on November 29, 2021, the entire contents of which are incorporated herein by reference.
Technical field
The present application relates to the technical field of surveying and mapping, and in particular to an area reconstruction method and system.
Background
In the field of surveying and mapping, GNSS (Global Navigation Satellite System) receivers are the main tools for point measurement and point stakeout.
However, a GNSS receiver computes the position coordinates of its antenna phase center, while in practical surveying work the points to be measured and staked out are mostly ground feature points. During surveying, the GNSS receiver therefore has to be used with a survey pole to transfer the computed antenna-phase-center coordinates to the ground point, and the tip of the pole must be placed on the point to be measured before the point coordinates can be measured or staked out. In environments shaded by buildings or trees, the GNSS electromagnetic signals are blocked, so the accuracy and reliability of the GNSS solution degrade or the solution becomes unavailable altogether, and a total station is needed to measure and stake out the occluded points. This traditional measurement approach is labor-intensive and inefficient.
Summary
The purpose of the present application is to provide an area reconstruction method and system that can solve the problem of unreliable measurement in occluded environments in the surveying and mapping field and improve work efficiency.
In a first aspect, an embodiment of the present application provides an area reconstruction method, including:
collecting an image sequence of a target area, the target area containing one or more objects to be measured;
acquiring prior pose data of each image in the image sequence;
reconstructing the target area according to the image sequence and the prior pose data to obtain the reconstructed poses of the image sequence and a point cloud model of the target area.
In an optional implementation, the prior pose data includes spatial position parameters and attitude parameters corresponding to each image in the image sequence, and acquiring the prior pose data of each image in the image sequence includes:
determining the spatial position parameters and attitude parameters of each image in the image sequence through a positioning system.
In the above implementation, when determining the prior pose data, the spatial position parameters and attitude parameters corresponding to each image in the image sequence can be determined, so that a prior pose of the image sequence is first obtained in multiple dimensions. This provides data support for the subsequent reconstruction of the target area, improves the efficiency of building the point cloud model of the target area, and also makes the constructed point cloud model more accurate.
In an optional implementation, determining the spatial position parameters and attitude parameters of each image in the image sequence through the positioning system includes:
using an integrated navigation system formed by a global navigation satellite system and an inertial navigation system to determine the prior pose data of each image in the image sequence;
or, using a global navigation satellite system to determine the spatial position parameters corresponding to each image in the image sequence, and using an attitude and heading reference system to determine the attitude parameters corresponding to each image in the image sequence;
or, using a combined device formed by a global navigation satellite system and a visual odometry to determine the prior pose data corresponding to each image in the image sequence.
In the above implementations, different hardware configurations can be used to determine the prior pose data of the target area, making the acquisition of the prior pose data more flexible.
In an optional implementation, reconstructing the target area according to the image sequence and the prior pose data to obtain the reconstructed poses of the image sequence and the point cloud model of the target area includes:
performing sparse reconstruction of the target area based on the image sequence and the prior pose data, to determine a first spatial relationship of each feature point in the image sequence and poses of the image sequence after sparse reconstruction;
performing dense reconstruction according to the first spatial relationship and the poses of the image sequence after sparse reconstruction, to determine a second spatial relationship of each feature point in the image sequence, so as to obtain poses of the image sequence after dense reconstruction and the point cloud model of the target area.
In the above implementation, the two-stage construction of sparse reconstruction followed by dense reconstruction can make the constructed point cloud model finer.
In an optional implementation, performing sparse reconstruction of the target area based on the image sequence and the prior pose data, to determine the first spatial relationship of each feature point in the image sequence and the poses of the image sequence after sparse reconstruction, includes:
using structure-from-motion technology, on the basis of the prior pose data and the image sequence, to perform sparse reconstruction of the target area, to determine the first spatial relationship of each feature point in the image sequence and the poses of the image sequence after sparse reconstruction;
or, using aerotriangulation technology, on the basis of the prior pose data and the image sequence, to perform sparse reconstruction of the target area, to determine the first spatial relationship of each feature point in the image sequence and the poses of the image sequence after sparse reconstruction.
In an optional implementation, performing dense reconstruction according to the first spatial relationship and the poses of the image sequence after sparse reconstruction, to determine the second spatial relationship of each feature point in the image sequence and obtain the point cloud model of the target area, includes:
on the basis of the first spatial relationship of the feature points, performing dense matching and/or triangulation on the feature points in the image sequence, so as to densely reconstruct the target area and obtain the poses of the image sequence after dense reconstruction and the point cloud model of the target area.
In an optional implementation, the method further includes:
determining, from the point cloud model, spatial data of the objects to be measured in the target area.
In an optional implementation, determining, from the point cloud model, the spatial data of the objects to be measured in the target area includes:
determining, from the point cloud model, the coordinates of a plurality of key points of an object to be measured in the target area;
determining the spatial data of the object to be measured according to the coordinates of the plurality of key points.
In the above implementation, various spatial data of each object in the target area can be determined, so that positional relationships or dimensional data of an object can be measured without touching the object.
In an optional implementation, the method further includes:
generating a digital surface model and/or a digital elevation model of the target area according to the point cloud model.
In a second aspect, an embodiment of the present application provides an area reconstruction apparatus, including:
a collection module configured to collect an image sequence of a target area, the target area containing one or more objects to be measured;
an acquisition module configured to acquire prior pose data of each image in the image sequence;
a reconstruction module configured to reconstruct the target area according to the image sequence and the prior pose data, and obtain the reconstructed poses of the image sequence and a point cloud model of the target area.
In a third aspect, an embodiment of the present application provides an area reconstruction system, including:
an acquisition unit configured to collect an image sequence of the target area;
a satellite navigation and positioning unit and an inertial measurement unit configured to collect pose data of the image sequence;
a processing unit configured to process the pose data and the image sequence to obtain a point cloud model of the target area.
In an optional implementation, the system further includes:
a display unit configured to display the data determined by the processing unit.
In an optional implementation, the system further includes a first housing;
the satellite navigation and positioning unit, the inertial measurement unit, the acquisition unit, the processing unit and the display unit are installed in the first housing.
In an optional implementation, the system further includes a second housing and a third housing;
the satellite navigation and positioning unit, the inertial measurement unit, the acquisition unit and the processing unit are installed in the second housing;
the display unit is installed in the third housing;
the processing unit is communicatively connected with the display unit.
In an optional implementation, the system further includes a fourth housing and a fifth housing;
the satellite navigation and positioning unit, the inertial measurement unit and the acquisition unit are installed in the fourth housing;
the processing unit and the display unit are installed in the fifth housing;
the processing unit is communicatively connected with the satellite navigation and positioning unit, the inertial measurement unit and the acquisition unit, respectively.
In a fourth aspect, an embodiment of the present application provides an area reconstruction system, including a terminal device and a server;
the terminal device includes a satellite navigation and positioning unit, an inertial measurement unit and an acquisition unit;
the acquisition unit is configured to collect an image sequence of the target area;
the satellite navigation and positioning unit and the inertial measurement unit are configured to collect pose data of the image sequence;
the server is configured to receive the pose data and the image sequence sent by the terminal device, and to process the pose data and the image sequence to obtain a point cloud model of the target area.
The beneficial effect of the embodiments of the present application is that, by first determining the prior pose data and then reconstructing the target area based on the images and the prior pose data, the objects to be measured can be measured without contact. Besides simplifying the operations required for reconstructing the target area, the prior poses also make the reconstructed point cloud model of the target area more accurate.
Brief description of the drawings
In order to describe the technical solutions of the embodiments of the present application more clearly, the drawings used in the embodiments are briefly introduced below. It should be understood that the following drawings show only some embodiments of the present application and therefore should not be regarded as limiting the scope; those of ordinary skill in the art can obtain other related drawings from these drawings without inventive effort.
FIG. 1 is a schematic block diagram of an area reconstruction system provided by an embodiment of the present application;
FIG. 2 is a schematic structural diagram of an area reconstruction system provided by an embodiment of the present application;
FIG. 3 is a flowchart of an area reconstruction method provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of the feature-point determination approach used in the area reconstruction method provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of the functional modules of an area reconstruction apparatus provided by an embodiment of the present application.
Detailed description of the embodiments
The technical solutions in the embodiments of the present application are described below with reference to the drawings.
It should be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it does not need to be further defined and explained in subsequent drawings. In addition, in the description of the present application, the terms "first", "second" and the like are used only to distinguish the description and shall not be understood as indicating or implying relative importance.
The inventors have learned that, in the surveying and mapping field, surveying is usually performed with a GNSS receiver based on satellite navigation and positioning technology. This GNSS receiver surveying technique is an active measurement and needs to receive the electromagnetic signals broadcast by the satellites; if those signals are blocked, the positioning quality degrades or positioning becomes impossible. When the satellite signals cannot be received, the common solution is to use a total station as a supplementary measurement method, which requires the user to carry an additional total station and tripod; operating a total station is relatively cumbersome and often requires several people to cooperate, and a GNSS receiver with a survey pole can only measure one point at a time. This way of surveying is inefficient. The above problems are the main factors limiting the application of GNSS receivers in the surveying and mapping field.
Based on the above findings, the embodiments of the present application provide an area reconstruction method and an area reconstruction system, which locate and reconstruct a target area in a non-contact manner through images. The present application is described below through some embodiments.
Embodiment 1
To facilitate understanding of this embodiment, the device that executes the area reconstruction method disclosed in the embodiments of the present application is first described in detail.
As shown in FIG. 1, which is a schematic diagram of the structure of the area reconstruction system provided by an embodiment of the present application.
The area reconstruction system 10 in this embodiment may include a satellite navigation and positioning unit 110, an inertial measurement unit 120, an acquisition unit 130 and a processing unit 140.
The satellite navigation and positioning unit in this embodiment may be a Global Navigation Satellite System (GNSS) positioning unit. The position data of the object to be measured can be determined through the satellite navigation and positioning unit.
Illustratively, the satellite navigation and positioning unit may be a chip integrating satellite navigation and positioning functions, and the chip may also include units such as a power supply module, a communication module, an antenna, a memory and a processor. The power supply module provides the power required by each module in the chip. The communication module communicates with the other units of the area reconstruction system. The processor executes the instructions required for positioning with the satellite navigation and positioning unit.
The inertial measurement unit is a device for measuring the three-axis attitude angles and the acceleration of the object to be measured. The inertial measurement unit may include three single-axis accelerometers and three single-axis gyroscopes. The accelerometers measure the acceleration signals of the object to be measured, and the gyroscopes measure its angular velocity signals. From the measured angular velocity and acceleration of the object in three-dimensional space, the attitude of the object can be computed.
Optionally, the inertial measurement unit may be a chip integrating the inertial measurement function. The chip may also include units such as a power supply module, a communication module, an antenna, a memory and a processor. The power supply module provides the power required by each component on the chip, the communication module and the antenna communicate with the other units of the area reconstruction system, and the processor executes the instructions required for measuring the attitude data of the object to be measured with the inertial measurement function.
The acquisition unit is used to collect image data of the object to be measured. Illustratively, the acquisition unit may be a camera. Optionally, the acquisition unit may also include multiple cameras.
The above processing unit may be an integrated circuit chip with signal processing capability. The processing unit may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), or the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. It can implement or execute the methods, steps and logic block diagrams disclosed in the embodiments of the present application. The general-purpose processor may be a microprocessor, or any conventional processor.
Illustratively, when the target area needs to be measured, the area reconstruction system can be used at a set distance from the target area to determine the image sequence of the target area from different angles together with its position data and attitude data.
In this embodiment, the area reconstruction system may further include a display unit 150, which is used to display the collected images of the target area and may also be used to display the model of the target area built from the image sequence.
Illustratively, the above display unit provides an interactive interface (for example, a user operation interface) between the area reconstruction system and the user, or is used to display image data for the user's reference. In this embodiment, the display unit may be a liquid crystal display or a touch display. If it is a touch display, it may be a capacitive or resistive touch screen supporting single-point and multi-point touch operations. Supporting single-point and multi-point touch operations means that the touch display can sense touch operations generated simultaneously at one or more positions on the display.
The layout of the satellite navigation and positioning unit, the inertial measurement unit, the acquisition unit and the processing unit of the area reconstruction system in this embodiment can differ according to different requirements. Several optional implementations of the layout of these units are described below.
In a first optional implementation, the satellite navigation and positioning unit, the inertial measurement unit, the acquisition unit and the processing unit are integrated in one device.
The area reconstruction system may further include a first housing. The satellite navigation and positioning unit, the inertial measurement unit, the acquisition unit, the processing unit and the display unit may be installed in the first housing.
Illustratively, the satellite navigation and positioning unit, the inertial measurement unit, the acquisition unit, the processing unit and the display unit are directly or indirectly electrically connected with each other to transmit or exchange data. For example, these components may be electrically connected with each other through one or more communication buses or signal lines.
In a second optional implementation, the satellite navigation and positioning unit, the inertial measurement unit, the acquisition unit and the processing unit are arranged in two devices.
Illustratively, the area reconstruction system may include a second housing and a third housing.
The satellite navigation and positioning unit, the inertial measurement unit, the acquisition unit and the processing unit are installed in the second housing, and the second housing together with the units installed in it may form one device. The display unit is installed in the third housing, and the third housing together with the display unit installed in it may form another device.
The satellite navigation and positioning unit, the inertial measurement unit, the acquisition unit and the processing unit installed in the second housing are directly or indirectly electrically connected with each other to transmit or exchange data. For example, these components may be electrically connected with each other through one or more communication buses or signal lines.
The two devices may be communicatively connected. For example, the processing unit is communicatively connected with the display unit. Optionally, the processing unit and the display unit may be connected via wireless communication, for example via near-field communication such as Bluetooth or WiFi. Optionally, the processing unit and the display unit may also be connected via wired communication.
Illustratively, the area reconstruction system may include a fourth housing and a fifth housing.
The satellite navigation and positioning unit, the inertial measurement unit and the acquisition unit are installed in the fourth housing; the processing unit and the display unit are installed in the fifth housing.
The satellite navigation and positioning unit, the inertial measurement unit and the acquisition unit inside the fourth housing are directly or indirectly electrically connected with each other to transmit or exchange data. For example, these components may be electrically connected with each other through one or more communication buses or signal lines.
The processing unit and the display unit inside the fifth housing are directly or indirectly electrically connected to transmit or exchange data. For example, these components may be electrically connected with each other through one or more communication buses or signal lines.
The two devices may be communicatively connected. For example, the processing unit inside the fifth housing may be communicatively connected with the satellite navigation and positioning unit, the inertial measurement unit and the acquisition unit in the fourth housing, respectively.
In an optional implementation, the above display unit may also be the display unit of a mobile terminal, and the mobile terminal may be communicatively connected with a measuring device that contains the satellite navigation and positioning unit, the inertial measurement unit, the acquisition unit and the processing unit.
The mobile terminal may be a mobile intelligent terminal such as a personal computer (PC), a tablet computer, a smartphone, a personal digital assistant (PDA) or a surveying controller (field book).
In a third optional implementation, as shown in FIG. 2, the area reconstruction system may include a terminal device 200 and a server 300.
The terminal device 200 may include a satellite navigation and positioning unit, an inertial measurement unit and an acquisition unit.
Optionally, the terminal device 200 may further include a display unit.
The server 300 may include a processing unit.
In this embodiment, the terminal device 200 and the server 300 may be communicatively connected, for example through a mobile data network such as 4G or 5G.
In a fourth optional implementation, as shown in FIG. 2, the area reconstruction system may further include a terminal device, a display terminal and a server.
The satellite navigation and positioning unit, the inertial measurement unit and the acquisition unit of the area reconstruction system may be integrated in the terminal device, the display unit of the area reconstruction system may be integrated in the display terminal, and the processing unit of the area reconstruction system may be integrated in the server.
Of course, based on different requirements, the satellite navigation and positioning unit, the inertial measurement unit, the acquisition unit, the processing unit and the display unit of the area reconstruction system may also be arranged in implementations other than the several implementations above.
For example, each of the satellite navigation and positioning unit, the inertial measurement unit, the acquisition unit, the processing unit and the display unit of the area reconstruction system may be arranged in an independent device. Optionally, each independent device may include units such as a power supply module, a communication module, a memory and a processor, so that each independent device can independently implement the required functions.
For another example, the satellite navigation and positioning unit and the inertial measurement unit of the area reconstruction system may be arranged in one device, and the acquisition unit, the processing unit and the display unit may all be arranged in another device.
In this embodiment, the data of the satellite navigation and positioning unit, the inertial measurement unit and the acquisition unit of the area reconstruction system can be transmitted to the processing unit, and the data of the processing unit can be transmitted to the display unit.
The area reconstruction system 10 in this embodiment can be used to execute each step of each area reconstruction method provided in the embodiments of the present application. The implementation of the area reconstruction method is described in detail below through several embodiments.
Embodiment 2
Referring to FIG. 3, which is a flowchart of the area reconstruction method provided by an embodiment of the present application. The method in this embodiment can be applied to an area reconstruction system. The specific flow shown in FIG. 3 is described in detail below.
Step 410: collect an image sequence of the target area.
The target area contains one or more objects to be measured. The objects to be measured may differ across scenarios. For example, if the target area is an urban residential area, the objects to be measured may be the houses in that area. For another example, if the target area is a mining area, the objects to be measured may be the facilities in the mining area. For another example, if the target area is a construction site, the objects to be measured may be earthworks, boundaries and the like on the site.
Step 420: acquire prior pose data of each image in the image sequence.
The prior pose data may include spatial position parameters and/or attitude parameters corresponding to each image in the image sequence.
In this embodiment, the area reconstruction system may include a positioning system, and the spatial position parameters corresponding to each image in the image sequence may then be determined through the positioning system.
In this embodiment, the area reconstruction system may include an inertial measurement unit, and the spatial attitude parameters corresponding to each image in the image sequence may then be determined through the inertial measurement unit.
Optionally, the area reconstruction system may include a positioning system that provides both a positioning function and an inertial measurement function, and the spatial position parameters and attitude parameters corresponding to each image in the image sequence may then be determined through this positioning system.
In this embodiment, step 410 and step 420 do not necessarily have to be performed in a fixed order. For example, while collecting the image sequence of the target area, the coordinates and attitude of the acquisition device at the moment each image is captured can be determined from the real-time state of the acquisition device; the coordinates of the acquisition device are taken as the spatial position parameters corresponding to the image, and the attitude of the acquisition device is taken as the attitude parameters corresponding to the image.
Optionally, step 410 may also be performed before step 420; for example, after each image is captured, the current coordinates and attitude of the acquisition device are collected while the attitude of the acquisition device remains unchanged.
Optionally, step 410 may also be performed after step 420; for example, before each image is captured, the coordinates and attitude of the acquisition device are determined first, and after they have been determined, the acquisition device then captures an image of the target area.
Step 430: reconstruct the target area according to the image sequence and the prior pose data, and obtain the reconstructed poses of the image sequence and a point cloud model of the target area.
In this embodiment, the three-dimensional coordinates of each feature point in the images can be determined from the image sequence. Then, the point cloud model of the target area is built according to the prior pose data and the three-dimensional coordinates of the feature points, and the reconstructed poses of the image sequence are obtained.
In one implementation, for any target feature point, the acquisition device captures images of the target feature point at at least two different positions. The three-dimensional coordinates of each feature point can then be determined by a binocular (stereo) vision positioning method.
As shown in FIG. 4, three images are illustrated, where P1, P2 and P3 represent the spatial position parameters and attitude parameters of the three images respectively. The spatial position parameters and attitude parameters of the three images can be determined with the positioning system when the three images are collected.
In the figure, xj denotes the target feature point, and u1j, u2j, u3j denote the pixel coordinates of the feature point in the three images.
In the example shown in FIG. 4, the three-dimensional coordinates of the target feature point xj can be computed on the basis of P1, P2, P3 and u1j, u2j, u3j.
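To make the triangulation step of FIG. 4 concrete, the following is a minimal sketch of how the three-dimensional coordinates of one feature point could be recovered from its pixel observations in several posed images. It assumes pinhole projection matrices built from the camera intrinsics together with the per-image position and attitude parameters (the role of P1, P2, P3 in the figure) and uses standard linear (DLT) triangulation; the function and variable names are illustrative and not taken from the patent.

```python
import numpy as np

def projection_matrix(K, R, t):
    """Build a 3x4 pinhole projection matrix from intrinsics K,
    rotation R (world -> camera) and translation t."""
    return K @ np.hstack([R, t.reshape(3, 1)])

def triangulate_point(proj_mats, pixels):
    """Linear (DLT) triangulation of one feature point.

    proj_mats: list of 3x4 projection matrices (one per image, e.g. P1, P2, P3)
    pixels:    list of (u, v) pixel coordinates of the same feature point
               in the corresponding images (e.g. u1j, u2j, u3j)
    Returns the 3D point in world coordinates.
    """
    rows = []
    for P, (u, v) in zip(proj_mats, pixels):
        # Each observation adds two linear constraints on the homogeneous 3D point X.
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.vstack(rows)
    # The solution is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Example with three synthetic views (hypothetical numbers):
K = np.array([[1000.0, 0, 640], [0, 1000.0, 360], [0, 0, 1]])
R = np.eye(3)
centers = [np.array([0.0, 0, 0]), np.array([1.0, 0, 0]), np.array([2.0, 0, 0])]
P_list = [projection_matrix(K, R, -R @ c) for c in centers]  # t = -R * camera center
x_true = np.array([0.5, 0.2, 10.0])
obs = []
for P in P_list:
    h = P @ np.append(x_true, 1.0)
    obs.append((h[0] / h[2], h[1] / h[2]))
print(triangulate_point(P_list, obs))  # approximately [0.5, 0.2, 10.0]
```

With noisy real observations the same linear system is simply solved in the least-squares sense, which is what the SVD step already does.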
In one implementation, the images in the above image sequence may be depth images, and the three-dimensional coordinates of each feature point are determined from the position of the acquisition device when the feature point was captured and from the depth image of the feature point.
In an optional implementation, the object to be measured may also be reconstructed from the image sequence and the prior pose data by a SLAM (simultaneous localization and mapping) algorithm to obtain the point cloud model of the target area.
In this embodiment, interaction data may also be obtained first, and the object to be measured is reconstructed based on the interaction data, the image sequence and the prior pose data to obtain the point cloud model of the target area.
Illustratively, the interaction data may be obtained through interaction with the user.
Illustratively, the interaction data may include parameters preset by the user, for example the expected size of the object to be measured.
The interaction data may also include operations on the images in the image sequence. For example, operations on the images may include zooming in, zooming out, rotating, translating, selecting points in an image, annotating an image, entering attributes of objects in an image, and similar image operations.
Illustratively, the interaction data may also include selecting, among the images in the image sequence, the images that contain the object to be measured.
With the above implementation, when building the point cloud model of the target area, the positioning capability of the area reconstruction system can be used to determine the prior pose data of each feature point and each object to be measured in the target area, and, combined with the image sequence, the three-dimensional coordinates of each feature point can then be determined, thereby determining the point cloud model of the target area. Compared with measuring the objects directly with a GNSS receiver, reconstructing the target area from the image sequence compensates for the cases where the signal is blocked during GNSS measurement and produces a better point cloud model; furthermore, because the point cloud model derived from the image sequence is supported by the prior pose data, the determined point cloud model is also more accurate.
Area reconstruction systems with different structures can use different ways to determine the prior pose data. The determination of the prior pose data is described below through several implementations.
In one implementation, an integrated navigation system formed by a global navigation satellite system and an inertial navigation system may be used to determine the prior pose data of each image in the image sequence.
In this implementation, the integrated navigation system formed by the global navigation satellite system and the inertial navigation system can also be understood as a GNSS/INS integrated navigation system.
In another implementation, a global navigation satellite system may be used to determine the spatial position parameters corresponding to each image in the image sequence, and an attitude and heading reference system may be used to determine the attitude parameters corresponding to each image in the image sequence.
The attitude and heading reference system may be an AHRS (Attitude and Heading Reference System) composed of accelerometers, gyroscopes and magnetometers.
In another implementation, a combined device formed by a global navigation satellite system and a visual odometry may be used to determine the prior pose data corresponding to each image in the image sequence.
The visual odometry may be a VIO (Visual-Inertial Odometry) system. The visual-inertial odometry may include a visual sensor and an inertial measurement unit.
Alternatively, the visual odometry may be a pure visual odometer (VO, Visual Odometry).
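As a minimal illustration of the second implementation above (GNSS for the position parameters, an AHRS for the attitude parameters), the sketch below interpolates a GNSS track and an attitude stream to each image timestamp to form a per-image prior pose. The data layout, sample rates and helper names are assumptions made only for this example and are not part of the patent.

```python
import numpy as np

def interpolate_position(gnss_t, gnss_xyz, t_img):
    """Linearly interpolate GNSS positions (N x 3) to an image timestamp."""
    return np.array([np.interp(t_img, gnss_t, gnss_xyz[:, k]) for k in range(3)])

def nearest_attitude(ahrs_t, ahrs_rpy, t_img):
    """Pick the AHRS roll/pitch/yaw sample closest to the image timestamp.
    (A real system would interpolate rotations, e.g. with quaternion slerp.)"""
    return ahrs_rpy[np.argmin(np.abs(ahrs_t - t_img))]

def prior_poses(image_times, gnss_t, gnss_xyz, ahrs_t, ahrs_rpy):
    """Return one (position, attitude) prior per image in the sequence."""
    poses = []
    for t in image_times:
        pos = interpolate_position(gnss_t, gnss_xyz, t)   # spatial position parameters
        att = nearest_attitude(ahrs_t, ahrs_rpy, t)       # attitude parameters
        poses.append({"t": t, "position": pos, "attitude_rpy": att})
    return poses

# Hypothetical 1 Hz GNSS fixes, 100 Hz AHRS samples and three image timestamps:
gnss_t = np.arange(0.0, 10.0, 1.0)
gnss_xyz = np.column_stack([gnss_t * 0.5, np.zeros_like(gnss_t), np.full_like(gnss_t, 1.8)])
ahrs_t = np.arange(0.0, 10.0, 0.01)
ahrs_rpy = np.column_stack([np.zeros_like(ahrs_t), np.zeros_like(ahrs_t), np.radians(5.0) * ahrs_t])
print(prior_poses([1.2, 2.7, 4.1], gnss_t, gnss_xyz, ahrs_t, ahrs_rpy)[0])
```

A GNSS/INS or GNSS/VIO combination would replace the two lookups with the fused output of that navigation filter, but the association of one prior pose with each image timestamp stays the same.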
To make the constructed point cloud model more accurate, the point cloud model of the target area can also be determined through a two-stage construction process: sparse reconstruction (sparse construction) followed by dense reconstruction (dense construction). Illustratively, step 430 may include step 431 and step 432.
Step 431: based on the image sequence and the prior pose data, perform sparse reconstruction of the target area to determine the first spatial relationship of each feature point in the image sequence and the poses of the image sequence after sparse reconstruction.
Illustratively, the first spatial relationship may include the three-dimensional coordinates of each feature point and the co-visibility relationship among the feature points.
The co-visibility relationship may represent the positional relationship of two visual-positioning feature points in images collected in the same environment.
In one implementation, structure-from-motion technology may be used, on the basis of the prior pose data and the image sequence, to perform sparse reconstruction of the target area and determine the first spatial relationship of each feature point in the image sequence.
Structure-from-motion refers to SFM (structure from motion) technology. SFM can recover the three-dimensional information of each object to be measured in the target area from the above image sequence.
In another implementation, aerotriangulation may be used, on the basis of the prior pose data and the image sequence, to perform sparse reconstruction of the target area and determine the first spatial relationship of each feature point in the image sequence and the poses of the image sequence after sparse reconstruction.
Step 432: perform dense reconstruction according to the first spatial relationship to determine the second spatial relationship of each feature point in the image sequence, so as to obtain the poses of the image sequence after dense reconstruction and the point cloud model of the target area.
Optionally, on the basis of the first spatial relationship of the feature points, dense matching may be performed on the feature points in the image sequence to densely construct the target area and obtain its point cloud model.
Optionally, on the basis of the first spatial relationship of the feature points, triangulation may be used to densely match the feature points in the image sequence, thereby densely constructing the target area and obtaining its point cloud model.
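As one possible way to realize the dense-matching step, the following sketch densifies a single calibrated image pair whose relative pose is already known (for example from the sparse reconstruction): the pair is rectified, a dense disparity map is computed, and every matched pixel is re-projected to a 3D point. It relies on OpenCV, assumes undistorted grayscale images, and its parameter values are placeholders rather than the patent's own settings.

```python
import cv2
import numpy as np

def densify_pair(img_left, img_right, K, R, t):
    """Dense reconstruction of one image pair with known relative pose (R, t).

    Returns an (M x 3) array of 3D points in the left-camera frame.
    """
    size = (img_left.shape[1], img_left.shape[0])
    dist = np.zeros(5)  # images assumed already undistorted
    # Rectify the pair so that corresponding pixels lie on the same row.
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K, dist, K, dist, size, R, t)
    map1x, map1y = cv2.initUndistortRectifyMap(K, dist, R1, P1, size, cv2.CV_32FC1)
    map2x, map2y = cv2.initUndistortRectifyMap(K, dist, R2, P2, size, cv2.CV_32FC1)
    rect_l = cv2.remap(img_left, map1x, map1y, cv2.INTER_LINEAR)
    rect_r = cv2.remap(img_right, map2x, map2y, cv2.INTER_LINEAR)
    # Semi-global matching produces a dense disparity for every pixel it can match.
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
    disparity = sgbm.compute(rect_l, rect_r).astype(np.float32) / 16.0
    # Re-project the matched pixels to 3D; this is the triangulation of the dense matches.
    cloud = cv2.reprojectImageTo3D(disparity, Q)
    valid = disparity > 0
    return cloud[valid]

# Usage (hypothetical file names; K, R, t would come from the sparse reconstruction):
# left = cv2.imread("img_000.png", cv2.IMREAD_GRAYSCALE)
# right = cv2.imread("img_001.png", cv2.IMREAD_GRAYSCALE)
# points = densify_pair(left, right, K, R, t)
```

Repeating this over the image pairs of the sequence and merging the per-pair clouds in a common frame is one straightforward route to the dense point cloud model; multi-view stereo pipelines refine the same idea with photometric consistency across more than two views.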
In some optional implementations, the point cloud model of the target area may also be obtained based only on the sparse construction of step 431.
In this embodiment, the area reconstruction method may further include step 440: generate a digital surface model and/or a digital elevation model of the target area according to the point cloud model.
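One simple way to derive the digital surface model of step 440 is to rasterize the point cloud onto a horizontal grid and keep the highest elevation in each cell. The sketch below shows this under the assumption that the point cloud is an N x 3 array in a local east-north-up frame; the grid resolution and the random test cloud are illustrative only. A digital elevation model would additionally require filtering out non-ground points before rasterizing.

```python
import numpy as np

def point_cloud_to_dsm(points, cell=0.5):
    """Rasterize an (N x 3) point cloud into a digital surface model.

    Each grid cell stores the maximum z of the points falling into it
    (NaN where no point was observed); cell is the ground resolution in metres.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    x0, y0 = x.min(), y.min()
    cols = int(np.ceil((x.max() - x0) / cell)) + 1
    rows = int(np.ceil((y.max() - y0) / cell)) + 1
    dsm = np.full((rows, cols), np.nan)
    ci = ((x - x0) / cell).astype(int)
    ri = ((y - y0) / cell).astype(int)
    for r, c, h in zip(ri, ci, z):
        if np.isnan(dsm[r, c]) or h > dsm[r, c]:
            dsm[r, c] = h            # keep the highest return per cell
    return dsm, (x0, y0, cell)

# Hypothetical usage with a random cloud:
pts = np.random.rand(1000, 3) * [20.0, 20.0, 5.0]
dsm, origin = point_cloud_to_dsm(pts, cell=1.0)
print(dsm.shape, np.nanmax(dsm))
```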
On the basis of the constructed point cloud model, the size of each object to be measured in the target area can also be determined, and the relative distance, relative positional relationship and so on of any two objects to be measured in the target area can also be determined through the point cloud model.
The area reconstruction method may further include step 450: determine, from the point cloud model, the spatial data of the objects to be measured in the target area.
Optionally, step 450 may include step 451 and step 452.
Step 451: determine, from the point cloud model, the coordinates of a plurality of key points of an object to be measured in the target area.
A key point may be a point that can represent the object to be measured. For example, if the objects to be measured are the facilities in a mining area, the key points of a facility may be its highest point, a point on its base, and the like. For another example, if the object to be measured is a house, its key points may be the vertices of its external structure, a point on the edge of its foundation, and the like. Different objects to be measured may have different corresponding key points.
Illustratively, the object to be measured may be determined according to received information input by the user, and the coordinates of the key points of the object to be measured are then selected from the point cloud model.
For example, if the target area is a mining area and the information input by the user identifies a target facility existing in the mining area, the object to be measured can be determined to be that target facility, and points that characterize the target facility can be selected from the point cloud model of the target area as key points.
Optionally, the object to be measured selected by the user in the point cloud model, and the data to be measured for that object, can be obtained through a human-computer interaction interface, for example the key points of the object selected by the user on the point cloud model or the image sequence, the distance between two points of the object, or a face corresponding to the object.
Step 452: determine the spatial data of the object to be measured according to the coordinates of the plurality of key points.
Illustratively, the spatial data may include the volume of the object to be measured, the height of the object, the area of one of its faces, the mutual distances between multiple objects to be measured, and the like.
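As an illustration of step 452, once the key-point coordinates have been read out of the point cloud model, quantities such as heights, point-to-point distances and face areas reduce to simple vector arithmetic. The sketch below shows a few of these computations; the key-point values are made up for the example and do not come from the patent.

```python
import numpy as np

def distance(p, q):
    """Straight-line distance between two key points."""
    return float(np.linalg.norm(np.asarray(p) - np.asarray(q)))

def height(top, base):
    """Height of an object: vertical difference between its highest key point and a base point."""
    return float(top[2] - base[2])

def polygon_area(vertices):
    """Area of a planar face given its key points in order (cross-product / shoelace form)."""
    v = np.asarray(vertices, dtype=float)
    centroid = v.mean(axis=0)
    total = np.zeros(3)
    for i in range(len(v)):
        total += np.cross(v[i] - centroid, v[(i + 1) % len(v)] - centroid)
    return float(np.linalg.norm(total) / 2.0)

# Hypothetical key points picked from the point cloud model of a small building:
base_corner = np.array([10.0, 5.0, 0.0])
roof_peak = np.array([12.0, 7.0, 6.5])
wall = [np.array([10.0, 5.0, 0.0]), np.array([18.0, 5.0, 0.0]),
        np.array([18.0, 5.0, 4.0]), np.array([10.0, 5.0, 4.0])]

print("height:", height(roof_peak, base_corner))   # 6.5 m
print("diagonal:", distance(base_corner, roof_peak))
print("wall area:", polygon_area(wall))            # 8 m x 4 m = 32 m^2
```

Volumes would typically be evaluated either from a closed mesh fitted to the key points or by integrating the digital surface model over the object's footprint.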
Based on the non-contact measurement system proposed in the present application, surveying work in occluded environments can be carried out: by taking several images, measurement can be performed simply by selecting the corresponding ground feature points on the images, which solves the occlusion problem of GNSS receiver measurement; moreover, one data acquisition can measure all objects to be measured within the photographed target area, which greatly improves work efficiency. In addition, information such as the volume and area of structures can be measured based on the point cloud model, which expands the usage scenarios of the system.
实施例三
基于同一申请构思,本申请实施例中还提供了与区域重构方法对应的区域重构装置,由于本申请实施例中的装置解决问题的原理与前述的区域重构方法实施例相似,因此本实施例中的装置的实施可以参见上述方法的实施例中的描述,重复之处不再赘述。
请参阅图5,是本申请实施例提供的区域重构装置的功能模块示意图。本实施例中的区域重构装置中的各个模块用于执行上述方法实施例中的各个步骤。区域重构装置包括:采集模块510、获取模块520和重建模块530,其中各个模块的内容如下。
采集模块510,用于采集目标区域的图像序列,该目标区域中包括一个或多个待测对象;
获取模块520,用于获取该图像序列中的各张图像的先验位姿数据;
重建模块530,用于根据该图像序列和该先验位姿数据,对该目标区域进行重建,得到该图像序列重建后的位姿与该目标区域的点云模型。
此外,本申请实施例还提供一种计算机可读存储介质,该计算机可读存储介质上存储有计算机程序,该计算机程序被处理器运行时执行上述方法实施例中该的区域重构方法的步骤。
本申请实施例所提供的区域重构方法的计算机程序产品,包括存储了程序代码的计算机可读存储介质,该程序代码包括的指令可用于执行上述方法实施例中该的区域重构方法的步骤,具体可参见上述方法实施例,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的装置和方法,也可以通过其它的方式实现。以上所描述的装置实施例仅仅是示意性的,例如,附图中的流程图和框图显示了根据本申请的多个实施例的装置、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上,流程图或框图中的每个方框可以代表一个模块、程序段或代码的 一部分,该模块、程序段或代码的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。也应当注意,在有些作为替换的实现方式中,方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如,两个连续的方框实际上可以基本并行地执行,它们有时也可以按相反的顺序执行,这依所涉及的功能而定。也要注意的是,框图和/或流程图中的每个方框、以及框图和/或流程图中的方框的组合,可以用执行规定的功能或动作的专用的基于硬件的系统来实现,或者可以用专用硬件与计算机指令的组合来实现。
另外,在本申请各个实施例中的各功能模块可以集成在一起形成一个独立的部分,也可以是各个模块单独存在,也可以两个或两个以上模块集成形成一个独立的部分。
该功能如果以软件功能模块的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例该方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁碟或者光盘等各种可以存储程序代码的介质。需要说明的是,在本文中,诸如第一和第二等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来,而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。而且,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括……”限定的要素,并不排除在包括该要素的过程、方法、物品或者设备中还存在另外的相同要素。
以上所述仅为本申请的优选实施例而已,并不用于限制本申请,对于本领域的技术人员来说,本申请可以有各种更改和变化。凡在本申请的精神和原则之内,所作的任何修改、等同替换、改进等,均应包含在本申请的保护范围之内。应注意到:相似的标号和字母在下面的附图中表示类似项,因此,一旦某一项在一个附图中被定义,则在随后的附图中不需要对其进行进一步定义和解释。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以权利要求的保护范围为准。

Claims (16)

  1. 一种区域重构方法,其特征在于,包括:
    采集目标区域的图像序列,所述目标区域中包括一个或多个待测对象;
    获取所述图像序列中的各张图像的先验位姿数据;
    根据所述图像序列和所述先验位姿数据,对所述目标区域进行重建,得到所述图像序列重建后的位姿与所述目标区域的点云模型。
  2. 根据权利要求1所述的方法,其特征在于,所述先验位姿数据包括:所述图像序列中的各张图像对应的空间位置参数和姿态参数;所述获取所述图像序列中的各张图像的先验位姿数据,包括:
    通过定位系统确定出所述图像序列中的各张图像的空间位置参数和姿态参数。
  3. 根据权利要求2所述的方法,其特征在于,所述通过定位系统确定出所述图像序列中的各张图像的空间位置参数和姿态参数,包括:
    使用全球导航卫星系统和惯性导航系统形成的组合导航系统,确定所述图像序列中各张图像的先验位姿数据;
    或者,使用全球导航卫星系统确定所述图像序列的各张图像对应的空间位置参数,使用航姿参考系统确定出所述图像序列的各张图像对应的姿态参数;
    或者,使用全球导航卫星系统和视觉里程计形成的组合设备,确定出所述图像序列的各张图像对应的先验位姿数据。
  4. 根据权利要求1所述的方法,其特征在于,所述根据所述图像序列和所述先验位姿数据,对所述目标区域进行重建,得到所述图像序列重建后的位姿与所述目标区域的点云模型,包括:
    基于所述图像序列和所述先验位姿数据,对所述目标区域进行稀疏重建,以确定出所述图像序列中的各个特征点的第一空间关系和所述图像序列在稀疏重建后的位姿;
    根据所述第一空间关系和所述图像序列在稀疏重建后的位姿,进行稠密重建,以确定出所述图像序列中的各个特征点的第二空间关系,以得到所述图像序列在稠密重建后的位姿与所述目标区域的点云模型。
  5. 根据权利要求4所述的方法,其特征在于,所述基于所述图像序列和所述先验位姿数据,对所述目标区域进行稀疏重建,以确定出所述图像序列中的各个特征点的第一空间关系和所述图像序列在稀疏重建后的位姿,包括:
    使用运动结构恢复技术,在所述先验位姿数据和所述图像序列基础上,对所述目标区域进行稀疏重建,以确定出所述图像序列中的各个特征点的第一空间关系和所述图像序列在稀疏重建后的位姿;
    或者,使用空三技术,在所述先验位姿数据和所述图像序列基础上,对所述目标区域进行稀疏重建,以确定出所述图像序列中的各个特征点的第一空间关系和所述图像序列在稀疏重建后的位姿。
  6. 根据权利要求4所述的方法,其特征在于,所述根据所述第一空间关系和所述图像序列在稀疏重建后的位姿,进行稠密重建,以确定出所述图像序列中的各个特征点的第二空间关系,以得到所述图像序列在稠密重建后的位姿与所述目标区域的点云模型,包括:
    在所述各个特征点的第一空间关系的基础上,对所述图像序列中的各个特征点进行密集匹配和/或三角化,以对所述目标区域进行稠密重建,得到所述图像序列在稠密重建后的位姿与所述目标区域的点云模型。
  7. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    根据所述点云模型,确定出所述目标区域中的待测对象的空间数据。
  8. 根据权利要求7所述的方法,其特征在于,所述根据所述点云模型,确定出所述目标区域中的待测对象的空间数据,包括:
    从所述点云模型中确定出所述目标区域中的待测对象的多个关键点的坐标;
    根据所述多个关键点的坐标确定出所述待测对象的空间数据。
  9. 根据权利要求1-8任意一项所述的方法,其特征在于,所述方法还包括:
    根据所述点云模型,生成所述目标区域的数字地表模型和/或数字高程模型。
  10. 一种区域重构装置,其特征在于,包括:
    采集模块,用于采集目标区域的图像序列,所述目标区域中包括一个或多个待测对象;
    获取模块,用于获取所述图像序列中的各张图像的先验位姿数据;
    重建模块,用于根据所述图像序列和所述先验位姿数据,对所述目标区域进行重建,得到所述图像序列重建后的位姿与所述目标区域的点云模型。
  11. 一种区域重构系统,其特征在于,包括:
    采集单元,用于采集目标区域的图像序列;
    卫星导航定位单元和惯性测量单元,用于采集图像序列的位姿数据;
    处理单元,用于对所述位姿数据和所述图像序列进行处理,以得到所述目标区域的点云模型。
  12. 根据权利要求11所述的区域重构系统,其特征在于,还包括:
    显示单元,用于对所述处理单元确定的数据进行显示。
  13. 根据权利要求12所述的区域重构系统,其特征在于,还包括:第一外壳;
    所述卫星导航定位单元、所述惯性测量单元、所述采集单元、所述处理单元以及所述显示单元安装在所述第一外壳内。
  14. 根据权利要求12所述的区域重构系统,其特征在于,还包括:第二外壳和第三外壳;
    所述卫星导航定位单元、所述惯性测量单元、所述采集单元以及所述处理单元安装在所述第二外壳内;
    所述显示单元安装在所述第三外壳内;
    所述处理单元与所述显示单元通信连接。
  15. 根据权利要求12所述的区域重构系统,其特征在于,还包括:第四外壳和第五外壳;
    所述卫星导航定位单元、所述惯性测量单元以及所述采集单元安装在所述第四外壳内;
    所述处理单元以及所述显示单元安装在所述第五外壳内;
    所述处理单元分别与所述卫星导航定位单元、所述惯性测量单元和所述采集单元通信连接。
  16. 一种区域重构系统,其特征在于,包括:终端设备和服务器;
    所述终端设备包括:卫星导航定位单元、惯性测量单元和采集单元;
    所述卫星导航定位单元和所述惯性测量单元,用于采集目标区域的位姿数据;
    所述采集单元,用于采集所述目标区域的图像序列;
    所述服务器,用于接收所述终端设备发送的所述位姿数据和所述图像序列,对所述位姿数据和所述图像序列进行处理,以得到所述目标区域的点云模型。

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111428587.2A CN113838197A (zh) 2021-11-29 2021-11-29 区域重构方法和系统
CN202111428587.2 2021-11-29

Publications (1)

Publication Number Publication Date
WO2023092865A1 true WO2023092865A1 (zh) 2023-06-01

Family

ID=78971887

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/076070 WO2023092865A1 (zh) 2021-11-29 2022-02-11 区域重构方法和系统

Country Status (2)

Country Link
CN (1) CN113838197A (zh)
WO (1) WO2023092865A1 (zh)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113838197A (zh) * 2021-11-29 2021-12-24 南京天辰礼达电子科技有限公司 区域重构方法和系统


Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111080784B (zh) * 2019-11-27 2024-04-19 贵州宽凳智云科技有限公司北京分公司 一种基于地面图像纹理的地面三维重建方法和装置
CN112085844B (zh) * 2020-09-11 2021-03-05 中国人民解放军军事科学院国防科技创新研究院 面向野外未知环境的无人机影像快速三维重建方法
CN113409444B (zh) * 2021-05-21 2023-07-11 北京达佳互联信息技术有限公司 三维重建方法、装置、电子设备及存储介质

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111145339A (zh) * 2019-12-25 2020-05-12 Oppo广东移动通信有限公司 图像处理方法及装置、设备、存储介质
US20210201569A1 (en) * 2019-12-31 2021-07-01 Lyft, Inc. Map Feature Extraction Using Overhead View Images
CN113379822A (zh) * 2020-03-16 2021-09-10 天目爱视(北京)科技有限公司 一种基于采集设备位姿信息获取目标物3d信息的方法
CN113674424A (zh) * 2021-08-31 2021-11-19 北京三快在线科技有限公司 一种电子地图绘制的方法及装置
CN113820735A (zh) * 2021-08-31 2021-12-21 上海华测导航技术股份有限公司 位置信息的确定方法、位置测量设备、终端及存储介质
CN113838197A (zh) * 2021-11-29 2021-12-24 南京天辰礼达电子科技有限公司 区域重构方法和系统

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117635856A (zh) * 2023-11-07 2024-03-01 广东省地质调查院 一种矿山开采原始数字高程模型重建方法、系统和介质
CN117635856B (zh) * 2023-11-07 2024-06-11 广东省地质调查院 一种矿山开采原始数字高程模型重建方法、系统和介质

Also Published As

Publication number Publication date
CN113838197A (zh) 2021-12-24

Similar Documents

Publication Publication Date Title
US10262231B2 (en) Apparatus and method for spatially referencing images
WO2023092865A1 (zh) 区域重构方法和系统
US20160300389A1 (en) Correlated immersive virtual simulation for indoor navigation
US11243288B2 (en) Location error radius determination
US20190096089A1 (en) Enabling use of three-dimensonal locations of features with two-dimensional images
CN112288853B (zh) 三维重建方法、三维重建装置、存储介质
EP3639221B1 (en) Onscene command vision
JP2015055534A (ja) 情報処理装置、情報処理装置の制御プログラム及び情報処理装置の制御方法
JP2001503134A (ja) 携帯可能な手持ちデジタル地理データ・マネージャ
CN112348886B (zh) 视觉定位方法、终端和服务器
CN113820735B (zh) 位置信息的确定方法、位置测量设备、终端及存储介质
RU2652535C2 (ru) Способ и система измерения расстояния до удаленных объектов
KR20230042003A (ko) 지자기 맵의 생성
KR20190059120A (ko) 사물인터넷 기반의 증강현실을 이용한 시설물 점검 시스템
US20170227361A1 (en) Mobile mapping system
Cervenak et al. ARKit as indoor positioning system
KR20230044393A (ko) 다수의 자기 내비게이션 디바이스들로부터의 중첩 자기 측정 데이터의 상관 및 그 데이터를 이용한 지자기 맵 업데이트
Simon et al. Towards orientation-aware location based mobile services
Gómez et al. Indoor augmented reality based on ultrasound localization systems
EP2569958B1 (en) Method, computer program and apparatus for determining an object in sight
Milosavljević et al. Transforming smartphone into geospatial video provider
Kealy et al. A new paradigm for developing and delivering ubiquitous positioning capabilities
CN117724039A (zh) 井下定位方法、装置、设备及存储介质
Wang et al. Evaluations on 3D Personal Navigation based on Geocoded Images in Smartphones

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22896968

Country of ref document: EP

Kind code of ref document: A1