WO2020189909A2 - System and method for implementing a road facility management solution based on a 3D-VR multi-sensor system - Google Patents

System and method for implementing a road facility management solution based on a 3D-VR multi-sensor system

Info

Publication number
WO2020189909A2
Authority
WO
WIPO (PCT)
Prior art keywords: data, information, image, image data, sensor
Application number
PCT/KR2020/002651
Other languages: English (en), Korean (ko)
Other versions: WO2020189909A3
Inventor
박일석
홍승환
송승관
Original Assignee
주식회사 스트리스
Priority claimed from KR1020190111249A (external priority: KR102200299B1)
Application filed by 주식회사 스트리스
Publication of WO2020189909A2
Publication of WO2020189909A3

Classifications

    • G: PHYSICS
      • G06: COMPUTING; CALCULATING OR COUNTING
        • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
          • G06N 20/00: Machine learning
        • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
          • G06Q 50/00: ICT specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
            • G06Q 50/08: Construction
        • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T 17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
            • G06T 17/05: Geographic models
      • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
        • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
          • G09B 29/00: Maps; Plans; Charts; Diagrams, e.g. route diagram
    • H: ELECTRICITY
      • H04: ELECTRIC COMMUNICATION TECHNIQUE
        • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
          • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
            • H04N 23/60: Control of cameras or camera modules
            • H04N 23/80: Camera processing pipelines; Components thereof
              • H04N 23/81: Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation

Definitions

  • the image data is generated as a panoramic image that can be rendered in 3D-VR devices such as a head-mounted display or a head-up display; because such an image is distorted compared to a flat image (refer to FIG. 6(b)), it is not easy to extract object information from it.
  • when the mobile platform on which the image sensor is mounted is a drone, its utility is degraded by flight-altitude restrictions in city centers where high-rise buildings and road facilities are concentrated.
  • in addition, bridges, buildings, and other structures elevated above the ground create occluded areas in which the current status of ground facilities cannot be photographed, so not all road facilities can be captured.
  • a mobile mapping system equipped with heterogeneous sensors is sometimes used as a technology for checking the status of road facilities and planning their maintenance.
  • the mobile mapping system is an integrated sensor system that combines navigation sensors, image sensors, and laser sensors, each with its own coordinate system; through geo-referencing and calibration operations that merge the individual coordinate systems into a single data frame, it creates point cloud data containing 3D spatial information.
  • digital maps in the form of vectors or models, precision road maps, and 3D models are generated through drawing and modeling processes based on 3D point cloud data.
  • the image information is used to colorize the point cloud data acquired from the laser sensor.
  • the technical task to be achieved by the present invention is to provide a system that registers accurate 3D spatial information to each pixel of 2D image data, by fusing a 3D survey sensor such as a laser sensor and a navigation sensor around the image sensor, so that the current status and maintenance needs of road facilities can be checked easily and accurately and an image-pixel-based service can be provided.
  • the object detection and recognition unit detects and recognizes, through machine learning, the road facility and its maintenance parts from the image data for spatial information; the object information includes attribute data containing metadata on the type, name, detailed information, shape, form, and texture of the road facility and the repair part.
  • the index-based distortion removal unit may include a pixel index assigning unit that assigns an index to each pixel to define the correspondence between the distorted image data and the reference planar image data; a section decomposition unit that decomposes the indexed distorted image data into preset sections; and a planar image generating unit that generates flat image data by removing the distortion of the distorted image data, based on a distortion correction model of each decomposed section and with reference to the reference planar image data (see the sketch below).
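  • As an illustration of the index-based scheme described above, the per-pixel correspondence can be realized as a precomputed lookup table that maps every pixel of the reference planar image to a source pixel of the distorted image. The sketch below assumes an equidistant fisheye model (r = f * theta) and invented parameter values; it is not the patent's exact formulation.

```python
# Sketch only: index-map (lookup-table) fisheye undistortion, analogous to
# the pixel-index scheme described above. The equidistant model r = f * theta
# and all parameter values are assumptions, not taken from the patent.
import numpy as np

def build_index_map(h, w, f_fisheye, f_planar):
    """For each pixel of the target planar image, compute the source pixel in
    the distorted fisheye image and store it as an integer index pair."""
    cx, cy = w / 2.0, h / 2.0
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x, y = (u - cx) / f_planar, (v - cy) / f_planar   # pinhole ray per pixel
    theta = np.arctan(np.hypot(x, y))                 # angle off optical axis
    phi = np.arctan2(y, x)                            # azimuth about the axis
    r = f_fisheye * theta                             # equidistant image radius
    src_u = np.clip((cx + r * np.cos(phi)).round().astype(int), 0, w - 1)
    src_v = np.clip((cy + r * np.sin(phi)).round().astype(int), 0, h - 1)
    return src_v, src_u

def remove_distortion(fisheye_img, index_map):
    """One gather per pixel: the correction model is never re-evaluated,
    which is the point of indexing each pixel up front."""
    src_v, src_u = index_map
    return fisheye_img[src_v, src_u]

# The index map is built once per sensor and reused for every frame.
frame = np.random.randint(0, 255, (1080, 1920, 3), dtype=np.uint8)
index_map = build_index_map(1080, 1920, f_fisheye=350.0, f_planar=900.0)
planar = remove_distortion(frame, index_map)
```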
  • the system may further comprise a facility information acquisition unit that obtains and processes facility information corresponding to the road facility from external sources, based on the 3D location data belonging to the 3D spatial information and the 2D location data included in the image data; the facility information acquisition unit obtains time-series aerial orthographic images containing the facility information related to the road facility and extracts geometric line-form information from them.
  • a method for implementing a road facility management solution based on a 3D-VR multi-sensor system is provided, using an image sensor, a navigation sensor, and a 3D survey sensor that acquires 3D geographic data.
  • a 3D survey sensor such as a laser sensor and a navigation sensor are combined around the image sensor so that accurate 3D spatial information can be registered to each pixel of the 2D image data, allowing the current status and maintenance needs of road facilities to be identified easily and accurately.
  • image-based machine learning and artificial intelligence algorithms can be applied to the system, thereby reducing drawing and modeling work time and improving accuracy.
  • FIG. 3 is a configuration diagram of the distortion removal unit.
  • FIG. 5 is a flowchart illustrating a method of implementing a road facility management solution based on a 3D-VR multi-sensor system according to an embodiment of the present invention.
  • FIG. 6 is a diagram illustrating the concept of a geometric model for generating geometric structure information.
  • FIG. 7 is a diagram illustrating a distorted panoramic image as image data captured from a wide area image sensor.
  • FIG. 8 is a flowchart illustrating a process of removing image distortion by an index-based distortion removing unit.
  • FIG. 12 is a diagram schematically illustrating a process of removing image distortion in a geometric model-based distortion removing unit.
  • FIG. 13 is a diagram illustrating image data from which distortion is removed by a distortion removal unit.
  • a diagram illustrating how the object detection and recognition unit detects and recognizes a maintenance part of a road facility from the image data for spatial information by machine learning, reads it as object information, and stores 3D maintenance-section position data in the database unit.
  • the terms "unit" and "module" generally refer to logically separable components such as software (computer programs) and hardware. A module in this embodiment therefore means not only a module in a computer program but also a module in a hardware configuration. Accordingly, this embodiment also describes the computer program for making a computer function as these modules (a program for executing each step on a computer, a program for making a computer function as each means, a program for realizing each function on a computer), together with the corresponding system and method.
  • phrases equivalent to "store" and "save" are used; when the embodiment is a computer program, these mean storing data in a memory device, or controlling the system so that data is stored in a memory device.
  • a system or device may be configured by connecting a plurality of computers, hardware, devices, and the like through communication means such as a network (including one-to-one communication connections), or may be realized by a single computer, hardware, or device.
  • the terms "device" and "system" are used interchangeably; of course, "system" here does not include mere social arrangements (social "mechanisms" based on human agreements).
  • road facilities may be facilities of automobile roads, bicycle paths, walking paths, and riverside roads, but other types of facilities installed on the ground are not excluded.
  • the system 100 may include a sensor data collection unit comprising an image sensor 102, a 3D survey sensor 104, and a navigation sensor 106, together with a data fusion unit 108, a distortion removal unit 110, an image data processing unit 112, an object detection and recognition unit 114, a database unit 116, an operation/display unit 118, a region-of-interest screen generation unit 120, an update unit 122, and a facility information acquisition unit 123.
  • the navigation sensor 106 detects navigation information such as positioning information and the position, attitude, and speed of the platform, acquiring observation data together with the time at which the data were acquired; it may be composed of a position acquisition device that obtains the moving position of the platform through a satellite navigation device (GPS) and an attitude acquisition device that obtains the attitude of the vehicle through an inertial measurement unit (IMU) or an inertial navigation system (INS).
  • each sensor may be mounted on the same platform, or distributed between a ground mobile platform and aerial platforms such as satellites, aircraft, and drones.
  • the ground mobile platform 10 may be a vehicle, a bicycle, a two-wheeled vehicle, or equipment configured for walking; the image sensor 102, the 3D survey sensor 104 (for example a lidar sensor), and the navigation sensor 106 may be mounted on the ground mobile platform 10 as shown in FIG. 2.
  • the sensor data collection unit provided with the sensors 102 to 106 may collect, along with the observation data and time information acquired from each sensor 102 to 106, unique data related to the device-specific properties of each sensor, as well as acquisition-environment information prevailing when the observation data were acquired, such as the weather, wind speed, temperature, and humidity at the location of the sensor or platform.
  • the unique data of the 3D survey sensor 104 may include LiDAR information, for example at least one of the photographing date and time, sensor model, sensor serial number, laser wavelength, laser intensity, laser transmission/reception time, laser observation angle/distance, laser pulse, electronic time delay, standby delay, and sensor temperature.
  • the unique data of the navigation sensor 106 may include at least one of: sensor model information of the GNSS/IMU/INS, GNSS reception information, GNSS satellite information, GNSS signal information, GNSS navigation information, ionospheric/tropospheric signal delay information of the GNSS signals, DOP information, earth-motion information, dual/multi GNSS equipment information, GNSS base station information, wheel sensor information, gyro sensor scale/bias information, accelerometer scale/bias information, position/attitude/speed/acceleration/angular velocity/angular acceleration information together with predicted error amounts, all or part of the navigation information filtering model and residual error information, the photographing date/time, and time synchronization information.
  • the data fusion unit 108 performs data fusion on a geometric model based on the observation data and on geometric structure information composed of an internal geometry, calculated from the geometric parameters defined for each sensor 102 to 106, and an external geometry, which defines the geometric relationship between the sensors 102 to 106 according to the position and attitude of the platform 10 on which at least one of them is mounted.
  • the internal geometry is an intrinsic property of the sensor itself: it models the errors in the observed data of each sensor 102 to 106 governed by parameters that remain fixed regardless of platform movement.
  • the geometric parameter of the navigation sensor may be at least one of a scale in an axial direction and an offset in an axial direction.
  • the point cloud data-related information may include at least one of 3D position data, point color information, point class information in which the type of object extracted from the 3D position data is estimated, and, when the 3D survey sensor is a laser sensor, laser intensity information.
  • image data may be input to the index-based distortion removal unit 110 depending on the type of the wide-area image sensor, and the image distortion removed.
  • alternatively, the image data may be input to either the index-based distortion removal unit 110 or the geometric-model-based distortion removal unit 110, according to the required quality of distortion removal and the user's selection, and the image distortion removed.
  • the index-based distortion removal unit 110 may include a pixel index provision unit 128, a section decomposition unit 130, and a planar image generation unit 132 to remove image distortion when fisheye-lens image data is input.
  • the index-based distortion removal unit 110 may further include a section re-decomposition unit 134, in addition to the members described above, to remove image distortion when omnidirectional image data is input.
  • the section decomposition unit 130 may decompose the distorted image data to which the index is assigned into a preset section.
  • the planar image generator 132 may generate planar image data by removing the distortion of the distorted image data based on a distortion correction model of each decomposed section, with reference to the reference planar image data.
  • the distortion correction model may be modeled based on an equation or an index of the decomposed section.
  • the section re-decomposition unit 134 is a member used for omnidirectional image data; it may re-decompose the omnidirectional image already decomposed by the section decomposition unit 130 into preset sections, as in the third step of FIG. 11.
  • the planar image generator 132 may remove the distortion based on a distortion correction model of the sections re-decomposed by the section re-decomposition unit 134, for example a model based on an equation or an index.
  • the geometric-model-based distortion removal unit 110 may include a 3D shape restoration unit 136, which restores a virtual 3D shape from the distorted image data based on the camera geometry, and a planar image generator 138, which defines a virtual camera geometry and generates planar image data by projecting the restored shape onto a virtual planar camera through that geometry.
  • the distortion removal unit 110, which removes the distortion of fisheye-lens images and omnidirectional images, concerns lens distortion removal for creating image data sets suitable for machine learning.
  • distortion caused by the physical lens structure is removed based on the geometric model or index of each sensor, and projection distortions that vary with the image projection angle are converted so that the image acquires geometric characteristics similar to those of a planar image, making it possible to apply models built for planar images as they are.
  • the geometric-model-based image distortion removal method restores a virtual 3D object from the distorted image and reprojects it onto a virtual planar camera, whereas the index-based method resolves the correspondence between the distorted image and the planar image by assigning an index to each pixel; the geometric-model route is sketched below.
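  • A small sketch of the geometric-model path: each panorama pixel is treated as a ray on a virtual sphere (the restored 3D shape) and reprojected through an assumed virtual pinhole camera. The equirectangular model and all parameters below are assumptions for illustration, not the patent's equations.

```python
# Sketch only: reproject a distorted equirectangular panorama onto a virtual
# planar (pinhole) camera, mirroring the restore-then-reproject idea above.
import numpy as np

def equirect_to_planar(pano, out_h, out_w, f_virtual, yaw=0.0, pitch=0.0):
    H, W = pano.shape[:2]
    u, v = np.meshgrid(np.arange(out_w), np.arange(out_h))
    # Unit rays through the virtual planar camera (z forward).
    x = (u - out_w / 2.0) / f_virtual
    y = (v - out_h / 2.0) / f_virtual
    z = np.ones_like(x)
    n = np.sqrt(x**2 + y**2 + z**2)
    x, y, z = x / n, y / n, z / n
    # Orient the virtual camera (yaw about the vertical axis, then pitch).
    xr = x * np.cos(yaw) + z * np.sin(yaw)
    zr = -x * np.sin(yaw) + z * np.cos(yaw)
    yr = y * np.cos(pitch) - zr * np.sin(pitch)
    zr = y * np.sin(pitch) + zr * np.cos(pitch)
    # Spherical coordinates of each ray = source pixel on the panorama.
    lon = np.arctan2(xr, zr)                    # range [-pi, pi]
    lat = np.arcsin(np.clip(yr, -1.0, 1.0))     # range [-pi/2, pi/2]
    src_u = ((lon / (2 * np.pi) + 0.5) * (W - 1)).round().astype(int)
    src_v = ((lat / np.pi + 0.5) * (H - 1)).round().astype(int)
    return pano[src_v, src_u]

pano = np.random.randint(0, 255, (1024, 2048, 3), dtype=np.uint8)
view = equirect_to_planar(pano, 720, 960, f_virtual=500.0, yaw=0.3)
```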
  • the system 100 includes the distortion removal unit 110 as an example, but in another embodiment the system 100 may omit the distortion removal unit 110; in that case the image data is input directly to the image data processing unit 112, and a panoramic image without distortion may be provided to the manipulation/display unit 118.
  • the image data processing unit 112 generates image data for spatial information by registering, in pixel units of the image data, the 3D spatial information generated from the point cloud data-related information containing at least the 3D position data, with reference to the index assigned to define the correspondence between the pixels of the image data and the 3D position data of the point cloud data-related information, the geometry information, and the time information.
  • the index may be assigned so as to define the correspondence between the pixels of the image data and the 3D position data of the point cloud data-related information through the absolute coordinates of both, with reference to the navigation data.
  • the index refers to the coordinate system (S) of the 3D survey sensor 104 and the coordinate system (C) of the image sensor, together with the class of the data, the object shape, and so on; it matches the point cloud data-related information with the image data and may be assigned to establish a virtual absolute coordinate system between them, defining the correspondence between pixels and 3D position data in that coordinate system.
  • an index is assigned to each sensor and data and linked to each other to shorten data processing time and reduce data capacity.
  • the image data processing unit 112 may register 3D spatial information in pixel units of image data using the navigation data and time information of each of the sensors 102 to 106.
  • a model for imparting 3D spatial information to the pixels of the image data may be defined as in Equation 1; some equations and variables may be modified or omitted during calculation.
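  • Equation 1 itself is not reproduced in this text. For orientation only, a standard direct-georeferencing model of the kind such multi-sensor systems use, written below, is an assumed form and not the patent's verbatim equation:

```latex
% Assumed standard direct-georeferencing form (not the verbatim Equation 1):
% world-frame coordinates of a surveyed point P observed at time t
\mathbf{r}_P^{W}(t) = \mathbf{r}_{nav}^{W}(t)
  + \mathbf{R}_b^{W}(t)\left( \mathbf{R}_s^{b}\,\mathbf{r}_P^{s}(t) + \mathbf{l}^{b} \right)
% r_nav^W    : platform position from the navigation sensor
% R_b^W      : platform attitude from the IMU/INS (external geometry)
% R_s^b, l^b : boresight rotation and lever-arm offset between the sensors,
%              determined by calibration (external geometry)
% r_P^s      : raw observation in the survey-sensor frame, corrected by the
%              internal geometry (focal length, distortion, range offsets, ...)
```

  • Projecting the resulting world point back through the image sensor's interior model then associates the 3D coordinates with a specific image pixel.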
  • the variables included in the models of the image sensor 102, the 3D survey sensor 104, the navigation sensor 106, and the overall system are determined by given values or by a calibration process, which governs the accuracy and precision of the 3D spatial information and of the data fusion.
  • 3D spatial information may be assigned to each pixel, or time, index, and navigation data may be assigned to each pixel.
  • the image data processing unit 112 may link the entire geocoding image information, 3D spatial information, time information, index, and navigation data for each pixel.
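  • A minimal sketch of this per-pixel registration, assuming a plain pinhole projection in place of the full fused sensor model: each point of the cloud is projected into the image and its 3D coordinates are stored at the pixel it hits (nearest point wins). All parameter values below are invented for the example.

```python
# Sketch only: register point-cloud 3D coordinates to image pixels. A simple
# pinhole camera stands in for the patent's full fused sensor model.
import numpy as np

def register_points_to_pixels(points_world, R_wc, t_wc, K, h, w):
    """Return an (h, w, 3) array holding, per pixel, the 3D world coordinates
    of the nearest projected point (NaN where no point projects)."""
    pts_cam = (R_wc @ points_world.T).T + t_wc        # world -> camera frame
    front = pts_cam[:, 2] > 0                         # keep points ahead of camera
    pts_cam, world = pts_cam[front], points_world[front]
    uvw = (K @ pts_cam.T).T
    u = (uvw[:, 0] / uvw[:, 2]).round().astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).round().astype(int)
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    u, v, pts_cam, world = u[ok], v[ok], pts_cam[ok], world[ok]

    pixel_xyz = np.full((h, w, 3), np.nan)            # the per-pixel 3D register
    depth = np.full((h, w), np.inf)
    for i in np.argsort(-pts_cam[:, 2]):              # far first, near overwrites
        if pts_cam[i, 2] < depth[v[i], u[i]]:
            depth[v[i], u[i]] = pts_cam[i, 2]
            pixel_xyz[v[i], u[i]] = world[i]
    return pixel_xyz

K = np.array([[900.0, 0.0, 960.0], [0.0, 900.0, 540.0], [0.0, 0.0, 1.0]])
cloud = np.random.uniform(-10, 10, (5000, 3)) + np.array([0.0, 0.0, 20.0])
pixel_xyz = register_points_to_pixels(cloud, np.eye(3), np.zeros(3), K, 1080, 1920)
```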
  • the object information may include attribute data containing the type and name of the road facility and the maintenance part, together with metadata on their detailed information, shape, form, and texture.
  • the facility recognition unit 140 and the maintenance part recognition unit 142 detect candidate-group data for the objects identified in the spatial information image data using a machine learning technique, and recognize the object information by sequentially applying additional machine learning models to the detected candidate-group data (a sketch of this cascade appears after the pre-processing description below).
  • the object detection and recognition unit 114 may perform preprocessing to improve the accuracy of object detection and recognition in image data for spatial information before the machine learning technique.
  • pre-processing of the spatial information image data may be performed by, for example, at least one of image brightness adjustment, image color adjustment, image blurring, image sharpening, image texture analysis, and translation, rotation, and/or scale transformation of the image data to compensate for differences in observation geometry (applying one or more of rigid-body, affine, and similarity 2D transformations).
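  • The two-stage flow above can be sketched as a small cascade: a candidate detector proposes regions, then further models are applied in sequence to enrich each candidate's attribute data. The detector and refiner objects here are placeholders (assumptions), not an API from the patent or any specific library.

```python
# Sketch only: pre-processing plus a cascade of machine-learning stages.
from dataclasses import dataclass, field

import numpy as np

@dataclass
class Candidate:
    bbox: tuple                      # (u_min, v_min, u_max, v_max), pixels
    score: float
    attributes: dict = field(default_factory=dict)   # type, name, texture, ...

def preprocess(img):
    """One example of the pre-processing named above: brightness
    normalization (zero mean, unit variance) before detection."""
    img = img.astype(float)
    return (img - img.mean()) / (img.std() + 1e-8)

def recognize_objects(spatial_image, candidate_detector, refiners, min_score=0.5):
    """Stage 1: detect candidate regions. Stages 2..n: apply additional
    models sequentially, each enriching the candidate's attribute data."""
    img = preprocess(spatial_image)
    candidates = [c for c in candidate_detector(img) if c.score >= min_score]
    for refine in refiners:          # e.g. [facility_classifier, defect_classifier]
        candidates = [refine(img, c) for c in candidates]
    return candidates

# Example with a trivial stand-in detector and no refiners:
dummy = lambda img: [Candidate(bbox=(0, 0, 10, 10), score=0.9)]
objs = recognize_objects(np.zeros((64, 64, 3)), dummy, refiners=[])
```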
  • the database unit 116 links and stores the image data for spatial information together with the object information of the road facilities and maintenance parts.
  • the database unit 116 may store 3D map data including at least maintenance section position data based on image data for spatial information linked to object information, as shown in the left drawing of FIG. 16.
  • the facility information acquisition unit 123 may obtain and process facility information corresponding to the road facility from the outside based on 3D location data belonging to the 3D spatial information and 2D location data included in the image data.
  • the facility information acquisition unit 123 obtains time-series aerial orthographic images containing facility information related to road facilities from external sources, extracts geometric line-form information from the orthographic images, and, based on that information, can generate basic road-register information processed into documents or data sheets covering facility structure information and accessories in time series.
  • the road line-form information may relate, for example, to the road pavement and ground condition as facility structure information, and to accessories such as median dividers, signs, and trench-type side gutters located at the edge of the road.
  • the facility information acquisition unit 123 may receive, from external sources, two-dimensional drawing data and 3D models accumulated in image form in relation to road facilities, and generate digitized drawing information from them through image-based object recognition and 3D model recognition.
  • the 3D model may comprise rendered images and approximate quantities relating to the components of the road facility.
  • the update unit 122 acquires external data containing object information composed of a plurality of attribute data, extracts from the external data object information that includes at least the location data of road facilities and metadata on their detailed information, and may update the spatial information image data with the object information of the external data.
  • the updated object information is stored in the database unit 116 in association with image data for spatial information.
  • the external data is a digital map or a precision road map, obtained from an external device other than the system 100, covering the same locations as the image data for spatial information of the system 100.
  • for example, the speed limit of a road sign may be stored as 30 km/h, while external data such as a digital map or precision road map records the speed limit of the road sign at the same location as 40 km/h; in that case the update unit 122 changes the object information of the image data for spatial information to 40 km/h (a sketch of this rule follows below).
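  • A minimal sketch of this update rule, assuming a simple record structure and a 5 m location-matching radius (both invented for the example): if external data records a different attribute for the same kind of facility at the same location, the stored value is overwritten; otherwise it is kept.

```python
# Sketch only: update-unit logic for reconciling stored object information
# with external data (e.g. a digital map). The record layout and matching
# radius are assumptions, not from the patent.
import math

def update_object_info(stored, external, match_radius_m=5.0):
    """stored/external: lists of dicts like
    {'xyz': (x, y, z), 'type': 'speed_sign', 'speed_limit_kmh': 30}."""
    for ext in external:
        for obj in stored:
            same_type = obj['type'] == ext['type']
            close = math.dist(obj['xyz'], ext['xyz']) <= match_radius_m
            if same_type and close:
                if obj.get('speed_limit_kmh') != ext.get('speed_limit_kmh'):
                    obj['speed_limit_kmh'] = ext['speed_limit_kmh']  # overwrite
                # otherwise the stored object information is kept as-is
    return stored

db = [{'xyz': (10.0, 2.0, 0.0), 'type': 'speed_sign', 'speed_limit_kmh': 30}]
ext = [{'xyz': (10.5, 2.1, 0.0), 'type': 'speed_sign', 'speed_limit_kmh': 40}]
print(update_object_info(db, ext))   # the speed limit becomes 40
```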
  • the operation/display unit 118 and the region-of-interest screen generation unit 120 may be implemented in a low-cost processing module L, for example in the cloud or on the client side, which carries a lower data processing burden than the members described above.
  • modules have been distinguished here according to their data processing burden, but in other embodiments all of the functions of the members described above may be implemented in one integrated module.
  • hereinafter, a method of implementing a road facility management solution based on a 3D-VR multi-sensor system according to an embodiment of the present invention will be described with reference to FIGS. 1 to 17.
  • FIG. 5 is a flowchart illustrating a method of implementing a road facility management solution based on a 3D-VR multi-sensor system according to an embodiment of the present invention.
  • the image sensor 102, the 3D survey sensor 104, and the navigation sensor 106 each acquire observation data together with the time of acquisition, and the data fusion unit 108 obtains the internal geometry calculated from the geometric parameters defined for each sensor 102 to 106 (S505).
  • the sensor data collection unit including the image sensor 102, the 3D survey sensor 104, and the navigation sensor 106 may additionally acquire unique data for each sensor 102 to 106.
  • the internal geometry is described in detail in the description of the system 100 and detailed description thereof will be omitted.
  • the data fusion unit 108 performs data fusion on the geometric model (S510) based on the observation data, the internal geometry, and the geometric structure information composed of the external geometry, which defines the geometric relationship between the sensors 102 to 106 according to the position and attitude of the platform 10 on which at least one of them is mounted, as shown in FIG. 6. FIG. 6 is a diagram illustrating the concept of the geometric model used to generate the geometric structure information.
  • the data fusion unit 108 generates point cloud data-related information for the 3D survey sensor 104 based on the observation data, the geometric structure information, and the time information, and may generate image data including geocoding image information from the data of the image sensor 102.
  • the point cloud data-related information and the geocoding image information may include the above-described data, and a detailed description thereof will be omitted.
  • the distortion removing unit 110 determines whether the image sensor is a wide area image sensor (S515).
  • image data from a wide-area image sensor captured at each measurement point (the circular points in FIG. 7(a)) is generated as a distorted image, as in FIG. 7(b).
  • FIG. 7 is a diagram illustrating a distorted panoramic image as image data captured from a wide-area image sensor.
  • depending on at least one of the type of wide-area image sensor, the required quality of distortion removal, the user's selection, and other options, either the index-based distortion removal unit 110 or the geometric-model-based distortion removal unit 110 may additionally be selected.
  • the distortion removal unit 110 removes distortion of the image data (S415).
  • the distortion removal process proceeds in whichever of the index-based distortion removal unit 110 or the geometric-model-based distortion removal unit 110 is selected according to at least one of the type of wide-area image sensor, the required quality of distortion removal, and the user's selection.
  • a distortion removal process will be described in detail by distinguishing between the index-based distortion removal unit 110 and the geometric model-based distortion removal unit 110 with reference to FIGS. 8 to 13.
  • FIG. 8 is a flowchart illustrating the process of removing image distortion in the index-based distortion removal unit.
  • FIG. 9 is a flowchart illustrating the process of removing image distortion in the geometric-model-based distortion removal unit.
  • FIGS. 10 and 11 are diagrams schematically illustrating the process of removing image distortion in the index-based distortion removal unit.
  • FIG. 12 is a diagram schematically illustrating the process of removing image distortion in the geometric-model-based distortion removal unit.
  • FIG. 13 is a diagram illustrating image data from which distortion has been removed by the distortion removal unit.
  • the image distortion removal process in the index-based distortion removal unit 110 will be described with reference to FIGS. 8, 10 and 11, for the case where fisheye-lens image data distorted by a wide-area image sensor equipped with a fisheye lens is input.
  • the pixel index assigning unit 128 assigns an index to each pixel to define the correspondence between the distorted image data and the reference planar image data, as shown in FIG. 10 (S810).
  • the section decomposing unit 130 decomposes the distorted image data to which the index is assigned into a preset section (S815).
  • when omnidirectional image data is input to the index-based distortion removal unit 110, the pixel indexing unit 128 and the section decomposition unit 130 process the image data, and processing proceeds as in the second step of FIG. 11.
  • the section re-decomposition unit 134 re-decomposes the omnidirectional image decomposed by the section decomposition unit 130 into a preset section, as in the third step of FIG. 11.
  • the planar image generation unit 132 removes the distortion based on a distortion correction model, such as an equation or an index, of the sections re-decomposed by the section re-decomposition unit 134, and generates the planar image data as shown in the right-hand drawing of FIG. 11.
  • the image distortion removal process in the geometric-model-based distortion removal unit 110 will be described with reference to FIGS. 9 and 12: when omnidirectional image data distorted by a wide-area image sensor that generates omnidirectional images is input (S905), the 3D shape restoration unit 136 restores a virtual 3D shape from the distorted image data based on the camera geometry (S910).
  • the planar image generation unit 138 then defines a virtual camera geometry and generates planar image data by projecting the restored shape onto a virtual planar camera through that geometry (S915).
  • an image from which distortion has been removed, or an image without distortion (N in S515), is input to the image data processing unit 112; the image data processing unit 112 registers the 3D spatial information 20, generated from the point cloud data-related information containing at least the 3D position data (left drawing of FIG. 14), in pixel units of the image data, with reference to the index assigned to define the correspondence between the pixels of the image data and the 3D position data, the geometric structure information, and the time information, thereby generating image data for spatial information (S525).
  • the 3D spatial information may include not only the 3D location data but also the various data constituting the point cloud data-related information described above, supplementing image data whose geocoding coordinate data contain only 2D location data.
  • Image data for spatial information may be configured by combining image information for geocoding and 3D spatial information for each pixel.
  • the object detection and recognition unit 114 detects and recognizes object information related to the road facility and the maintenance part of the facility based on the image data for spatial information through machine learning (S530).
  • the road facility and object information is generated by the facility recognition unit 140 and the maintenance part recognition unit 142, as shown in FIGS. 15 and 16.
  • a plurality of attribute data and machine learning processes constituting object information are listed above, and detailed descriptions thereof will be omitted.
  • FIG. 15 is a diagram illustrating a screen in which part of the location data and object information of a road facility is output to the manipulation/display unit when the user designates a specific road facility in the image provided to the manipulation/display unit.
  • FIG. 16 is a diagram illustrating how the object detection and recognition unit detects and recognizes a maintenance part of a road facility from the image data for spatial information by machine learning, reads it as object information, and stores 3D maintenance-section position data in the database unit.
  • the object detection and recognition unit 114 may perform pre-processing before the machine learning to improve the accuracy of object detection and recognition in the image data for spatial information; the pre-processing techniques, described above, are not repeated here.
  • the database unit 116 stores the image data for spatial information in association with the object information of the road facilities and maintenance parts.
  • the manipulation/display unit 118 provides the image data for spatial information associated with the object information to the user in the form of a screen, and the region-of-interest screen generation unit 120 controls the manipulation/display unit 118 so that the user's region of interest is displayed as an enlarged screen, as shown in the accompanying right and lower drawings (S540).
  • the operation/display unit 118 provides the 3D position data of a specific road facility and its maintenance part, together with at least the name of the road facility from among the object information.
  • a three-dimensional survey sensor 104 such as a laser sensor and a navigation sensor 106 are combined around the image sensor 102, so that an image-pixel-based service can be provided by registering accurate 3D spatial information to each pixel of the image data.
  • for example, the speed limit of a road sign may be stored as 30 km/h in the metadata belonging to the object information linked to the image data for spatial information, while external data such as a digital map or precision road map, in which the speed limit of the road sign at the same location is recorded as 40 km/h, may be acquired.
  • the updater 122 extracts object information including at least metadata related to location data and detailed information of road facilities from external data (S1810).
  • the image data for spatial information is then updated with the object information of the external data (S1820).
  • the updated object information is stored in the database unit 116 in association with image data for spatial information.
  • otherwise, the update unit 122 maintains the object information of the image data for spatial information (S1825).
  • the components constituting the system 100 shown in FIGS. 1, 3 and 4, or the steps according to the embodiments shown in FIGS. 5, 8, 9 and 18, can be recorded on a computer-readable recording medium in the form of a program realizing their functions.
  • the computer-readable recording medium refers to a recording medium that can be read by a computer by storing information such as data or programs by electrical, magnetic, optical, mechanical, or chemical action. Examples of such recording media that can be separated from a computer include portable storage, flexible disks, magneto-optical disks, CD-ROMs, CD-R/Ws, DVDs, DATs, and memory cards.
  • examples of recording media fixed in a computer include SSDs (solid state drives), hard disks, and ROMs (read-only memories).
  • the present invention is not necessarily limited to these embodiments; within the scope of the object of the present invention, all of the constituent elements may be selectively combined and operated as one or more elements.
  • all of the components may each be implemented as independent hardware, or some or all of the components may be selectively combined and implemented as a computer program having program modules that perform some or all of the combined functions in one or more pieces of hardware.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Geometry (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Educational Technology (AREA)
  • Computer Graphics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Educational Administration (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Economics (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Navigation (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)
  • Image Processing (AREA)

Abstract

A system and method for implementing a road facility management solution based on a 3D-VR multi-sensor system are disclosed. The system comprises: a sensor data collection unit for collecting observation data and time information of a road facility from an image sensor, a navigation sensor, and a 3D survey sensor that acquires 3D geographic data, respectively; a data fusion unit for performing data fusion on geometric models based on geometric structure information comprising an internal geometry and an external geometry defined for each sensor; an image data processing unit for generating image data for spatial information by registering, in pixel units of the image data, 3D spatial information generated from point cloud data-related information comprising at least 3D location data, with reference to the index, the geometric structure information, and the time information; an object detection and recognition unit for detecting and recognizing object information relating to the road facility based on the image data for spatial information; and a database unit for linking and storing the image data for spatial information and the object information.
PCT/KR2020/002651 2019-03-15 2020-02-25 System and method for implementing a road facility management solution based on a 3D-VR multi-sensor system WO2020189909A2 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
KR10-2019-0029698 2019-03-15
KR20190029698 2019-03-15
KR10-2019-0111249 2019-09-09
KR1020190111249A KR102200299B1 (ko) 2019-03-15 2019-09-09 System for implementing a road facility management solution based on a 3D-VR multi-sensor system, and method therefor

Publications (2)

Publication Number Publication Date
WO2020189909A2 (fr) 2020-09-24
WO2020189909A3 WO2020189909A3 (fr) 2020-11-12

Family

ID=72520376

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2020/002651 WO2020189909A2 (fr) 2019-03-15 2020-02-25 Système et procédé de mise en oeuvre d'une solution de gestion d'installation routière basée sur un système multi-capteurs 3d-vr

Country Status (1)

Country Link
WO (1) WO2020189909A2 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113487746A (zh) * 2021-05-25 2021-10-08 武汉海达数云技术有限公司 Method and system for selecting optimal associated images in vehicle-mounted point cloud colorization
CN114419231A (zh) * 2022-03-14 2022-04-29 幂元科技有限公司 Traffic facility vector identification, extraction and analysis system based on point cloud data and AI technology
CN116244343A (zh) * 2023-03-20 2023-06-09 张重阳 Cross-platform 3D intelligent traffic command method and system based on big data and VR

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101456556B1 (ko) * 2012-11-16 2014-11-12 한국건설기술연구원 Method for automatic road drawing using survey data
KR101796357B1 (ko) * 2015-09-25 2017-11-09 한남대학교 산학협력단 Frame for a foldable mobile mapping device equipped with a multi-survey sensor module, and road geometry analysis system provided with the frame
KR102475039B1 (ko) * 2017-06-30 2022-12-07 현대오토에버 주식회사 Apparatus, method, and system for updating map information
KR101916419B1 (ko) * 2017-08-17 2019-01-30 주식회사 아이닉스 Apparatus and method for generating multi-view images for a wide-angle camera
KR101954963B1 (ko) * 2018-01-15 2019-03-07 주식회사 스트리스 Apparatus and method for automating the construction of digital maps and precision road maps

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113487746A (zh) * 2021-05-25 2021-10-08 武汉海达数云技术有限公司 Method and system for selecting optimal associated images in vehicle-mounted point cloud colorization
CN113487746B (zh) * 2021-05-25 2023-02-24 武汉海达数云技术有限公司 Method and system for selecting optimal associated images in vehicle-mounted point cloud colorization
CN114419231A (zh) * 2022-03-14 2022-04-29 幂元科技有限公司 Traffic facility vector identification, extraction and analysis system based on point cloud data and AI technology
CN114419231B (zh) * 2022-03-14 2022-07-19 幂元科技有限公司 Traffic facility vector identification, extraction and analysis system based on point cloud data and AI technology
CN116244343A (zh) * 2023-03-20 2023-06-09 张重阳 Cross-platform 3D intelligent traffic command method and system based on big data and VR
CN116244343B (zh) * 2023-03-20 2024-01-19 麦乐峰(厦门)智能科技有限公司 Cross-platform 3D intelligent traffic command method and system based on big data and VR

Also Published As

Publication number Publication date
WO2020189909A3 (fr) 2020-11-12

Similar Documents

Publication Publication Date Title
WO2019093532A1 Method and system for acquiring three-dimensional position coordinates without ground control points using a stereo-camera drone
KR102200299B1 (ko) System and method for implementing a road facility management solution based on a 3D-VR multi-sensor system
WO2020189909A2 (fr) System and method for implementing a road facility management solution based on a 3D-VR multi-sensor system
WO2020101156A1 Ortho-image-based geometric correction system for a mobile platform with a mounted sensor
CN108932051B Augmented reality image processing method, apparatus, and storage medium
CN108406731A Depth-vision-based positioning device and method, and robot
WO2020004817A1 Apparatus and method for detecting lane information, and computer-readable recording medium storing a computer program programmed to execute the method
JP2010504711A Video surveillance system and method for tracking moving objects in a geospatial model
CN109596121B Automatic target detection and spatial localization method for a mobile station
CN106705964A Positioning and navigation system and method fusing a panoramic camera, an IMU, and a laser scanner
WO2019139243A1 Apparatus and method for updating a high-definition map for autonomous driving
WO2020071619A1 Apparatus and method for updating a detailed map
WO2021230466A1 Method and system for determining vehicle location
JP2008059319A Object recognition device and video object positioning device
WO2016035993A1 Device and method for creating an indoor map using a point cloud
CN112348886B Visual positioning method, terminal, and server
WO2012091326A2 Three-dimensional real-time street view system using distinct identification information
WO2016206108A1 System and method for measuring displacement of a mobile platform
WO2020075954A1 Positioning system and method using a combination of multimodal-sensor-based location recognition results
JP2022042146A Data processing device, data processing method, and data processing program
WO2021015435A1 Apparatus and method for generating a three-dimensional map using aerial photography
CN103411587A Positioning and attitude-determination method and system
JP3398796B2 Image system for 3D survey support using mixed reality
CN116883604A Three-dimensional modeling method based on satellite, aerial, and ground imagery
Kang et al. An automatic mosaicking method for building facade texture mapping using a monocular close-range image sequence

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20773592

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20773592

Country of ref document: EP

Kind code of ref document: A2