WO2023282466A1 - Method for collecting learning data using a laser preview, and computer program recorded on a recording medium for executing same


Info

Publication number
WO2023282466A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
vehicle
lidar
data collection
learning
Prior art date
Application number
PCT/KR2022/007642
Other languages
English (en)
Korean (ko)
Inventor
승정민
Original Assignee
주식회사 인피닉
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 주식회사 인피닉 filed Critical 주식회사 인피닉
Publication of WO2023282466A1 publication Critical patent/WO2023282466A1/fr

Classifications

    • G01S 13/86 - Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S 7/41 - Details of systems according to group G01S 13/00 using analysis of echo signal for target characterisation; target signature; target cross-section
    • G01S 7/484 - Details of pulse systems; transmitters
    • G06N 20/00 - Machine learning
    • G06N 5/02 - Knowledge representation; symbolic representation
    • G06V 20/56 - Context or environment of the image exterior to a vehicle, by using sensors mounted on the vehicle

Definitions

  • The present invention relates to the processing of artificial intelligence (AI) learning data. More specifically, it relates to a method for collecting learning data using a laser preview when gathering data for machine learning of artificial intelligence (AI), and to a computer program recorded on a recording medium for executing the method.
  • Machine learning refers to the process of optimizing the parameters of a model composed of multiple parameters using given data. Machine learning is classified into supervised learning, unsupervised learning, and reinforcement learning according to the form of the learning data.
  • In general, building data for artificial intelligence (AI) learning proceeds through the steps of data structure design, data collection, data refinement, data processing, data expansion, and data verification.
  • data structure design is performed through ontology definition, classification system definition, and the like.
  • Data collection is performed by collecting data through direct filming, web crawling, or associations/professional organizations.
  • Data refinement is performed by removing redundant data from the collected data and de-identifying personal information.
  • Data processing is performed by performing annotation and inputting metadata.
  • Data expansion is performed by performing ontology mapping and supplementing or extending the ontology as needed.
  • data verification is performed by verifying validity according to the set target quality using various verification tools.
  • Autonomous driving refers to a system that allows a vehicle to judge situations and drive by itself. Such autonomous driving may be classified into gradual stages, from non-automation to full automation, according to the degree of the system's involvement in driving and the degree of the driver's control of the vehicle.
  • The level of autonomous driving is divided into the six levels classified by SAE (Society of Automotive Engineers) International: level 0 is no automation, level 1 is driver assistance, level 2 is partial automation, level 3 is conditional automation, level 4 is high automation, and level 5 is full automation.
  • Autonomous driving of vehicles is performed through mechanisms of perception, localization, path planning, and control.
  • Data used for machine learning of artificial intelligence (AI) that can be used for autonomous driving of these vehicles is collected by various types of sensors installed in the vehicle.
  • More specifically, the data used for machine learning of artificial intelligence (AI) for autonomous driving of a vehicle may be data acquired, photographed, or sensed by a lidar, a camera, a radar, and an ultrasonic sensor fixed to the vehicle, but is not limited thereto.
  • One object of the present invention is to provide a method for collecting learning data using laser preview in collecting data for machine learning of artificial intelligence (AI) that can be used for autonomous driving of a vehicle.
  • Another object of the present invention is to provide a computer program recorded on a recording medium to execute a learning data collection method using laser preview in collecting data for machine learning of artificial intelligence (AI).
  • the present invention proposes a method for effectively controlling multiple sensors for collecting data for machine learning of artificial intelligence (AI).
  • In order to achieve the above technical object, the proposed computer program may be recorded on a recording medium so that a processor of a learning data collection device executes the steps of: receiving, through a transceiver, sensing data from a radar fixedly installed in a vehicle; identifying an object from the received sensing data; and controlling a lidar fixedly installed in the vehicle to emit laser pulses in response to the distance to the identified object.
  • Details of other embodiments are included in the detailed description and drawings. The proposed method may include: receiving, by a learning data collection device, sensing data from a radar fixedly installed in a vehicle; identifying, by the learning data collection device, an object from the received sensing data; and controlling, by the learning data collection device, a lidar fixedly installed in the vehicle to emit laser pulses in response to the distance between the vehicle and the identified object.
  • Here, the sensing data may be information on points at which the electromagnetic wave signal emitted by the radar toward the driving direction of the vehicle is reflected.
  • In the identifying of the object, the object may be identified by extracting points that form a cluster within a preset threshold range from among the points included in the sensing data.
  • In the controlling of the lidar, when no object is identified from the sensing data, the lidar may be controlled not to emit the laser pulse.
  • In the controlling of the lidar, when the distance between the vehicle and the object is greater than a preset lidar recognition range, the lidar may be controlled not to emit the laser pulse, or to emit the laser pulse only at a preset intensity.
  • In the controlling of the lidar, the laser pulse emission period of the lidar may be controlled to be lengthened or shortened in proportion to the distance to the object.
  • When the distance between the vehicle and the object is within a preset specific possible range, a plurality of cameras fixed to the vehicle may be controlled to capture 2D images. In this case, the specific possible range corresponds to a range narrower than the lidar recognition range. Also, the resolution of the 2D images to be captured by the plurality of cameras may be set differently in correspondence to the distance between the vehicle and the object.
  • In addition, a movement path of the object may be estimated based on the time-series change in the position of the object included in the sensing data, and the shooting cycle of the camera that photographs the direction corresponding to the estimated movement path, among the plurality of cameras, may be set shorter than that of the cameras photographing other directions.
  • When the distance between the vehicle and the object is within a preset contactable range, a plurality of ultrasonic sensors fixed to the vehicle may be controlled to emit ultrasonic waves.
  • the present invention proposes a computer program recorded on a recording medium to execute a method capable of effectively controlling multiple sensors.
  • The computer program may be executed by a device including a memory, a transceiver, and a processor configured to process instructions resident in the memory.
  • According to embodiments of the present invention, by controlling the data collection period of multiple sensors or the quality of the data to be collected, data of relatively low importance can be reduced. By reducing data of relatively low importance in this way, the burden of data processing for machine learning of artificial intelligence (AI) can be lowered.
  • FIG. 1 is a block diagram of an artificial intelligence learning system according to an embodiment of the present invention.
  • FIG. 2 is an exemplary view for explaining multiple sensors according to an embodiment of the present invention.
  • FIG. 3 is a logical configuration diagram of a learning data collection device according to an embodiment of the present invention.
  • FIG. 4 is a hardware configuration diagram of a learning data collection device according to an embodiment of the present invention.
  • FIGS. 5 to 7 are exemplary diagrams for explaining a process of controlling multiple sensors according to an embodiment of the present invention.
  • FIGS. 8 and 9 are exemplary diagrams for explaining a process of post-processing collected data according to an embodiment of the present invention.
  • FIGS. 10 to 12 are exemplary diagrams for explaining a process of correcting errors of multiple sensors according to an embodiment of the present invention.
  • FIG. 13 is a flowchart illustrating a data collection method according to an embodiment of the present invention.
  • first and second used in this specification may be used to describe various components, but the components should not be limited by the terms. These terms are only used for the purpose of distinguishing one component from another. For example, a first element may be termed a second element, and similarly, a second element may be termed a first element, without departing from the scope of the present invention.
  • the present invention can effectively control multiple sensors that collect data for machine learning of artificial intelligence (AI), reduce meaningless data, and minimize errors between data collected by multiple sensors.
  • FIG. 1 is a block diagram of an artificial intelligence learning system according to an embodiment of the present invention.
  • As shown in FIG. 1, the artificial intelligence learning system according to an embodiment of the present invention may be configured to include a learning data collection device 100, a learning data generating device 200, a plurality of annotation devices 300, and an artificial intelligence learning device 400.
  • Since the components of the artificial intelligence learning system are merely functionally distinct elements, two or more components may be integrated and implemented together in an actual physical environment, or one component may be implemented separately from the others in an actual physical environment.
  • the learning data collection device 100 is a device that can be used to collect data for machine learning of artificial intelligence (AI) that can be used for autonomous driving of a vehicle.
  • the learning data collection device 100 may obtain, capture, or sense data by controlling multiple sensors. And, the learning data collection device 100 may transmit acquired, photographed, or sensed data to the learning data generating device 200 so that they can be processed.
  • Here, the multiple sensors controlled by the learning data collection device 100 may include one or more of a lidar, a camera, a radar, and an ultrasonic sensor, but are not limited thereto.
  • In particular, the learning data collection device 100 according to embodiments of the present invention can effectively control the multiple sensors for collecting data, reduce meaningless data, and minimize errors between the data collected by the multiple sensors.
  • the learning data generating device 200 is a device that can be used to design and process data for machine learning of artificial intelligence (AI) that can be used for autonomous driving of vehicles.
  • the learning data generating device 200 may receive attributes of a project related to artificial intelligence (AI) learning from the artificial intelligence learning device 400 .
  • Based on the user's control and the properties of the received project, the learning data generating device 200 may perform data structure design for artificial intelligence (AI) learning, refinement of the collected data, data processing, data expansion, and data verification.
  • the learning data generating device 200 may design a data structure for artificial intelligence (AI) learning.
  • For example, the learning data generating device 200 may define an ontology for artificial intelligence (AI) learning and a data classification system for artificial intelligence (AI) learning, based on the user's control and the attributes of the received project.
  • Next, the learning data generating device 200 may collect data for artificial intelligence (AI) learning based on the designed data structure. To this end, the learning data generating device 200 may receive sensing data, 3D point cloud data, 2D images, and distance information from the learning data collection device 100. However, it is not limited thereto, and the learning data generating device 200 may also perform web crawling or download data from a device of an external organization.
  • Also, the learning data generating device 200 may remove redundant or extremely similar data from among the collected sensing data, 3D point cloud data, 2D images, and distance information.
  • the learning data generation apparatus 200 may de-identify personal information included in the collected sensing data, 3D point cloud data, and 2D images.
  • Next, the learning data generating device 200 may distribute and transmit the collected and refined sensing data, 3D point cloud data, 2D images, and distance information to the plurality of annotation devices 300.
  • In this case, the learning data generating device 200 may distribute the sensing data, 3D point cloud data, 2D images, and distance information in correspondence to the amount of work pre-allocated to the operator (i.e., labeler) of each annotation device 300.
  • the learning data generating device 200 may receive annotation work results from each annotation device 300 .
  • the learning data generating device 200 may generate AI learning data by packaging the received annotation work result. And, the learning data generating device 200 may transmit the generated artificial intelligence (AI) learning data to the artificial intelligence learning device 400 .
  • The learning data generating device 200 having such characteristics may be any device capable of transmitting and receiving data to and from the learning data collection device 100, the annotation devices 300, and the artificial intelligence learning device 400, and performing calculations based on the transmitted and received data.
  • the learning data generating device 200 may be any one of a fixed computing device such as a desktop, workstation, or server, but is not limited thereto.
  • The annotation device 300 is a device that can be used to annotate the sensing data, 3D point cloud data, 2D images, and distance information distributed by the learning data generating device 200. All or part of the annotation devices 300 may be devices through which an annotation worker performs annotation work via a cloud service.
  • Specifically, the annotation device 300 may output one piece of the distributed sensing data, 3D point cloud data, 2D image, or distance information to a display.
  • the annotation device 300 may select a tool according to a signal input from a user through an input/output device.
  • Here, the tool is a tool for setting a bounding box that specifies one or more objects included in the sensing data, 3D point cloud data, 2D image, or distance information.
  • the annotation device 300 may receive coordinates according to the selected tool through an input/output device.
  • And, the annotation device 300 may specify an object included in the sensing data, 3D point cloud data, 2D image, or distance information by setting a bounding box based on the input coordinates.
  • Here, the bounding box is an area for specifying an object to be learned by artificial intelligence (AI) among the objects included in the sensing data, 3D point cloud data, 2D image, or distance information.
  • Such a bounding box may have a rectangle or cube shape, but is not limited thereto.
  • For example, the annotation device 300 may receive two coordinates through the input/output device, and specify an object by setting a bounding box based on a rectangle whose upper-left vertex and lower-right vertex correspond to the two input coordinates, within the sensing data, 3D point cloud data, 2D image, or distance information.
  • In this case, the two coordinates may be set by the user inputting one type of input signal twice (e.g., two mouse clicks) or by the user inputting a different type of input signal once (e.g., a mouse drag), but are not limited thereto.
  • Next, the annotation device 300 may generate metadata for the specified object of the sensing data, 3D point cloud data, 2D image, or distance information to be annotated, according to a signal input from the user through the input/output device.
  • Here, the metadata is data for describing the sensing data, 3D point cloud data, 2D image, or distance information, or the specified object.
  • Such metadata may include the category of the specified object, the rate at which the object is clipped by the angle of view, the rate at which the object is obscured by other objects, the tracking ID of the object, the time the image was captured, and the weather conditions on the day the image was captured, but is not limited thereto, and may further include the file size, image size, copyright holder, resolution, bit value, aperture transmittance, exposure time, ISO sensitivity, focal length, aperture value, angle of view, white balance, RGB depth, class name, tag, shooting location, type of road, road surface information, or traffic congestion information.
  • the annotation device 300 may generate an annotation work result based on the specified object and generated metadata.
  • Here, the annotation work result may have a JSON (JavaScript Object Notation) file format, but is not limited thereto.
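  • As an illustration only, the rectangle-from-two-corners specification and a JSON-formatted work result could look like the sketch below; all field names are hypothetical placeholders, since the publication does not fix a schema.

```python
import json

def bounding_box_from_corners(p1, p2):
    """Build an axis-aligned box from two input coordinates (e.g., two mouse clicks),
    taking the first as the upper-left and the second as the lower-right vertex."""
    (x1, y1), (x2, y2) = p1, p2
    return {"x": min(x1, x2), "y": min(y1, y2),
            "width": abs(x2 - x1), "height": abs(y2 - y1)}

# Hypothetical annotation work result; field names are illustrative only.
work_result = {
    "image": "frame_000123.jpg",
    "objects": [{
        "bbox": bounding_box_from_corners((120, 80), (310, 260)),
        "category": "vehicle",          # metadata: category of the specified object
        "truncation_ratio": 0.1,        # rate clipped by the angle of view
        "occlusion_ratio": 0.0,         # rate obscured by other objects
        "tracking_id": 17,
    }],
    "captured_at": "2022-05-30T10:15:00",
    "weather": "clear",
}
print(json.dumps(work_result, indent=2))
```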
  • the annotation device 300 may transmit the generated annotation work result to the learning data generating device 200 .
  • The annotation device 300 described above may be any one of a fixed computing device such as a desktop, workstation, or server, or a mobile computing device such as a smartphone, laptop, tablet, phablet, portable multimedia player (PMP), personal digital assistant (PDA), or e-book reader.
  • the artificial intelligence learning device 400 is a device that can be used for machine learning of artificial intelligence (AI) that can be used for autonomous driving of a vehicle.
  • the artificial intelligence learning device 400 may transmit requirements for achieving the purpose of artificial intelligence (AI) that can be used for autonomous driving of a vehicle to the learning data generating device 200 .
  • the artificial intelligence learning device 400 may receive artificial intelligence (AI) learning data from the learning data generating device 200 .
  • the artificial intelligence learning apparatus 400 may perform machine learning on artificial intelligence (AI) that can be used for autonomous driving of a vehicle using the received artificial intelligence (AI) learning data.
  • the artificial intelligence learning device 400 may be any device capable of transmitting and receiving data to and from the learning data generating device 200 and performing calculations using the transmitted and received data.
  • the artificial intelligence learning device 400 may be any one of a fixed computing device such as a desktop, workstation, or server, but is not limited thereto.
  • Meanwhile, the learning data collection device 100, the learning data generating device 200, the plurality of annotation devices 300, and the artificial intelligence learning device 400 may transmit and receive data using a network in which one or more of a secure line directly connecting them, a public wired communication network, or a mobile communication network are combined.
  • For example, the public wired communication network may include Ethernet, x Digital Subscriber Line (xDSL), Hybrid Fiber Coax (HFC), and Fiber To The Home (FTTH), but is not limited thereto.
  • Also, the mobile communication network may include Code Division Multiple Access (CDMA), Wideband CDMA (WCDMA), High Speed Packet Access (HSPA), Long Term Evolution (LTE), and 5th generation mobile telecommunication, but is not limited thereto.
  • Referring to FIG. 2, the learning data collection device 100 may acquire, photograph, or sense data for machine learning of artificial intelligence (AI) by controlling the radar 20, lidar 30, camera 40, and ultrasonic sensor 50 fixedly installed in the vehicle 10.
  • Here, the vehicle 10 is a vehicle equipped with the radar 20, lidar 30, camera 40, and ultrasonic sensor 50 for collecting basic data for machine learning of artificial intelligence (AI), and can be distinguished from vehicles that perform autonomous driving by artificial intelligence (AI).
  • The radar 20 is fixedly installed in the vehicle 10, emits electromagnetic waves toward the driving direction of the vehicle 10, and senses the electromagnetic waves reflected back by an object located in front of the vehicle 10, thereby generating sensing data corresponding to an image of the area in front of the vehicle 10.
  • the sensing data is information on points at which electromagnetic waves emitted by the radar 20 fixedly installed in the vehicle 10 toward the driving direction of the vehicle are reflected. Accordingly, the coordinates of the points included in the sensing data may have values corresponding to the position and shape of an object located in front of the vehicle 10 .
  • This sensed data may be 2D information, but is not limited thereto and may be 3D information.
  • The lidar 30 is fixedly installed on the vehicle 10, emits laser pulses around the vehicle 10, and detects the light reflected back by objects located around the vehicle 10, thereby generating 3D point cloud data corresponding to a three-dimensional image of the surroundings of the vehicle 10.
  • The 3D point cloud data is three-dimensional information on the points at which the laser pulses emitted around the vehicle by the lidar 30 fixedly installed in the vehicle 10 were reflected. Accordingly, the coordinates of the points included in the 3D point cloud data may have values corresponding to the location and shape of objects located around the vehicle 10.
  • the camera 40 is fixedly installed in the vehicle 10 and can capture a two-dimensional image of the surroundings of the vehicle 10 .
  • a plurality of cameras 40 may be configured according to the angle of view.
  • Although FIG. 2 shows an example in which six cameras 40 are installed in the vehicle 10, it will be obvious to those skilled in the art that the number of cameras 40 that can be installed in the vehicle 10 may be configured differently.
  • the 2D image is an image captured by the camera 40 fixed to the vehicle 10 .
  • the 2D image may include color information of an object located in a direction in which the camera 40 faces.
  • The ultrasonic sensor 50 is fixedly installed in the vehicle 10, emits ultrasonic waves around the vehicle 10, and detects the sound waves reflected back by objects positioned adjacent to the vehicle 10, thereby generating distance information corresponding to the distance between the ultrasonic sensor 50 installed in the vehicle 10 and the object.
  • A plurality of ultrasonic sensors 50 may be provided and fixedly installed at the front, rear, and front and rear sides of the vehicle 10, which can easily come into contact with objects.
  • the distance information is information about a distance from an object detected by the ultrasonic sensor 50 fixedly installed in the vehicle 10 .
  • FIG. 3 is a logical configuration diagram of the learning data collection device according to an embodiment of the present invention.
  • As shown in FIG. 3, the learning data collection device 100 may be configured to include a communication unit 105, an input/output unit 110, a multi-sensor control unit 115, a data post-processing unit 120, an error correction unit 125, and a data providing unit 130.
  • Since the components of the learning data collection device 100 are merely functionally distinct elements, two or more components may be integrated and implemented together in an actual physical environment, or one component may be implemented separately from the others in an actual physical environment.
  • the communication unit 105 may transmit/receive data with multiple sensors and the learning data generating device 200 .
  • Specifically, the communication unit 105 may receive sensing data, 3D point cloud data, 2D images, and distance information from the radar 20, lidar 30, camera 40, and ultrasonic sensor 50 fixedly installed in the vehicle 10.
  • the learning data collection device 100 and the multiple sensors may be directly connected to each other by cables for data transmission and reception, but are not limited thereto.
  • Also, under the control of the data providing unit 130, the communication unit 105 may transmit the post-processed and error-corrected sensing data, 3D point cloud data, 2D images, and distance information to the learning data generating device 200.
  • the input/output unit 110 may receive a signal from a user through a user interface (UI) or output an operation result to the outside.
  • the input/output unit 110 may receive input of a threshold range from a user.
  • Here, the threshold range is a size range for determining a cluster of points that can be identified as an object among the points included in the sensing data or 3D point cloud data, and a value determined according to the type of object to be targeted for machine learning may be input.
  • the input/output unit 110 may receive an input of a lidar recognition range from a user.
  • the lidar recognition range is a range in which a laser pulse emitted from the lidar 30 arrives and an object can be recognized, and a value determined according to the output or type of the lidar 30 may be input.
  • the input/output unit 110 may receive input of a specific possible range from the user.
  • Here, the specific possible range is a range in which an object can be specified from a 2D image captured by the camera 40, and a value determined according to the type of object to be targeted for machine learning of artificial intelligence (AI), the minimum required number of pixels, and the like may be input.
  • the input/output unit 110 may receive an input of a contactable range from the user.
  • Here, the contactable range is a range within which the vehicle 10 may come into contact with another object when behaving within a common-sense range, and a value determined according to the goal of machine learning of artificial intelligence (AI) may be input.
  • the input/output unit 110 may receive input of a sampling table from a user.
  • Here, the sampling table is a table in which the distance between the vehicle 10 and the object is mapped to reference information for sampling only some of the 3D point cloud data obtained by the lidar 30.
  • The first sampling rate included in the sampling table is information about the number of 3D point cloud data to be sampled per unit time, and the second sampling rate is information about the number of points constituting one 3D point cloud data.
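  • A sampling table of this kind can be read as a distance-to-rate lookup; the sketch below uses made-up distance bands and rates purely for illustration.

```python
# Hypothetical sampling table: each row maps a maximum distance (in meters) to
# (first sampling rate = point-cloud frames kept per second,
#  second sampling rate = points kept per frame). Values are illustrative only.
SAMPLING_TABLE = [
    (10.0, 20, 100_000),       # object very close: keep many frames and points
    (30.0, 10, 50_000),
    (60.0, 5, 20_000),
    (float("inf"), 1, 5_000),  # distant object: keep only a minimum
]

def lookup_sampling_rates(distance_m: float) -> tuple[int, int]:
    """Return (first_rate, second_rate) for the given vehicle-to-object distance."""
    for max_distance, first_rate, second_rate in SAMPLING_TABLE:
        if distance_m <= max_distance:
            return first_rate, second_rate
    return SAMPLING_TABLE[-1][1], SAMPLING_TABLE[-1][2]

print(lookup_sampling_rates(25.0))  # -> (10, 50000)
```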
  • the multi-sensor controller 115 may control the radar 20, lidar 30, camera 40, and ultrasonic sensor 50 fixedly installed in the vehicle 10.
  • the multi-sensor controller 115 may receive one or more of sensing data, 3D point cloud data, 2D image, and distance information through the communication unit 105 .
  • the multi-sensor control unit 115 may receive sensing data through the communication unit 105 and may additionally receive one or more of 3D point cloud data, 2D image, and distance information according to circumstances.
  • the sensing data is information on points at which electromagnetic waves emitted toward the driving direction of the vehicle by the radar 20 fixedly installed in the vehicle 10 are reflected.
  • the 3D point cloud data is three-dimensional information on points obtained by reflecting laser pulses emitted around the vehicle by the LIDAR 30 fixedly installed in the vehicle 10 .
  • the 2D image is an image captured by a camera 40 fixedly installed in the vehicle 10 .
  • the distance information is information about a distance from an object detected by the ultrasonic sensor 50 fixedly installed in the vehicle 10 .
  • the multi-sensor controller 115 may identify an object from the received sensing data.
  • Specifically, the multi-sensor control unit 115 may identify an object from the sensing data by extracting points that form a cluster within the threshold range previously set through the input/output unit 110, from among the points included in the sensing data.
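  • One way to read "points forming a cluster within a preset threshold range" is a simple proximity grouping over the radar reflection points; the sketch below is such a reading, with the threshold and minimum cluster size chosen arbitrarily rather than taken from the publication.

```python
import numpy as np

def identify_objects(points: np.ndarray, threshold: float = 1.5, min_points: int = 5):
    """Group 2D radar reflection points whose mutual distance stays within `threshold`
    and report clusters large enough to be treated as objects.
    A naive single-linkage grouping; parameters are illustrative, not from the patent."""
    unassigned = set(range(len(points)))
    clusters = []
    while unassigned:
        seed = unassigned.pop()
        cluster = [seed]
        frontier = [seed]
        while frontier:
            i = frontier.pop()
            near = [j for j in list(unassigned)
                    if np.linalg.norm(points[i] - points[j]) <= threshold]
            for j in near:
                unassigned.remove(j)
                cluster.append(j)
                frontier.append(j)
        if len(cluster) >= min_points:
            clusters.append(points[cluster])
    return clusters  # each entry holds the points of one identified object

# Example: a tight group of points (an object) plus scattered noise.
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal([20.0, 3.0], 0.3, size=(12, 2)),
                 rng.uniform(0, 60, size=(6, 2))])
print(len(identify_objects(pts)), "object(s) identified")
```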
  • And, in correspondence to the distance between the vehicle 10 and the object identified from the sensing data, the multi-sensor control unit 115 may control at least one of the lidar 30 fixed to the vehicle 10 to emit laser pulses, the camera 40 to capture 2D images, and the ultrasonic sensor 50 to emit ultrasonic waves.
  • Specifically, when no object is identified from the sensing data, the multi-sensor control unit 115 may control the lidar 30 not to emit the laser pulse.
  • And, when the distance between the vehicle 10 and the object is greater than the lidar recognition range preset through the input/output unit 110, the multi-sensor control unit 115 may control the lidar 30 not to emit the laser pulse, or to emit the laser pulse only at a preset intensity.
  • Also, when the distance between the vehicle 10 and the object is greater than the lidar recognition range, the multi-sensor control unit 115 may control the plurality of cameras 40 fixed to the vehicle 10 not to capture 2D images and the ultrasonic sensor 50 not to emit ultrasonic waves.
  • Meanwhile, the multi-sensor control unit 115 may control the laser pulse emission period of the lidar 30 to be lengthened or shortened in proportion to the distance to the object.
  • And, when the distance between the vehicle 10 and the object is within the specific possible range preset through the input/output unit 110, the multi-sensor control unit 115 may control the plurality of cameras 40 fixed to the vehicle 10 to capture 2D images.
  • the specific possible range may correspond to a narrower range than the LIDAR recognition range.
  • the multi-sensor controller 115 may differently set resolutions of 2D images to be captured by the plurality of cameras 40 in correspondence to the distance between the vehicle 10 and the object. For example, the multi-sensor controller 115 may set the resolution of a 2D image to be captured by the plurality of cameras 40 to increase as the distance between the vehicle 10 and the object decreases.
  • the multi-sensor control unit 115 may estimate the moving path of the object based on the time-sequential position change of the object included in the sensing data.
  • the multi-sensor controller 115 may identify a camera that photographs a direction corresponding to the estimated movement path of the object from among the plurality of cameras 40 .
  • And, the multi-sensor control unit 115 may set the shooting cycle of the camera that photographs the direction corresponding to the moving path of the object to be shorter than that of the cameras photographing other directions.
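  • The path-based camera prioritization can be sketched as: estimate a heading from successive object positions, pick the camera whose viewing direction best matches it, and shorten its shooting cycle. The camera layout and period values below are assumptions made for illustration only.

```python
import math

# Hypothetical camera layout: viewing direction (angle in radians) per camera.
CAMERA_DIRECTIONS = {"front": 0.0, "left": math.pi / 2,
                     "rear": math.pi, "right": -math.pi / 2}
DEFAULT_PERIOD_S = 0.5   # illustrative default shooting cycle
PRIORITY_PERIOD_S = 0.1  # shorter cycle for the camera facing the object's path

def estimate_heading(positions):
    """Estimate the object's heading from its two most recent (x, y) positions."""
    (x0, y0), (x1, y1) = positions[-2], positions[-1]
    return math.atan2(y1 - y0, x1 - x0)

def shooting_periods(positions):
    """Give the camera facing the estimated movement path a shorter shooting cycle."""
    heading = estimate_heading(positions)
    def angular_gap(a, b):
        return abs(math.atan2(math.sin(a - b), math.cos(a - b)))
    closest = min(CAMERA_DIRECTIONS,
                  key=lambda c: angular_gap(CAMERA_DIRECTIONS[c], heading))
    return {c: (PRIORITY_PERIOD_S if c == closest else DEFAULT_PERIOD_S)
            for c in CAMERA_DIRECTIONS}

# Object observed moving toward the left side of the vehicle.
print(shooting_periods([(10.0, 0.0), (9.5, 2.0)]))
```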
  • Meanwhile, when the distance between the vehicle 10 and the object is within the contactable range preset through the input/output unit 110, the multi-sensor control unit 115 may control the plurality of ultrasonic sensors 50 fixed to the vehicle 10 to emit ultrasonic waves.
  • Here, the contactable range may correspond to a range narrower than the specific possible range.
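  • Putting the range thresholds together, the control decision reduces to comparing the measured distance against the lidar recognition range, the specific possible range, and the contactable range. The sketch below encodes that decision with made-up numeric thresholds, and uses the "do not emit" branch for distant objects rather than the reduced-intensity alternative; it is an assumption-laden illustration, not the implementation.

```python
from dataclasses import dataclass

@dataclass
class SensorCommand:
    lidar_emit: bool
    lidar_period_s: float | None  # laser pulse emission period, None when not emitting
    cameras_capture: bool
    ultrasonic_emit: bool

# Illustrative thresholds in meters; the patent leaves the actual values to configuration.
LIDAR_RECOGNITION_RANGE = 100.0
SPECIFIC_POSSIBLE_RANGE = 50.0   # narrower than the lidar recognition range
CONTACTABLE_RANGE = 5.0          # narrower than the specific possible range
BASE_PERIOD_S = 0.05

def control_sensors(distance_m: float | None) -> SensorCommand:
    """Gate lidar, cameras, and ultrasonic sensors by the vehicle-to-object distance.
    `distance_m` is None when no object was identified from the radar sensing data."""
    if distance_m is None or distance_m > LIDAR_RECOGNITION_RANGE:
        # No object, or object out of lidar reach: keep every sensor quiet.
        return SensorCommand(False, None, False, False)
    # Emission period scaled in proportion to the distance to the object.
    period = BASE_PERIOD_S * (distance_m / LIDAR_RECOGNITION_RANGE)
    return SensorCommand(
        lidar_emit=True,
        lidar_period_s=period,
        cameras_capture=distance_m <= SPECIFIC_POSSIBLE_RANGE,
        ultrasonic_emit=distance_m <= CONTACTABLE_RANGE,
    )

print(control_sensors(None))
print(control_sensors(30.0))   # within specific possible range: lidar + cameras
print(control_sensors(3.0))    # within contactable range: all sensors active
```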
  • Meanwhile, the data post-processing unit 120 may perform post-processing on the sensing data, 3D point cloud data, 2D images, and distance information collected from the radar 20, lidar 30, camera 40, and ultrasonic sensor 50.
  • the data post-processing unit 120 may receive one or more of sensing data, 3D point cloud data, 2D images, and distance information through the communication unit 105 .
  • the data post-processing unit 120 may receive sensing data and 3D point cloud data through the communication unit 105, and may additionally receive one or more of 2D images and distance information depending on circumstances.
  • the data post-processing unit 120 may identify an object from the received sensing data.
  • Specifically, the data post-processing unit 120 may identify an object from the sensing data by extracting points that form a cluster within the threshold range previously set through the input/output unit 110, from among the points included in the sensing data.
  • And, the data post-processing unit 120 may divide the area that the laser pulse of the lidar 30 can reach into a region of interest (ROI), centered on the area where the identified object is located among the coordinates included in the 3D point cloud data, and a region of non-interest.
  • Also, in correspondence to the distance between the vehicle 10 and the object identified from the sensing data, the data post-processing unit 120 may filter one or more of the plurality of 3D point cloud data obtained by the lidar 30, the plurality of 2D images captured by the camera 40, and the distance information sensed by the ultrasonic sensor 50.
  • Specifically, the data post-processing unit 120 may identify, from the sampling table previously set through the input/output unit 110, a first sampling rate corresponding to the distance between the vehicle 10 and the object.
  • the data post-processing unit 120 may reduce the number of 3D point cloud data per unit time by sampling a plurality of 3D point cloud data according to the identified first sampling rate.
  • the data post-processing unit 120 may identify a second sampling rate corresponding to the distance between the vehicle 10 and the object from the sampling table.
  • the data post-processing unit 120 may reduce the number of points constituting each 3D point cloud data by sampling a plurality of 3D point cloud data according to the identified second sampling rate.
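  • A literal reading of the two sampling rates is: keep only a fraction of the point-cloud frames per unit time, and keep only a fixed number of points per kept frame. A NumPy sketch under that assumption follows; the decimation strategy and example numbers are illustrative only.

```python
import numpy as np

def downsample_point_clouds(clouds, capture_hz, first_rate, second_rate, seed=0):
    """Apply the two sampling rates from the sampling table.
    clouds:      list of (N_i, 3) arrays, one per captured frame
    capture_hz:  frames captured per second by the lidar
    first_rate:  frames to keep per second (first sampling rate)
    second_rate: points to keep per frame (second sampling rate)
    Interpretation and parameters are assumptions, not fixed by the patent."""
    rng = np.random.default_rng(seed)
    step = max(1, int(capture_hz // first_rate))  # frame decimation factor
    kept_frames = clouds[::step]
    downsampled = []
    for cloud in kept_frames:
        if len(cloud) > second_rate:
            idx = rng.choice(len(cloud), size=second_rate, replace=False)
            cloud = cloud[idx]
        downsampled.append(cloud)
    return downsampled

# Example: 20 Hz capture, 100k points per frame, reduced to 5 frames/s, 20k points/frame.
frames = [np.random.rand(100_000, 3) for _ in range(20)]
reduced = downsample_point_clouds(frames, capture_hz=20, first_rate=5, second_rate=20_000)
print(len(reduced), reduced[0].shape)  # 5 frames, (20000, 3) each
```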
  • Also, the data post-processing unit 120 may recompress the 2D images captured by the camera 40 that captures the region of non-interest, among the plurality of 2D images, to a resolution lower than that of the 2D images captured by the camera 40 that captures the region of interest.
  • Also, the data post-processing unit 120 may reduce the number of 2D images per unit time by sampling the 2D images captured by the camera 40 that captures the region of non-interest, among the plurality of 2D images.
  • Also, the data post-processing unit 120 may expand the size of the chroma subsampling used to compress the color information of the 2D images captured by the camera 40 that captures the region of non-interest, among the plurality of 2D images, and then recompress them.
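  • As one concrete way to realize the resolution and chroma-subsampling reductions for non-ROI images, the sketch below uses Pillow's JPEG writer, whose `subsampling` option selects 4:4:4, 4:2:2, or 4:2:0 chroma; the scale factor and quality values are assumptions for illustration.

```python
from PIL import Image  # Pillow

def recompress(image: Image.Image, is_region_of_interest: bool, out_path: str) -> None:
    """Re-encode a camera frame: ROI images keep full resolution and 4:4:4 chroma,
    non-ROI images are downscaled and use coarser 4:2:0 chroma subsampling.
    Scale factor, quality values, and subsampling choices are illustrative only."""
    if is_region_of_interest:
        image.save(out_path, "JPEG", quality=95, subsampling="4:4:4")
    else:
        w, h = image.size
        smaller = image.resize((w // 2, h // 2))  # lower resolution for non-ROI
        smaller.save(out_path, "JPEG", quality=70, subsampling="4:2:0")

# Example with synthetic frames standing in for the output of camera 40.
roi_frame = Image.new("RGB", (1920, 1080), color=(90, 120, 200))
non_roi_frame = Image.new("RGB", (1920, 1080), color=(40, 40, 40))
recompress(roi_frame, True, "roi_frame.jpg")
recompress(non_roi_frame, False, "non_roi_frame.jpg")
```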
  • And, when the distance between the vehicle 10 and the object is greater than the specific possible range preset through the input/output unit 110, the data post-processing unit 120 may discard the distance information sensed by the ultrasonic sensor 50.
  • the error correcting unit 125 may correct errors according to installation positions of the radar 20, lidar 30, camera 40, and ultrasonic sensor 50 fixedly installed in the vehicle 10. .
  • the error correction unit 125 may receive one or more of sensing data, 3D point cloud data, 2D images, and distance information through the communication unit 105 .
  • Specifically, the error correction unit 125 may receive sensing data and 3D point cloud data through the communication unit 105, and may additionally receive one or more of 2D images and distance information depending on circumstances.
  • the error correction unit 125 may identify a first point where the radar 20 fixedly installed in the vehicle 10 emits an electromagnetic wave signal. That is, the first point means a point where the radar 20 is fixedly installed in the vehicle 10 . If a plurality of radars 20 are installed in the vehicle 10, the number of first points may also be plural.
  • the error correction unit 125 may identify a second point where the LIDAR 30 fixedly installed in the vehicle 10 emits a laser pulse. That is, the second point means a point where the lidar 30 is fixedly installed in the vehicle 10 . If a plurality of lidars 30 are installed in the vehicle 10, the number of second points may also be plural.
  • Such first and second points may be set in advance according to the model and size of the vehicle 10 in which the radar 20 and the lidar 30 are fixedly installed, the learning goal of the artificial intelligence (AI), and the like, but are not limited thereto.
  • the error correction unit 125 may correct coordinates of points included in the sensed data and coordinates of points included in the 3D point cloud data so that the identified first and second points may be recognized as the same position.
  • the error correction unit 125 may identify an object from sensing data.
  • Specifically, the error correction unit 125 may identify an object from the sensing data by extracting points that form a cluster within the threshold range previously set through the input/output unit 110, from among the points included in the sensing data.
  • And, the error correction unit 125 may correct the coordinates of the points included in the sensing data and the coordinates of the points included in the 3D point cloud data only when an object is identified from the sensing data; when no object is identified from the sensing data, the coordinates of the points included in the sensing data and the coordinates of the points included in the 3D point cloud data may be maintained as they are.
  • More specifically, the error correction unit 125 may set a virtual reference point having three-dimensional coordinates, and then correct the coordinates of the points included in the sensing data and the coordinates of the points included in the 3D point cloud data so that the first point and the second point are recognized as being located at the reference point.
  • In other words, the error correction unit 125 may correct the coordinates of the points included in the sensing data and the coordinates of the points included in the 3D point cloud data so that both the first point and the second point are recognized as being located at the reference point.
  • That is, since the sensing data obtained by the radar 20 and the 3D point cloud data obtained by the lidar 30 are composed of the coordinates of points where electromagnetic waves or laser pulses were reflected, their observation points (i.e., the first point and the second point) can be matched through correction of the coordinate values.
  • Meanwhile, the error correction unit 125 may derive a formula for correcting the first point of the radar 20 and the second point of the lidar 30 to the photographing point where the camera 40 captures the 2D image.
  • the error correcting unit 125 may calculate 3D vector values between respective photographing points where the plurality of cameras 40 are installed and a reference point.
  • And, based on the calculated 3D vector values, the error correction unit 125 may derive a formula for correcting the sensing data or 3D point cloud data so that the first point or the second point can be recognized as being located at each photographing point.
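  • The coordinate correction amounts to translating each sensor's points by the offset between its mounting point and the virtual reference point, and, for the cameras, applying the corresponding 3D vectors. The translation-only sketch below uses made-up mounting offsets and ignores any rotation between sensor frames.

```python
import numpy as np

# Hypothetical mounting positions in a vehicle frame, in meters (illustrative only).
REFERENCE_POINT = np.array([0.0, 0.0, 1.5])   # virtual reference point
FIRST_POINT = np.array([3.6, 0.0, 0.6])       # mounting point of radar 20
SECOND_POINT = np.array([1.8, 0.0, 1.9])      # mounting point of lidar 30

def correct_to_reference(points: np.ndarray, sensor_point: np.ndarray) -> np.ndarray:
    """Shift points measured relative to `sensor_point` so they read as if the sensor
    were located at the virtual reference point (translation only in this sketch)."""
    offset = sensor_point - REFERENCE_POINT
    return points + offset

radar_points = np.array([[10.0, -1.0, 0.0], [12.5, 0.5, 0.2]])   # from sensing data
lidar_points = np.array([[10.0, -1.0, 0.0], [12.5, 0.5, 0.2]])   # from 3D point cloud

corrected_radar = correct_to_reference(radar_points, FIRST_POINT)
corrected_lidar = correct_to_reference(lidar_points, SECOND_POINT)
# A camera photographing point can be handled the same way with its own 3D vector.
camera_vector = np.array([2.0, 0.9, 1.4]) - REFERENCE_POINT
print(corrected_radar[0], corrected_lidar[0], camera_vector)
```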
  • Meanwhile, the error correction unit 125 can perform correction so that the 3D point cloud data can be handled regardless of whether the lidar 30 fixed to the vehicle 10 is a fixed (static) lidar or a rotating lidar.
  • Specifically, when the lidar 30 fixedly installed in the vehicle 10 is composed of a plurality of fixed lidar devices installed spaced apart from each other at different points of the vehicle 10, the error correction unit 125 may combine the coordinates of the points included in the 3D point cloud data acquired by the plurality of lidar devices, thereby generating one 3D point cloud data having a spherical shape such as would be obtained by a rotating lidar that rotates around the reference point while emitting laser pulses.
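  • Under the same translation-only assumption, combining several fixed lidar devices into one rotating-lidar-like cloud is just correcting each device's cloud to the reference point and concatenating the results; the mounting positions below are assumed for illustration.

```python
import numpy as np

def merge_fixed_lidars(clouds, mounting_points, reference_point):
    """Combine point clouds from several fixed lidar devices into one cloud that reads
    as if captured by a single rotating lidar located at the reference point.
    Translation-only correction; rotation between devices is ignored in this sketch."""
    merged = [cloud + (mount - reference_point)
              for cloud, mount in zip(clouds, mounting_points)]
    return np.vstack(merged)

reference = np.array([0.0, 0.0, 1.5])
mounts = [np.array([3.6, 0.8, 0.8]),
          np.array([3.6, -0.8, 0.8]),
          np.array([-1.2, 0.0, 1.0])]
clouds = [np.random.rand(1000, 3) * 20 for _ in mounts]
combined = merge_fixed_lidars(clouds, mounts, reference)
print(combined.shape)  # (3000, 3)
```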
  • Also, when the lidar 30 fixedly installed in the vehicle 10 is configured as a rotating lidar device that rotates around the second point and emits laser pulses, the error correction unit 125 may correct the coordinates of the points included in the 3D point cloud data so that the second point is recognized as being located at the reference point.
  • the error correction unit 125 may correct distance values included in the distance information so that each sensing point where the plurality of ultrasonic sensors 50 are installed is recognized as being located at a reference point.
  • the data providing unit 130 may provide basic data that can be used for machine learning of artificial intelligence (AI) to the learning data generating device 200 .
  • Specifically, after the sensing data, 3D point cloud data, 2D images, and distance information are collected, the collected data are post-processed by the data post-processing unit 120, and the post-processed data are corrected by the error correction unit 125, the data providing unit 130 may transmit the corrected sensing data, 3D point cloud data, 2D images, and distance information to the learning data generating device 200 through the communication unit 105.
  • FIG. 4 is a hardware configuration diagram of the learning data collection device according to an embodiment of the present invention.
  • As shown in FIG. 4, the learning data collection device 100 may be configured to include a processor 150, a memory 155, a transceiver 160, an input/output device 165, a data bus 170, and a storage 175.
  • The processor 150 may implement the operations and functions of the learning data collection device 100 based on the instructions of the software 180a, in which the method according to embodiments of the present invention is implemented, resident in the memory 155.
  • Software 180a in which a method according to embodiments of the present invention is implemented may be loaded in the memory 155 .
  • the transceiver 160 may transmit and receive data to and from the radar 20 , lidar 30 , camera 40 , ultrasonic sensor 50 , and learning data generating device 200 .
  • the input/output device 165 may receive data required for operation of the learning data collection device 100 and output collected sensing data, 3D point cloud data, 2D image, and distance information.
  • the data bus 170 is connected to the processor 150, the memory 155, the transceiver 160, the input/output device 165, and the storage 175, and is a movement path for transferring data between each component. role can be fulfilled.
  • The storage 175 may store an application programming interface (API), library files, resource files, and the like necessary for the execution of the software 180a in which the method according to embodiments of the present invention is implemented.
  • the storage 175 may store software 180b in which a method according to embodiments of the present invention is implemented. Also, the storage 175 may store information necessary for performing a method according to embodiments of the present invention.
  • The software 180a, 180b resident in the memory 155 or stored in the storage 175, which implements the learning data collection method using a laser preview, may be a computer program recorded on a recording medium so that the processor 150 executes the steps of: receiving, through the transceiver 160, sensing data from the radar 20 fixedly installed in the vehicle 10; identifying an object from the received sensing data; and controlling the lidar 30, which is fixedly installed in the vehicle and emits laser pulses, in correspondence to the distance to the identified object.
  • Meanwhile, the software 180a, 180b resident in the memory 155 or stored in the storage 175, which implements a data processing method, may be a computer program recorded on a recording medium so that the processor 150 executes the steps of: receiving, through the transceiver 160, sensing data obtained by the radar 20 fixedly installed in the vehicle 10 and a plurality of 3D point cloud data obtained by the lidar 30 fixedly installed in the vehicle 10; identifying an object from the received sensing data; and filtering the plurality of 3D point cloud data obtained by the lidar 30 in correspondence to the distance between the vehicle 10 and the object.
  • Also, the software 180a, 180b resident in the memory 155 or stored in the storage 175, which implements an error correction method, may be a computer program recorded on a recording medium so that the processor 150 executes the steps of: receiving, through the transceiver 160, sensing data obtained by the radar 20 fixedly installed in the vehicle 10 and 3D point cloud data obtained by the lidar 30 fixedly installed in the vehicle 10; identifying a first point from which the radar 20 emits an electromagnetic wave signal and a second point from which the lidar 30 emits a laser pulse; and correcting the coordinates of the points included in the sensing data and the coordinates of the points included in the 3D point cloud data so that the first point and the second point are recognized as the same position.
  • the processor 150 may include an Application-Specific Integrated Circuit (ASIC), another chipset, a logic circuit, and/or a data processing device.
  • the memory 155 may include read-only memory (ROM), random access memory (RAM), flash memory, memory cards, storage media, and/or other storage devices.
  • the transceiver 160 may include a baseband circuit for processing wired/wireless signals.
  • The input/output device 165 may include an input device such as a keyboard, mouse, and/or joystick, an image output device such as a Liquid Crystal Display (LCD), Organic LED (OLED), and/or Active Matrix OLED (AMOLED) display, and a printing device such as a printer or plotter.
  • a module may reside in memory 155 and be executed by processor 150 .
  • the memory 155 may be internal or external to the processor 150 and may be connected to the processor 150 by various well-known means.
  • Each component shown in FIG. 4 may be implemented by various means, eg, hardware, firmware, software, or a combination thereof.
  • For example, in the case of implementation by hardware, an embodiment of the present invention may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, and the like.
  • an embodiment of the present invention is implemented in the form of a module, procedure, function, etc. that performs the functions or operations described above, and is stored on a recording medium readable through various computer means.
  • the recording medium may include program commands, data files, data structures, etc. alone or in combination.
  • Program instructions recorded on the recording medium may be those specially designed and configured for the present invention, or those known and usable to those skilled in computer software.
  • Examples of such recording media include magnetic media such as hard disks, floppy disks, and magnetic tape, optical media such as CD-ROM (Compact Disc Read Only Memory) and DVD (Digital Video Disc), magneto-optical media such as floptical disks, and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory. Examples of program instructions include not only machine language code generated by a compiler but also high-level language code that can be executed by a computer using an interpreter or the like. These hardware devices may be configured to operate as one or more software modules to perform the operations of the present invention, and vice versa.
  • FIGS. 5 to 7 are exemplary diagrams for explaining a process of controlling multiple sensors according to an embodiment of the present invention.
  • As shown in FIGS. 5 to 7, the learning data collection device 100 may control the radar 20, lidar 30, camera 40, and ultrasonic sensor 50 fixedly installed in the vehicle 10.
  • Specifically, the learning data collection device 100 may receive one or more of sensing data, 3D point cloud data, 2D images, and distance information from the radar 20, lidar 30, camera 40, and ultrasonic sensor 50.
  • the sensing data is information on points at which electromagnetic waves emitted toward the driving direction of the vehicle by the radar 20 fixedly installed in the vehicle 10 are reflected.
  • the 3D point cloud data is three-dimensional information on points obtained by reflecting laser pulses emitted around the vehicle by the LIDAR 30 fixedly installed in the vehicle 10 .
  • the 2D image is an image captured by a camera 40 fixedly installed in the vehicle 10 .
  • the distance information is information about a distance from an object detected by the ultrasonic sensor 50 fixedly installed in the vehicle 10 .
  • the learning data collection device 100 may identify the object 60 from the received sensing data.
  • the learning data collection apparatus 100 may identify the object 60 from the sensing data by extracting points forming a cluster within a preset threshold range from points included in the sensing data.
  • And, in correspondence to the distance d between the vehicle 10 and the object 60, the learning data collection device 100 may control at least one of the lidar 30 fixedly installed in the vehicle 10 to emit laser pulses, the camera 40 to capture 2D images, and the ultrasonic sensor 50 to emit ultrasonic waves.
  • For example, when the distance d between the vehicle 10 and the object 60 is greater than the lidar recognition range 31, the lidar 30 may be controlled not to emit laser pulses, or to emit laser pulses only at a preset intensity.
  • Also, when the distance d between the vehicle 10 and the object 60 is greater than the lidar recognition range 31, the plurality of cameras 40 may be controlled not to capture 2D images, and the ultrasonic sensor 50 may be controlled not to emit ultrasonic waves.
  • Meanwhile, the learning data collection device 100 may control the laser pulse emission period of the lidar 30 to be lengthened or shortened in proportion to the distance d to the object 60.
  • And, when the distance d between the vehicle 10 and the object 60 is within the specific possible range 41, the learning data collection device 100 may control the plurality of cameras 40 fixed to the vehicle 10 to capture 2D images.
  • the specific possible range 41 may correspond to a narrower range than the lidar recognition range 31 .
  • the learning data collection apparatus 100 may set different resolutions of 2D images to be captured by the plurality of cameras 40 in correspondence to the distance d between the vehicle 10 and the object 60 .
  • Also, the learning data collection device 100 may estimate the movement path of the object 60 based on the time-series position change of the object 60 included in the sensing data, identify the camera that photographs the direction corresponding to the estimated movement path among the plurality of cameras 40, and set the shooting cycle of that camera to be shorter than that of the cameras photographing other directions.
  • Meanwhile, when the distance d between the vehicle 10 and the object 60 is within the contactable range 51, the learning data collection device 100 may control the plurality of ultrasonic sensors 50 fixedly installed in the vehicle 10 to emit ultrasonic waves.
  • Here, the contactable range 51 may correspond to a range narrower than the specific possible range 41.
  • As described above, by controlling the data collection period of the multiple sensors or the quality of the data to be collected, the learning data collection device 100 can reduce data of relatively low importance among the data for machine learning of artificial intelligence (AI).
  • Meanwhile, referring to FIGS. 8 and 9, the learning data collection device 100 may perform post-processing on the sensing data, 3D point cloud data, 2D images, and distance information collected from the radar 20, lidar 30, camera 40, and ultrasonic sensor 50.
  • Specifically, the learning data collection device 100 may receive sensing data, 3D point cloud data, 2D images, and distance information from the radar 20, lidar 30, camera 40, and ultrasonic sensor 50.
  • the learning data collection device 100 may identify the object 60 from the received sensing data.
  • the learning data collection apparatus 100 may identify the object 60 from the sensing data by extracting points forming a cluster within a preset threshold range from points included in the sensing data.
  • And, the learning data collection device 100 may divide the region that the laser pulse of the lidar 30 can reach into a region of interest (ROI), centered on the region where the identified object 60 is located among the coordinates included in the 3D point cloud data, and a region of non-interest.
  • the learning data collection apparatus 100 may filter, in correspondence with the distance between the vehicle 10 and the object 60, one or more of the plurality of 3D point cloud data acquired by the lidar 30, the plurality of 2D images captured by the camera 40, and the distance information sensed by the ultrasonic sensor 50.
  • the learning data collection device 100 may identify a first sampling rate corresponding to the distance between the vehicle 10 and the object 60 from a sampling table, and reduce the number of 3D point cloud data per unit time by sampling the plurality of 3D point cloud data according to the identified first sampling rate.
  • the learning data collection apparatus 100 may identify a second sampling rate corresponding to the distance between the vehicle 10 and the object 60 from the sampling table, and reduce the number of points constituting each 3D point cloud data by sampling the points according to the identified second sampling rate.
  • the learning data collection apparatus 100 may select, among the plurality of 2D images captured by the plurality of cameras 40, the 2D images captured by the camera 40 that photographs the region of interest as region-of-interest (ROI) images, and recompress the 2D images captured by the cameras 40 that photograph the region of non-interest to a resolution lower than that of the ROI images.
  • the learning data collection apparatus 100 may reduce the number of 2D images per unit time by sampling, among the plurality of 2D images, the 2D images captured by the camera 40 that photographs the region of interest.
  • the learning data collection apparatus 100 may recompress the 2D images captured by the camera 40 that photographs the region of non-interest, among the plurality of 2D images, after expanding the size of the chroma subsampling used to compress their color information.
  • the learning data collection device 100 may discard the distance information detected by the ultrasonic sensor 50.
  • the learning data collection apparatus 100 may filter the data collected by the multiple sensors to reduce data of relatively low importance among the data for machine learning of artificial intelligence (AI). As a result, by reducing the amount of data of relatively low importance, the burden of data processing for machine learning of artificial intelligence (AI) can be lowered; a minimal filtering sketch follows below.
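A minimal sketch of the sampling-table-driven filtering described above is given below; the table contents, rates, and function names are assumptions, since the disclosure only states that a first and a second sampling rate are looked up according to the distance.

```python
import numpy as np

# Hypothetical sampling table: each row maps a maximum distance to a first rate
# (fraction of point clouds kept per unit time) and a second rate (fraction of
# points kept within each cloud). The actual table contents are not disclosed.
SAMPLING_TABLE = [
    (20.0, 1.00, 1.00),   # close objects: keep everything
    (60.0, 0.50, 0.50),   # mid range: keep half the clouds and half the points
    (120.0, 0.25, 0.10),  # far objects: keep few clouds, sparse points
]

def lookup_rates(distance_m):
    for max_distance, first_rate, second_rate in SAMPLING_TABLE:
        if distance_m <= max_distance:
            return first_rate, second_rate
    return 0.0, 0.0  # beyond the table: discard entirely

def filter_point_clouds(clouds, distance_m):
    """Apply the first sampling rate to the number of clouds per unit time and
    the second sampling rate to the number of points within each kept cloud."""
    first_rate, second_rate = lookup_rates(distance_m)
    if first_rate <= 0.0:
        return []
    keep_every = max(1, round(1.0 / first_rate))
    filtered = []
    for cloud in clouds[::keep_every]:
        n_keep = max(1, int(len(cloud) * second_rate))
        indices = np.random.choice(len(cloud), size=n_keep, replace=False)
        filtered.append(cloud[indices])
    return filtered
```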
  • Figs. 10 to 12 are exemplary diagrams for explaining a process of correcting errors of multiple sensors according to one embodiment of the present invention.
  • the learning data collection device 100 may correct errors according to the installation positions of the radar 20, lidar 30, camera 40, and ultrasonic sensor 50.
  • the learning data collection apparatus 100 may receive sensing data, 3D point cloud data, 2D images, and distance information from the radar 20, lidar 30, camera 40, and ultrasonic sensor 50.
  • the learning data collection device 100 may identify a first point 25 from which the radar 20 fixedly installed in the vehicle 10 emits an electromagnetic wave signal. That is, the first point 25 means the point where the radar 20 is fixedly installed in the vehicle 10. If a plurality of radars 20 are installed in the vehicle 10, the number of first points 25 may also be plural.
  • the learning data collection device 100 may identify a second point 35 from which the lidar 30 fixedly installed in the vehicle 10 emits laser pulses. That is, the second point 35 means the point where the lidar 30 is fixedly installed in the vehicle 10. If a plurality of lidars 30 are installed in the vehicle 10, the number of second points 35 may also be plural.
  • the learning data collection device 100 may correct the coordinates of the points included in the sensing data and the coordinates of the points included in the 3D point cloud data so that the identified first point 25 and second point 35 can be recognized as the same position. More specifically, after setting a virtual reference point having three-dimensional coordinates, the learning data collection device 100 may correct the coordinates of the points included in the sensing data and the coordinates of the points included in the 3D point cloud data so that the first point 25 and the second point 35 are recognized as being located at the reference point.
  • the learning data collection apparatus 100 may calculate 3D vector values between each of the photographing points 45 where the plurality of cameras 40 are installed and the reference point, and based on the calculated 3D vector values, derive an equation capable of correcting the sensing data or the 3D point cloud data so that the first point 25 or the second point 35 may be recognized as being located at each photographing point 45.
  • when the lidar 30 fixedly installed in the vehicle 10 is composed of a plurality of fixed lidar devices installed at different points in the vehicle 10 and spaced apart from each other, the learning data collection device 100 may combine the coordinates of the points included in the 3D point cloud data acquired by the plurality of lidar devices.
  • when the lidar 30 fixedly installed in the vehicle 10 is a rotary lidar device that rotates around the second point 35 and emits laser pulses, the learning data collection device 100 may separate the coordinates of the points included in one 3D point cloud data obtained by the rotary lidar device into a plurality of groups based on the direction in which the laser pulses were emitted, thereby creating a plurality of 3D point cloud data each having the form obtained by a plurality of fixed lidar devices.
  • the learning data collection device 100 can directly apply the location of an object recognized by a specific sensor to the data obtained from other sensors by correcting errors according to the installation positions of the multiple sensors. As a result, annotation work results for data individually obtained by the multiple sensors can be integrated and managed; a minimal coordinate-correction sketch follows below.
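As an illustrative aid, the translation of each sensor's measurements toward a common virtual reference point might be sketched as follows; the mounting coordinates and array shapes are assumptions, and a real calibration would also handle the mounting rotations, which this sketch omits.

```python
import numpy as np

def correct_to_reference(points, sensor_position, reference_point):
    """Shift points measured by a sensor mounted at `sensor_position` so that
    they read as if the sensor were located at `reference_point`. This is a
    translation-only sketch; a complete calibration would also account for the
    sensor's mounting rotation."""
    offset = np.asarray(sensor_position) - np.asarray(reference_point)
    return np.asarray(points) + offset

# Hypothetical mounting coordinates (metres, vehicle frame); not from the disclosure.
reference_point = np.zeros(3)                 # virtual reference point
first_point_25 = np.array([3.6, 0.0, 0.5])    # radar mounting point
second_point_35 = np.array([0.0, 0.0, 1.9])   # lidar mounting point

radar_points = np.random.rand(100, 3) * 50.0  # placeholder sensing data
lidar_points = np.random.rand(100, 3) * 50.0  # placeholder 3D point cloud

radar_aligned = correct_to_reference(radar_points, first_point_25, reference_point)
lidar_aligned = correct_to_reference(lidar_points, second_point_35, reference_point)
# After correction both data sets behave as if captured from the same point, so an
# object annotated in one can be located in the other directly.
```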
  • Fig. 13 is a flowchart for explaining a data collection method according to one embodiment of the present invention.
  • the learning data collection apparatus 100 may receive sensing data, 3D point cloud data, 2D images, and distance information from the radar 20, lidar 30, camera 40, and ultrasonic sensor 50 (S100).
  • the learning data collection device 100 may identify an object from the received sensing data, and control, in correspondence with the distance between the identified object and the vehicle 10, at least one of the lidar 30 fixedly installed in the vehicle 10 to emit laser pulses, the camera 40 to capture 2D images, and the ultrasonic sensor 50 to emit ultrasonic waves (S200).
  • a detailed description of the process of controlling the lidar 30, the camera 40, and the ultrasonic sensor 50 by the learning data collection device 100 is the same as that described with reference to FIGS. 5 to 7, so it will not be described repeatedly.
  • the learning data collection device 100 may perform post-processing on the sensing data, 3D point cloud data, 2D images, and distance information received from the radar 20, lidar 30, camera 40, and ultrasonic sensor 50 (S300).
  • the learning data collection device 100 may correct errors according to installation positions of the radar 20, lidar 30, camera 40, and ultrasonic sensor 50 (S400).
  • a detailed description of the process by which the learning data collection device 100 corrects errors in the sensing data, 3D point cloud data, 2D images, and distance information according to the installation positions of the radar 20, lidar 30, camera 40, and ultrasonic sensor 50 is the same as that described with reference to FIGS. 10 to 12, so it will not be described repeatedly.
  • the learning data collection device 100 may transmit the corrected sensing data, 3D point cloud data, 2D images, and distance information to the learning data generating device 200 (S500); a minimal end-to-end sketch of steps S100 to S500 follows below.
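Tying the steps together, a hypothetical end-to-end sketch of S100 to S500 is shown below; it reuses helpers from the earlier sketches (plan_sensors, filter_point_clouds, correct_to_reference, second_point_35, reference_point), and the remaining names and interfaces are placeholders rather than anything defined in the disclosure.

```python
import numpy as np

def identify_object(sensing_data):
    # placeholder: treat the centroid of the radar points as the object position
    return np.mean(np.asarray(sensing_data), axis=0)

def collect_training_data(sensing_data, point_clouds, images, distance_info, send):
    """Sketch of steps S200 to S500, applied to data already received in S100;
    `send` stands in for the link to the learning data generating device 200."""
    # S200: identify the object and derive its distance from the vehicle origin
    obj_position = identify_object(sensing_data)
    distance = float(np.linalg.norm(obj_position))
    plan = plan_sensors(distance)  # would drive the sensors for the next collection cycle

    # S300: post-process the point clouds according to the distance
    point_clouds = filter_point_clouds(point_clouds, distance)

    # S400: correct installation-position errors toward the common reference point
    point_clouds = [correct_to_reference(cloud, second_point_35, reference_point)
                    for cloud in point_clouds]

    # S500: transmit the corrected data to the learning data generating device 200
    send(sensing_data, point_clouds, images, distance_info, plan)
```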

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present invention proposes a learning data collection method using a laser preview, which collects data for machine learning of an artificial intelligence (AI). The method may comprise the steps of: receiving, by a learning data collection apparatus, sensing data from a radar fixedly installed on a vehicle; identifying, by the learning data collection apparatus, an object from the received sensing data; and controlling, by the learning data collection apparatus, in correspondence with the distance between the vehicle and the identified object, a lidar which is fixedly installed on the vehicle and which is intended to emit laser pulses. According to the present invention described above, data of relatively low importance among the data for machine learning of an artificial intelligence (AI) can be reduced by controlling the data collection period of multiple sensors installed in a vehicle, which collect data for a machine-learning artificial intelligence (AI) usable for autonomous driving, or the quality of the data to be collected.
PCT/KR2022/007642 2021-07-08 2022-05-30 Procédé de collecte de données d'apprentissage utilisant une prévisualisation laser, et programme informatique enregistré sur un support d'enregistrement pour l'exécuter WO2023282466A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020210089486A KR102310601B1 (ko) 2021-07-08 2021-07-08 레이저 프리뷰를 이용한 학습 데이터 수집 방법 및 실행하기 위하여 기록매체에 기록된 컴퓨터 프로그램
KR10-2021-0089486 2021-07-08

Publications (1)

Publication Number Publication Date
WO2023282466A1 true WO2023282466A1 (fr) 2023-01-12

Family

ID=78150839

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2022/007642 WO2023282466A1 (fr) 2021-07-08 2022-05-30 Procédé de collecte de données d'apprentissage utilisant une prévisualisation laser, et programme informatique enregistré sur un support d'enregistrement pour l'exécuter

Country Status (2)

Country Link
KR (1) KR102310601B1 (fr)
WO (1) WO2023282466A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102310601B1 (ko) * 2021-07-08 2021-10-13 주식회사 인피닉 레이저 프리뷰를 이용한 학습 데이터 수집 방법 및 실행하기 위하여 기록매체에 기록된 컴퓨터 프로그램

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170307735A1 (en) * 2016-04-22 2017-10-26 Mohsen Rohani Object detection using radar and machine learning
US20180307944A1 (en) * 2017-04-24 2018-10-25 Baidu Usa Llc Automatically collecting training data for object recognition with 3d lidar and localization
KR20190127624A (ko) * 2019-10-31 2019-11-13 충북대학교 산학협력단 라이다 센서를 이용한 밀집도 기반의 객체 검출 장치 및 방법
KR102310601B1 (ko) * 2021-07-08 2021-10-13 주식회사 인피닉 레이저 프리뷰를 이용한 학습 데이터 수집 방법 및 실행하기 위하여 기록매체에 기록된 컴퓨터 프로그램

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3430427B1 (fr) * 2016-03-14 2021-07-21 IMRA Europe S.A.S. Procédé de traitement d'un nuage de points 3d
KR101964100B1 (ko) * 2017-10-23 2019-04-01 국민대학교산학협력단 신경망 학습 기반의 객체 검출 장치 및 방법
KR101899549B1 (ko) * 2017-12-27 2018-09-17 재단법인 경북아이티융합 산업기술원 카메라 및 라이다 센서를 이용한 장애물 인식 장치 및 그 방법
KR102144707B1 (ko) 2018-10-16 2020-08-14 주식회사 키센스 인공지능 학습을 위한 모바일 기기의 터치 기반 어노테이션과 이미지 생성 방법 및 그 장치
US11852746B2 (en) * 2019-10-07 2023-12-26 Metawave Corporation Multi-sensor fusion platform for bootstrapping the training of a beam steering radar

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170307735A1 (en) * 2016-04-22 2017-10-26 Mohsen Rohani Object detection using radar and machine learning
US20180307944A1 (en) * 2017-04-24 2018-10-25 Baidu Usa Llc Automatically collecting training data for object recognition with 3d lidar and localization
KR20190127624A (ko) * 2019-10-31 2019-11-13 충북대학교 산학협력단 라이다 센서를 이용한 밀집도 기반의 객체 검출 장치 및 방법
KR102310601B1 (ko) * 2021-07-08 2021-10-13 주식회사 인피닉 레이저 프리뷰를 이용한 학습 데이터 수집 방법 및 실행하기 위하여 기록매체에 기록된 컴퓨터 프로그램

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
DONG XU; WANG PENGLUO; ZHANG PENGYUE; LIU LANGECHUAN: "Probabilistic Oriented Object Detection in Automotive Radar", 2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW), IEEE, 14 June 2020 (2020-06-14), pages 458 - 467, XP033798958, DOI: 10.1109/CVPRW50498.2020.00059 *
MEYER MICHAEL; KUSCHK GEORG: "Automotive Radar Dataset for Deep Learning Based 3D Object Detection", 2019 16TH EUROPEAN RADAR CONFERENCE (EURAD), EUMA, 2 October 2019 (2019-10-02), pages 129 - 132, XP033663874 *

Also Published As

Publication number Publication date
KR102310601B1 (ko) 2021-10-13

Similar Documents

Publication Publication Date Title
EP3707674A1 (fr) Procédé et appareil pour effectuer une estimation de profondeur d'objet
WO2023120831A1 (fr) Procédé de désidentification et programme informatique enregistré sur un support d'enregistrement en vue de son exécution
WO2015194867A1 (fr) Dispositif de reconnaissance de position de robot mobile utilisant le suivi direct, et son procédé
WO2016074169A1 (fr) Procédé de détection de cible, dispositif détecteur, et robot
WO2020204659A1 (fr) Dispositif électronique, procédé et support lisible par ordinateur pour fournir un effet de flou dans une vidéo
WO2015194866A1 (fr) Dispositif et procédé permettant de reconnaître un emplacement d'un robot mobile au moyen d'un réajustage basé sur les bords
WO2020091262A1 (fr) Procédé de traitement d'image à l'aide d'un réseau neuronal artificiel, et dispositif électronique le prenant en charge
WO2023282466A1 (fr) Procédé de collecte de données d'apprentissage utilisant une prévisualisation laser, et programme informatique enregistré sur un support d'enregistrement pour l'exécuter
WO2020032497A1 (fr) Procédé et appareil permettant d'incorporer un motif de bruit dans une image sur laquelle un traitement par flou a été effectué
KR102310608B1 (ko) 레이더 및 라이다를 기반으로 하는 자율주행 학습 데이터의 처리 방법 및 실행하기 위하여 기록매체에 기록된 컴퓨터 프로그램
WO2019143050A1 (fr) Dispositif électronique et procédé de commande de mise au point automatique de caméra
WO2022025441A1 (fr) Ensemble de capture d'image omnidirectionnelle et procédé exécuté par celui-ci
WO2020231156A1 (fr) Dispositif électronique et procédé d'acquisition d'informations biométriques en utilisant une lumière d'affichage
WO2019017585A1 (fr) Dispositif électronique de commande de la mise au point d'une lentille et procédé de commande associé
WO2022265262A1 (fr) Procédé d'extraction de données pour l'entraînement d'intelligence artificielle basé sur des mégadonnées, et programme informatique enregistré sur support d'enregistrement pour l'exécuter
WO2022035054A1 (fr) Robot et son procédé de commande
WO2020091347A1 (fr) Dispositif et procédé de mesure de profondeur tridimensionnelle
WO2020017814A1 (fr) Système et procédé de détection d'entité anormale
WO2020171450A1 (fr) Dispositif électronique et procédé de génération carte de profondeur
WO2019160262A1 (fr) Dispositif électronique et procédé pour traiter une image au moyen d'un dispositif électronique
WO2023033333A1 (fr) Dispositif électronique comprenant une pluralité de caméras et son procédé de fonctionnement
WO2022260252A1 (fr) Dispositif électronique à module de dispositif de prise de vues et procédé opératoire associé
KR102310604B1 (ko) 다중 센서에 의해 수집된 데이터의 처리 방법 및 실행하기 위하여 기록매체에 기록된 컴퓨터 프로그램
WO2023054833A1 (fr) Procédé d'augmentation de données pour apprentissage automatique, et programme informatique enregistré dans un support d'enregistrement pour l'exécution de celui-ci
WO2024058618A1 (fr) Approche probabiliste pour unifier des représentations pour une cartographie robotique

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22837820

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22837820

Country of ref document: EP

Kind code of ref document: A1