WO2021241834A1 - Virtual lane generation apparatus and method based on traffic flow information perception for autonomous driving in adverse weather conditions - Google Patents

Virtual lane generation apparatus and method based on traffic flow information perception for autonomous driving in adverse weather conditions

Info

Publication number
WO2021241834A1
Authority
WO
WIPO (PCT)
Prior art keywords
driving
history
lane
virtual lane
attention
Prior art date
Application number
PCT/KR2021/000335
Other languages
French (fr)
Korean (ko)
Inventor
이경수
고영일
이주현
Original Assignee
서울대학교산학협력단
(주)스마트모빌리티랩
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 서울대학교산학협력단 and (주)스마트모빌리티랩
Publication of WO2021241834A1


Classifications

    • B60W50/14 Means for informing the driver, warning the driver or prompting a driver intervention
    • B60R21/0134 Electrical circuits for triggering passive safety arrangements, e.g. airbags, safety belt tighteners, in case of vehicle accidents or impending vehicle accidents, including means for detecting imminent contact with an obstacle, e.g. using radar systems
    • B60W30/10 Path keeping
    • B60W30/14 Adaptive cruise control
    • B60W30/143 Speed control
    • B60W30/16 Control of distance between vehicles, e.g. keeping a distance to preceding vehicle
    • B60W30/18163 Lane change; Overtaking manoeuvres
    • B60W40/02 Estimation or calculation of non-directly measurable driving parameters related to ambient conditions
    • B60W40/09 Driving style or behaviour
    • B60W60/001 Planning or execution of driving tasks
    • G01S17/89 Lidar systems specially adapted for mapping or imaging
    • B60W2050/0052 Filtering, filters
    • B60W2420/408 Radar; Laser, e.g. lidar
    • B60W2540/30 Driving style
    • B60W2552/50 Barriers
    • B60W2556/10 Historical data
    • B60Y2300/10 Path keeping
    • B60Y2300/143 Speed control
    • B60Y2300/16 Control of distance between vehicles, e.g. keeping a distance to preceding vehicle
    • B60Y2300/18166 Overtaking, changing lanes

Definitions

  • the present invention relates to a method and apparatus for generating a virtual lane. More particularly, the present invention relates to a method and apparatus for determining a driving trajectory of a surrounding vehicle by utilizing a self-attention mechanism of a convolutional neural network (CNN) and generating a virtual lane therefrom.
  • CNN: convolutional neural network
  • Various technologies that provide convenience to drivers, such as driver assistance systems and intelligent vehicle systems, are being developed.
  • With autonomous driving technology, recognition and positioning of the surrounding environment, judgment and planning for changes in the surrounding environment, and the corresponding control of vehicle actuation are performed so that the vehicle can drive autonomously without driver intervention.
  • The states of objects constituting the traffic environment, such as surrounding vehicles, pedestrians, and traffic signals, may be detected through sensors such as radar and cameras.
  • Among this information, the lanes marked on the road surface may be the primary input used to plan the behavior of the own vehicle and to predict the behavior of surrounding vehicles.
  • Visual information, such as camera images, may be the main means of acquiring information on the lanes around the own vehicle.
  • Alternatively, the painted portions of lanes on the road surface may be distinguished from the asphalt using differences in reflectivity under laser scanning with LiDAR or the like.
  • The technical problem to be solved by the present invention is to provide a technique for generating virtual lanes around the own vehicle so that behavior planning and driving control for autonomous driving can be performed even when the lanes around the own vehicle cannot be detected due to weather conditions such as heavy snow or heavy rain.
  • According to one aspect of the present invention, a method of generating a virtual lane for assisting autonomous driving includes: deriving a driving history by accumulating, over time, driving data of at least one surrounding vehicle measured by a complex sensor; dividing the driving history into at least one driving section according to a driving pattern based on self-attention implemented by a convolutional neural network (CNN); deriving a weighted driving history of the at least one surrounding vehicle by assigning a weight to the at least one driving section based on the self-attention; determining a driving trajectory of the at least one surrounding vehicle by performing curve fitting on the weighted driving history through the CNN; and generating the virtual lane on an HD map for assisting the autonomous driving based on the driving trajectory.
  • According to another aspect of the present invention, an apparatus for generating a virtual lane for assisting autonomous driving includes: a memory storing at least one program; and a processor that generates the virtual lane by executing the at least one program. The processor derives a driving history by accumulating, over time, driving data of at least one surrounding vehicle measured by a complex sensor; divides the driving history into at least one driving section according to a driving pattern based on self-attention implemented by a convolutional neural network (CNN); derives a weighted driving history of the at least one surrounding vehicle by assigning a weight to the at least one driving section based on the self-attention; determines a driving trajectory of the at least one surrounding vehicle by performing curve fitting on the weighted driving history through the CNN; and generates the virtual lane on the HD map for assisting the autonomous driving based on the driving trajectory.
  • According to the method and apparatus of the present invention, even when lanes on the road surface are difficult to identify due to bad weather, a virtual lane can be created by referring to the driving flow of surrounding vehicles, so that the virtual-lane-based autonomous driving process can proceed smoothly.
  • In particular, while the driving flow of surrounding vehicles is being referenced, their driving patterns are classified according to the self-attention mechanism, and behaviors that obscure lane creation, such as lane changes, can be considered with low weight, so the accuracy of the generated virtual lanes can be improved.
  • FIG. 1 is a block diagram illustrating elements constituting an apparatus for generating a virtual lane according to some exemplary embodiments.
  • FIG. 2 is a diagram for explaining how an autonomous driving system including an apparatus for generating a virtual lane according to some exemplary embodiments operates.
  • FIG. 3 is a diagram for describing a detailed process of generating a virtual lane according to some exemplary embodiments.
  • FIG. 4 is a flowchart illustrating steps of configuring a method for generating a virtual lane according to some embodiments.
  • FIG. 1 is a block diagram illustrating elements constituting an apparatus for generating a virtual lane according to some exemplary embodiments.
  • Referring to FIG. 1, an apparatus 100 for generating a virtual lane may include a memory 110 and a processor 120.
  • However, the apparatus is not limited thereto, and general-purpose components other than those shown in FIG. 1 may be further included in the device 100.
  • the apparatus 100 may be a computing device mounted on a vehicle so as to create a virtual lane.
  • The device 100 may include the memory 110 as a means for storing various data, instructions, and at least one program or software, and the processor 120 as a means for processing various data by executing those instructions or programs.
  • For example, the device 100 may be an electronic component mounted in a vehicle, such as a smart mobility device.
  • However, the device 100 is not limited to being mounted on a vehicle; it may also be implemented as a mobile device that wirelessly exchanges data with the vehicle.
  • the memory 110 may store various commands related to generation of a virtual lane in the form of at least one program.
  • the memory 110 may store instructions constituting software such as a computer program or mobile application.
  • the memory 110 may store various data necessary for the execution of at least one program.
  • The memory 110 may be implemented as a non-volatile memory such as read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable and programmable ROM (EEPROM), flash memory, phase-change RAM (PRAM), magnetic RAM (MRAM), resistive RAM (RRAM), or ferroelectric RAM (FRAM), or as a volatile memory such as dynamic RAM (DRAM), static RAM (SRAM), synchronous DRAM (SDRAM), PRAM, RRAM, or FeRAM.
  • The memory 110 may also be implemented as a hard disk drive (HDD), solid state drive (SSD), secure digital (SD) card, or micro-SD card.
  • The processor 120 may create a virtual lane by executing at least one program stored in the memory 110.
  • The processor 120 may perform a series of processing steps for implementing the creation of the virtual lane.
  • The processor 120 may perform general functions for controlling the apparatus 100 and may process various operations inside the apparatus 100.
  • The processor 120 may be implemented as an array of logic gates or as a general-purpose microprocessor.
  • The processor 120 may consist of a single processor or a plurality of processors.
  • The processor 120 may be configured integrally with the memory 110 storing the at least one program rather than as a component separate from it.
  • The processor 120 may be at least one of a central processing unit (CPU), a graphics processing unit (GPU), and an application processor (AP) provided in the device 100, but this is only an example; the processor 120 may be implemented in various other forms.
  • the device 100 for generating a virtual lane may be for assisting autonomous driving.
  • the processor 120 of the device 100 may create a virtual lane by performing a series of processing steps.
  • the processor 120 may derive a driving history by accumulating driving data of at least one surrounding vehicle measured by the complex sensor over time.
  • the processor 120 may acquire driving data such as positions and speeds of surrounding vehicles based on data measured every time unit by the complex sensor.
  • For example, the time unit at which data are measured by the complex sensor may be 100 ms, or it may be set to another suitable value.
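To make the accumulation step concrete, the following is a minimal sketch of a per-vehicle driving-history buffer sampled at the 100 ms period mentioned above; the class name, field layout, and the five-second history length are illustrative assumptions, not details from the patent.

```python
from collections import defaultdict, deque

HISTORY_LENGTH = 50  # hypothetical: 50 samples at 100 ms = 5 s of history

class DrivingHistoryBuffer:
    """Accumulates per-vehicle driving data (relative position and
    longitudinal/lateral speed) into a fixed-length driving history."""

    def __init__(self, max_len=HISTORY_LENGTH):
        self.histories = defaultdict(lambda: deque(maxlen=max_len))

    def add_measurement(self, vehicle_id, rel_x, rel_y, v_long, v_lat):
        # One sample of driving data from the complex sensor.
        self.histories[vehicle_id].append((rel_x, rel_y, v_long, v_lat))

    def history(self, vehicle_id):
        # The driving history is the time-ordered accumulation of samples.
        return list(self.histories[vehicle_id])

buf = DrivingHistoryBuffer()
buf.add_measurement(7, rel_x=12.0, rel_y=-0.4, v_long=0.3, v_lat=-0.1)
```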
  • the complex sensor may detect data on the surrounding environment of the vehicle by means of a camera, radar, LiDAR, and GPS.
  • complex sensors can utilize HD maps to access information such as roads, terrain, and buildings.
  • Driving data of a surrounding vehicle measured by the complex sensor may be expressed as physical quantities such as the two-dimensional relative position, longitudinal speed, and lateral speed of the surrounding vehicle with respect to the own vehicle.
  • the driving data of the surrounding vehicle measured by the complex sensor may be generated for each time unit, and the processor 120 may derive the driving history of the surrounding vehicle by accumulating the driving data generated for each time unit.
  • driving histories for each of the surrounding vehicles may be derived.
  • the processor 120 may derive the driving history by generating driving data by applying Kalman filtering for data purification to data measured by the complex sensor.
  • Kalman filtering serves to derive the driving history from more refined data: when the driving data are expressed as two-dimensional relative positions and relative velocities of surrounding vehicles, the driving data of the surrounding vehicles can be estimated and corrected through a model that assumes constant velocity.
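A minimal sketch of such a constant-velocity Kalman filter follows, tracking one surrounding vehicle's 2-D relative position and velocity. The noise covariances, and the assumption that the sensor measures the full state, are placeholders chosen for illustration rather than values from the patent.

```python
import numpy as np

class ConstantVelocityKF:
    """Kalman filter refining one surrounding vehicle's driving data
    under a constant-velocity assumption. State: [x, y, vx, vy]."""

    def __init__(self, dt=0.1):                      # 100 ms sample period
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = dt             # x += vx*dt, y += vy*dt
        self.H = np.eye(4)                           # sensor measures full state
        self.Q = np.eye(4) * 0.01                    # placeholder process noise
        self.R = np.eye(4) * 0.25                    # placeholder sensor noise
        self.x = np.zeros(4)
        self.P = np.eye(4)

    def step(self, z):
        """One predict/update cycle refining a raw measurement z."""
        x_pred = self.F @ self.x                     # constant-velocity predict
        P_pred = self.F @ self.P @ self.F.T + self.Q
        S = self.H @ P_pred @ self.H.T + self.R
        K = P_pred @ self.H.T @ np.linalg.inv(S)     # Kalman gain
        self.x = x_pred + K @ (z - self.H @ x_pred)  # measurement update
        self.P = (np.eye(4) - K @ self.H) @ P_pred
        return self.x                                # refined driving data

kf = ConstantVelocityKF()
refined = kf.step(np.array([12.0, -0.4, 0.3, -0.1]))
```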
  • the processor 120 may divide the driving history into at least one driving section according to a driving pattern based on self-attention implemented by a convolutional neural network (CNN).
  • the memory 110 may store a CNN including a self-attention layer.
  • The memory 110 may store the trained values of the parameters constituting the CNN, thereby storing the CNN in a fully trained state.
  • the processor 120 may divide the driving history of the surrounding vehicle into at least one driving section according to the driving pattern by utilizing the CNN stored in the memory 110 .
  • A CNN is a deep neural network that includes convolutional layers and can be used in a variety of machine learning tasks that classify patterns in inputs.
  • a CNN may include a self-attention layer to classify patterns in driving data of surrounding vehicles measured by complex sensors. That is, the processor 120 may divide the driving history into at least one driving section by classifying driving data by patterns based on CNN.
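As a structural illustration of classifying a driving history by pattern, the sketch below runs a single 1-D convolutional layer over the time series and assigns each time step one of two hypothetical labels (lane following or lane change). The filter sizes are arbitrary and the weights are untrained, so this shows only the shape of the computation, not the patent's trained network.

```python
import numpy as np

def conv1d(x, kernels, bias):
    """Minimal 1-D convolution over a (T, d) driving history.
    kernels: (k, d, c) filter bank; returns a (T, c) feature map."""
    k = kernels.shape[0]
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))      # same-length output
    out = np.empty((x.shape[0], kernels.shape[2]))
    for t in range(x.shape[0]):
        window = xp[t:t + k]                  # (k, d) local time context
        out[t] = np.einsum('kd,kdc->c', window, kernels) + bias
    return np.maximum(out, 0.0)               # ReLU

rng = np.random.default_rng(1)
history = rng.normal(size=(50, 4))            # 50 samples, 4 features
feat = conv1d(history, rng.normal(size=(5, 4, 8)) * 0.1, np.zeros(8))
logits = feat @ rng.normal(size=(8, 2)) * 0.1  # 2 classes: follow/change
labels = logits.argmax(axis=1)                # per-time-step section label
```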
  • The CNN stored in the memory 110 and utilized by the processor 120 may include a self-attention layer so as to make use of the self-attention mechanism.
  • Through the self-attention mechanism, sections of successive time-series data that are highly correlated with one another and thus form a pattern can be identified, and the identified sections can be expressed as weights.
  • the self-attention layer may calculate an attention score in order to divide the driving history into at least one driving section according to a driving pattern.
  • the attention score may be calculated in various ways.
  • For example, the self-attention of the CNN may calculate an attention score by using a softmax function.
  • The softmax function estimates a probability for each component of a vector of arbitrary dimension and is one of the main activation functions used in the field of machine learning.
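A hedged sketch of a softmax-normalized self-attention score over a driving history is shown below. The use of scaled dot-product attention with learned query/key projections is common practice and an assumption here, since the patent does not specify the exact formulation.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention_scores(history, Wq, Wk):
    """history: (T, d) time series of driving data; returns a (T, T)
    attention-score matrix whose rows sum to 1 via softmax."""
    Q = history @ Wq                          # query projection (assumed)
    K = history @ Wk                          # key projection (assumed)
    scores = Q @ K.T / np.sqrt(Q.shape[-1])   # scaled dot product (assumed)
    return softmax(scores, axis=-1)

rng = np.random.default_rng(0)
T, d = 50, 4                                  # 50 samples of 4-D driving data
hist = rng.normal(size=(T, d))
A = self_attention_scores(hist,
                          rng.normal(size=(d, d)),
                          rng.normal(size=(d, d)))
step_weight = A.mean(axis=0)                  # one illustrative per-step weight
```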
  • the processor 120 may derive a weighted driving history of at least one surrounding vehicle by assigning a weight to at least one driving section based on self-attention.
  • A weight may be assigned to each driving section by the CNN's self-attention, and the processor 120 may derive the weighted driving history by taking these weights into account, that is, by treating driving sections with high weights as more significant than those with low weights.
  • In effect, the processor 120 may derive the weighted driving history by discounting unnecessary sections of the driving history.
  • the processor 120 may determine the driving trajectory of at least one surrounding vehicle by performing curve fitting on the weighted driving history through CNN.
  • Curve fitting is a process of determining a trajectory that matches the driving data constituting the weighted driving history; for example, it may refer to deriving a curve or a straight line fitted to the weighted driving history on a two-dimensional plane corresponding to the road surface.
  • the CNN may be trained to receive driving data of surrounding vehicles and determine a driving trajectory.
  • the processor 120 may determine the driving trajectory by performing curve fitting by inferring coefficients of a polynomial function for representing a trajectory fitted to the weighted driving history. That is, the driving trajectory may be determined in the form of a polynomial function on a two-dimensional plane corresponding to the road surface. For example, the processor 120 may determine the coefficients of the quadratic function or the linear function by setting the driving trajectory as a curve of the quadratic function or a straight line of the linear function so that the driving trajectory is fitted to the weighted driving history.
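The curve-fitting step can be illustrated as a weighted least-squares fit of a quadratic polynomial, with low weights suppressing lane-change samples. Note that this numpy-based sketch is only an analogy: in the patent, the fit is performed through the CNN, and the sample data and weights below are fabricated for illustration.

```python
import numpy as np

def fit_trajectory(xs, ys, weights, degree=2):
    """Weighted curve fitting: infer polynomial coefficients for a
    trajectory fitted to the weighted driving history."""
    coeffs = np.polyfit(xs, ys, deg=degree, w=weights)
    return np.poly1d(coeffs)                  # y(x) on the road-surface plane

xs = np.linspace(0.0, 50.0, 26)               # longitudinal positions (m)
ys = 0.002 * xs**2 + 0.01 * xs                # lane-following samples
ys[10:14] += np.linspace(0.0, 1.5, 4)         # a lane-change excursion
w = np.ones_like(xs)
w[10:14] = 0.1                                # low weight on the lane change
trajectory = fit_trajectory(xs, ys, w)
print(trajectory(25.0))                       # trajectory point at x = 25 m
```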
  • the processor 120 may generate a virtual lane on the HD map for assisting autonomous driving based on the driving trajectory.
  • the HD map may display objects, such as surrounding vehicles or pedestrians, which are the basis for determination in the planning and determination stage of autonomous driving.
  • the processor 120 may generate virtual lanes based on driving trajectories of surrounding vehicles on the HD map, for example, assuming that each driving trajectory is formed between the virtual lanes.
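Under the stated assumption that each driving trajectory runs between a pair of virtual lane boundaries, a simple sketch can offset the fitted trajectory laterally by half a lane width on each side; the 3.5 m lane width is an illustrative value, not a figure from the patent.

```python
import numpy as np

LANE_WIDTH_M = 3.5   # illustrative lane width; not specified in the patent

def virtual_lane_boundaries(trajectory, xs, lane_width=LANE_WIDTH_M):
    """Given a fitted trajectory y(x), place virtual lane lines half a
    lane width to either side, measured normal to the trajectory."""
    ys = trajectory(xs)
    dy = trajectory.deriv()(xs)               # trajectory heading (slope)
    norm = np.sqrt(1.0 + dy ** 2)
    nx, ny = -dy / norm, 1.0 / norm           # unit normal at each point
    half = lane_width / 2.0
    left = np.stack([xs + half * nx, ys + half * ny], axis=1)
    right = np.stack([xs - half * nx, ys - half * ny], axis=1)
    return left, right                        # polylines for the HD map

trajectory = np.poly1d([0.002, 0.01, 0.0])    # e.g. a fitted quadratic
left, right = virtual_lane_boundaries(trajectory, np.linspace(0.0, 50.0, 26))
```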
  • In summary, the memory 110 may store the CNN, which includes a self-attention layer and is trained to divide the driving history into at least one driving section, derive a weighted driving history through weighting, and determine a driving trajectory through curve fitting; the processor 120 can then generate a virtual lane based on this CNN. As a result, even in bad weather, that is, even when the actual lanes on the road surface are difficult to identify from visual information such as camera images, a virtual lane can be created from the driving data measured by the complex sensor alone, securing the robustness of autonomous driving against weather and road surface conditions.
  • FIG. 2 is a diagram for explaining how an autonomous driving system including an apparatus for generating a virtual lane according to some exemplary embodiments operates.
  • the autonomous driving system 200 may be configured with various modules.
  • the autonomous driving system 200 may include a complex sensor 210 , a localization module 220 , a virtual lane creation module 230 , a vehicle control module 240 , and a vehicle driving module 250 .
  • the apparatus 100 described with reference to FIG. 1 may be for implementing the function of the virtual lane generating module 230 .
  • The memory 110 and processor 120 of the device 100 may generate virtual lanes from the inputs provided by the complex sensor 210 and the localization module 220, and information on the virtual lanes may be provided to the vehicle control module 240, which requires it in order to plan and control the behavior of the vehicle.
  • the complex sensor 210 may detect data on the surrounding environment of the vehicle using a camera, radar, LiDAR, GPS, or the like.
  • complex sensors can utilize HD maps to access information such as roads, terrain, and buildings.
  • The complex sensor 210 may include a LiDAR that detects surrounding objects by utilizing the reflection of laser light. Even when the actual lanes on the road surface are difficult to identify visually due to bad weather or road conditions, driving data of surrounding vehicles, which form the basis for generating a virtual lane, may be obtained through the LiDAR and provided to the virtual lane generating module 230.
  • the localization module 220 may match the location of the own vehicle on the HD map. In addition, when the relative position of the surrounding vehicle is detected by the complex sensor 210 , the approximate absolute positions of the surrounding vehicle and the host vehicle may be matched on the HD map. Meanwhile, the approximate absolute position of the own vehicle and lane information on the HD map by the localization module 220 may be utilized in the process of determining the driving trajectory of the surrounding vehicle.
  • the virtual lane generating module 230 may include a driving trajectory determining module 231 , a virtual lane generating module 232 , and a target selection module 233 .
  • the driving trajectory determining module 231 and the virtual lane generating module 232 may be understood to perform respective processing steps for generating a virtual lane performed in the device 100 .
  • The target selection module 233 may select a target vehicle that serves as the basis for speed control and inter-vehicle distance control of the host vehicle based on the virtual lane. That is, the autonomous driving system 200 may control the speed of the own vehicle so that an appropriate inter-vehicle distance to the target vehicle is maintained, with the target vehicle used for speed control and inter-vehicle distance control in autonomous driving selected based on the virtual lane.
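A minimal sketch of such target selection follows: among the vehicles whose positions fall between the ego lane's virtual boundaries, the closest one ahead is chosen as the target for speed and inter-vehicle distance control. The in-lane test and the data shapes are assumptions made for illustration.

```python
def select_target(vehicles, left_y_at, right_y_at):
    """vehicles: list of (vehicle_id, x, y) relative positions.
    left_y_at/right_y_at: callables giving the virtual lane boundaries'
    lateral offsets at longitudinal position x (assumed interface)."""
    in_lane = [v for v in vehicles
               if v[1] > 0.0                               # ahead of ego
               and right_y_at(v[1]) < v[2] < left_y_at(v[1])]
    return min(in_lane, key=lambda v: v[1], default=None)  # nearest ahead

target = select_target(
    [(1, 18.0, 0.2), (2, 35.0, 0.1), (3, 22.0, 4.0)],      # id, x, y
    left_y_at=lambda x: 1.75, right_y_at=lambda x: -1.75)  # straight lane
print(target)  # -> (1, 18.0, 0.2): closest in-lane vehicle ahead
```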
  • the vehicle control module 240 may perform steering control, speed control, and inter-vehicle distance control of the own vehicle.
  • The vehicle control module 240 may control variables such as the yaw rate that determine the steering of the own vehicle, that is, its lateral behavior, based on the virtual lane provided from the virtual lane generating module 230, and may control the speed of the own vehicle and the inter-vehicle distance to the target vehicle based on the target vehicle provided from the virtual lane generating module 230.
  • the vehicle driving module 250 may drive the own vehicle according to a control command determined by the vehicle control module 240 , for example, according to a steering command or an acceleration/deceleration command.
  • FIG. 3 is a diagram for describing a detailed process of generating a virtual lane according to some exemplary embodiments.
  • a first example 310 and a second example 320 of the creation of a virtual lane are shown.
  • The first example 310 illustrates a case in which the CNN and self-attention of the present invention are not utilized, and the second example 320 illustrates a case in which they are utilized.
  • The first example 310 may represent a case in which the surrounding vehicle 312 and the surrounding vehicle 313 drive together with the own vehicle 311, and the second example 320 may likewise represent a case in which the own vehicle 321, the surrounding vehicle 322, and the surrounding vehicle 323 drive together.
  • The surrounding vehicle 312 may drive while following its lane without changing lanes, whereas the surrounding vehicle 313 may change its driving lane, moving from its pre-change position 313A through the lane change 313B to its post-change position 313C.
  • Because the autonomous driving system of the own vehicle 311 cannot interpret the behavior of the surrounding vehicle 313 as a lane change, the virtual lane 314 may be generated directly from the driving trajectory of the surrounding vehicle 313, and the accuracy of virtual lane generation may deteriorate.
  • In contrast, the device 100 divides the driving history of a surrounding vehicle into at least one driving section based on the CNN and self-attention and derives a weighted driving history by assigning a weight to each driving section, making it possible to account for the influence of a surrounding vehicle's lane change.
  • The processor 120 may divide the driving history into at least one driving section by separating it into a lane-following section and a lane-change section according to the driving pattern. For example, in the second example 320, the processor 120 may, based on the CNN and self-attention, classify the driving data corresponding to the surrounding vehicle 323's pre-change position 323A and post-change position 323C as lane-following sections, and the driving data corresponding to the lane change 323B as a lane-change section.
  • The processor 120 may derive the weighted driving history by setting the weight of the lane-following section higher than the weight of the lane-change section. For example, in the second example 320, the processor 120 may assign a high weight to the lane-following sections at 323A and 323C and a low weight to the lane-change section at 323B.
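As a toy illustration of this weighting scheme, the sketch below pairs each driving-history sample with a weight that is high for lane-following sections and low for lane-change sections; the labels and the 1.0/0.1 values are arbitrary stand-ins for the attention-derived weights.

```python
LANE_FOLLOW, LANE_CHANGE = 0, 1            # hypothetical section labels

def weighted_history(samples, section_labels, w_follow=1.0, w_change=0.1):
    """Pair each driving-history sample with a section weight that is
    higher for lane-following sections than for lane-change sections."""
    weights = [w_follow if label == LANE_FOLLOW else w_change
               for label in section_labels]
    return list(zip(samples, weights))

# 10 lane-following samples (323A), 4 lane-change samples (323B),
# then 10 lane-following samples (323C).
labels = [LANE_FOLLOW] * 10 + [LANE_CHANGE] * 4 + [LANE_FOLLOW] * 10
weighted = weighted_history(list(range(24)), labels)
```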
  • Accordingly, the virtual lane 324, which more accurately matches the actual road-surface lane that cannot be identified from visual information such as camera images, can be generated, and the accuracy of virtual lane generation can be prevented from deteriorating.
  • FIG. 4 is a flowchart illustrating steps of configuring a method for generating a virtual lane according to some embodiments.
  • the method of generating a virtual lane may include steps 410 to 450 .
  • the present invention is not limited thereto, and general steps other than the steps shown in FIG. 4 may be further included in the method of generating a virtual lane.
  • the method of generating a virtual lane for assisting autonomous driving of FIG. 4 may include steps processed in time series by the apparatus 100 for generating a virtual lane described with reference to FIGS. 1 to 3 . Accordingly, even if the contents of the method of FIG. 4 are omitted below, the contents described above for the apparatus 100 may be equally applied to the method of FIG. 4 .
  • the device 100 may derive a driving history by accumulating driving data of at least one surrounding vehicle measured by the complex sensor over time.
  • the apparatus 100 may derive the driving history by generating driving data by applying Kalman filtering for data purification to data measured by the complex sensor.
  • the device 100 may divide the driving history into at least one driving section according to a driving pattern based on self-attention implemented by a convolutional neural network (CNN).
  • the apparatus 100 may divide the driving history into at least one driving section by dividing the driving history into a lane following section and a lane change section according to the driving pattern.
  • the apparatus 100 may derive the weighted driving history by setting the weight of the lane following section to be higher than the weight of the lane change section.
  • the apparatus 100 may derive a weighted driving history of at least one surrounding vehicle by giving weights to at least one driving section based on self-attention.
  • the device 100 may determine the driving trajectory of at least one surrounding vehicle by performing curve fitting on the weighted driving history through CNN.
  • the apparatus 100 may determine the driving trajectory by performing curve fitting by inferring coefficients of a polynomial function for representing the trajectory fitted to the weighted driving history.
  • the device 100 may generate a virtual lane on the HD map for assisting autonomous driving based on the driving trajectory.
  • the self-attention may calculate an attention score by using a softmax function.
  • The complex sensor may include a LiDAR that detects surrounding objects by utilizing the reflection of laser light.
  • the device 100 may select a target vehicle used for speed control and inter-vehicle distance control according to autonomous driving based on the virtual lane.
  • the method of generating a virtual lane for assisting autonomous driving of FIG. 4 may be recorded in a computer-readable recording medium in which at least one program or software including instructions for executing the method is recorded.
  • Examples of the computer-readable recording medium include magnetic media such as hard disks, floppy disks, and magnetic tape; optical recording media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory. Examples of program instructions include not only machine code such as that produced by a compiler, but also high-level language code that can be executed by a computer using an interpreter or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Mathematical Physics (AREA)
  • Remote Sensing (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • General Physics & Mathematics (AREA)
  • Traffic Control Systems (AREA)

Abstract

Disclosed is a method for generating a virtual lane for assisting autonomous driving, the method comprising the steps of: deriving a driving history by accumulating driving data of at least one surrounding vehicle measured by a composite sensor according to the passage of time; classifying the driving history into at least one driving section according to a driving pattern on the basis of self-attention implemented by a convolutional neural network (CNN); deriving a weighted driving history of the at least one surrounding vehicle by assigning a weight to the at least one driving section on the basis of the self-attention; determining a driving trajectory of the at least one surrounding vehicle by performing curve fitting on the weighted driving history through the CNN; and generating a virtual lane on an HD map for assisting autonomous driving on the basis of the driving trajectory.

Description

Apparatus and method for generating virtual lanes based on traffic flow information perception for autonomous driving in adverse weather conditions
The present invention relates to a method and apparatus for generating a virtual lane. More particularly, the present invention relates to a method and apparatus for determining the driving trajectories of surrounding vehicles by utilizing the self-attention mechanism of a convolutional neural network (CNN) and generating virtual lanes therefrom.
Various technologies that provide convenience to drivers, such as driver assistance systems and intelligent vehicle systems, are being developed. In particular, with autonomous driving technology, recognition and positioning of the surrounding environment, judgment and planning for changes in the surrounding environment, and the corresponding control of vehicle actuation are performed so that the vehicle can drive autonomously without driver intervention.
For autonomous driving to be performed smoothly, recognition and positioning of the surrounding environment may need to be carried out precisely. To this end, the states of objects constituting the traffic environment, such as surrounding vehicles, pedestrians, and traffic signals, may be detected through sensors such as radar and cameras. Among this information, the lanes marked on the road surface may be the primary input used to plan the behavior of the own vehicle and to predict the behavior of surrounding vehicles.
To acquire information on the lanes around the own vehicle, visual information such as camera images may be the main means of acquisition. Alternatively, the painted portions of lanes on the road surface may be distinguished from the asphalt using differences in reflectivity under laser scanning with LiDAR or the like.
However, since the above methods presuppose good weather and road surface conditions, lanes become difficult to detect in adverse weather, such as when precipitation is heavy or when lanes are covered by snow, and autonomous driving may not be performed smoothly.
The technical problem to be solved by the present invention is to provide a technique for generating virtual lanes around the own vehicle so that behavior planning and driving control for autonomous driving can be performed even when the lanes around the own vehicle cannot be detected due to weather conditions such as heavy snow or heavy rain.
As a means of solving the above technical problem, a method of generating a virtual lane for assisting autonomous driving according to one aspect of the present invention includes: deriving a driving history by accumulating, over time, driving data of at least one surrounding vehicle measured by a complex sensor; dividing the driving history into at least one driving section according to a driving pattern based on self-attention implemented by a convolutional neural network (CNN); deriving a weighted driving history of the at least one surrounding vehicle by assigning a weight to the at least one driving section based on the self-attention; determining a driving trajectory of the at least one surrounding vehicle by performing curve fitting on the weighted driving history through the CNN; and generating the virtual lane on an HD map for assisting the autonomous driving based on the driving trajectory.
According to another aspect of the present invention, an apparatus for generating a virtual lane for assisting autonomous driving includes: a memory storing at least one program; and a processor that generates the virtual lane by executing the at least one program, wherein the processor derives a driving history by accumulating, over time, driving data of at least one surrounding vehicle measured by a complex sensor; divides the driving history into at least one driving section according to a driving pattern based on self-attention implemented by a convolutional neural network (CNN); derives a weighted driving history of the at least one surrounding vehicle by assigning a weight to the at least one driving section based on the self-attention; determines a driving trajectory of the at least one surrounding vehicle by performing curve fitting on the weighted driving history through the CNN; and generates the virtual lane on the HD map for assisting the autonomous driving based on the driving trajectory.
According to the method and apparatus of the present invention, even when lanes on the road surface are difficult to identify due to bad weather, a virtual lane can be created by referring to the driving flow of surrounding vehicles, so that the virtual-lane-based autonomous driving process can proceed smoothly. In particular, while the driving flow of surrounding vehicles is being referenced, their driving patterns are classified according to the self-attention mechanism, and behaviors that obscure lane creation, such as lane changes, can be considered with low weight, so the accuracy of the generated virtual lanes can be improved.
FIG. 1 is a block diagram illustrating the elements constituting an apparatus for generating a virtual lane according to some embodiments.
FIG. 2 is a diagram for explaining how an autonomous driving system including an apparatus for generating a virtual lane according to some embodiments operates.
FIG. 3 is a diagram for describing a detailed process of generating a virtual lane according to some embodiments.
FIG. 4 is a flowchart illustrating the steps constituting a method of generating a virtual lane according to some embodiments.
Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings. The following description is intended only to give concrete form to the embodiments and is not intended to limit or restrict the scope of rights according to the present invention. Anything that a person of ordinary skill in the relevant technical field can readily infer from the detailed description and the embodiments should be construed as falling within the scope of the present invention.
The terms used in the present invention are general terms widely used in the relevant technical field, but their meanings may vary depending on the intent of engineers in the field, the emergence of new technologies, examination standards, precedents, and so on. Some terms may be arbitrarily selected by the applicant, in which case the meanings of such terms will be described in detail. The terms used in the present invention should be interpreted not merely by their dictionary meanings but in light of the overall context of the specification.
Terms such as 'comprises' or 'includes' used in the present invention should not be construed as necessarily including all of the components or steps described in the specification; cases in which some components or steps are not included, and cases in which additional components or steps are further included, should also be construed as intended by such terms.
Terms including ordinal numbers such as 'first' or 'second' may be used to describe various components or steps, but those components or steps should not be limited by the ordinal numbers. Terms including ordinal numbers should be interpreted only as distinguishing one component or step from others.
Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings. Detailed descriptions of matters widely known to those of ordinary skill in the relevant technical field are omitted.
FIG. 1 is a block diagram illustrating the elements constituting an apparatus for generating a virtual lane according to some embodiments.
Referring to FIG. 1, an apparatus 100 for generating a virtual lane may include a memory 110 and a processor 120. However, the apparatus is not limited thereto, and general-purpose components other than those shown in FIG. 1 may be further included in the apparatus 100.
The apparatus 100 may be a computing device mounted on a vehicle so as to generate virtual lanes. The apparatus 100 may include the memory 110 as a means for storing various data, instructions, and at least one program or software, and the processor 120 as a means for processing various data by executing those instructions or programs.
For example, the apparatus 100 may be an electronic component mounted in a vehicle, such as a smart mobility device. However, the apparatus 100 is not limited to being mounted on a vehicle; it may also be implemented as a mobile device that wirelessly exchanges data with the vehicle.
The memory 110 may store various instructions related to the generation of virtual lanes in the form of at least one program. For example, the memory 110 may store instructions constituting software such as a computer program or a mobile application. In addition, the memory 110 may store various data necessary for the execution of the at least one program.
The memory 110 may be implemented as a non-volatile memory such as read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable and programmable ROM (EEPROM), flash memory, phase-change RAM (PRAM), magnetic RAM (MRAM), resistive RAM (RRAM), or ferroelectric RAM (FRAM), or as a volatile memory such as dynamic RAM (DRAM), static RAM (SRAM), synchronous DRAM (SDRAM), PRAM, RRAM, or FeRAM. The memory 110 may also be implemented as a hard disk drive (HDD), solid state drive (SSD), secure digital (SD) card, or micro-SD card.
The processor 120 may generate virtual lanes by executing the at least one program stored in the memory 110. The processor 120 may carry out the series of processing steps that implement virtual lane generation. In addition, the processor 120 may perform the overall functions for controlling the apparatus 100 and may process various computations inside the apparatus 100.
The processor 120 may be implemented as an array of logic gates or as a general-purpose microprocessor. The processor 120 may consist of a single processor or a plurality of processors. The processor 120 may be configured integrally with the memory 110 storing the at least one program rather than as a separate component. The processor 120 may be at least one of a central processing unit (CPU), a graphics processing unit (GPU), and an application processor (AP) provided in the apparatus 100, but this is only an example; the processor 120 may be implemented in various other forms.
가상 차선을 생성하는 장치(100)는 자율 주행을 보조하기 위한 것일 수 있다. 장치(100)의 프로세서(120)는 일련의 처리 단계들을 수행함으로써 가상 차선을 생성할 수 있다.The device 100 for generating a virtual lane may be for assisting autonomous driving. The processor 120 of the device 100 may create a virtual lane by performing a series of processing steps.
The processor 120 may derive a driving history by accumulating, over time, driving data of at least one surrounding vehicle measured by a complex sensor. The processor 120 may acquire driving data such as the positions and speeds of surrounding vehicles based on data measured by the complex sensor at every time unit. For example, the time unit at which the complex sensor produces a measurement may be 100 ms, or it may be set to another suitable value.
The complex sensor may detect data on the surrounding environment of the vehicle by means of a camera, radar, LiDAR, GPS, and the like. The complex sensor may also access information such as roads, terrain, and buildings by utilizing an HD map. For example, the driving data of a surrounding vehicle measured by the complex sensor may be expressed as physical quantities such as the two-dimensional position, longitudinal velocity, and lateral velocity of the surrounding vehicle relative to the ego vehicle.
The driving data of a surrounding vehicle measured by the complex sensor may be generated at each time unit, and the processor 120 may derive the driving history of that vehicle by accumulating the driving data generated at each time unit, for example as in the sketch below. When two or more surrounding vehicles are detected around the ego vehicle, a driving history may be derived for each of the surrounding vehicles.
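As an illustration only, the per-time-unit accumulation might be sketched as follows; the 100 ms period, the `VehicleState` fields, and the fixed-length buffer are assumptions made for the example, not part of the disclosure.

```python
from collections import defaultdict, deque
from dataclasses import dataclass

@dataclass
class VehicleState:
    t: float   # timestamp (s)
    x: float   # longitudinal position relative to the ego vehicle (m)
    y: float   # lateral position relative to the ego vehicle (m)
    vx: float  # longitudinal velocity (m/s)
    vy: float  # lateral velocity (m/s)

class DrivingHistoryBuffer:
    """Accumulates per-100-ms measurements into one history per vehicle ID."""
    def __init__(self, horizon_steps: int = 50):  # 50 steps * 100 ms = 5 s of history
        self.histories = defaultdict(lambda: deque(maxlen=horizon_steps))

    def add_measurement(self, vehicle_id: int, state: VehicleState) -> None:
        self.histories[vehicle_id].append(state)

    def history(self, vehicle_id: int) -> list:
        return list(self.histories[vehicle_id])

buffer = DrivingHistoryBuffer()
buffer.add_measurement(7, VehicleState(t=0.0, x=20.0, y=3.5, vx=-0.2, vy=0.0))
buffer.add_measurement(7, VehicleState(t=0.1, x=19.9, y=3.5, vx=-0.2, vy=0.0))
print(len(buffer.history(7)))  # 2 accumulated samples for vehicle 7
```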
In the course of deriving the driving history, the processor 120 may generate the driving data by applying Kalman filtering to the data measured by the complex sensor for data refinement. The Kalman filtering serves to base the driving history on more refined data: when the driving data are expressed as the two-dimensional relative position and relative velocity of a surrounding vehicle, the vehicle's driving data can be estimated and corrected through a model that assumes constant velocity.
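A minimal sketch of such a constant-velocity Kalman filter, using NumPy; the state layout [x, y, vx, vy], the 100 ms step, and the noise covariances are illustrative assumptions, not values from the disclosure.

```python
import numpy as np

dt = 0.1  # assumed 100 ms measurement period
# Constant-velocity model: state s = [x, y, vx, vy]
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.eye(4)                      # positions and velocities measured directly
Q = np.eye(4) * 0.01               # assumed process noise
R = np.diag([0.5, 0.5, 0.3, 0.3])  # assumed measurement noise

def kalman_step(s, P, z):
    """One predict/update cycle that refines a noisy measurement z."""
    # Predict under the constant-velocity assumption.
    s_pred = F @ s
    P_pred = F @ P @ F.T + Q
    # Correct with the measurement.
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    s_new = s_pred + K @ (z - H @ s_pred)
    P_new = (np.eye(4) - K @ H) @ P_pred
    return s_new, P_new

s, P = np.zeros(4), np.eye(4)
for z in [np.array([20.0, 3.5, -0.2, 0.0]), np.array([19.8, 3.6, -0.3, 0.1])]:
    s, P = kalman_step(s, P, z)
print(s)  # refined [x, y, vx, vy] estimate
```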
The processor 120 may divide the driving history into at least one driving section according to a driving pattern, based on self-attention implemented by a convolutional neural network (CNN). The memory 110 may store a CNN including a self-attention layer, holding the trained values of the parameters constituting the CNN so that the network is stored in its fully trained state. The processor 120 may utilize the CNN stored in the memory 110 to divide the driving history of a surrounding vehicle into at least one driving section according to the driving pattern.
A CNN is a deep neural network that includes convolutional layers and is widely used in machine learning to classify patterns in its inputs. For example, a CNN including a self-attention layer may classify patterns in the driving data of surrounding vehicles measured by the complex sensor. That is, the processor 120 may divide the driving history into at least one driving section by classifying the driving data by pattern based on the CNN.
The CNN stored in the memory 110 and utilized by the processor 120 may include a self-attention layer in order to exploit the self-attention mechanism. Under the self-attention mechanism, sections of consecutive time-series data that are strongly correlated with one another, and thus form a pattern, can be identified, and the identified sections can be expressed as weights.
The self-attention layer may compute an attention score in order to divide the driving history into at least one driving section according to the driving pattern. The attention score may be computed in various ways; for example, the self-attention of the CNN may compute the attention score by using a softmax function. The softmax function estimates a probability for each component of a vector of arbitrary dimension and is one of the principal activation functions used in machine learning.
The processor 120 may derive a weighted driving history of the at least one surrounding vehicle by assigning a weight to each driving section based on the self-attention. A weight may be assigned to each driving section by the self-attention of the CNN, and the processor 120 may take these weights into account, giving high-weight driving sections greater influence than low-weight ones, to derive the weighted driving history. In other words, the processor 120 may derive the weighted driving history while suppressing unnecessary sections of the driving history, as in the sketch below.
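A toy sketch of softmax attention scores turned into per-step weights; the random projection matrices stand in for the network's learned parameters, and the averaging rule used to obtain one weight per history step is an assumption for illustration, not the patented architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Toy history: T steps of d-dimensional features (e.g. encoded position/velocity).
T, d = 6, 4
X = rng.normal(size=(T, d))

# Single-head self-attention with assumed (random) projections in place of
# trained weights.
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
Q, K, V = X @ Wq, X @ Wk, X @ Wv
A = softmax(Q @ K.T / np.sqrt(d))  # attention scores; each row sums to 1

# One weight per step: the average attention each step receives.
step_weights = A.mean(axis=0)
weighted_history = step_weights[:, None] * X  # weighted driving history
print(step_weights.round(3))
```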
The processor 120 may determine the driving trajectory of the at least one surrounding vehicle by performing curve fitting on the weighted driving history through the CNN. Curve fitting is the process of determining a trajectory that matches the driving data constituting the weighted driving history; for example, it may mean deriving a curve or a straight line fitted to the weighted driving history on a two-dimensional plane corresponding to the road surface. To this end, the CNN may be trained to receive the driving data of surrounding vehicles and determine a driving trajectory.
The processor 120 may perform the curve fitting by inferring the coefficients of a polynomial function representing the trajectory fitted to the weighted driving history, thereby determining the driving trajectory. That is, the driving trajectory may be determined in the form of a polynomial function on the two-dimensional plane corresponding to the road surface. For example, the processor 120 may model the driving trajectory as a quadratic curve or a straight line and determine the coefficients of the quadratic or linear function so that the trajectory fits the weighted driving history.
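A weighted quadratic fit of this kind can be sketched with NumPy's polynomial fitting; the sample values, the quadratic degree, and the particular low weights given to a brief swerve are assumptions made up for the example.

```python
import numpy as np

# Weighted history samples on the road plane: longitudinal s, lateral l.
s = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0])
l = np.array([3.50, 3.48, 2.90, 3.42, 3.36, 3.30])  # brief swerve at s = 10 m
w = np.array([1.0, 1.0, 0.1, 1.0, 1.0, 1.0])        # swerve down-weighted

# Fit a quadratic l(s) = c2*s^2 + c1*s + c0 to the weighted samples.
c2, c1, c0 = np.polyfit(s, l, deg=2, w=w)
trajectory = np.poly1d([c2, c1, c0])
print(trajectory(12.0))  # lateral offset predicted at s = 12 m
```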
The processor 120 may generate a virtual lane on an HD map for assisting autonomous driving, based on the driving trajectory. The HD map may display objects such as surrounding vehicles and pedestrians that serve as the basis for judgment in the planning and decision stages of autonomous driving. The processor 120 may generate the virtual lanes on the HD map based on the driving trajectories of the surrounding vehicles, for example on the premise that each driving trajectory runs between a pair of virtual lanes.
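One simple way to realize that premise is to offset the fitted trajectory laterally by half a lane width on each side; the 3.5 m lane width and the centered-trajectory assumption are illustrative, not taken from the disclosure.

```python
import numpy as np

LANE_WIDTH = 3.5  # assumed lane width (m)

def virtual_lane_boundaries(coeffs, s_samples, lane_width=LANE_WIDTH):
    """Offset a fitted lateral trajectory l(s) by +/- half a lane width,
    assuming the surrounding vehicle drives along the lane center."""
    l = np.polyval(coeffs, s_samples)
    return l - lane_width / 2, l + lane_width / 2

s = np.linspace(0, 30, 7)
left, right = virtual_lane_boundaries([0.001, -0.02, 3.5], s)
print(np.c_[s, left, right])  # boundary points to draw on the HD map
```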
Because the memory 110 can store a CNN that includes a self-attention layer and is trained to divide the driving history into driving sections, derive a weighted driving history through weighting, and determine a driving trajectory through curve fitting, the processor 120 can generate a virtual lane based on that CNN. As a result, even in adverse weather, that is, even when the actual lanes on the road surface are difficult to identify from visual information such as camera images, a virtual lane can be generated from the driving data measured by the complex sensor alone, securing the robustness of autonomous driving against weather and road-surface conditions.
FIG. 2 is a diagram for explaining how an autonomous driving system including an apparatus for generating a virtual lane operates, according to some embodiments.
Referring to FIG. 2, the autonomous driving system 200 may be composed of various modules. As an example, the autonomous driving system 200 may consist of a complex sensor 210, a localization module 220, a virtual lane generation module 230, a vehicle control module 240, and a vehicle driving module 250.
The device 100 described with reference to FIG. 1 may implement the function of the virtual lane generation module 230. The memory 110 and the processor 120 of the device 100 may generate a virtual lane from the inputs provided by the complex sensor 210 and the localization module 220, and may provide information on the virtual lane to the vehicle control module 240, which requires lane information to plan and control the behavior of the vehicle.
The complex sensor 210 may detect data on the surrounding environment of the vehicle by means of a camera, radar, LiDAR, GPS, and the like, and may access information such as roads, terrain, and buildings by utilizing an HD map. In particular, the complex sensor 210 may include a LiDAR, which detects surrounding objects by utilizing reflections of laser light; through the LiDAR, even when the actual lanes on the road surface are hard to identify visually due to adverse weather or road conditions, the complex sensor 210 can acquire the surrounding vehicles' driving data on which virtual lane generation is based and provide it to the virtual lane generation module 230.
The localization module 220 may match the position of the ego vehicle onto the HD map. In addition, when the relative positions of surrounding vehicles are determined by the complex sensor 210, the approximate absolute positions of the surrounding vehicles and the ego vehicle may be matched onto the HD map. The approximate absolute position of the ego vehicle and the lane information on the HD map provided by the localization module 220 may also be utilized in the process of determining the driving trajectories of the surrounding vehicles.
The virtual lane generation module 230 may include a driving trajectory determination module 231, a virtual lane generation module 232, and a target selection module 233. The driving trajectory determination module 231 and the virtual lane generation module 232 may be understood as carrying out the respective processing steps for virtual lane generation performed in the device 100.
The target selection module 233 may select, based on the virtual lane, a target vehicle that serves as the basis for the speed control and inter-vehicle distance control of the ego vehicle. That is, the autonomous driving system 200 may control the speed of the ego vehicle so that an appropriate distance to the target vehicle is maintained; to this end, in addition to the processing steps for generating the virtual lane described above, the device 100 may perform a step of selecting the target vehicle used for speed control and inter-vehicle distance control in autonomous driving, based on the virtual lane. A possible selection rule is sketched below.
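One plausible rule, assumed here only for illustration, is to pick the nearest preceding vehicle whose lateral position falls between the ego vehicle's virtual lane boundaries; the tuple layout and bounds are hypothetical.

```python
def select_target(vehicles, lane_left, lane_right):
    """Pick the nearest preceding vehicle inside the ego virtual lane.

    vehicles: list of (vehicle_id, x_ahead_m, y_lateral_m) tuples.
    lane_left, lane_right: lateral bounds of the ego virtual lane (m).
    """
    in_lane = [v for v in vehicles
               if v[1] > 0.0 and lane_left <= v[2] <= lane_right]
    return min(in_lane, key=lambda v: v[1], default=None)

vehicles = [(7, 18.0, 0.2), (9, 35.0, 0.1), (12, 12.0, 3.6)]
print(select_target(vehicles, -1.75, 1.75))  # -> (7, 18.0, 0.2)
```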
The vehicle control module 240 may perform the steering control, speed control, and inter-vehicle distance control of the ego vehicle. Based on the virtual lane provided by the virtual lane generation module 230, the vehicle control module 240 may control variables such as the yaw rate, which determines the steering, that is, the lateral behavior of the vehicle; based on the target vehicle provided by the virtual lane generation module 230, it may control the speed of the ego vehicle and the distance to the target vehicle.
The vehicle driving module 250 may drive the ego vehicle according to the control commands determined by the vehicle control module 240, for example steering commands or acceleration/deceleration commands.
FIG. 3 is a diagram for explaining a detailed process of generating a virtual lane, according to some embodiments.
Referring to FIG. 3, lane recognition may be possible in good weather 301 but impossible in adverse weather 302. For the adverse-weather case 302, a first example 310 and a second example 320 of virtual lane generation are shown. The first example 310 illustrates the case in which the CNN and self-attention of the present invention are not utilized, and the second example 320 illustrates the case in which they are.
The first example 310 may represent a case in which a surrounding vehicle 312 and a surrounding vehicle 313 travel together with an ego vehicle 311; likewise, the second example 320 may represent a case in which an ego vehicle 321, a surrounding vehicle 322, and a surrounding vehicle 323 are traveling.
In the first example 310, the surrounding vehicle 312 follows its lane without changing lanes, whereas the surrounding vehicle 313 changes its driving lane from a pre-change position 313A to a post-change position 313C via a lane change 313B. If the autonomous driving system of the ego vehicle 311 fails to interpret the behavior of the surrounding vehicle 313 as a lane change, a virtual lane 314 is generated along the raw driving trajectory of the surrounding vehicle 313, and the accuracy of virtual lane generation deteriorates.
To solve the accuracy degradation seen in the first example 310, the device 100 may account for the influence of a surrounding vehicle's lane change by dividing the vehicle's driving history into at least one driving section based on the CNN and self-attention, and deriving a weighted driving history by assigning a weight to each driving section.
Specifically, the processor 120 may divide the driving history into at least one driving section by separating it into a lane-following section and a lane-change section according to the driving pattern. For example, in the second example 320, the processor 120 may classify, based on the CNN and self-attention, the driving data of the surrounding vehicle 323 corresponding to the pre-change position 323A and the post-change position 323C as lane-following sections, and the driving data corresponding to the lane change 323B as a lane-change section.
In addition, the processor 120 may derive the weighted driving history by setting the weight of the lane-following sections higher than the weight of the lane-change section. For example, in the second example 320, the processor 120 may assign a high weight to the lane-following sections before the change 323A and after the change 323C, and a low weight to the lane-change section 323B.
Accordingly, the driving data of the lane change 323B are reflected in the weighted driving history with a low weight, so a virtual lane 324 can be generated that more accurately matches the actual lanes on the road surface, which cannot be identified from visual information such as camera images, preventing the accuracy of virtual lane generation from deteriorating.
FIG. 4 is a flowchart illustrating the steps of a method of generating a virtual lane, according to some embodiments.
Referring to FIG. 4, the method of generating a virtual lane may include steps 410 to 450. However, the method is not limited thereto, and other general-purpose steps may be included in addition to the steps shown in FIG. 4.
The method of FIG. 4 for generating a virtual lane to assist autonomous driving may consist of steps processed in time series by the apparatus 100 for generating a virtual lane described with reference to FIGS. 1 to 3. Accordingly, even where details are omitted below, the description of the device 100 given above applies equally to the method of FIG. 4.
In step 410, the device 100 may derive a driving history by accumulating, over time, driving data of at least one surrounding vehicle measured by the complex sensor.
The device 100 may derive the driving history by generating the driving data through the application of Kalman filtering, for data refinement, to the data measured by the complex sensor.
In step 420, the device 100 may divide the driving history into at least one driving section according to a driving pattern, based on self-attention implemented by a convolutional neural network (CNN).
The device 100 may divide the driving history into the at least one driving section by separating the driving history into a lane-following section and a lane-change section according to the driving pattern.
The device 100 may derive the weighted driving history by setting the weight of the lane-following section higher than the weight of the lane-change section.
In step 430, the device 100 may derive a weighted driving history of the at least one surrounding vehicle by assigning a weight to the at least one driving section based on the self-attention.
In step 440, the device 100 may determine the driving trajectory of the at least one surrounding vehicle by performing curve fitting on the weighted driving history through the CNN.
The device 100 may determine the driving trajectory by performing the curve fitting through inference of the coefficients of a polynomial function representing the trajectory fitted to the weighted driving history.
In step 450, the device 100 may generate a virtual lane on an HD map for assisting autonomous driving, based on the driving trajectory.
The self-attention may compute an attention score by using a softmax function.
The complex sensor may include a LiDAR, which detects surrounding objects by utilizing reflections of laser light.
In addition to steps 410 to 450, the device 100 may select, based on the virtual lane, a target vehicle used for speed control and inter-vehicle distance control in autonomous driving.
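Putting steps 410 to 450 together, an end-to-end flow for one surrounding vehicle might look like the compact sketch below; the softmax-on-lateral-velocity heuristic merely stands in for the trained CNN self-attention, and all values are invented for illustration.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def generate_virtual_lane(s, l, lat_vel, lane_width=3.5):
    """Hypothetical end-to-end flow of steps 410-450 for one surrounding vehicle.

    s, l: accumulated longitudinal/lateral positions (step 410, Kalman-refined).
    lat_vel: lateral velocity per sample, used here as a stand-in for the CNN
             self-attention that separates lane following from lane changes
             (steps 420-430).
    """
    weights = softmax(-np.abs(lat_vel) * 5.0)    # down-weight lane-change samples
    coeffs = np.polyfit(s, l, deg=2, w=weights)  # step 440: weighted curve fit
    center = np.polyval(coeffs, s)
    return center - lane_width / 2, center + lane_width / 2  # step 450: lane bounds

s = np.linspace(0, 25, 6)
l = np.array([3.50, 3.48, 2.90, 3.42, 3.36, 3.30])
lat_vel = np.array([0.0, 0.0, -0.9, 0.8, 0.0, 0.0])
left, right = generate_virtual_lane(s, l, lat_vel)
print(left.round(2), right.round(2))
```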
Meanwhile, the method of FIG. 4 for generating a virtual lane to assist autonomous driving may be recorded on a computer-readable recording medium on which at least one program or software including instructions for executing the method is recorded.
Examples of the computer-readable recording medium include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory. Examples of program instructions include not only machine code such as that produced by a compiler, but also high-level language code that can be executed by a computer using an interpreter or the like.
Although embodiments of the present invention have been described in detail above, the scope of rights according to the present invention is not limited thereto, and various modifications and improvements made by those skilled in the art using the basic concept of the present invention as defined in the following claims are also to be construed as falling within the scope of rights according to the present invention.

Claims (16)

  1. A method of generating a virtual lane for assisting autonomous driving, the method comprising:
    deriving a driving history by accumulating, over time, driving data of at least one surrounding vehicle measured by a complex sensor;
    dividing the driving history into at least one driving section according to a driving pattern, based on self-attention implemented by a convolutional neural network (CNN);
    deriving a weighted driving history of the at least one surrounding vehicle by assigning a weight to the at least one driving section based on the self-attention;
    determining a driving trajectory of the at least one surrounding vehicle by performing curve fitting on the weighted driving history through the CNN; and
    generating the virtual lane on an HD map for assisting the autonomous driving, based on the driving trajectory.
  2. The method of claim 1, wherein the dividing into the at least one driving section comprises dividing the driving history into a lane-following section and a lane-change section according to the driving pattern.
  3. The method of claim 2, wherein the deriving of the weighted driving history comprises setting the weight of the lane-following section higher than the weight of the lane-change section.
  4. The method of claim 1, wherein the self-attention computes an attention score by using a softmax function.
  5. The method of claim 1, wherein the determining of the driving trajectory comprises performing the curve fitting by inferring coefficients of a polynomial function representing a trajectory fitted to the weighted driving history.
  6. The method of claim 1, wherein the deriving of the driving history comprises generating the driving data by applying Kalman filtering, for data refinement, to the data measured by the complex sensor.
  7. The method of claim 1, wherein the complex sensor comprises a LiDAR that detects surrounding objects by utilizing reflections of laser light.
  8. The method of claim 1, further comprising selecting, based on the virtual lane, a target vehicle used for speed control and inter-vehicle distance control according to the autonomous driving.
  9. An apparatus for generating a virtual lane for assisting autonomous driving, the apparatus comprising:
    a memory storing at least one program; and
    a processor configured to generate the virtual lane by executing the at least one program,
    wherein the processor is configured to:
    derive a driving history by accumulating, over time, driving data of at least one surrounding vehicle measured by a complex sensor,
    divide the driving history into at least one driving section according to a driving pattern, based on self-attention implemented by a convolutional neural network (CNN),
    derive a weighted driving history of the at least one surrounding vehicle by assigning a weight to the at least one driving section based on the self-attention,
    determine a driving trajectory of the at least one surrounding vehicle by performing curve fitting on the weighted driving history through the CNN, and
    generate the virtual lane on an HD map for assisting the autonomous driving, based on the driving trajectory.
  10. The apparatus of claim 9, wherein the processor divides the driving history into the at least one driving section by dividing the driving history into a lane-following section and a lane-change section according to the driving pattern.
  11. The apparatus of claim 10, wherein the processor derives the weighted driving history by setting the weight of the lane-following section higher than the weight of the lane-change section.
  12. The apparatus of claim 9, wherein the self-attention computes an attention score by using a softmax function.
  13. The apparatus of claim 9, wherein the processor determines the driving trajectory by performing the curve fitting through inference of the coefficients of a polynomial function representing a trajectory fitted to the weighted driving history.
  14. The apparatus of claim 9, wherein the processor derives the driving history by generating the driving data through the application of Kalman filtering, for data refinement, to the data measured by the complex sensor.
  15. The apparatus of claim 9, wherein the complex sensor comprises a LiDAR that detects surrounding objects by utilizing reflections of laser light.
  16. The apparatus of claim 9, wherein the processor selects, based on the virtual lane, a target vehicle used for speed control and inter-vehicle distance control according to the autonomous driving.
PCT/KR2021/000335 2020-05-29 2021-01-11 Virtual lane generation apparatus and method based on traffic flow information perception for autonomous driving in adverse weather conditions WO2021241834A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020200064709A KR102342414B1 (en) 2020-05-29 2020-05-29 Apparatus and method for virtual lane generation based on traffic flow for autonomous driving in severe weather condition
KR10-2020-0064709 2020-05-29

Publications (1)

Publication Number Publication Date
WO2021241834A1 true WO2021241834A1 (en) 2021-12-02

Family

ID=78744981

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2021/000335 WO2021241834A1 (en) 2020-05-29 2021-01-11 Virtual lane generation apparatus and method based on traffic flow information perception for autonomous driving in adverse weather conditions

Country Status (2)

Country Link
KR (1) KR102342414B1 (en)
WO (1) WO2021241834A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114312840A (en) * 2021-12-30 2022-04-12 重庆长安汽车股份有限公司 Automatic driving obstacle target track fitting method, system, vehicle and storage medium
CN114842440A (en) * 2022-06-30 2022-08-02 小米汽车科技有限公司 Automatic driving environment sensing method and device, vehicle and readable storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101526816B1 (en) * 2014-09-12 2015-06-05 현대자동차주식회사 System for estimating a lane and method thereof
KR20180051836A (en) * 2016-11-09 2018-05-17 삼성전자주식회사 Generating method and apparatus of virtual driving lane for driving car
US20200026282A1 (en) * 2018-07-23 2020-01-23 Baidu Usa Llc Lane/object detection and tracking perception system for autonomous vehicles
KR20200058272A (en) * 2018-11-19 2020-05-27 건국대학교 산학협력단 Method and system for providing road driving situation through preprocessing of road driving image

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101782423B1 (en) 2015-03-19 2017-09-29 현대자동차주식회사 Audio navigation device, vehicle having the same, user device, and method for controlling vehicle
KR102456626B1 (en) * 2016-01-15 2022-10-18 현대자동차주식회사 Apparatus and method for traffic lane recognition in automatic steering control of vehilcles
KR20180124713A (en) * 2017-05-12 2018-11-21 삼성전자주식회사 An electronic device and method thereof for estimating shape of road
KR102421855B1 (en) 2017-09-28 2022-07-18 삼성전자주식회사 Method and apparatus of identifying driving lane
KR102553247B1 (en) 2018-04-27 2023-07-07 주식회사 에이치엘클레무브 Lane keep assist system and method for improving safety in forward vehicle follower longitudinal control

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101526816B1 (en) * 2014-09-12 2015-06-05 현대자동차주식회사 System for estimating a lane and method thereof
KR20180051836A (en) * 2016-11-09 2018-05-17 삼성전자주식회사 Generating method and apparatus of virtual driving lane for driving car
US20200026282A1 (en) * 2018-07-23 2020-01-23 Baidu Usa Llc Lane/object detection and tracking perception system for autonomous vehicles
KR20200058272A (en) * 2018-11-19 2020-05-27 건국대학교 산학협력단 Method and system for providing road driving situation through preprocessing of road driving image

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HOU YUENAN; MA ZHENG; LIU CHUNXIAO; LOY CHEN CHANGE: "Learning Lightweight Lane Detection CNNs by Self Attention Distillation", 2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 27 October 2019 (2019-10-27), pages 1013 - 1021, XP033723536, DOI: 10.1109/ICCV.2019.00110 *
KOH YOUNGIL: "Online Vehicle Motion Learning based Steering Control for an Automated Driving System using Incremental Sparse Spectrum Gaussian Process Regression", THESES, 11 February 2020 (2020-02-11), pages 1 - 134, XP055871589 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114312840A (en) * 2021-12-30 2022-04-12 重庆长安汽车股份有限公司 Automatic driving obstacle target track fitting method, system, vehicle and storage medium
CN114312840B (en) * 2021-12-30 2023-09-22 重庆长安汽车股份有限公司 Automatic driving obstacle target track fitting method, system, vehicle and storage medium
CN114842440A (en) * 2022-06-30 2022-08-02 小米汽车科技有限公司 Automatic driving environment sensing method and device, vehicle and readable storage medium
CN114842440B (en) * 2022-06-30 2022-09-09 小米汽车科技有限公司 Automatic driving environment sensing method and device, vehicle and readable storage medium

Also Published As

Publication number Publication date
KR102342414B1 (en) 2021-12-24
KR20210148518A (en) 2021-12-08

Similar Documents

Publication Publication Date Title
CN111971574B (en) Deep learning based feature extraction for LIDAR localization of autonomous vehicles
Chen et al. Deepdriving: Learning affordance for direct perception in autonomous driving
JP7060625B2 (en) LIDAR positioning to infer solutions using 3DCNN network in self-driving cars
JP7256758B2 (en) LIDAR positioning with time smoothing using RNN and LSTM in autonomous vehicles
CN111367282B (en) Robot navigation method and system based on multimode perception and reinforcement learning
EP3722908B1 (en) Learning a scenario-based distribution of human driving behavior for realistic simulation model
CN111874006B (en) Route planning processing method and device
EP3032454B1 (en) Method and system for adaptive ray based scene analysis of semantic traffic spaces and vehicle equipped with such system
CN110047276B (en) Method and device for determining congestion state of obstacle vehicle and related product
CN110377025A (en) Sensor aggregation framework for automatic driving vehicle
WO2018101526A1 (en) Method for detecting road region and lane by using lidar data, and system therefor
US11460851B2 (en) Eccentricity image fusion
CN110347145A (en) Perception for automatic driving vehicle assists
WO2021241834A1 (en) Virtual lane generation apparatus and method based on traffic flow information perception for autonomous driving in adverse weather conditions
CN110569602B (en) Data acquisition method and system for unmanned vehicle
US11055859B2 (en) Eccentricity maps
WO2021153989A1 (en) System and method for longitudinal reaction-control of electric autonomous vehicle
WO2022052856A1 (en) Vehicle-based data processing method and apparatus, computer, and storage medium
CN116703966A (en) Multi-object tracking
CN116266380A (en) Environment data reconstruction method, device, system and storage medium
Kumar et al. Vision-based outdoor navigation of self-driving car using lane detection
EP3722907B1 (en) Learning a scenario-based distribution of human driving behavior for realistic simulation model and deriving an error model of stationary and mobile sensors
JP6946456B2 (en) Corner negotiation method for self-driving vehicles that do not require maps and positioning
CN113762030A (en) Data processing method and device, computer equipment and storage medium
Reddy Driverless car: software modelling and design using Python and Tensorflow

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21813674

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21813674

Country of ref document: EP

Kind code of ref document: A1