WO2022147785A1 - Autonomous driving scenario identifying method and apparatus - Google Patents

Autonomous driving scenario identifying method and apparatus Download PDF

Info

Publication number
WO2022147785A1
WO2022147785A1 PCT/CN2021/070939 CN2021070939W WO2022147785A1 WO 2022147785 A1 WO2022147785 A1 WO 2022147785A1 CN 2021070939 W CN2021070939 W CN 2021070939W WO 2022147785 A1 WO2022147785 A1 WO 2022147785A1
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
driving
sequence
driving behavior
similarity
Prior art date
Application number
PCT/CN2021/070939
Other languages
French (fr)
Chinese (zh)
Inventor
王嘉伟
曾振华
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 filed Critical 华为技术有限公司
Priority to PCT/CN2021/070939 priority Critical patent/WO2022147785A1/en
Priority to CN202180000124.8A priority patent/CN112805724B/en
Publication of WO2022147785A1 publication Critical patent/WO2022147785A1/en

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures

Definitions

  • the present application relates to the technical field of intelligent driving, and in particular, to a method and device for recognizing a driving scene of a vehicle.
  • the current intelligent driving solutions for identifying vehicle driving behaviors usually collect vehicle driving data from the fleet to pre-train a deep learning model, and then use the trained deep learning model to identify the driving behavior of the vehicle. For example, when determining whether the front vehicle cuts in, the prior art usually first captures the driving data of the own vehicle itself and vehicles in all the forward lanes, including vehicle position and vehicle speed, and uses a pre-trained recognition module to perform action recognition. If it is recognized that a cut-in action has occurred, the complete data of the cut-in action, including: driving data of the vehicle and the cut-in vehicle and road conditions, is sent to the cloud server, and the cloud identifies whether the cut-in action of the preceding vehicle has occurred according to the uploaded data.
  • the present application provides a method and device for recognizing a driving scene of a vehicle, which uses a dynamic time warping algorithm to calculate the similarity and recognizes the driving area of the vehicle to obtain the driving behavior of the vehicle, so as to solve the problem in the prior art when the driving behavior is recognized by the model. It is difficult and costly to accumulate the training data set of the vehicle, and the driving scene of the vehicle is matched in the scene library to realize the purpose of vehicle driving scene recognition.
  • the present application provides a method for identifying a vehicle driving scene, the method comprising:
  • a feature sequence of a second vehicle is obtained according to the driving data, the second vehicle is used to indicate vehicles around the first vehicle, and the feature sequence includes: a speed sequence, a distance between the second vehicle and a reference lane line a sequence and a sequence of angles between the second vehicle and the reference lane line;
  • a pre-built scene library is used to match the driving scene of the second vehicle in the current period.
  • the present application adopts the method of calculating the similarity of the dynamic time warping algorithm to identify the driving behavior of the second vehicle, which improves the accuracy of identifying the driving behavior compared to the recognition through the trained model; the dynamic time warping algorithm can efficiently process the driving behavior.
  • the characteristics of a large number of data streams and short time improve the efficiency of identifying driving behavior; in addition, the present application actively matches the driving scene of the second vehicle through the driving behavior of the vehicle, which can provide a reference for the driving operation of the first vehicle, to a certain extent can improve the safety of the first vehicle.
  • the method before obtaining the similarity between the feature sequence of the second vehicle and the driving behavior reference sequence by using a dynamic time warping algorithm, the method further includes:
  • the preset driving behavior reference sequence corresponding to the area where the first vehicle travels is extracted from a database, where the database includes driving behavior reference sequences corresponding to different areas.
  • the present application can further improve the speed of identifying the driving behavior of the second vehicle around it by identifying the area where the first vehicle is traveling, and further save the time cost of identifying the driving behavior.
  • the method before obtaining the feature sequence of the second vehicle according to the driving data, the method further includes:
  • Noise reduction processing and smoothing processing are performed on the driving data.
  • the reference lane line is a lane line of the lane in which the first vehicle travels.
  • the lane in which the first vehicle travels is a curved lane, and the method further includes:
  • the distance sequence is curvature compensated according to the curvature of the curved lane.
  • the present application performs curvature compensation on the distance sequence, which can effectively reduce the influence of road surface features on the collected driving data and improve the accuracy of identifying driving behavior.
  • the determining, according to the similarity, the driving behavior of the second vehicle in the current period includes:
  • the driving behavior represented by the driving behavior reference sequence corresponding to the degree of similarity satisfying the preset condition is used as the driving behavior of the second vehicle in the current period.
  • it also includes:
  • the scene library is constructed according to the different scenes and the driving behaviors corresponding to the different scenes.
  • the present application further provides a vehicle driving scene recognition device, the device comprising:
  • a data acquisition unit for acquiring the driving data of the first vehicle in the current period
  • a feature acquisition unit configured to obtain a feature sequence of a second vehicle according to the driving data, the second vehicle is used to indicate vehicles around the first vehicle, and the feature sequence includes: a speed sequence, the second vehicle a sequence of distances from the reference lane line and a sequence of included angles between the second vehicle and the reference lane line;
  • a similarity calculation unit configured to obtain the similarity between the feature sequence of the second vehicle and a plurality of preset driving behavior reference sequences by using a dynamic time warping algorithm
  • a behavior recognition unit configured to determine the driving behavior of the second vehicle in the current period according to the similarity
  • a scene matching unit configured to match the driving scene of the second vehicle in the current period by using a pre-built scene library according to the driving behavior of the second vehicle in the current period.
  • the similarity calculation unit is further used for:
  • the preset driving behavior reference sequence corresponding to the area where the first vehicle travels is extracted from a database, where the database includes driving behavior reference sequences corresponding to different areas.
  • the feature acquisition unit is further configured to:
  • Noise reduction processing and smoothing processing are performed on the driving data.
  • the reference lane line is a lane line of the lane in which the first vehicle travels.
  • the lane in which the first vehicle travels is a curved lane
  • the apparatus further includes:
  • a curvature compensation unit configured to perform curvature compensation on the distance sequence according to the curvature of the curved lane.
  • the behavior identification unit is specifically used for:
  • the driving behavior represented by the driving behavior reference sequence corresponding to the degree of similarity satisfying the preset condition is used as the driving behavior of the second vehicle in the current period.
  • a scene library construction unit for:
  • the scene library is constructed according to the different scenes and the driving behaviors corresponding to the different scenes.
  • the present application also provides a computer storage medium, where instructions are stored in the computer storage medium, and when the instructions are executed on the computer, the computer is made to execute the vehicle driving according to the first aspect of the present application. Scene recognition method.
  • the present application further provides a computer program product including instructions, which, when the instructions are executed on a computer, cause the computer to execute the method for recognizing a vehicle driving scene as described in the first aspect of the present application.
  • Fig. 1 is the application scene system structure diagram provided by this application.
  • FIG. 2 is a flowchart of a method for identifying a vehicle driving scene provided by an embodiment of the present application
  • FIG. 3 is a schematic diagram of the effect after noise reduction and smoothing processing provided by an embodiment of the present application.
  • 4a is a schematic diagram of the positional relationship between the first vehicle and the second vehicle in a straight lane provided by an embodiment of the present application;
  • FIG. 4b is a schematic diagram of the positional relationship between the first vehicle and the second vehicle in a curved lane provided by an embodiment of the present application;
  • FIG. 5 is a schematic diagram of calculating similarity by combining a window function and a dynamic time warping algorithm provided by an embodiment of the present application
  • Fig. 6 is the similarity comparison diagram of two sequences provided in the embodiment of the present application.
  • FIG. 7a is a schematic diagram of the distance change between the second vehicle and the reference lane line when the second vehicle changes lanes to the left according to an embodiment of the present application;
  • 7b is a schematic diagram of the distance change between the second vehicle and the reference lane line when the second vehicle changes lanes to the right according to an embodiment of the present application;
  • FIG. 7c is a schematic diagram of the distance change between the first vehicle and the reference lane line when the first vehicle changes lanes to the left according to an embodiment of the present application;
  • FIG. 7d is a schematic diagram of the distance change between the first vehicle and the reference lane line when the first vehicle changes lanes to the right according to an embodiment of the present application;
  • FIG. 8 is a corresponding relationship diagram of driving scenarios and driving behaviors provided by an embodiment of the present application.
  • FIG. 9 is a flowchart of a method for recognizing a vehicle driving scene provided by an embodiment of the present application.
  • FIG. 10 is a probability distribution diagram of the second vehicle inserting into the driving lane of the first vehicle in the two areas provided by the present application;
  • FIG. 11 is a schematic diagram of a vehicle driving scene recognition device provided by an embodiment of the present application.
  • words such as “exemplary”, “for example” or “for example” are used to represent examples, illustrations or illustrations. Any embodiments or designs described in the embodiments of the present application as “exemplary,” “such as,” or “by way of example” should not be construed as preferred or advantageous over other embodiments or designs. Rather, use of words such as “illustrative,” “such as,” or “for example,” is intended to present the related concepts in a specific manner.
  • the term “and/or” is only an association relationship for describing associated objects, indicating that there may be three relationships, for example, A and/or B, which may indicate: A alone exists, A alone exists There is B, and there are three cases of A and B at the same time.
  • the term “plurality” means two or more.
  • multiple systems refer to two or more systems
  • multiple screen terminals refer to two or more screen terminals.
  • first and second are only used for descriptive purposes, and cannot be understood as indicating or implying relative importance or implying the indicated technical features.
  • a feature defined as “first” or “second” may expressly or implicitly include one or more of that feature.
  • the terms “including”, “including”, “having” and their variants mean “including but not limited to” unless specifically emphasized otherwise.
  • FIG. 1 is a structural diagram of an application system for vehicle driving scene recognition provided by the present application.
  • the application system includes: vehicles running on highways or main roads in urban areas and cloud servers for computing.
  • the vehicle is equipped with sensors, a positioning module (GPS or HD Map), and a wireless gateway T-Box responsible for data transmission between the sensors and the cloud server.
  • GPS positioning module
  • T-Box wireless gateway
  • the structure of the application system for vehicle driving scene recognition illustrated in the embodiments of the present application does not constitute a specific limitation to the present application.
  • the application system structure of vehicle driving scene recognition may include more or less modules than shown, or some modules are combined, or some modules are split, or different modules are arranged.
  • the illustrated modules may be implemented in hardware, or a combination of software and hardware.
  • the vehicle in the application system can be an intelligent vehicle that can simulate the operation of a driver, or an ordinary vehicle that requires manual driving. According to the energy consumption, the vehicle can be a new energy vehicle or an ordinary fuel vehicle.
  • the embodiments of the present application do not specifically limit the types of vehicles.
  • the driving behavior of the vehicle will be affected by the surrounding vehicles.
  • a certain vehicle in the application system is marked as the first vehicle, and the vehicles around the first vehicle are marked as the second vehicle.
  • the settings of the first vehicle and the second vehicle are relative, and the first vehicle and the second vehicle may be each other's surrounding vehicles.
  • a distance threshold may be set, and only the driving scene of the second vehicle within the distance threshold is identified. Setting the distance threshold can reduce the amount of data processing on the one hand, and save the cost of arranging sensors for the first vehicle on the other hand.
  • This embodiment of the present application does not specifically limit whether a sensor is installed on the second vehicle.
  • the camera is used to record image data corresponding to the driving scene of the second vehicle.
  • the working principle of the camera is to collect images through the lens, and then process the collected images by the internal photosensitive components and control components, and then convert them into digital signals that can be recognized by other systems; other systems obtain digital signals through the transmission port of the camera, and then proceed.
  • Image restoration can get images that are consistent with the actual scene.
  • the field of view of the camera to collect image data and the installation number and installation position of the camera can further design a feasible solution according to actual needs.
  • the embodiments of the present application do not specifically limit the field of view, installation quantity, and installation position of the cameras.
  • the type of camera can be selected according to the different needs of users, as long as it can realize basic functions such as video camera, communication and still image capture.
  • the camera may be one or more types of commonly used in-vehicle cameras, such as a binocular camera and a monocular camera.
  • the camera can also be one or two types of digital camera and analog camera, the difference between the two is that the image processing process of the lens is different.
  • the digital camera converts the collected analog signal into a digital signal for storage, while the analog camera uses a specific video capture card to convert the analog signal into a digital mode, compress it and store it.
  • the camera can also be one or both of the complementary metal oxide semiconductor (CMOS) type camera and the charge-coupled device (CCD) type camera. kind.
  • CMOS complementary metal oxide semiconductor
  • CCD charge-coupled device
  • the camera can also be one or more types of serial port, parallel port, Universal Serial Bus (Universal Serial Bus, USB) and FireWire interface (IEEE1394).
  • USB Universal Serial Bus
  • FireWire interface IEEE1394
  • Radar radio detection and ranging
  • Radar can be used to measure the distance between the first vehicle and different targets in road conditions, and can also be used to measure the speed of the second vehicle.
  • Radar is an electronic device that detects the target by emitting electromagnetic waves.
  • the working principle of the radar is to irradiate the target by emitting electromagnetic waves and receive its echoes, thereby obtaining the distance from the detected target to the electromagnetic wave emission point, and the rate of change of the distance (radial velocity). , bearing and altitude.
  • the type of radar can be selected according to the needs of the actual application scenario, which can be one or more of commonly used vehicle-mounted radars such as speed radar, lidar, millimeter-wave radar, and ultrasonic radar.
  • vehicle-mounted radars such as speed radar, lidar, millimeter-wave radar, and ultrasonic radar.
  • LiDAR has the advantages of high precision, high resolution and the ability to build a 3D model of the surrounding because it sends a laser beam for detection, and is often used in vehicle automatic driving systems or assisted driving systems; for example, adaptive cruise control (Adaptive cruise control, ACC) ), Forward Collision Warning (FCW), Lane Keeping Assist (LKA) and Automatic Parking (AP).
  • Adaptive cruise control ACC
  • FCW Forward Collision Warning
  • LKA Lane Keeping Assist
  • AP Automatic Parking
  • the radar installed on the first vehicle may also be other types of radars that can satisfy the functions involved in the embodiments of the present application.
  • a lane line detection method based on the density of radar scanning points is used. This method obtains the coordinates of the radar scanning points and converts them into a grid map, and maps the grid map with the original data. It is a direct coordinate grid map or a polar coordinate grid map. According to the needs of post-processing, the polar coordinate grid map is directly used for lane line recognition, that is, a grid with multiple point mappings is regarded as a lane line point.
  • the range of radar detection targets is related to its installation quantity, installation position and set detection distance.
  • the installation quantity, installation position and detection distance of radars can be deployed according to actual needs.
  • the embodiments of the present application do not specifically limit the factors affecting the radar detection range.
  • the first vehicle may also be equipped with an acceleration sensor.
  • the acceleration sensor is used to measure acceleration and centripetal acceleration when the first vehicle is traveling.
  • Acceleration sensors are usually composed of mass blocks, dampers, elastic elements, sensitive elements and adaptive circuits. During the acceleration process, the acceleration value is obtained by using Newton's second law by measuring the inertial force on the mass block.
  • Acceleration sensors commonly used in automobiles are piezoresistive acceleration sensors, and can also be other types of sensors, such as capacitive, inductive, strain, and piezoelectric.
  • the first vehicle may also be installed with sensors for collecting environmental data, and the environmental data may be used to judge the next driving scene of the first vehicle.
  • Environmental data includes but is not limited to temperature, humidity, air pressure, weather conditions, number of lanes, distance from traffic lights (traffic lights), ramp locations, prohibited areas, pedestrian locations, traffic light status information, etc.
  • the sensor may also include, but is not limited to, a positioning sensor, an inertial measurement unit (IMU), a temperature sensor, a humidity sensor, a gas detection sensor, an environmental sensor, or other sensors for collecting environmental data etc., which are not limited in the embodiments of the present application.
  • IMU inertial measurement unit
  • T-Box has the functions of long-distance wireless communication and CAN communication, and provides long-distance communication interface for the whole vehicle.
  • the various data collected by the sensor are uploaded to the cloud server, and the data sent by the cloud server is fed back to the vehicle terminal installed in the vehicle or to the control system of the vehicle, so as to realize the assisted driving of the driver or the vehicle control system to control the vehicle.
  • the in-vehicle terminal may be a mobile phone, a tablet computer or other mobile intelligent terminal used by the user, or an in-vehicle computer installed on the vehicle.
  • T-Box can also obtain vehicle status data through CAN bus interface, such as: vehicle information, vehicle controller information, motor controller information, battery management system BMS, on-board charger and mileage, average vehicle speed, fuel usage, Average fuel consumption, etc.
  • vehicle status data such as: vehicle information, vehicle controller information, motor controller information, battery management system BMS, on-board charger and mileage, average vehicle speed, fuel usage, Average fuel consumption, etc.
  • T-box can also provide computing or storage functions.
  • the positioning module is used to realize the collection of vehicle position information.
  • the positioning module can be a positioning module based on the global satellite positioning system (GPS), which can locate the vehicle by receiving GPS signals; it can also be a positioning module based on other satellite positioning systems, such as a positioning module based on the Beidou satellite positioning system, a positioning module based on the GLONER The positioning module of the GLONASS GPS and the positioning module based on the Galileo GPS.
  • GPS global satellite positioning system
  • FIG. 2 is a flowchart of a method for recognizing a driving scene of a vehicle provided by an embodiment of the present application. As shown in FIG. 2 , the method is applied to the first vehicle in the system shown in FIG. 1 , and the specific process of identifying the driving scene of the second vehicle around the first vehicle includes the following steps S1 to S4 .
  • Step S1 Obtain a feature sequence of the second vehicle in the current period according to the driving data of the first vehicle in the current period.
  • the above-mentioned driving data is collected by a sensor installed on the first vehicle.
  • the sensor collects the driving data of the first vehicle in the current period according to the preset sampling frequency.
  • the driving data includes: the distance between the first vehicle and the reference lane line, the relative distance between the first vehicle and the second vehicle, the speed of the first vehicle, the centripetal acceleration of the first vehicle, the speed of the second vehicle, and the deviation of the second vehicle from the first vehicle.
  • the included angle of the vertical direction of a vehicle is: the distance between the first vehicle and the reference lane line, the relative distance between the first vehicle and the second vehicle, the speed of the first vehicle, the centripetal acceleration of the first vehicle, the speed of the second vehicle, and the deviation of the second vehicle from the first vehicle.
  • the distance between the first vehicle and the reference lane line, the relative distance between the first vehicle and the second vehicle, and the speed of the first vehicle can be measured by radar, and the centripetal acceleration of the first vehicle can be measured by the acceleration sensor.
  • the included angle in the vertical direction of a vehicle can be obtained by calculating the image collected by the camera.
  • the sensor After the sensor collects the driving data, it is transmitted to the T-Box of the first vehicle.
  • the T-Box of the first vehicle uploads the driving data to the cloud server.
  • the cloud server After the cloud server receives the uploaded driving data, it first performs noise reduction and smoothing processing on the driving data.
  • a pre-designed mean filter, median filter, Gaussian filter or bilateral filter can be used to perform noise reduction and smoothing processing on the driving data.
  • Figure 3 shows a graph of the two types of data without noise reduction and smoothing. As can be seen from Figure 3, due to the interference of the environment to the sensor, there are many glitches in the data sequence before processing. Noise reduction and smoothing are performed to filter out the glitches in the sequence and improve the accuracy of the data.
  • the cloud server obtains the feature sequence of the second vehicle in the processed driving data.
  • the feature sequence of the second vehicle includes: a distance sequence between the second vehicle and the reference lane line, an angle sequence between the second vehicle and the reference lane line, and a speed sequence of the second vehicle.
  • the angle between the second vehicle and the reference lane line can be obtained according to the image collected by the camera installed on the first vehicle; the speed of the second vehicle can be obtained by the speed measuring radar of the first vehicle.
  • the left lane line of the lane where the first vehicle travels is used as the reference lane line.
  • the right side lane line of the lane where the first vehicle travels may be used as the reference lane line
  • the category of the lane in which the first vehicle travels is obtained.
  • the categories of lanes are generally divided into straight lanes and curved lanes.
  • the centripetal acceleration of the first vehicle when the centripetal acceleration of the first vehicle is zero, it is considered that the type of the lane in which the first vehicle travels is a straight lane, and when it is not zero, the type of the lane in which the first vehicle travels is a curved lane.
  • FIG. 4a a schematic diagram of the positional relationship between the first vehicle and the second vehicle in the straight lane is shown in FIG. 4a.
  • d Sx is the distance between the first vehicle and the second vehicle in the first direction
  • d Sy is the distance between the first vehicle and the second vehicle in the second direction.
  • the distance Ego_distoleft between the first vehicle and the reference lane line, and the distance d Sy between the first vehicle and the second vehicle in the second direction, can be obtained to obtain the distance sequence between the second vehicle and the reference lane line Obj_distoleft.
  • FIG. 4b a schematic diagram of the positional relationship between the first vehicle and the second vehicle in the curved lane is shown in FIG. 4b.
  • the curvature of the curved lane itself will increase the relative distance between the two vehicles. Therefore, it is necessary to perform curvature compensation on the distance sequence obtained above to eliminate the interference caused by the road surface characteristics to the distance sequence.
  • y off is the distance compensation sequence between the second vehicle and the reference lane line
  • r is the radius of the lane where the first vehicle travels
  • the significance of the sign function sign(a y ) is to determine whether the second vehicle is on the left or right side of the first vehicle according to the centripetal acceleration of the first vehicle, so as to obtain the sign of the distance y off , when the second vehicle is on the first vehicle When the vehicle is to the right, the sign of y off is negative, otherwise, the sign of y off is positive.
  • d is the intermediate quantity
  • the curvature compensation is performed on the reference sequence by using the distance compensation sequence, and the time sequence of the distance between the second vehicle and the reference lane line after compensation is obtained.
  • Step S2. Calculate the similarity between the feature sequence of the second vehicle in the current period and the preset driving behavior reference sequence.
  • this step firstly uses a preset window function to preprocess the feature sequence of the second vehicle, and then uses the dynamic time warping algorithm to calculate the similarity between the feature sequence and the preset driving behavior reference sequence Spend.
  • the feature sequence is first discretized by using a preset window function.
  • Common window functions include rectangular window, triangular window, Hanning window, Hamming window and Gaussian window.
  • the window function adopted in the embodiments of the present application is a rectangular window.
  • winlen represents the width of the scan window, and shift represents the scan offset.
  • the scanning window width is positively related to the length of the reference signal and positively related to the calculation time; the scanning offset is positively related to the sampling frequency and negatively related to the calculation time.
  • the specific values of the two parameters will affect the accuracy of identifying driving behavior and the cost of computing time. In practical applications, the values of the two parameters can be selected and set from the combined values listed in Table 1 according to the identification accuracy of the vehicle driving behavior and the actual requirements of the computing time cost.
  • Table 1 The combination value table of the scan window width and scan offset of the window function
  • Figure 6 shows the similarity comparison of the two sequences.
  • the dark curve in Figure 6 represents the reference sequence, and the light curve represents the sequence to be compared.
  • the similarity between two sequences can be identified from the change trend of the sequences.
  • the sequence on the left side of FIG. 6 is separated and enlarged to obtain the comparison diagram on the right side of FIG. 6 .
  • the dynamic time warping algorithm can be used to quantify this similarity and obtain the two sequences. similarity between.
  • the preset driving behavior reference sequence is a standard feature sequence of each driving behavior extracted by analyzing the collected historical data.
  • the similarity is obtained by the dynamic time warping algorithm, which improves the speed of identifying the driving behavior of the vehicle; the use of the window function solves the problem that the dynamic time warping algorithm can only measure the similarity between two discrete time series and cannot handle continuous time series. .
  • the preset driving behaviors include: left lane change, right lane change, left turn, right turn, left turn U-turn, right turn U-turn, acceleration, deceleration, and constant speed.
  • whether the second vehicle is performing the driving behavior of turning left, turning right, turning left, or turning right can be identified by the sequence of included angles between the second vehicle and the reference lane line.
  • the change trend of left turn is from an initial value greater than 0 and less than 90. 90.
  • the driving behavior is left steering; when the included angle sequence between the second vehicle and the reference lane line does not appear to be greater than 90°, but an included angle less than 0° appears in some time periods, the driving behavior of the second vehicle is determined to be right steering. Whether the second vehicle is performing acceleration, deceleration or constant speed driving behavior can be identified by the speed sequence of the second vehicle.
  • Whether the second vehicle is performing the driving behavior of changing lanes to the left or changing lanes can be identified through a sequence of distances between the second vehicle and the reference lane line.
  • the reference lane line as the left lane line of the first vehicle as an example, the difference in the distance change from the reference lane line when the second vehicle changes lanes left and right is described.
  • the right lane line of the lane where the first vehicle travels may also be used as the reference lane line.
  • Fig. 7a shows a schematic diagram of the distance change from the reference lane line when the second vehicle changes lanes left.
  • the horizontal axis of FIG. 7a represents time, and the vertical axis represents the distance between the second vehicle and the reference lane line.
  • the second vehicle when the second vehicle changes lanes to the left, when the reference lane line does not change, the second vehicle will gradually approach the reference lane line, and then the distance between the second vehicle and the reference lane line will gradually change from a certain initial value. become smaller.
  • the fact that the reference lane line does not change indicates that the first vehicle is changing lanes.
  • Fig. 7b shows a schematic diagram of the distance change from the reference lane line when the second vehicle changes lanes right.
  • the horizontal axis of FIG. 7b represents time, and the vertical axis represents the distance between the second vehicle and the reference lane line.
  • the change trend is opposite to the change trend of the left lane change.
  • the second vehicle gradually moves away from the reference lane line, and the second vehicle and the reference lane
  • the distance of the line first increases gradually from a certain initial value.
  • Fig. 7c shows a schematic diagram of the change of the distance from the reference lane line when the first vehicle changes lanes to the left.
  • the horizontal axis of FIG. 7c represents time, and the vertical axis represents the distance between the first vehicle and the reference lane line.
  • the distance between the first vehicle and the reference lane line first gradually decreases from a certain initial value to 0, and then the corresponding reference lane line also changes due to the change of the driving lane. , so the distance between the first vehicle and the reference lane line suddenly changes from 0 to the maximum value, and then shows a gradually decreasing trend.
  • the distance maximum in Figure 7a is related to the width of the lane in which the vehicle travels.
  • FIG. 7d shows a schematic diagram of the distance change from the reference lane line when the first vehicle changes lanes right.
  • the horizontal axis of FIG. 7d represents time
  • the vertical axis represents the distance between the first vehicle and the reference lane line.
  • the distance between the first vehicle and the reference lane line first gradually increases from a certain initial value to a maximum value, and then due to the change of the driving lane, the corresponding reference lane line also occurs. changes, the distance between the first vehicle and the reference lane line suddenly changes from the maximum value to 0, and then shows a gradually increasing trend.
  • the distance maximum of Figure 7d is also related to the width of the lane in which the vehicle travels.
  • Step S3. Obtain the driving behavior of the second vehicle according to the similarity.
  • the preset driving behavior corresponding to the reference sequence for calculating the similarity is the driving behavior of the second vehicle, according to which the current driving behavior can be obtained.
  • the second vehicle is in the Perform the appropriate driving behavior.
  • the second vehicle performs three types of driving behaviors of acceleration, deceleration or constant speed, as well as left lane change or right lane change driving behavior.
  • Step S4. Match the driving scene of the second vehicle in the preset scene library according to the driving behavior of the second vehicle.
  • This step is to use the scene library to classify the driving behavior of the second vehicle obtained in step S3 to obtain the driving scene of the second vehicle.
  • FIG. 8 shows a corresponding relationship diagram between driving scenarios and driving behaviors.
  • the driving scenarios include: lane keeping, steering, U-turn, lane change, and overtaking.
  • the driving behaviors corresponding to the lane keeping scenario include: vehicle acceleration, vehicle deceleration, and vehicle driving at a constant speed;
  • the driving behavior corresponding to the steering scene includes: the vehicle turns left and the vehicle turns right;
  • the driving behaviors corresponding to the U-turn scene include: the vehicle turns left and the vehicle turns right;
  • the driving behaviors corresponding to the lane change scenarios include: vehicle left lane change and vehicle right lane change;
  • the driving behaviors corresponding to the overtaking scenario include: the vehicle accelerates, the vehicle changes lanes to the left, the vehicle drives at a constant speed, the vehicle changes lanes to the right, and the vehicle decelerates.
  • the starting and parking scenarios can be obtained by judging the change of the speed of the vehicle. For example, when the speed of the vehicle gradually increases from 0, it can be confirmed that the vehicle executes the start-up scenario; when the vehicle gradually decreases from a certain speed to 0, it can be confirmed that the vehicle executes the parking scenario.
  • calculating the similarity between the feature sequence of the second vehicle and the preset driving behavior reference sequence in step S2 in the above-mentioned vehicle driving scene recognition method may further include: as shown in FIG. 9 Step S21 to Step S23.
  • Step S21 Identify the driving area of the first vehicle in the current time period.
  • a location module on the first vehicle may be used to determine the driving area of the first vehicle in the current time period.
  • the characteristics of the driving behavior of vehicles in different areas which are related to the driving habits and road topology characteristics of drivers in the area.
  • the embodiment of the present application performs statistical analysis on the historical data of vehicle cut-ins in two areas, and obtains a probability distribution diagram of the driving behavior of the second vehicle in the two areas. When the probabilities are the same, the temporal distances between vehicles in each area are different.
  • FIG. 10 shows a probability distribution diagram of the insertion of the second vehicle into the driving lane of the first vehicle in area 1 and area 2.
  • the time distance between two vehicles in area 1 is between ⁇ 1 - ⁇ 1 and ⁇ 1 + ⁇ 1
  • the time between two vehicles in area 2 The distance is between ⁇ 2 - ⁇ 2 and ⁇ 2 + ⁇ 2 .
  • ⁇ 1 is the mean value of the time distance between the first vehicle and the second vehicle when the second vehicle in area 1 changes lanes to the left
  • ⁇ 1 is the time distance between the first vehicle and the second vehicle when the second vehicle in area 1 changes lanes
  • ⁇ 2 is the mean value of the time distance between the first vehicle and the second vehicle when the second vehicle in area 2 changes lanes
  • ⁇ 2 is the second vehicle in area 2
  • Step S22 Extract the driving behavior reference sequence corresponding to the driving area in the database.
  • the database includes driving behavior reference sequences corresponding to different regions.
  • the driving behavior reference sequences corresponding to different regions are obtained by performing statistical analysis on the vehicle driving data corresponding to the corresponding driving behaviors.
  • Step S23 Combine the window function and the dynamic time warping algorithm to obtain the similarity between the feature sequence and the extracted driving behavior reference sequence.
  • the embodiment of the present application is based on the precondition that the characteristics of the driving behavior of vehicles in different areas are different.
  • the driving behavior benchmark sequence can improve the accuracy of similarity, reduce the amount of calculation, and improve the recognition speed.
  • the present application further provides a vehicle driving scene recognition device.
  • the identification device is deployed in the cloud server, and can be configured to communicate with the vehicle-mounted terminal on the first vehicle, so as to feed back the identification result of the identification device to the user.
  • FIG. 11 shows a vehicle driving scene recognition device provided by an embodiment of the present application. As shown in Figure 11, the identification device specifically includes:
  • a data acquisition unit for acquiring the driving data of the first vehicle in the current period
  • a feature obtaining unit configured to obtain a feature sequence of the second vehicle according to the driving data
  • a similarity calculation unit configured to obtain the similarity between the feature sequence of the second vehicle and a plurality of preset driving behavior reference sequences by using a dynamic time warping algorithm
  • a behavior recognition unit configured to determine the driving behavior of the second vehicle in the current period according to the similarity
  • the scene matching unit is configured to use a pre-built scene library to match the driving scene of the second vehicle in the current period according to the driving behavior of the second vehicle in the current period.
  • the similarity calculation unit is also used for:
  • a preset driving behavior reference sequence corresponding to the area where the first vehicle travels is extracted from the database.
  • the feature acquisition unit is also used for:
  • Noise reduction and smoothing are performed on the driving data.
  • the reference lane line is one lane line of the lane in which the first vehicle travels.
  • the lane in which the first vehicle travels is a curved lane
  • the device further includes:
  • the curvature compensation unit is used to perform curvature compensation on the distance sequence according to the curvature of the curved lane.
  • the behavior recognition unit is specifically used for:
  • the driving behavior represented by the driving behavior reference sequence corresponding to the similarity satisfying the preset condition is taken as the driving behavior of the second vehicle in the current period.
  • the apparatus further includes a scene library construction unit for:
  • the scene library is constructed according to different scenes and driving behaviors corresponding to the different scenes.
  • the vehicle-mounted terminal may be one of a smart phone, a tablet computer, or a vehicle-mounted computer.
  • the vehicle terminal includes a processor, a display module and a data interface.
  • the processor may be a general purpose processor or a special purpose processor.
  • a processor may include a central processing unit (CPU) and/or a baseband processor.
  • the baseband processor may be used to process communication data, and the CPU may be used to implement corresponding control and processing functions, execute software programs, and process data of software programs.
  • the data interface is used to receive data
  • the display module is used to display driving behavior and scene recognition results.
  • the processor receives data through a data interface, and sends a display instruction to the display module after calculation, so as to display the result sent by the cloud.
  • the vehicle terminal may also include: a charging management module, a power management module, a battery, an antenna, a mobile communication module, a wireless communication module, an audio module, a speaker, an earphone interface, an audio Bluetooth module, a display screen, a modem, and a baseband processor.
  • a charging management module a power management module, a battery, an antenna, a mobile communication module, a wireless communication module, an audio module, a speaker, an earphone interface, an audio Bluetooth module, a display screen, a modem, and a baseband processor.
  • the charging management module can receive the charging input of the wired charger through the USB interface.
  • the charging management module may receive wireless charging input through the wireless charging coil of the terminal device. While the charging management module charges the battery, it can also supply power to other devices through the power management module.
  • the power management module is used to connect the battery, the charge management module and the processor.
  • the power management module receives input from the battery and/or charging management module, and supplies power to the processor, the display screen, and the wireless communication module.
  • the power management module can also be used to monitor parameters such as battery capacity, battery cycle times, battery health status (leakage, impedance).
  • the power management module may also be provided in the processor.
  • the power management module and the charging management module may also be provided in the same device.
  • the wireless communication function of the vehicle terminal can realize the communication with the server through the antenna, mobile communication module, wireless communication module, modem and baseband processor.
  • the mobile communication module can provide wireless communication solutions including 2G/3G/4G/5G applied on the vehicle terminal.
  • the mobile communication module may include at least one filter, switch, power amplifier, low noise amplifier (LNA) and the like.
  • the mobile communication module can receive electromagnetic waves by at least two antennas, filter and amplify the received electromagnetic waves, and transmit them to the modem for demodulation.
  • the mobile communication module can also amplify the signal modulated by the modem and radiate it into electromagnetic waves through the antenna.
  • at least part of the functional modules of the mobile communication module may be provided in the same device as at least part of the modules of the processor.
  • a modem may include a modulator and a demodulator.
  • the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low frequency baseband signal is processed by the baseband processor and passed to the application processor.
  • the modem may be a stand-alone device.
  • the modem may be independent of the processor, and may be provided in the same device as the mobile communication module or other functional modules.
  • the mobile communication module may be a module in a modem.
  • the wireless communication module can provide applications on terminal devices including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), global navigation satellite systems (GNSS) , frequency modulation (frequency modulation, FM), near field communication technology (near field communication, NFC), infrared technology (infrared, IR) and other wireless communication solutions.
  • WLAN wireless local area networks
  • GNSS global navigation satellite systems
  • FM frequency modulation
  • FM near field communication technology
  • NFC near field communication technology
  • infrared technology infrared, IR
  • the wireless communication module may be one or more devices integrating at least one communication processing module.
  • the wireless communication module receives electromagnetic waves through the antenna, frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor.
  • the wireless communication module can also receive the signal to be sent from the processor, frequency-modulate it, amplify it, and radiate it into electromagnetic waves through the antenna.
  • the vehicle-mounted terminal can communicate with the server through the mobile communication module and the wireless communication module, receive the identification result issued by the server, or transmit data to the server.
  • the display screen is used to display the recognized driving behavior and driving scenarios in the form of images or videos.
  • the display includes a display panel.
  • the display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode or an active-matrix organic light-emitting diode (active-matrix organic light).
  • LED organic light-emitting diode
  • AMOLED organic light-emitting diode
  • FLED flexible light-emitting diode
  • Miniled MicroLed, Micro-oLed
  • quantum dot light-emitting diode quantum dot light emitting diodes, QLED
  • the vehicle-mounted terminal may include one or more display screens.
  • the display screen may also be used to display the interface of the application, displaying visual controls in the interface of the application.
  • the audio module is used to convert digital audio information into analog audio signal output, and also used to convert analog audio input to digital audio signal.
  • the audio module can also be used to encode and decode audio signals.
  • the audio module may be provided in the processor, or some functional modules of the audio module may be provided in the processor.
  • the audio module is used to feed back the recognition result to the user in the form of speech.
  • the loudspeaker is used for feedback of the external sound to the user.
  • the headphone jack is used to connect wired headphones.
  • the headphone interface can be a USB interface, or a 3.5mm open mobile terminal platform (OMTP) standard interface or a cellular telecommunications industry association of the USA (CTIA) standard interface.
  • OMTP open mobile terminal platform
  • CTIA cellular telecommunications industry association of the USA
  • the audio Bluetooth module is used to connect the user's Bluetooth headset. This embodiment of the present application does not limit the Bluetooth technology version applied to the audio Bluetooth module, and the Bluetooth chip may be a chip applying any version of the Bluetooth technology.
  • the user can receive the voice corresponding to the recognition result in wired or wireless form through the headphone interface or the audio Bluetooth module.
  • the present application further provides a computer storage medium, where instructions are stored in the computer storage medium, and when the instructions are executed on the computer, the computer is made to execute a vehicle driving scene recognition method as in the embodiments of the present application .
  • the present application further provides a computer program product containing instructions, when the instructions are run on a computer, the computer executes a vehicle driving scene recognition method as in the embodiments of the present application.
  • the method steps in the embodiments of the present application may be implemented in a hardware manner, or may be implemented in a manner in which a processor executes software instructions.
  • the software instructions may be composed of corresponding software modules, and the software modules may be stored in random access memory (RAM), flash memory, read-only memory (ROM), programmable read-only memory (PROM), Erasable Programmable Read-Only Memory (erasablePROM, EPROM), Electrically Erasable Programmable Read-Only Memory (electrically EPROM, EEPROM), registers, hard disks, removable hard disks, CD-ROMs, or any other form of storage medium known in the art middle.
  • An illustrative storage medium is coupled to the processor, such that the processor can read information from, and write information to, the storage medium.
  • the storage medium can also be an integral part of the processor.
  • the processor and storage medium may reside in an ASIC.
  • the above-mentioned embodiments it may be implemented in whole or in part by software, hardware, firmware or any combination thereof.
  • software it can be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, all or part of the processes or functions described in the embodiments of the present application are generated.
  • the computer may be a general purpose computer, special purpose computer, computer network, or other programmable device.
  • the computer instructions may be stored in or transmitted over a computer-readable storage medium.
  • the computer instructions can be sent from one website site, computer, server, or data center to another website site by wire (eg, coaxial cable, fiber optic, digital subscriber line (DSL)) or wireless (eg, infrared, wireless, microwave, etc.) , computer, server or data center.
  • the computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device such as a server, a data center, or the like that includes an integration of one or more available media.
  • the usable media may be magnetic media (eg, floppy disks, hard disks, magnetic tapes), optical media (eg, DVDs), or semiconductor media (eg, solid state disks (SSDs)), and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present application provides an autonomous driving scenario identifying method and apparatus. The solution comprises: obtaining a feature sequence of a second vehicle according to driving data of a first vehicle in a current time period; then, according to the obtained feature sequence, identifying a driving behavior of the second vehicle around the first vehicle by using a dynamic time warping algorithm; and finally, matching a driving scenario of the second vehicle by using a scenario library. According to the present solution, the efficiency and accuracy of identifying a driving behavior of a vehicle are improved, and the cost of identifying the driving behavior of the vehicle is reduced. Moreover, the driving scenario of a second vehicle around a first vehicle is actively identified, a reference basis is provided for a driving operation at the next stage of the first vehicle, and the driving safety of the first vehicle is improved.

Description

车辆驾驶场景识别方法及装置Vehicle driving scene recognition method and device 技术领域technical field
本申请涉及智能驾驶技术领域,尤其涉及一种车辆驾驶场景识别方法及装置。The present application relates to the technical field of intelligent driving, and in particular, to a method and device for recognizing a driving scene of a vehicle.
背景技术Background technique
随着自动驾驶和辅助驾驶技术的发展,越来越多的公司在研发或已经量产,可在实际车道中控制车辆自行行驶的驾驶系统,从而使得安装该驾驶系统的车辆具备无人干预的自动驾驶功能。With the development of autonomous driving and assisted driving technologies, more and more companies are developing or mass-producing a driving system that can control the vehicle's own driving in the actual lane, so that the vehicle equipped with the driving system has the ability to operate without human intervention. Autopilot function.
当前对车辆驾驶行为进行识别的智能驾驶解决方案,通常是通过车队收集车辆驾驶数据预先训练深度学习模型,再利用训练后的深度学习模型对车辆的驾驶行为进行识别。例如在确定是否发生前方车辆切入动作时,现有技术通常是先捕捉自身车辆自身及前向所有车道上车辆的驾驶数据,数据包括车辆位置和车辆速度,通过预先训练的识别模块进行动作识别。如果识别出发生了切入动作,则将切入动作完整的数据,包括:本车和切入车辆的驾驶数据以及道路情况发送至云端服务器,云端根据上传的数据识别是否发生前方车辆切入的动作。The current intelligent driving solutions for identifying vehicle driving behaviors usually collect vehicle driving data from the fleet to pre-train a deep learning model, and then use the trained deep learning model to identify the driving behavior of the vehicle. For example, when determining whether the front vehicle cuts in, the prior art usually first captures the driving data of the own vehicle itself and vehicles in all the forward lanes, including vehicle position and vehicle speed, and uses a pre-trained recognition module to perform action recognition. If it is recognized that a cut-in action has occurred, the complete data of the cut-in action, including: driving data of the vehicle and the cut-in vehicle and road conditions, is sent to the cloud server, and the cloud identifies whether the cut-in action of the preceding vehicle has occurred according to the uploaded data.
随着机器学习模型变得越来越复杂,例如更深层次的神经网络,对训练数据集量级的必要性也相应增加。因此,在处理相邻车道的车辆“切入”动作判断/识别时,如果想更加准确地模仿驾驶员的预判和分析水准,就需要提供大规模的实际道路数据训练模型,训练数据集积累较困难且成本偏高。同时,在训练模型时,没有区分区域,本地相关性差,识别出的结果准确度不高。As machine learning models become more complex, such as deeper neural networks, the need for training datasets increases accordingly. Therefore, if you want to more accurately imitate the driver's prediction and analysis level when dealing with the judgment/recognition of the "cut-in" action of the vehicle in the adjacent lane, it is necessary to provide a large-scale actual road data training model, and the accumulation of training data sets is relatively high. difficult and expensive. At the same time, when training the model, there is no distinction between regions, the local correlation is poor, and the accuracy of the identified results is not high.
发明内容SUMMARY OF THE INVENTION
本申请提供了一种车辆驾驶场景识别方法及装置,采用动态时间规整算法计算相似度和识别车辆行驶区域相结合的方法获得车辆的驾驶行为,以解决现有技术中通过模型识别驾驶行为时存在的训练数据集积累较困难且成本高的问题,以及在场景库匹配车辆的驾驶场景,实现车辆驾驶场景识别的目的。The present application provides a method and device for recognizing a driving scene of a vehicle, which uses a dynamic time warping algorithm to calculate the similarity and recognizes the driving area of the vehicle to obtain the driving behavior of the vehicle, so as to solve the problem in the prior art when the driving behavior is recognized by the model. It is difficult and costly to accumulate the training data set of the vehicle, and the driving scene of the vehicle is matched in the scene library to realize the purpose of vehicle driving scene recognition.
第一方面,本申请提供一种车辆驾驶场景识别方法,该方法包括:In a first aspect, the present application provides a method for identifying a vehicle driving scene, the method comprising:
获取第一车辆在当前时段的行驶数据;obtain the driving data of the first vehicle in the current period;
根据所述行驶数据获得第二车辆的特征序列,所述第二车辆用于指示所述第一车辆周围的车辆,所述特征序列包括:速度序列、所述第二车辆与参考车道线的距离序列以及所述第二车辆与参考车道线的夹角序列;A feature sequence of a second vehicle is obtained according to the driving data, the second vehicle is used to indicate vehicles around the first vehicle, and the feature sequence includes: a speed sequence, a distance between the second vehicle and a reference lane line a sequence and a sequence of angles between the second vehicle and the reference lane line;
利用动态时间规整算法,获得所述第二车辆的特征序列与预设的驾驶行为基准序列之间的相似度;Using a dynamic time warping algorithm to obtain the similarity between the feature sequence of the second vehicle and a preset driving behavior reference sequence;
根据所述相似度确定所述第二车辆在所述当前时段的驾驶行为;determining the driving behavior of the second vehicle in the current period according to the similarity;
根据所述第二车辆在当前时段的驾驶行为,利用预先构建的场景库匹配所述第二车辆在所述当前时段的驾驶场景。According to the driving behavior of the second vehicle in the current period, a pre-built scene library is used to match the driving scene of the second vehicle in the current period.
由上,本申请采用动态时间规整算法计算相似度的方法识别第二车辆的驾驶行为,相比通过训练后的模型进行识别,提高了识别驾驶行为的准确度;基于动态时间规整算法可以高效处理大量数据流且时间短的特点,提高了识别驾驶行为的效率;此外,本申请通过第二车辆的驾驶行为主动匹配该车辆的驾驶场景,可为第一车辆的驾驶操作提供参考,在一定程度上可提高第一车辆的安全性。From the above, the present application adopts the method of calculating the similarity of the dynamic time warping algorithm to identify the driving behavior of the second vehicle, which improves the accuracy of identifying the driving behavior compared to the recognition through the trained model; the dynamic time warping algorithm can efficiently process the driving behavior. The characteristics of a large number of data streams and short time improve the efficiency of identifying driving behavior; in addition, the present application actively matches the driving scene of the second vehicle through the driving behavior of the vehicle, which can provide a reference for the driving operation of the first vehicle, to a certain extent can improve the safety of the first vehicle.
在一个可能的实施例中,在所述利用动态时间规整算法,获得所述第二车辆的特征序列与驾驶行为基准序列之间的相似度之前,所述方法还包括:In a possible embodiment, before obtaining the similarity between the feature sequence of the second vehicle and the driving behavior reference sequence by using a dynamic time warping algorithm, the method further includes:
识别所述第一车辆行驶的区域;identifying the area in which the first vehicle travels;
在数据库中提取所述第一车辆行驶的区域对应的所述预设的驾驶行为基准序列,所述数据库包括不同区域对应的驾驶行为基准序列。The preset driving behavior reference sequence corresponding to the area where the first vehicle travels is extracted from a database, where the database includes driving behavior reference sequences corresponding to different areas.
由上,本申请识别第一车辆行驶的区域可进一步提高识别其周围第二车辆驾驶行为的速度,进一步节约识别驾驶行为的时间成本。From the above, the present application can further improve the speed of identifying the driving behavior of the second vehicle around it by identifying the area where the first vehicle is traveling, and further save the time cost of identifying the driving behavior.
在一个可能的实施例中,在根据所述行驶数据获得第二车辆的特征序列之前,所述方法还包括:In a possible embodiment, before obtaining the feature sequence of the second vehicle according to the driving data, the method further includes:
对所述行驶数据做降噪处理和平滑处理。Noise reduction processing and smoothing processing are performed on the driving data.
由上,本申请对行驶数据进行处理,可减少数据采集误差对识别驾驶行为的影响,提高识别的准确度。From the above, by processing the driving data in the present application, the influence of the data collection error on the recognition of driving behavior can be reduced, and the recognition accuracy can be improved.
在一个可能的实施例中,所述参考车道线为所述第一车辆行驶的车道的一个车道线。In a possible embodiment, the reference lane line is a lane line of the lane in which the first vehicle travels.
在一个可能的实施例中,所述第一车辆行驶的车道为曲线车道,所述方法还包括:In a possible embodiment, the lane in which the first vehicle travels is a curved lane, and the method further includes:
按照所述曲线车道的曲率对所述距离序列进行曲率补偿。The distance sequence is curvature compensated according to the curvature of the curved lane.
由上,第一车辆行驶在曲线车道时,本申请对距离序列进行曲率补偿,可有效降低路 面特征对采集的行驶数据的影响,提高识别驾驶行为的准确度。From the above, when the first vehicle is driving in a curved lane, the present application performs curvature compensation on the distance sequence, which can effectively reduce the influence of road surface features on the collected driving data and improve the accuracy of identifying driving behavior.
在一个可能的实施例中,所述根据所述相似度确定所述第二车辆在所述当前时段的驾驶行为包括:In a possible embodiment, the determining, according to the similarity, the driving behavior of the second vehicle in the current period includes:
判断所述相似度是否满足预设条件;Judging whether the similarity satisfies a preset condition;
将满足所述预设条件的相似度对应的驾驶行为基准序列表征的驾驶行为,作为所述第二车辆在所述当前时段的驾驶行为。The driving behavior represented by the driving behavior reference sequence corresponding to the degree of similarity satisfying the preset condition is used as the driving behavior of the second vehicle in the current period.
在一个可能的实施例中,还包括:In a possible embodiment, it also includes:
根据不同场景对应的历史行驶图像,获得所述不同场景对应的驾驶行为;obtaining driving behaviors corresponding to the different scenarios according to the historical driving images corresponding to the different scenarios;
根据所述不同场景及所述不同场景对应的驾驶行为,构建所述场景库。The scene library is constructed according to the different scenes and the driving behaviors corresponding to the different scenes.
第二方面,本申请还提供一种车辆驾驶场景识别装置,该装置包括:In a second aspect, the present application further provides a vehicle driving scene recognition device, the device comprising:
数据获取单元,用于获取第一车辆在当前时段的行驶数据;a data acquisition unit for acquiring the driving data of the first vehicle in the current period;
特征获取单元,用于根据所述行驶数据获得第二车辆的特征序列,所述第二车辆用于指示所述第一车辆周围的车辆,所述特征序列包括:速度序列、所述第二车辆与参考车道线的距离序列和所述第二车辆与参考车道线的夹角序列;a feature acquisition unit, configured to obtain a feature sequence of a second vehicle according to the driving data, the second vehicle is used to indicate vehicles around the first vehicle, and the feature sequence includes: a speed sequence, the second vehicle a sequence of distances from the reference lane line and a sequence of included angles between the second vehicle and the reference lane line;
相似度计算单元,用于利用动态时间规整算法,获得所述第二车辆的特征序列与预设的多个驾驶行为基准序列之间的相似度;a similarity calculation unit, configured to obtain the similarity between the feature sequence of the second vehicle and a plurality of preset driving behavior reference sequences by using a dynamic time warping algorithm;
行为识别单元,用于根据所述相似度确定所述第二车辆在所述当前时段的驾驶行为;a behavior recognition unit, configured to determine the driving behavior of the second vehicle in the current period according to the similarity;
场景匹配单元,用于根据所述第二车辆在当前时段的驾驶行为,利用预先构建的场景库匹配所述第二车辆在所述当前时段的驾驶场景。A scene matching unit, configured to match the driving scene of the second vehicle in the current period by using a pre-built scene library according to the driving behavior of the second vehicle in the current period.
在一个可能的实施例中,所述相似度计算单元还用于:In a possible embodiment, the similarity calculation unit is further used for:
识别所述第一车辆行驶的区域;identifying the area in which the first vehicle travels;
在数据库中提取所述第一车辆行驶的区域对应的所述预设的驾驶行为基准序列,所述数据库包括不同区域对应的驾驶行为基准序列。The preset driving behavior reference sequence corresponding to the area where the first vehicle travels is extracted from a database, where the database includes driving behavior reference sequences corresponding to different areas.
在一个可能的实施例中,所述特征获取单元还用于:In a possible embodiment, the feature acquisition unit is further configured to:
对所述行驶数据做降噪处理和平滑处理。Noise reduction processing and smoothing processing are performed on the driving data.
在一个可能的实施例中,所述参考车道线为所述第一车辆行驶的车道的一个车道线。In a possible embodiment, the reference lane line is a lane line of the lane in which the first vehicle travels.
在一个可能的实施例中,所述第一车辆行驶的车道为曲线车道,所述装置还包括:In a possible embodiment, the lane in which the first vehicle travels is a curved lane, and the apparatus further includes:
曲率补偿单元,用于按照所述曲线车道的曲率对所述距离序列进行曲率补偿。A curvature compensation unit, configured to perform curvature compensation on the distance sequence according to the curvature of the curved lane.
在一个可能的实施例中,所述行为识别单元具体用于:In a possible embodiment, the behavior identification unit is specifically used for:
判断所述相似度是否满足预设条件;Judging whether the similarity satisfies a preset condition;
将满足所述预设条件的相似度对应的驾驶行为基准序列表征的驾驶行为,作为所述第二车辆在所述当前时段的驾驶行为。The driving behavior represented by the driving behavior reference sequence corresponding to the degree of similarity satisfying the preset condition is used as the driving behavior of the second vehicle in the current period.
在一个可能的实施例中,还包括场景库构建单元,用于:In a possible embodiment, it also includes a scene library construction unit for:
根据不同场景对应的历史行驶图像,获得所述不同场景对应的驾驶行为;obtaining driving behaviors corresponding to the different scenarios according to the historical driving images corresponding to the different scenarios;
根据所述不同场景及所述不同场景对应的驾驶行为,构建所述场景库。The scene library is constructed according to the different scenes and the driving behaviors corresponding to the different scenes.
第三方面,本申请还提供一种计算机存储介质,所述计算机存储介质中存储有指令,当所述指令在计算机上运行时,使得计算机执行如本申请第一方面所述的一种车辆驾驶场景识别方法。In a third aspect, the present application also provides a computer storage medium, where instructions are stored in the computer storage medium, and when the instructions are executed on the computer, the computer is made to execute the vehicle driving according to the first aspect of the present application. Scene recognition method.
第四方面,本申请还提供一种包含指令的计算机程序产品,当所述指令在计算机上运行时,使得所述计算机执行如本申请第一方面所述的一种车辆驾驶场景识别方法。In a fourth aspect, the present application further provides a computer program product including instructions, which, when the instructions are executed on a computer, cause the computer to execute the method for recognizing a vehicle driving scene as described in the first aspect of the present application.
附图说明Description of drawings
图1是本申请提供的应用场景系统结构图;Fig. 1 is the application scene system structure diagram provided by this application;
图2是本申请实施例提供的一种车辆驾驶场景识别方法的流程图;2 is a flowchart of a method for identifying a vehicle driving scene provided by an embodiment of the present application;
图3是本申请实施例提供的经降噪和平滑处理后的效果示意图;3 is a schematic diagram of the effect after noise reduction and smoothing processing provided by an embodiment of the present application;
图4a是本申请实施例提供的第一车辆和第二车辆在直线车道的位置关系示意图;4a is a schematic diagram of the positional relationship between the first vehicle and the second vehicle in a straight lane provided by an embodiment of the present application;
图4b是本申请实施例提供的第一车辆和第二车辆在曲线车道的位置关系示意图;FIG. 4b is a schematic diagram of the positional relationship between the first vehicle and the second vehicle in a curved lane provided by an embodiment of the present application;
图5是本申请实施例提供的结合窗函数和动态时间规整算法计算相似度的示意图;5 is a schematic diagram of calculating similarity by combining a window function and a dynamic time warping algorithm provided by an embodiment of the present application;
图6是本申请实施例提供的两个序列的相似性对比图;Fig. 6 is the similarity comparison diagram of two sequences provided in the embodiment of the present application;
图7a是本申请实施例提供的第二车辆左换道时与参考车道线的距离变化示意图;FIG. 7a is a schematic diagram of the distance change between the second vehicle and the reference lane line when the second vehicle changes lanes to the left according to an embodiment of the present application;
图7b是本申请实施例提供的第二车辆右换道时与参考车道线的距离变化示意图;7b is a schematic diagram of the distance change between the second vehicle and the reference lane line when the second vehicle changes lanes to the right according to an embodiment of the present application;
图7c是本申请实施例提供的第一车辆左换道时与参考车道线的距离变化示意图;FIG. 7c is a schematic diagram of the distance change between the first vehicle and the reference lane line when the first vehicle changes lanes to the left according to an embodiment of the present application;
图7d是本申请实施例提供的第一车辆右换道时与参考车道线的距离变化示意图;FIG. 7d is a schematic diagram of the distance change between the first vehicle and the reference lane line when the first vehicle changes lanes to the right according to an embodiment of the present application;
图8是本申请实施例提供的驾驶场景及驾驶行为的对应关系图;FIG. 8 is a corresponding relationship diagram of driving scenarios and driving behaviors provided by an embodiment of the present application;
图9是本申请实施例提供的一种车辆驾驶场景识别方法的流程图;9 is a flowchart of a method for recognizing a vehicle driving scene provided by an embodiment of the present application;
图10是本申请提供的两个区域中第二车辆插入第一车辆行驶车道的概率分布图;10 is a probability distribution diagram of the second vehicle inserting into the driving lane of the first vehicle in the two areas provided by the present application;
图11是本申请实施例提供的车辆驾驶场景识别装置示意图。FIG. 11 is a schematic diagram of a vehicle driving scene recognition device provided by an embodiment of the present application.
具体实施方式Detailed ways
为了使本申请实施例的目的、技术方案和优点更加清楚,下面将结合附图,对本申请实施例中的技术方案进行描述。In order to make the objectives, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings.
在本申请实施例的描述中,“示性的”、“例如”或者“举例来说”等词用于表示作例子、例证或说明。本申请实施例中被描述为“示性的”、“例如”或者“举例来说”的任何实施例或设计方案不应被解释为比其它实施例或设计方案更优选或更具优势。确切而言,使用“示性的”、“例如”或者“举例来说”等词旨在以具体方式呈现相关概念。In the description of the embodiments of the present application, words such as "exemplary", "for example" or "for example" are used to represent examples, illustrations or illustrations. Any embodiments or designs described in the embodiments of the present application as "exemplary," "such as," or "by way of example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, use of words such as "illustrative," "such as," or "for example," is intended to present the related concepts in a specific manner.
在本申请实施例的描述中,术语“和/或”,仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,单独存在B,同时存在A和B这三种情况。另外,除非另有说明,术语“多个”的含义是指两个或两个以上。例如,多个系统是指两个或两个以上的系统,多个屏幕终端是指两个或两个以上的屏幕终端。此外,术语“第一”、“第二”仅用于描述目的,而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括一个或者更多个该特征。术语“包括”、“包含”、“具有”及它们的变形都意味着“包括但不限于”,除非是以其他方式另外特别强调。In the description of the embodiments of the present application, the term "and/or" is only an association relationship for describing associated objects, indicating that there may be three relationships, for example, A and/or B, which may indicate: A alone exists, A alone exists There is B, and there are three cases of A and B at the same time. Also, unless stated otherwise, the term "plurality" means two or more. For example, multiple systems refer to two or more systems, and multiple screen terminals refer to two or more screen terminals. In addition, the terms "first" and "second" are only used for descriptive purposes, and cannot be understood as indicating or implying relative importance or implying the indicated technical features. Thus, a feature defined as "first" or "second" may expressly or implicitly include one or more of that feature. The terms "including", "including", "having" and their variants mean "including but not limited to" unless specifically emphasized otherwise.
图1是本申请提供的车辆驾驶场景识别的应用系统结构图。如图1所示,应用系统包括:高速公路或城区主干道上行驶的车辆和用于计算的云端服务器。车辆上安装有传感器、定位模块(GPS或导航地图HD Map)、以及负责传感器和云端服务器之间数据传输的无线网关T-Box。FIG. 1 is a structural diagram of an application system for vehicle driving scene recognition provided by the present application. As shown in Figure 1, the application system includes: vehicles running on highways or main roads in urban areas and cloud servers for computing. The vehicle is equipped with sensors, a positioning module (GPS or HD Map), and a wireless gateway T-Box responsible for data transmission between the sensors and the cloud server.
可以理解的是,本申请实施例示意的车辆驾驶场景识别的应用系统的结构并不构成对本申请的具体限定。在本申请另一些实施例中,车辆驾驶场景识别的应用系统结构可以包括比图示更多或更少的模块,或者组合某些模块,或者拆分某些模块,或者不同的模块布置。图示的模块可以以硬件,或软件和硬件的组合实现。It can be understood that the structure of the application system for vehicle driving scene recognition illustrated in the embodiments of the present application does not constitute a specific limitation to the present application. In other embodiments of the present application, the application system structure of vehicle driving scene recognition may include more or less modules than shown, or some modules are combined, or some modules are split, or different modules are arranged. The illustrated modules may be implemented in hardware, or a combination of software and hardware.
应用系统中的车辆可以是能够模拟驾驶员操作的智能车辆,也可以是需要人工驾驶的普通车辆。按照消耗的能源来划分,车辆可以是新能源汽车,也可以是普通的燃油汽车。本申请实施例不对车辆的类别做具体限定。The vehicle in the application system can be an intelligent vehicle that can simulate the operation of a driver, or an ordinary vehicle that requires manual driving. According to the energy consumption, the vehicle can be a new energy vehicle or an ordinary fuel vehicle. The embodiments of the present application do not specifically limit the types of vehicles.
在实际行驶中,车辆的驾驶行为会受到周围车辆的影响,对周围车辆的驾驶行为及驾驶场景进行快速识别,可以辅助驾驶员或者驾驶操作系统做出本车的驾驶操作。在本申请实施例中,为方便表述,将应用系统中某一车辆标记为第一车辆,第一车辆周围的车辆标记为第二车辆。本领域技术人员应知晓,第一车辆和第二车辆的设定是相对的,第一车辆和第二车辆可以互为彼此的周围车辆。In actual driving, the driving behavior of the vehicle will be affected by the surrounding vehicles. Quickly identify the driving behavior and driving scenes of the surrounding vehicles, which can assist the driver or the driving operating system to make the driving operation of the vehicle. In the embodiment of the present application, for the convenience of description, a certain vehicle in the application system is marked as the first vehicle, and the vehicles around the first vehicle are marked as the second vehicle. Those skilled in the art should know that the settings of the first vehicle and the second vehicle are relative, and the first vehicle and the second vehicle may be each other's surrounding vehicles.
当对第一车辆周围的第二车辆进行场景识别时,先利用第一车辆上安装的各种传感器采集自身及周围车辆的行驶数据,例如摄像头和雷达;然后利用第一车辆安装的T-Box将 传感器采集的行驶数据上传至云端服务器;云端服务器则根据上传的数据识别第二车辆的驾驶行为,将其作为第一车辆下一步执行驾驶操作的参考依据。When performing scene recognition on the second vehicle around the first vehicle, first use various sensors installed on the first vehicle to collect the driving data of itself and surrounding vehicles, such as cameras and radars; then use the T-Box installed on the first vehicle The driving data collected by the sensor is uploaded to the cloud server; the cloud server identifies the driving behavior of the second vehicle according to the uploaded data, and uses it as a reference for the next driving operation of the first vehicle.
在实际驾驶中,当第二车辆距离第一车辆太远时,不会对第一车辆的驾驶操作造成影响。因此,可设置一个距离阈值,只识别距离阈值内的第二车辆的驾驶场景。设置距离阈值,一方面可以减少数据处理量,另一方面可以节省为第一车辆布置传感器产生的成本。本申请实施例对第二车辆是否安装传感器不做具体限定。In actual driving, when the second vehicle is too far away from the first vehicle, the driving operation of the first vehicle will not be affected. Therefore, a distance threshold may be set, and only the driving scene of the second vehicle within the distance threshold is identified. Setting the distance threshold can reduce the amount of data processing on the one hand, and save the cost of arranging sensors for the first vehicle on the other hand. This embodiment of the present application does not specifically limit whether a sensor is installed on the second vehicle.
摄像头用来记录第二车辆的驾驶场景对应的图像数据。摄像头的工作原理是通过镜头采集图像,然后由内部的感光组件及控制组件对采集的图像进行处理,进而转换成其他系统可以识别的数字信号;其他系统通过摄像头的传输端口获得数字信号,接着进行图像还原就可以得到与实际场景一致的图像。在实际应用中,摄像头采集图像数据的视野范围以及摄像头的安装数量和安装位置可以进一步根据实际需要设计可行的方案。本申请实施例不对摄像头的视野范围、安装数量和安装位置做具体限定。The camera is used to record image data corresponding to the driving scene of the second vehicle. The working principle of the camera is to collect images through the lens, and then process the collected images by the internal photosensitive components and control components, and then convert them into digital signals that can be recognized by other systems; other systems obtain digital signals through the transmission port of the camera, and then proceed. Image restoration can get images that are consistent with the actual scene. In practical applications, the field of view of the camera to collect image data and the installation number and installation position of the camera can further design a feasible solution according to actual needs. The embodiments of the present application do not specifically limit the field of view, installation quantity, and installation position of the cameras.
摄像头的类型可以根据用户的不同需求进行选择,只要能实现是视频摄像、传播和静态图像捕捉等基本功能即可。例如,摄像头可以是双目摄像头和单目摄像头等常用的车载摄像头中的一种或多种类型。The type of camera can be selected according to the different needs of users, as long as it can realize basic functions such as video camera, communication and still image capture. For example, the camera may be one or more types of commonly used in-vehicle cameras, such as a binocular camera and a monocular camera.
如果按信号类别选取,摄像头还可以是数字摄像头和模拟摄像头中的一种或两种类型,二者区别在于,对镜头采集的图像处理过程不同。数字摄像头是将采集的模拟信号转换成数字信号进行存储的,而模拟摄像头是利用特定的视频捕捉卡将模拟信号转换成数字模式,加以压缩后存储的。如果按摄像头中图像传感器类别划分,摄像头还可以是互补金属氧化物半导体(complementary metal oxide semiconductor,CMOS)类别的摄像头和电荷耦合器件(charge-coupled device,CCD)类别的摄像头中的一种或两种。If selected according to the signal type, the camera can also be one or two types of digital camera and analog camera, the difference between the two is that the image processing process of the lens is different. The digital camera converts the collected analog signal into a digital signal for storage, while the analog camera uses a specific video capture card to convert the analog signal into a digital mode, compress it and store it. If classified according to the type of image sensor in the camera, the camera can also be one or both of the complementary metal oxide semiconductor (CMOS) type camera and the charge-coupled device (CCD) type camera. kind.
如果按接口类型划分,摄像头还可以是串口、并口、通用串行总线(Universal Serial Bus,USB)和火线接口(IEEE1394)中的一种或多种类型。本申请实施例对摄像头类型同样不做具体限定。If divided by interface type, the camera can also be one or more types of serial port, parallel port, Universal Serial Bus (Universal Serial Bus, USB) and FireWire interface (IEEE1394). The embodiment of the present application also does not specifically limit the type of the camera.
雷达(radio detection and ranging,Radar)可以用来测量路况中第一车辆与不同目标之间的距离,还可以用来测量第二车辆的行驶速度。雷达是通过发出电磁波探测目标的电子设备,雷达的工作原理是通过发射电磁波对目标进行照射并接收其回波,由此获得探测的目标至电磁波发射点的距离、距离变化率(径向速度)、方位和高度等信息。Radar (radio detection and ranging, Radar) can be used to measure the distance between the first vehicle and different targets in road conditions, and can also be used to measure the speed of the second vehicle. Radar is an electronic device that detects the target by emitting electromagnetic waves. The working principle of the radar is to irradiate the target by emitting electromagnetic waves and receive its echoes, thereby obtaining the distance from the detected target to the electromagnetic wave emission point, and the rate of change of the distance (radial velocity). , bearing and altitude.
雷达在类型选取上可以根据实际应用场景的需求进行选择,可以是测速雷达、激光雷达、毫米波雷达和超声波雷达等常用车载雷达中的一种或多种。其中,激光雷达由于发送 激光束进行探测,具有高精度、高分辨率和可以建立周边3D模型的优势,常用于车辆自动驾驶系统或辅助驾驶系统中;例如自适应巡航控制(Adaptive cruise control,ACC)、前车碰撞警示(Forward Collision Warning,FCW)、车道保持系统(Lane keep assistance,LKA)和自动泊车系统(Automatic parking,AP)。The type of radar can be selected according to the needs of the actual application scenario, which can be one or more of commonly used vehicle-mounted radars such as speed radar, lidar, millimeter-wave radar, and ultrasonic radar. Among them, LiDAR has the advantages of high precision, high resolution and the ability to build a 3D model of the surrounding because it sends a laser beam for detection, and is often used in vehicle automatic driving systems or assisted driving systems; for example, adaptive cruise control (Adaptive cruise control, ACC) ), Forward Collision Warning (FCW), Lane Keeping Assist (LKA) and Automatic Parking (AP).
除此之外,第一车辆上安装的雷达还可以是,能满足本申请实施例涉及的功能的其他类型的雷达。当利用雷达检测左车道线的位置时,采用的是基于雷达扫描点密度的车道线检测方法,该方法通过获取雷达扫描点的坐标并转换成栅格图,用原始数据映射栅格图,可以是直接坐标栅格图也可以是极坐标栅格图。按照后期处理需要进行选择,极坐标栅格图被直接用于车道线识别,即有多个点映射的栅格就被认为是车道线点。Besides, the radar installed on the first vehicle may also be other types of radars that can satisfy the functions involved in the embodiments of the present application. When using radar to detect the position of the left lane line, a lane line detection method based on the density of radar scanning points is used. This method obtains the coordinates of the radar scanning points and converts them into a grid map, and maps the grid map with the original data. It is a direct coordinate grid map or a polar coordinate grid map. According to the needs of post-processing, the polar coordinate grid map is directly used for lane line recognition, that is, a grid with multiple point mappings is regarded as a lane line point.
雷达探测目标的范围与其安装数量、安装位置和设定的探测距离有关,在具体应用中,可以根据实际需要对雷达的安装的数量、安装位置和探测距离进行部署。同样地,本申请实施例不对影响雷达探测范围的因素做具体限定。The range of radar detection targets is related to its installation quantity, installation position and set detection distance. In specific applications, the installation quantity, installation position and detection distance of radars can be deployed according to actual needs. Likewise, the embodiments of the present application do not specifically limit the factors affecting the radar detection range.
在一些实施例中,第一车辆还可以安装加速度传感器。加速度传感器用来测量第一车辆行进时的加速度和向心加速度。加速度传感器通常由质量块、阻尼器、弹性元件、敏感元件和适调电路等部分组成。在加速过程中,通过对质量块所受惯性力的测量,利用牛顿第二定律获得加速度值。汽车上常用的加速度传感器是压阻式加速度传感器,还可以是其他类型的传感器,例如电容式、电感式、应变式、压电式等。In some embodiments, the first vehicle may also be equipped with an acceleration sensor. The acceleration sensor is used to measure acceleration and centripetal acceleration when the first vehicle is traveling. Acceleration sensors are usually composed of mass blocks, dampers, elastic elements, sensitive elements and adaptive circuits. During the acceleration process, the acceleration value is obtained by using Newton's second law by measuring the inertial force on the mass block. Acceleration sensors commonly used in automobiles are piezoresistive acceleration sensors, and can also be other types of sensors, such as capacitive, inductive, strain, and piezoelectric.
此外,第一车辆还可以安装用于采集环境数据的传感器,环境数据可以用来对第一车辆下一步的驾驶场景进行判断。环境数据包括但不限于温度、湿度、气压、天气状况、车道数量、距离交通指示灯(红绿灯)的距离、匝道位置、禁行区域、行人位置、交通指示灯状态信息等。与环境数据相对应,传感器还可以包括但不限于定位传感器、惯性测量单元(Inertial measurement unit,IMU)、温度传感器、湿度传感器、气体检测传感器、环境传感、或者其他用于采集环境数据的传感器等,本申请实施例中不做限定。In addition, the first vehicle may also be installed with sensors for collecting environmental data, and the environmental data may be used to judge the next driving scene of the first vehicle. Environmental data includes but is not limited to temperature, humidity, air pressure, weather conditions, number of lanes, distance from traffic lights (traffic lights), ramp locations, prohibited areas, pedestrian locations, traffic light status information, etc. Corresponding to the environmental data, the sensor may also include, but is not limited to, a positioning sensor, an inertial measurement unit (IMU), a temperature sensor, a humidity sensor, a gas detection sensor, an environmental sensor, or other sensors for collecting environmental data etc., which are not limited in the embodiments of the present application.
T-Box具有远程无线通讯和CAN通讯功能,为整车提供远程通讯接口。具体是将传感器采集的各种数据上传至云端服务器,还将云端服务器下发的数据反馈给车辆安装的车载终端或反馈给车辆的控制系统,实现对驾驶员的辅助驾驶或车辆控制系统控制车辆自动驾驶。车载终端可以是用户使用的手机、平板电脑或其他可移动智能终端,还可以是安装在车辆上车载电脑。T-Box has the functions of long-distance wireless communication and CAN communication, and provides long-distance communication interface for the whole vehicle. Specifically, the various data collected by the sensor are uploaded to the cloud server, and the data sent by the cloud server is fed back to the vehicle terminal installed in the vehicle or to the control system of the vehicle, so as to realize the assisted driving of the driver or the vehicle control system to control the vehicle. Autopilot. The in-vehicle terminal may be a mobile phone, a tablet computer or other mobile intelligent terminal used by the user, or an in-vehicle computer installed on the vehicle.
T-Box还可以通过CAN总线接口获取车辆的状态数据,例如:车辆信息、整车控制器信息、电机控制器信息、电池管理系统BMS、车载充电机以及行驶里程、平均车速、燃油 使用量、平均油耗等数据。T-box还可以提供计算或者存储功能。T-Box can also obtain vehicle status data through CAN bus interface, such as: vehicle information, vehicle controller information, motor controller information, battery management system BMS, on-board charger and mileage, average vehicle speed, fuel usage, Average fuel consumption, etc. T-box can also provide computing or storage functions.
定位模块用于实现车辆位置信息的采集。定位模块可以是基于全球卫星定位系统(GPS)的定位模块,通过接收GPS信号来定位车辆;还可以是基于其他卫星定位系统的定位模块,如基于北斗卫星定位系统的定位模块、基于格洛纳斯(GLONASS)全球定位系统的定位模块,以及基于伽利略(Galileo)全球定位系统的定位模块。The positioning module is used to realize the collection of vehicle position information. The positioning module can be a positioning module based on the global satellite positioning system (GPS), which can locate the vehicle by receiving GPS signals; it can also be a positioning module based on other satellite positioning systems, such as a positioning module based on the Beidou satellite positioning system, a positioning module based on the GLONER The positioning module of the GLONASS GPS and the positioning module based on the Galileo GPS.
图2是本申请实施例提供的一种车辆驾驶场景识别方法的流程图。如图2所示,该方法应用于图1所示系统中的第一车辆,识别第一车辆周围的第二车辆的驾驶场景的具体过程包括下述的步骤S1~步骤S4。FIG. 2 is a flowchart of a method for recognizing a driving scene of a vehicle provided by an embodiment of the present application. As shown in FIG. 2 , the method is applied to the first vehicle in the system shown in FIG. 1 , and the specific process of identifying the driving scene of the second vehicle around the first vehicle includes the following steps S1 to S4 .
步骤S1.根据第一车辆在当前时段的行驶数据,获得第二车辆在当前时段的特征序列。Step S1. Obtain a feature sequence of the second vehicle in the current period according to the driving data of the first vehicle in the current period.
在本申请的实施例中,上述行驶数据由第一车辆安装的传感器采集。其中,传感器按照预设的采样频率采集第一车辆在当前时段的行驶数据。In the embodiment of the present application, the above-mentioned driving data is collected by a sensor installed on the first vehicle. Wherein, the sensor collects the driving data of the first vehicle in the current period according to the preset sampling frequency.
行驶数据包括:第一车辆与参考车道线的距离、第一车辆与第二车辆的相对距离、第一车辆的速度、第一车辆的向心加速度、第二车辆的速度以及第二车辆偏离第一车辆的竖直方向的夹角。The driving data includes: the distance between the first vehicle and the reference lane line, the relative distance between the first vehicle and the second vehicle, the speed of the first vehicle, the centripetal acceleration of the first vehicle, the speed of the second vehicle, and the deviation of the second vehicle from the first vehicle. The included angle of the vertical direction of a vehicle.
上述第一车辆与参考车道线的距离、第一车辆与第二车辆的相对距离、第一车辆的速度可由雷达测量得到,第一车辆的向心加速度可由加速度传感器测量得到,第二车辆偏离第一车辆的竖直方向的夹角可由摄像头采集的图像计算获得。The distance between the first vehicle and the reference lane line, the relative distance between the first vehicle and the second vehicle, and the speed of the first vehicle can be measured by radar, and the centripetal acceleration of the first vehicle can be measured by the acceleration sensor. The included angle in the vertical direction of a vehicle can be obtained by calculating the image collected by the camera.
在传感器采集到行驶数据之后,传送至第一车辆的T-Box。第一车辆的T-Box将行驶数据上传至云端服务器。云端服务器接收上传的行驶数据后,首先对行驶数据做降噪和平滑处理。在一些实施例中,可以采用预先设计好的均值滤波器、中值滤波器、高斯滤波器或双边滤波器对行驶数据做降噪和平滑处理。图3示出了两种数据未进行降噪和平滑处理的曲线图。从图3可看出,由于环境对传感器的干扰,处理前的数据序列存在很多的毛刺现象。对其做降噪和平滑处理以滤除序列中的毛刺现象,提高数据的准确度。最后,云端服务器在处理后的行驶数据中获得第二车辆的特征序列。After the sensor collects the driving data, it is transmitted to the T-Box of the first vehicle. The T-Box of the first vehicle uploads the driving data to the cloud server. After the cloud server receives the uploaded driving data, it first performs noise reduction and smoothing processing on the driving data. In some embodiments, a pre-designed mean filter, median filter, Gaussian filter or bilateral filter can be used to perform noise reduction and smoothing processing on the driving data. Figure 3 shows a graph of the two types of data without noise reduction and smoothing. As can be seen from Figure 3, due to the interference of the environment to the sensor, there are many glitches in the data sequence before processing. Noise reduction and smoothing are performed to filter out the glitches in the sequence and improve the accuracy of the data. Finally, the cloud server obtains the feature sequence of the second vehicle in the processed driving data.
在本申请的实施例中,第二车辆的特征序列包括:第二车辆与参考车道线的距离序列、第二车辆与参考车道线的夹角序列、和第二车辆的速度序列。In the embodiment of the present application, the feature sequence of the second vehicle includes: a distance sequence between the second vehicle and the reference lane line, an angle sequence between the second vehicle and the reference lane line, and a speed sequence of the second vehicle.
进一步地,第二车辆与参考车道线的夹角可以根据第一车辆安装的摄像头采集的图像获得;第二车辆的速度可由第一车辆的测速雷达获得。Further, the angle between the second vehicle and the reference lane line can be obtained according to the image collected by the camera installed on the first vehicle; the speed of the second vehicle can be obtained by the speed measuring radar of the first vehicle.
在本申请的实施例中,将第一车辆行驶的车道的左侧车道线作为参考车道线,在其他实施例中,可将第一车辆行驶的车道的右侧车道线作为参考车道线In the embodiments of the present application, the left lane line of the lane where the first vehicle travels is used as the reference lane line. In other embodiments, the right side lane line of the lane where the first vehicle travels may be used as the reference lane line
接下来,结合附图,示性的介绍本申请实施例是如何获得第二车辆与参考车道线的距离序列的。Next, with reference to the accompanying drawings, how to obtain the distance sequence between the second vehicle and the reference lane line in the embodiment of the present application is schematically introduced.
首先,根据第一车辆的向心加速度,获得第一车辆行驶的车道的类别。其中,车道的类别一般分为直线车道和曲线车道。First, according to the centripetal acceleration of the first vehicle, the category of the lane in which the first vehicle travels is obtained. Among them, the categories of lanes are generally divided into straight lanes and curved lanes.
在本申请实施例中,当第一车辆的向心加速度为零时,则认为第一车辆行驶的车道的类别是直线车道,不为零时,第一车辆行驶的车道的类别是曲线车道。In the embodiment of the present application, when the centripetal acceleration of the first vehicle is zero, it is considered that the type of the lane in which the first vehicle travels is a straight lane, and when it is not zero, the type of the lane in which the first vehicle travels is a curved lane.
然后,根据获得的行驶数据确定第二车辆与参考车道线的距离序列。按照第一车辆行驶车道的不同,下面分别介绍两种情况的具体方案。Then, a sequence of distances between the second vehicle and the reference lane line is determined according to the obtained driving data. According to different driving lanes of the first vehicle, specific solutions for the two situations are introduced below.
其一,当第一车辆处于直线车道时,第一车辆和第二车辆在直线车道的位置关系示意图如图4a所示。First, when the first vehicle is in the straight lane, a schematic diagram of the positional relationship between the first vehicle and the second vehicle in the straight lane is shown in FIG. 4a.
将第一车辆与第二车辆的相对距离d S、第二车辆偏离第一车辆的竖直方向的夹角α代入公式(1)中,获得在第一方向上第一车辆和第二车辆之间的距离、以及在第二方向上第一车辆和第二车辆之间的距离,其中,第一方向为第一车辆的车身竖直方向,第二方向为第一车辆的车身水平方向。 Substitute the relative distance d S between the first vehicle and the second vehicle and the included angle α of the second vehicle deviates from the vertical direction of the first vehicle into formula (1) to obtain the difference between the first vehicle and the second vehicle in the first direction. and the distance between the first vehicle and the second vehicle in a second direction, wherein the first direction is the vertical direction of the body of the first vehicle, and the second direction is the horizontal direction of the body of the first vehicle.
Figure PCTCN2021070939-appb-000001
Figure PCTCN2021070939-appb-000001
公式(1)中,d Sx为在第一方向上第一车辆和第二车辆之间的距离,d Sy为在第二方向上第一车辆和第二车辆之间的距离。 In formula (1), d Sx is the distance between the first vehicle and the second vehicle in the first direction, and d Sy is the distance between the first vehicle and the second vehicle in the second direction.
然后,按照公式(2)将第一车辆与参考车道线的距离Ego_distoleft、第一车辆与第二车辆在第二方向上距离d Sy做差,即可获得第二车辆与参考车道线的距离序列Obj_distoleft。 Then, according to formula (2), the distance Ego_distoleft between the first vehicle and the reference lane line, and the distance d Sy between the first vehicle and the second vehicle in the second direction, can be obtained to obtain the distance sequence between the second vehicle and the reference lane line Obj_distoleft.
Obj_distoleft=Ego_distoleft-d Sy       (2) Obj_distoleft=Ego_distoleft-d Sy (2)
其二,当第一车辆处于曲线车道时,第一车辆和第二车辆在曲线车道的位置关系示意图如图4b所示。Second, when the first vehicle is in the curved lane, a schematic diagram of the positional relationship between the first vehicle and the second vehicle in the curved lane is shown in FIG. 4b.
区别于直线车道,由于曲线车道本身的曲率会额外增加两车之间的相对距离,为此,需对上面获得的距离序列进行曲率补偿,消除路面特征对距离序列造成的干扰。Different from the straight lane, the curvature of the curved lane itself will increase the relative distance between the two vehicles. Therefore, it is necessary to perform curvature compensation on the distance sequence obtained above to eliminate the interference caused by the road surface characteristics to the distance sequence.
具体地,先将第一车辆的速度v、第一车辆的向心加速度a y、第一车辆与第二车辆在第一方向上的距离d Sx代入公式(3)中,获得第二车辆与参考车道线的距离补偿序列。 Specifically, firstly substitute the speed v of the first vehicle, the centripetal acceleration a y of the first vehicle, and the distance d Sx between the first vehicle and the second vehicle in the first direction into formula (3), to obtain the difference between the second vehicle and the second vehicle. The distance compensation sequence for the reference lane lines.
Figure PCTCN2021070939-appb-000002
Figure PCTCN2021070939-appb-000002
公式(3)中,y off为第二车辆与参考车道线的距离补偿序列,r为第一车辆行驶的车道的半径,
Figure PCTCN2021070939-appb-000003
符号函数sign(a y)的意义在于,根据第一车辆的向心加速度确定第二车辆在第一车辆的左侧还是右侧,从而得到的距离y off的符号,当第二车辆在第一车辆的右侧时,y off的符号为负,否则,y off的符号为正。图4b中d为中间量,
Figure PCTCN2021070939-appb-000004
In formula (3), y off is the distance compensation sequence between the second vehicle and the reference lane line, r is the radius of the lane where the first vehicle travels,
Figure PCTCN2021070939-appb-000003
The significance of the sign function sign(a y ) is to determine whether the second vehicle is on the left or right side of the first vehicle according to the centripetal acceleration of the first vehicle, so as to obtain the sign of the distance y off , when the second vehicle is on the first vehicle When the vehicle is to the right, the sign of y off is negative, otherwise, the sign of y off is positive. In Figure 4b, d is the intermediate quantity,
Figure PCTCN2021070939-appb-000004
然后按照公式(4),利用距离补偿序列对参考序列进行曲率补偿,获得补偿后的第二车辆与参考车道线的距离的时间序列。Then, according to formula (4), the curvature compensation is performed on the reference sequence by using the distance compensation sequence, and the time sequence of the distance between the second vehicle and the reference lane line after compensation is obtained.
Obj_distoleft=Ego_distoleft-d′ Sy       (4) Obj_distoleft=Ego_distoleft-d′ Sy (4)
公式(4)中,d′ Sy为参考序列与距离补偿序列的差值,d′ Sy=d Sy-y offIn formula (4), d' Sy is the difference between the reference sequence and the distance compensation sequence, d' Sy =d Sy -y off .
步骤S2.计算第二车辆在当前时段的特征序列与预设的驾驶行为基准序列之间的相似度。Step S2. Calculate the similarity between the feature sequence of the second vehicle in the current period and the preset driving behavior reference sequence.
在本申请的实施例中,本步骤首先利用预设的窗函数对第二车辆的特征序列进行预处理,然后并利用动态时间规整算法计算特征序列与预设的驾驶行为基准序列之间的相似度。In the embodiment of the present application, this step firstly uses a preset window function to preprocess the feature sequence of the second vehicle, and then uses the dynamic time warping algorithm to calculate the similarity between the feature sequence and the preset driving behavior reference sequence Spend.
具体地,首先按照如图5所示的结合窗函数和动态时间规整算法计算相似度的示意图,先采用预设的窗函数将特征序列离散化。常见的窗函数有矩形窗、三角窗、汉宁窗、海明窗和高斯窗等类型。本申请的实施例采用的窗函数是矩形窗。图5中winlen表示扫描窗宽度,shift表示扫描偏移量。扫描窗宽度与参考信号长度正相关,与计算时间正相关;扫描偏移量,与采样频率正相关,与计算时间负相关。两个参数的具体取值会影响识别驾驶行为的精度和运算时间成本。实际应用中,可以根据车辆驾驶行为的识别精度和运算时间成本的实际需求,在表1列出的组合值中选择设置两个参数的取值。Specifically, first, according to the schematic diagram of calculating the similarity by combining the window function and the dynamic time warping algorithm as shown in FIG. 5 , the feature sequence is first discretized by using a preset window function. Common window functions include rectangular window, triangular window, Hanning window, Hamming window and Gaussian window. The window function adopted in the embodiments of the present application is a rectangular window. In Figure 5, winlen represents the width of the scan window, and shift represents the scan offset. The scanning window width is positively related to the length of the reference signal and positively related to the calculation time; the scanning offset is positively related to the sampling frequency and negatively related to the calculation time. The specific values of the two parameters will affect the accuracy of identifying driving behavior and the cost of computing time. In practical applications, the values of the two parameters can be selected and set from the combined values listed in Table 1 according to the identification accuracy of the vehicle driving behavior and the actual requirements of the computing time cost.
表1窗函数的扫描窗宽度和扫描偏移量组合取值表Table 1. The combination value table of the scan window width and scan offset of the window function
Figure PCTCN2021070939-appb-000005
Figure PCTCN2021070939-appb-000005
Figure PCTCN2021070939-appb-000006
Figure PCTCN2021070939-appb-000006
然后将特征序列与预设的驾驶行为基准序列输入动态时间规整算法,获得动态时间规整算法输出的特征序列与该预设驾驶行为基准序列之间的相似度。图6示出了的两个序列的相似性对比图。图6中深色曲线表示基准序列,浅色曲线表示待对比的序列。从序列的变化趋势上可以识别出两个序列之间的相似性。将图6的左侧图中的序列分离放大后得到图6右侧的对比图。如图6的右侧图所示,从四个箭头所指示的部分,可以看出两个序列之间存在一定的相似性,采用动态时间规整算法可将这种相似性数量化,获得两者之间的相似度。Then, the feature sequence and the preset driving behavior reference sequence are input into the dynamic time warping algorithm to obtain the similarity between the feature sequence output by the dynamic time warping algorithm and the preset driving behavior reference sequence. Figure 6 shows the similarity comparison of the two sequences. The dark curve in Figure 6 represents the reference sequence, and the light curve represents the sequence to be compared. The similarity between two sequences can be identified from the change trend of the sequences. The sequence on the left side of FIG. 6 is separated and enlarged to obtain the comparison diagram on the right side of FIG. 6 . As shown on the right side of Figure 6, from the parts indicated by the four arrows, it can be seen that there is a certain similarity between the two sequences. The dynamic time warping algorithm can be used to quantify this similarity and obtain the two sequences. similarity between.
在本申请的实施例中,预设的驾驶行为基准序列是通过对采集的历史数据进行分析,提取到的每个驾驶行为的标准特征序列。In the embodiment of the present application, the preset driving behavior reference sequence is a standard feature sequence of each driving behavior extracted by analyzing the collected historical data.
通过动态时间规整算法获得相似度,提高了识别车辆驾驶行为的速度;窗函数的使用解决了动态时间规整算法只能衡量两个离散时间序列之间的相似度,无法处理连续的时间序列的问题。The similarity is obtained by the dynamic time warping algorithm, which improves the speed of identifying the driving behavior of the vehicle; the use of the window function solves the problem that the dynamic time warping algorithm can only measure the similarity between two discrete time series and cannot handle continuous time series. .
在本申请的实施例中,预设的驾驶行为包括:左换道、右换道、左转向、右转向、左转掉头、右转掉头、加速、减速和匀速。In the embodiment of the present application, the preset driving behaviors include: left lane change, right lane change, left turn, right turn, left turn U-turn, right turn U-turn, acceleration, deceleration, and constant speed.
进一步地,第二车辆是否在执行左转向、右转向、左转掉头或右转掉头的驾驶行为,可通过第二车辆与参考车道线的夹角序列来识别。Further, whether the second vehicle is performing the driving behavior of turning left, turning right, turning left, or turning right can be identified by the sequence of included angles between the second vehicle and the reference lane line.
其中,以左转向和左转掉头为例,左转掉头的基准序列中会出现大于90°的夹角直至等180°,则左转向的变化趋势是由一个大于0小于90的初值趋近90。进而,可以此区分第二车辆的驾驶行为是左转向还是左转掉头。此外,区分左右方向时,需预先规定一个 正方向。比如,以参考车道线左侧为正时,当第二车辆与参考车道线的夹角序列中未出现大于90°的夹角且未出现小于0°的夹角,即可认定第二车辆的驾驶行为是左转向;当第二车辆与参考车道线的夹角序列中未出现大于90°,但部分时段出现小于0°的夹角,即认定第二车辆的驾驶行为是右转向。第二车辆是否在执行加速、减速或匀速的驾驶行为,可通过第二车辆的速度序来列识别。Among them, taking left turn and left turn as an example, in the reference sequence of left turn, there will be an included angle greater than 90° until equal to 180°, then the change trend of left turn is from an initial value greater than 0 and less than 90. 90. Further, it is possible to distinguish whether the driving behavior of the second vehicle is a left turn or a left turn. In addition, when distinguishing left and right directions, it is necessary to specify a positive direction in advance. For example, when the left side of the reference lane line is positive, when the angle sequence between the second vehicle and the reference lane line does not have an included angle greater than 90° and does not appear an included angle less than 0°, it can be determined that the second vehicle has an angle of less than 0°. The driving behavior is left steering; when the included angle sequence between the second vehicle and the reference lane line does not appear to be greater than 90°, but an included angle less than 0° appears in some time periods, the driving behavior of the second vehicle is determined to be right steering. Whether the second vehicle is performing acceleration, deceleration or constant speed driving behavior can be identified by the speed sequence of the second vehicle.
第二车辆是否在执行左换道或右换道的驾驶行为,可通过第二车辆与参考车道线的距离序列来识别。Whether the second vehicle is performing the driving behavior of changing lanes to the left or changing lanes can be identified through a sequence of distances between the second vehicle and the reference lane line.
接下来,以参考车道线为第一车辆的左侧车道线为例,说明第二车辆左换道和右换道时与参考车道线的距离变化的不同点。在本申请之外的其他实施例中,也可以将第一车辆行驶的车道的右侧车道线作为参考车道线。Next, taking the reference lane line as the left lane line of the first vehicle as an example, the difference in the distance change from the reference lane line when the second vehicle changes lanes left and right is described. In other embodiments other than the present application, the right lane line of the lane where the first vehicle travels may also be used as the reference lane line.
下面以第二车辆位于第一车辆的右向车道为例,结合附图,示性的说明第二车辆左换道和右换道时与参考车道线的距离变化趋势。Taking the second vehicle located in the right lane of the first vehicle as an example, and with reference to the accompanying drawings, the variation trend of the distance from the reference lane line when the second vehicle changes lanes left and right is illustrated schematically.
图7a示出了第二车辆左换道时与参考车道线的距离变化示意图。图7a横轴表示时间,纵轴表示第二车辆与参考车道线的距离。如图7a所示,第二车辆左换道时,当参考车道线未发生变化时,第二车辆会逐渐靠近参考车道线,进而,第二车辆与参考车道线的距离先由一定初值逐渐变小。参考车道线未发生变化表明第一车辆为发生换道行为。Fig. 7a shows a schematic diagram of the distance change from the reference lane line when the second vehicle changes lanes left. The horizontal axis of FIG. 7a represents time, and the vertical axis represents the distance between the second vehicle and the reference lane line. As shown in Figure 7a, when the second vehicle changes lanes to the left, when the reference lane line does not change, the second vehicle will gradually approach the reference lane line, and then the distance between the second vehicle and the reference lane line will gradually change from a certain initial value. become smaller. The fact that the reference lane line does not change indicates that the first vehicle is changing lanes.
图7b示出了第二车辆右换道时与参考车道线的距离变化示意图。图7b横轴表示时间,纵轴表示第二车辆与参考车道线的距离。如图7b所示,第二车辆右换道时,当参考车道线未发生变化时,与左换道的变化趋势则相反,第二车辆会逐渐远离参考车道线,进而第二车辆与参考车道线的距离先由一定初值逐渐增大。Fig. 7b shows a schematic diagram of the distance change from the reference lane line when the second vehicle changes lanes right. The horizontal axis of FIG. 7b represents time, and the vertical axis represents the distance between the second vehicle and the reference lane line. As shown in Figure 7b, when the second vehicle changes lanes to the right, when the reference lane line does not change, the change trend is opposite to the change trend of the left lane change. The second vehicle gradually moves away from the reference lane line, and the second vehicle and the reference lane The distance of the line first increases gradually from a certain initial value.
下面结合附图,示性的说明第一车辆左换道右换道时与参考车道线的距离变化趋势。In the following, a variation trend of the distance between the first vehicle and the reference lane line when the first vehicle changes lanes to the right when changing lanes to the right will be schematically described with reference to the accompanying drawings.
图7c示出了第一车辆左换道时与参考车道线的距离变化示意图。图7c横轴表示时间,纵轴表示第一车辆与参考车道线的距离。如图7c所示,第一车辆左换道时,第一车辆与参考车道线的距离先是由一定初值逐渐变小至0,然后由于行驶的车道发生变化,对应的参考车道线也发生变化,所以第一车辆与参考车道线的距离再由0突变至最大值,接着呈现逐渐变小的趋势。图7a中的距离最大值与车辆行驶的车道的宽度有关。Fig. 7c shows a schematic diagram of the change of the distance from the reference lane line when the first vehicle changes lanes to the left. The horizontal axis of FIG. 7c represents time, and the vertical axis represents the distance between the first vehicle and the reference lane line. As shown in Figure 7c, when the first vehicle changes lanes to the left, the distance between the first vehicle and the reference lane line first gradually decreases from a certain initial value to 0, and then the corresponding reference lane line also changes due to the change of the driving lane. , so the distance between the first vehicle and the reference lane line suddenly changes from 0 to the maximum value, and then shows a gradually decreasing trend. The distance maximum in Figure 7a is related to the width of the lane in which the vehicle travels.
图7d示出了第一车辆右换道时与参考车道线的距离变化示意图。同样地,图7d横轴表示时间,纵轴表示第一车辆与参考车道线的距离。如图7d所示,第一车辆右换道时,第一车辆与参考车道线的距离先是由一定初值逐渐增大至最大值,然后由于行驶的车道发生变化,对应的参考车道线也发生变化,第一车辆与参考车道线的距离再由最大值突变至 0,接着呈现逐渐增大的趋势。图7d的距离最大值同样与车辆行驶的车道的宽度有关。FIG. 7d shows a schematic diagram of the distance change from the reference lane line when the first vehicle changes lanes right. Similarly, the horizontal axis of FIG. 7d represents time, and the vertical axis represents the distance between the first vehicle and the reference lane line. As shown in Figure 7d, when the first vehicle changes lanes to the right, the distance between the first vehicle and the reference lane line first gradually increases from a certain initial value to a maximum value, and then due to the change of the driving lane, the corresponding reference lane line also occurs. changes, the distance between the first vehicle and the reference lane line suddenly changes from the maximum value to 0, and then shows a gradually increasing trend. The distance maximum of Figure 7d is also related to the width of the lane in which the vehicle travels.
步骤S3.根据相似度获得第二车辆的驾驶行为。Step S3. Obtain the driving behavior of the second vehicle according to the similarity.
在本申请的实施例中,当获得的相似度大于预设的相似度阈值时,说明计算该相似度的基准序列对应的预设驾驶行为即为第二车辆的驾驶行为,据此可获得当前时段内第二车辆执行的驾驶行为。In the embodiment of the present application, when the obtained similarity is greater than the preset similarity threshold, it is indicated that the preset driving behavior corresponding to the reference sequence for calculating the similarity is the driving behavior of the second vehicle, according to which the current driving behavior can be obtained. The driving behavior performed by the second vehicle during the time period.
例如,当获得的第二车辆与参考车道线的夹角序列与左转向基准序列、右转向基准序列、左转掉头基准序列或右转掉头基准序列的相似度大于阈值,则认为第二车辆在执行相应的驾驶行为。For example, when the obtained similarity between the angle sequence between the second vehicle and the reference lane line and the left-turn reference sequence, the right-turn reference sequence, the left-turn U-turn reference sequence, or the right-turn turn reference sequence is greater than the threshold, it is considered that the second vehicle is in the Perform the appropriate driving behavior.
与此类似的,可以判断第二车辆是否执行加速、减速或匀速三类驾驶行为,以及左换道或右换道驾驶行为。Similarly, it can be determined whether the second vehicle performs three types of driving behaviors of acceleration, deceleration or constant speed, as well as left lane change or right lane change driving behavior.
步骤S4.根据第二车辆的驾驶行为,在预设的场景库中匹配第二车辆的驾驶场景。Step S4. Match the driving scene of the second vehicle in the preset scene library according to the driving behavior of the second vehicle.
在本申请的实施例中,预设的场景库中不同驾驶场景及其对应的驾驶行为。本步骤是利用场景库对步骤S3获得的第二车辆的驾驶行为进行归类,获得第二车辆的驾驶场景。In the embodiment of the present application, different driving scenarios and their corresponding driving behaviors are in the preset scenario library. This step is to use the scene library to classify the driving behavior of the second vehicle obtained in step S3 to obtain the driving scene of the second vehicle.
构建场景库时,首先需对采集车辆的多个行驶图像数据样本进行场景标记,然后从各场景对应的行驶图像数据中甄别并标记各场景对应的驾驶行为;最后根据驾驶场景和驾驶场景对应的驾驶行为组建场景库。When building a scene library, firstly, it is necessary to mark multiple driving image data samples of the collected vehicle, and then identify and mark the driving behavior corresponding to each scene from the driving image data corresponding to each scene; Driving behaviors form a scene library.
图8示出了驾驶场景与驾驶行为的对应关系图。如图8所示,驾驶场景包括:车道保持、转向、掉头、车道变更以及超车。FIG. 8 shows a corresponding relationship diagram between driving scenarios and driving behaviors. As shown in Figure 8, the driving scenarios include: lane keeping, steering, U-turn, lane change, and overtaking.
在本申请的实施例中,识别的驾驶场景与驾驶行为的对应关系如下:In the embodiment of the present application, the corresponding relationship between the identified driving scene and the driving behavior is as follows:
车道保持场景对应的驾驶行为包括:车辆加速行驶、车辆减速行驶和车辆匀速行驶;The driving behaviors corresponding to the lane keeping scenario include: vehicle acceleration, vehicle deceleration, and vehicle driving at a constant speed;
转向场景对应的驾驶行为包括:车辆左转向和车辆右转向;The driving behavior corresponding to the steering scene includes: the vehicle turns left and the vehicle turns right;
掉头场景对应的驾驶行为包括:车辆左转掉头和车辆右转掉头;The driving behaviors corresponding to the U-turn scene include: the vehicle turns left and the vehicle turns right;
车道变更场景对应的驾驶行为包括:车辆左换道和车辆右换道;The driving behaviors corresponding to the lane change scenarios include: vehicle left lane change and vehicle right lane change;
超车场景对应的驾驶行为包括:车辆加速行驶、车辆左换道、车辆匀速行驶、车辆右换道和车辆减速行驶。The driving behaviors corresponding to the overtaking scenario include: the vehicle accelerates, the vehicle changes lanes to the left, the vehicle drives at a constant speed, the vehicle changes lanes to the right, and the vehicle decelerates.
此外,在其他实施例中,对于启动和停车场景,可通过判断车辆的速度的变化情况获得。比如,当车辆的速度从0逐渐增大,可确认车辆执行了启动场景;当车辆从一定速度逐渐降低至0,可确认车辆执行了停车场景。In addition, in other embodiments, the starting and parking scenarios can be obtained by judging the change of the speed of the vehicle. For example, when the speed of the vehicle gradually increases from 0, it can be confirmed that the vehicle executes the start-up scenario; when the vehicle gradually decreases from a certain speed to 0, it can be confirmed that the vehicle executes the parking scenario.
在本申请的另一实施例中,上述车辆驾驶场景识别方法中的步骤S2计算第二车辆的特征序列与预设的驾驶行为基准序列之间的相似度还可以包括:如图9所示的步骤S21~ 步骤S23。In another embodiment of the present application, calculating the similarity between the feature sequence of the second vehicle and the preset driving behavior reference sequence in step S2 in the above-mentioned vehicle driving scene recognition method may further include: as shown in FIG. 9 Step S21 to Step S23.
步骤S21.识别第一车辆在当前时段的行驶区域。Step S21. Identify the driving area of the first vehicle in the current time period.
在一种可能的实施方式中,本步骤可通过第一车辆上的定位模块,确定第一车辆在当前时段的行驶区域。不同区域中车辆的驾驶行为的特性存在一定区别,这与该区域中驾驶员的驾驶习惯和道路拓扑特征存在一定的联系。例如,本申请实施例对两个区域发生车辆插入(cut-in)的历史数据进行统计分析,获得两个区域的第二车辆的驾驶行为的概率分布图,根据概率分布图可知发生车辆插入的概率相同时各区域中车辆之间的时间距离不同。图10示出了区域1和区域2中第二车辆插入第一车辆行驶车道的概率分布图。如图10所示,当发生概率同为84.135%时,区域1中两车之间的时间距离介于μ 11和μ 11之间,区域2中两车之间的时间距离则介于μ 22和μ 22之间。 In a possible implementation manner, in this step, a location module on the first vehicle may be used to determine the driving area of the first vehicle in the current time period. There are certain differences in the characteristics of the driving behavior of vehicles in different areas, which are related to the driving habits and road topology characteristics of drivers in the area. For example, the embodiment of the present application performs statistical analysis on the historical data of vehicle cut-ins in two areas, and obtains a probability distribution diagram of the driving behavior of the second vehicle in the two areas. When the probabilities are the same, the temporal distances between vehicles in each area are different. FIG. 10 shows a probability distribution diagram of the insertion of the second vehicle into the driving lane of the first vehicle in area 1 and area 2. As shown in Figure 10, when the probability of occurrence is the same as 84.135%, the time distance between two vehicles in area 1 is between μ 11 and μ 11 , and the time between two vehicles in area 2 The distance is between μ 22 and μ 22 .
其中,μ 1为区域1的第二车辆发生左换道时第一车辆与第二车辆之间的时间距离的均值,σ 1为区域1的第二车辆发生左换道时第一车辆与第二车辆之间的时间距离的方差,μ 2为区域2的第二车辆发生左换道时第一车辆与第二车辆之间的时间距离的均值,σ 2为区域2的第二车辆发生左换道时第一车辆与第二车辆之间的时间距离的方差。 Among them, μ 1 is the mean value of the time distance between the first vehicle and the second vehicle when the second vehicle in area 1 changes lanes to the left, and σ 1 is the time distance between the first vehicle and the second vehicle when the second vehicle in area 1 changes lanes. The variance of the time distance between the two vehicles, μ 2 is the mean value of the time distance between the first vehicle and the second vehicle when the second vehicle in area 2 changes lanes, σ 2 is the second vehicle in area 2 The variance of the temporal distance between the first vehicle and the second vehicle when changing lanes.
步骤S22.在数据库中提取行驶区域对应的驾驶行为基准序列。Step S22. Extract the driving behavior reference sequence corresponding to the driving area in the database.
在一种可能的实施方式中,数据库中包括不同区域对应的驾驶行为基准序列。不同区域对应的驾驶行为基准序列通过对相应驾驶行为对应的车辆行驶数据进行统计分析获得。In a possible implementation, the database includes driving behavior reference sequences corresponding to different regions. The driving behavior reference sequences corresponding to different regions are obtained by performing statistical analysis on the vehicle driving data corresponding to the corresponding driving behaviors.
步骤S23.结合窗函数和动态时间规整算法获得特征序列与提取的驾驶行为基准序列之间的相似度。Step S23. Combine the window function and the dynamic time warping algorithm to obtain the similarity between the feature sequence and the extracted driving behavior reference sequence.
本申请实施例基于不同区域中车辆驾驶行为的特性存在差异的先决条件,在计算特征序列与基准序列之间的相似度时,只计算第二车辆的特征序列与第一车辆行驶的区域对应的驾驶行为基准序列,可提高相似度的准确性,同时降低了计算量,提高了识别速度。The embodiment of the present application is based on the precondition that the characteristics of the driving behavior of vehicles in different areas are different. When calculating the similarity between the characteristic sequence and the reference sequence, only the characteristic sequence of the second vehicle and the area corresponding to the driving area of the first vehicle are calculated. The driving behavior benchmark sequence can improve the accuracy of similarity, reduce the amount of calculation, and improve the recognition speed.
基于上述实施例的方法,本申请还提供一种车辆驾驶场景识别装置。该识别装置部署在云端服务器中,可配置为与第一车辆上的车载终端进行通信,以向用户反馈识别装置的识别结果。图11示出了本申请实施例提供的一种车辆驾驶场景识别装置。如图11所示,识别装置具体包括:Based on the methods of the above embodiments, the present application further provides a vehicle driving scene recognition device. The identification device is deployed in the cloud server, and can be configured to communicate with the vehicle-mounted terminal on the first vehicle, so as to feed back the identification result of the identification device to the user. FIG. 11 shows a vehicle driving scene recognition device provided by an embodiment of the present application. As shown in Figure 11, the identification device specifically includes:
数据获取单元,用于获取第一车辆在当前时段的行驶数据;a data acquisition unit for acquiring the driving data of the first vehicle in the current period;
特征获取单元,用于根据行驶数据获得第二车辆的特征序列;a feature obtaining unit, configured to obtain a feature sequence of the second vehicle according to the driving data;
相似度计算单元,用于利用动态时间规整算法,获得第二车辆的特征序列与预设的多个驾驶行为基准序列之间的相似度;a similarity calculation unit, configured to obtain the similarity between the feature sequence of the second vehicle and a plurality of preset driving behavior reference sequences by using a dynamic time warping algorithm;
行为识别单元,用于根据相似度确定第二车辆在当前时段的驾驶行为;a behavior recognition unit, configured to determine the driving behavior of the second vehicle in the current period according to the similarity;
场景匹配单元,用于根据第二车辆在当前时段的驾驶行为,利用预先构建的场景库匹配第二车辆在当前时段的驾驶场景。The scene matching unit is configured to use a pre-built scene library to match the driving scene of the second vehicle in the current period according to the driving behavior of the second vehicle in the current period.
在本申请的实施例中,相似度计算单元还用于:In the embodiment of the present application, the similarity calculation unit is also used for:
识别第一车辆在当前时段行驶的区域;Identifying the area where the first vehicle is traveling in the current time period;
在数据库中提取第一车辆行驶的区域对应的预设的驾驶行为基准序列。A preset driving behavior reference sequence corresponding to the area where the first vehicle travels is extracted from the database.
在本申请的实施例中,特征获取单元还用于:In the embodiment of the present application, the feature acquisition unit is also used for:
对行驶数据做降噪处理和平滑处理。Noise reduction and smoothing are performed on the driving data.
在本申请的实施例中,参考车道线为所述第一车辆行驶的车道的一个车道线。In the embodiment of the present application, the reference lane line is one lane line of the lane in which the first vehicle travels.
在本申请的实施例中,第一车辆行驶的车道为曲线车道,装置还包括:In the embodiment of the present application, the lane in which the first vehicle travels is a curved lane, and the device further includes:
曲率补偿单元,用于按照曲线车道的曲率对距离序列进行曲率补偿。The curvature compensation unit is used to perform curvature compensation on the distance sequence according to the curvature of the curved lane.
在本申请的实施例中,行为识别单元具体用于:In the embodiment of this application, the behavior recognition unit is specifically used for:
判断相似度是否满足预设条件;Determine whether the similarity satisfies the preset conditions;
将满足预设条件的相似度对应的驾驶行为基准序列表征的驾驶行为,作为第二车辆在当前时段的驾驶行为。The driving behavior represented by the driving behavior reference sequence corresponding to the similarity satisfying the preset condition is taken as the driving behavior of the second vehicle in the current period.
在本申请的实施例中,装置还包括场景库构建单元,用于:In the embodiment of the present application, the apparatus further includes a scene library construction unit for:
根据不同场景对应的历史行驶图像,获得不同场景对应的驾驶行为;According to the historical driving images corresponding to different scenarios, the driving behaviors corresponding to different scenarios are obtained;
根据不同场景及所述不同场景对应的驾驶行为,构建所述场景库。The scene library is constructed according to different scenes and driving behaviors corresponding to the different scenes.
应当理解的是,上述装置用于执行上述实施例中的方法,装置中相应的程序模块的实现原理和技术效果与上述方法实施例中的描述类似,该装置的工作过程可参考上述方法中的对应过程,此处不再赘述It should be understood that the above-mentioned apparatus is used to execute the method in the above-mentioned embodiment, the implementation principle and technical effect of the corresponding program module in the apparatus are similar to those described in the above-mentioned method embodiment, and the working process of the apparatus may refer to the above-mentioned method. The corresponding process will not be repeated here.
上述车载终端可以是智能手机、平板电脑或车载电脑中的一种。车载终端包括处理器、显示模块和数据接口。该处理器可以是通用处理器或者专用处理器。例如,处理器可以包括中央处理器(central processing unit,CPU)和/或基带处理器。其中,基带处理器可以用于处理通信数据,CPU可以用于实现相应的控制和处理功能,执行软件程序,处理软件程序的数据。The vehicle-mounted terminal may be one of a smart phone, a tablet computer, or a vehicle-mounted computer. The vehicle terminal includes a processor, a display module and a data interface. The processor may be a general purpose processor or a special purpose processor. For example, a processor may include a central processing unit (CPU) and/or a baseband processor. The baseband processor may be used to process communication data, and the CPU may be used to implement corresponding control and processing functions, execute software programs, and process data of software programs.
数据接口用于数据的接收,显示模块用于显示驾驶行为和场景识别结果。该处理器通过数据接口接收数据,经过计算后向所述显示模块发送显示指令,以显示云端下发的结果。The data interface is used to receive data, and the display module is used to display driving behavior and scene recognition results. The processor receives data through a data interface, and sends a display instruction to the display module after calculation, so as to display the result sent by the cloud.
此外,车载终端还可以包括:充电管理模块、电源管理模块、电池、天线、移动通信模块、无线通信模块、音频模块、扬声器、耳机接口、音频蓝牙模块、显示屏、调制解调 器以及基带处理器。In addition, the vehicle terminal may also include: a charging management module, a power management module, a battery, an antenna, a mobile communication module, a wireless communication module, an audio module, a speaker, an earphone interface, an audio Bluetooth module, a display screen, a modem, and a baseband processor.
充电管理模块可以通过USB接口接收有线充电器的充电输入。在一些无线充电的实施例中,充电管理模块可以通过终端设备的无线充电线圈接收无线充电输入。充电管理模块为电池充电的同时,还可以通过电源管理模块为其他设备供电。The charging management module can receive the charging input of the wired charger through the USB interface. In some wireless charging embodiments, the charging management module may receive wireless charging input through the wireless charging coil of the terminal device. While the charging management module charges the battery, it can also supply power to other devices through the power management module.
电源管理模块用于连接电池、充电管理模块与处理器。电源管理模块接收电池和/或充电管理模块的输入,为处理器、显示屏、和无线通信模块等供电。电源管理模块还可以用于监测电池容量、电池循环次数、电池健康状态(漏电,阻抗)等参数。在其他一些实施例中,电源管理模块也可以设置于处理器中。在另一些实施例中,电源管理模块和充电管理模块也可以设置于同一个器件中。The power management module is used to connect the battery, the charge management module and the processor. The power management module receives input from the battery and/or charging management module, and supplies power to the processor, the display screen, and the wireless communication module. The power management module can also be used to monitor parameters such as battery capacity, battery cycle times, battery health status (leakage, impedance). In some other embodiments, the power management module may also be provided in the processor. In other embodiments, the power management module and the charging management module may also be provided in the same device.
车载终端的无线通信功能可以通过天线、移动通信模块,无线通信模块、调制解调器以及基带处理器等实现与服务器的通信。The wireless communication function of the vehicle terminal can realize the communication with the server through the antenna, mobile communication module, wireless communication module, modem and baseband processor.
移动通信模块可以提供应用在车载终端上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。移动通信模块可以由至少两根天线接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调器进行解调。移动通信模块还可以对经调制解调器调制后的信号放大,经天线转为电磁波辐射出去。在一些实施例中,移动通信模块的至少部分功能模块可以与处理器的至少部分模块被设置在同一个器件中。The mobile communication module can provide wireless communication solutions including 2G/3G/4G/5G applied on the vehicle terminal. The mobile communication module may include at least one filter, switch, power amplifier, low noise amplifier (LNA) and the like. The mobile communication module can receive electromagnetic waves by at least two antennas, filter and amplify the received electromagnetic waves, and transmit them to the modem for demodulation. The mobile communication module can also amplify the signal modulated by the modem and radiate it into electromagnetic waves through the antenna. In some embodiments, at least part of the functional modules of the mobile communication module may be provided in the same device as at least part of the modules of the processor.
调制解调器可以包括调制器和解调器。其中,调制器用于将待发送的低频基带信号调制成中高频信号。解调器用于将接收的电磁波信号解调为低频基带信号。随后解调器将解调得到的低频基带信号传送至基带处理器处理。低频基带信号经基带处理器处理后,被传递给应用处理器。在一些实施例中,调制解调器可以是独立的器件。在另一些实施例中,调制解调器可以独立于处理器,与移动通信模块或其他功能模块设置在同一个器件中。在另一些实施例中,移动通信模块可以是调制解调器中的模块。A modem may include a modulator and a demodulator. Wherein, the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal. The demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator transmits the demodulated low-frequency baseband signal to the baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and passed to the application processor. In some embodiments, the modem may be a stand-alone device. In other embodiments, the modem may be independent of the processor, and may be provided in the same device as the mobile communication module or other functional modules. In other embodiments, the mobile communication module may be a module in a modem.
无线通信模块可以提供应用在终端设备上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。无线通信模块可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块经由天线接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器。无线通信模块还可以从处理器接收待发送的信号,对其进行调频,放大,经天线转为电磁波辐射 出去。The wireless communication module can provide applications on terminal devices including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), global navigation satellite systems (GNSS) , frequency modulation (frequency modulation, FM), near field communication technology (near field communication, NFC), infrared technology (infrared, IR) and other wireless communication solutions. The wireless communication module may be one or more devices integrating at least one communication processing module. The wireless communication module receives electromagnetic waves through the antenna, frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor. The wireless communication module can also receive the signal to be sent from the processor, frequency-modulate it, amplify it, and radiate it into electromagnetic waves through the antenna.
车载终端可通过移动通信模块、无线通信模块与服务器进行通信,接收服务器下发的识别结果,或向服务器传递数据。The vehicle-mounted terminal can communicate with the server through the mobile communication module and the wireless communication module, receive the identification result issued by the server, or transmit data to the server.
显示屏用于以图像或视频的形式显示识别出的驾驶行为和驾驶场景。显示屏包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD),有机发光二极管(organic light-emitting diode,OLED),有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode,AMOLED),柔性发光二极管(flex light-emitting diode,FLED),Miniled,MicroLed,Micro-oLed,量子点发光二极管(quantum dot light emitting diodes,QLED)等。在一些实施例中,车载终端可以包括一个或多个显示屏。在一个例子中,显示屏还可以用于显示应用程序的界面,显示应用程序的界面中的可视控件。The display screen is used to display the recognized driving behavior and driving scenarios in the form of images or videos. The display includes a display panel. The display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode or an active-matrix organic light-emitting diode (active-matrix organic light). emitting diode, AMOLED), flexible light-emitting diode (flex light-emitting diode, FLED), Miniled, MicroLed, Micro-oLed, quantum dot light-emitting diode (quantum dot light emitting diodes, QLED) and so on. In some embodiments, the vehicle-mounted terminal may include one or more display screens. In one example, the display screen may also be used to display the interface of the application, displaying visual controls in the interface of the application.
音频模块用于将数字音频信息转换成模拟音频信号输出,也用于将模拟音频输入转换为数字音频信号。音频模块还可以用于对音频信号编码和解码。在一些实施例中,音频模块可以设置于处理器中,或将音频模块的部分功能模块设置于处理器中。在一些实施例中,音频模块用于以语音的形式向用户反馈识别结果。扬声器用于向用户反馈外放的声音。The audio module is used to convert digital audio information into analog audio signal output, and also used to convert analog audio input to digital audio signal. The audio module can also be used to encode and decode audio signals. In some embodiments, the audio module may be provided in the processor, or some functional modules of the audio module may be provided in the processor. In some embodiments, the audio module is used to feed back the recognition result to the user in the form of speech. The loudspeaker is used for feedback of the external sound to the user.
耳机接口用于连接有线耳机。耳机接口可以是USB接口,也可以是3.5mm的开放移动电子设备平台(open mobile terminal platform,OMTP)标准接口、美国蜂窝电信工业协会(cellular telecommunications industry association of the USA,CTIA)标准接口。音频蓝牙模块用于连接用户的蓝牙耳机。本申请实施例不对音频蓝牙模块应用蓝牙技术版本做限定,其中的蓝牙芯片可以是应用任何版本蓝牙技术的芯片。The headphone jack is used to connect wired headphones. The headphone interface can be a USB interface, or a 3.5mm open mobile terminal platform (OMTP) standard interface or a cellular telecommunications industry association of the USA (CTIA) standard interface. The audio Bluetooth module is used to connect the user's Bluetooth headset. This embodiment of the present application does not limit the Bluetooth technology version applied to the audio Bluetooth module, and the Bluetooth chip may be a chip applying any version of the Bluetooth technology.
用户可通过耳机接口或音频蓝牙模块,以有线或无线的形式,接收识别结果对应的语音。The user can receive the voice corresponding to the recognition result in wired or wireless form through the headphone interface or the audio Bluetooth module.
基于上述方法实施例,本申请还提供一种计算机存储介质,该计算机存储介质中存储有指令,当指令在计算机上运行时,使得计算机执行如本申请实施例中的一种车辆驾驶场景识别方法。Based on the above method embodiments, the present application further provides a computer storage medium, where instructions are stored in the computer storage medium, and when the instructions are executed on the computer, the computer is made to execute a vehicle driving scene recognition method as in the embodiments of the present application .
基于上述方法实施例,本申请还提供一种包含指令的计算机程序产品,当指令在计算机上运行时,使得计算机执行如本申请实施例中的一种车辆驾驶场景识别方法。Based on the above method embodiments, the present application further provides a computer program product containing instructions, when the instructions are run on a computer, the computer executes a vehicle driving scene recognition method as in the embodiments of the present application.
本申请的实施例中的方法步骤可以通过硬件的方式来实现,也可以由处理器执行软件指令的方式来实现。软件指令可以由相应的软件模块组成,软件模块可以被存放于随机存取存储器(randomaccessmemory,RAM)、闪存、只读存储器(read-onlymemory,ROM)、可编程只读存储器(programmablerom,PROM)、可擦除可编程只读存储器(erasablePROM, EPROM)、电可擦除可编程只读存储器(electricallyEPROM,EEPROM)、寄存器、硬盘、移动硬盘、CD-ROM或者本领域熟知的任何其它形式的存储介质中。一种示性的存储介质耦合至处理器,从而使处理器能够从该存储介质读取信息,且可向该存储介质写入信息。当然,存储介质也可以是处理器的组成部分。处理器和存储介质可以位于ASIC中。The method steps in the embodiments of the present application may be implemented in a hardware manner, or may be implemented in a manner in which a processor executes software instructions. The software instructions may be composed of corresponding software modules, and the software modules may be stored in random access memory (RAM), flash memory, read-only memory (ROM), programmable read-only memory (PROM), Erasable Programmable Read-Only Memory (erasablePROM, EPROM), Electrically Erasable Programmable Read-Only Memory (electrically EPROM, EEPROM), registers, hard disks, removable hard disks, CD-ROMs, or any other form of storage medium known in the art middle. An illustrative storage medium is coupled to the processor, such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium can also be an integral part of the processor. The processor and storage medium may reside in an ASIC.
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者通过所述计算机可读存储介质进行传输。所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质,(例如,软盘、硬盘、磁带)、光介质(例如,DVD)、或者半导体介质(例如固态硬盘(solidstatedisk,SSD))等。In the above-mentioned embodiments, it may be implemented in whole or in part by software, hardware, firmware or any combination thereof. When implemented in software, it can be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, all or part of the processes or functions described in the embodiments of the present application are generated. The computer may be a general purpose computer, special purpose computer, computer network, or other programmable device. The computer instructions may be stored in or transmitted over a computer-readable storage medium. The computer instructions can be sent from one website site, computer, server, or data center to another website site by wire (eg, coaxial cable, fiber optic, digital subscriber line (DSL)) or wireless (eg, infrared, wireless, microwave, etc.) , computer, server or data center. The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device such as a server, a data center, or the like that includes an integration of one or more available media. The usable media may be magnetic media (eg, floppy disks, hard disks, magnetic tapes), optical media (eg, DVDs), or semiconductor media (eg, solid state disks (SSDs)), and the like.
可以理解的是,在本申请的实施例中涉及的各种数字编号仅为描述方便进行的区分,并不用来限制本申请的实施例的范围。It can be understood that, the various numbers and numbers involved in the embodiments of the present application are only for the convenience of description, and are not used to limit the scope of the embodiments of the present application.

Claims (16)

  1. 一种车辆驾驶场景识别方法,其特征在于,所述方法包括:A vehicle driving scene recognition method, characterized in that the method comprises:
    获取第一车辆在当前时段的行驶数据;obtain the driving data of the first vehicle in the current period;
    根据所述行驶数据获得第二车辆的特征序列,所述第二车辆用于指示所述第一车辆周围的车辆,所述特征序列包括:速度序列、所述第二车辆与参考车道线的距离序列以及所述第二车辆与参考车道线的夹角序列;A feature sequence of a second vehicle is obtained according to the driving data, the second vehicle is used to indicate vehicles around the first vehicle, and the feature sequence includes: a speed sequence, a distance between the second vehicle and a reference lane line a sequence and a sequence of angles between the second vehicle and the reference lane line;
    利用动态时间规整算法,获得所述第二车辆的特征序列与预设的驾驶行为基准序列之间的相似度;Using a dynamic time warping algorithm to obtain the similarity between the feature sequence of the second vehicle and a preset driving behavior reference sequence;
    根据所述相似度确定所述第二车辆在所述当前时段的驾驶行为;determining the driving behavior of the second vehicle in the current period according to the similarity;
    根据所述第二车辆在当前时段的驾驶行为,利用预先构建的场景库匹配所述第二车辆在所述当前时段的驾驶场景。According to the driving behavior of the second vehicle in the current period, a pre-built scene library is used to match the driving scene of the second vehicle in the current period.
  2. 根据权利要求1所述的方法,其特征在于,在所述利用动态时间规整算法,获得所述第二车辆的特征序列与驾驶行为基准序列之间的相似度之前,所述方法还包括:The method according to claim 1, characterized in that, before obtaining the similarity between the characteristic sequence of the second vehicle and the driving behavior reference sequence by using a dynamic time warping algorithm, the method further comprises:
    识别所述第一车辆行驶的区域;identifying the area in which the first vehicle travels;
    在数据库中提取所述第一车辆行驶的区域对应的所述预设的驾驶行为基准序列,所述数据库包括不同区域对应的驾驶行为基准序列。The preset driving behavior reference sequence corresponding to the area where the first vehicle travels is extracted from a database, where the database includes driving behavior reference sequences corresponding to different areas.
  3. 根据权利要求1或2所述的方法,其特征在于,在根据所述行驶数据获得第二车辆的特征序列之前,所述方法还包括:The method according to claim 1 or 2, wherein before obtaining the characteristic sequence of the second vehicle according to the driving data, the method further comprises:
    对所述行驶数据做降噪处理和平滑处理。Noise reduction processing and smoothing processing are performed on the driving data.
  4. 根据权利要求1至3中任一项所述的方法,其特征在于,所述参考车道线为所述第一车辆行驶的车道的一个车道线。The method according to any one of claims 1 to 3, wherein the reference lane line is a lane line of a lane in which the first vehicle travels.
  5. 根据权利要求1至4中任一项所述的方法,其特征在于,所述第一车辆行驶的车道为曲线车道,所述方法还包括:The method according to any one of claims 1 to 4, wherein the lane in which the first vehicle travels is a curved lane, and the method further comprises:
    按照所述曲线车道的曲率对所述距离序列进行曲率补偿。The distance sequence is curvature compensated according to the curvature of the curved lane.
  6. 根据权利要求1至5中任一项所述的方法,其特征在于,所述根据所述相似度确定所述第二车辆在所述当前时段的驾驶行为包括:The method according to any one of claims 1 to 5, wherein the determining the driving behavior of the second vehicle in the current period according to the similarity comprises:
    判断所述相似度是否满足预设条件;Judging whether the similarity satisfies a preset condition;
    将满足所述预设条件的相似度对应的驾驶行为基准序列表征的驾驶行为,作为所述第二车辆在所述当前时段的驾驶行为。The driving behavior represented by the driving behavior reference sequence corresponding to the degree of similarity satisfying the preset condition is used as the driving behavior of the second vehicle in the current period.
  7. 根据权利要求1至6中任一项所述的方法,其特征在于,还包括:The method according to any one of claims 1 to 6, further comprising:
    根据不同场景对应的历史行驶图像,获得所述不同场景对应的驾驶行为;obtaining driving behaviors corresponding to the different scenarios according to the historical driving images corresponding to the different scenarios;
    根据所述不同场景及所述不同场景对应的驾驶行为,构建所述场景库。The scene library is constructed according to the different scenes and the driving behaviors corresponding to the different scenes.
  8. 一种车辆驾驶场景识别装置,其特征在于,所述装置包括:A vehicle driving scene recognition device, characterized in that the device comprises:
    数据获取单元,用于获取第一车辆在当前时段的行驶数据;a data acquisition unit for acquiring the driving data of the first vehicle in the current period;
    特征获取单元,用于根据所述行驶数据获得第二车辆的特征序列,所述第二车辆用于指示所述第一车辆周围的车辆,所述特征序列包括:速度序列、所述第二车辆与参考车道线的距离序列和所述第二车辆与参考车道线的夹角序列;a feature acquisition unit, configured to obtain a feature sequence of a second vehicle according to the driving data, the second vehicle is used to indicate vehicles around the first vehicle, and the feature sequence includes: a speed sequence, the second vehicle a sequence of distances from the reference lane line and a sequence of included angles between the second vehicle and the reference lane line;
    相似度计算单元,用于利用动态时间规整算法,获得所述第二车辆的特征序列与预设的多个驾驶行为基准序列之间的相似度;a similarity calculation unit, configured to obtain the similarity between the feature sequence of the second vehicle and a plurality of preset driving behavior reference sequences by using a dynamic time warping algorithm;
    行为识别单元,用于根据所述相似度确定所述第二车辆在所述当前时段的驾驶行为;a behavior recognition unit, configured to determine the driving behavior of the second vehicle in the current period according to the similarity;
    场景匹配单元,用于根据所述第二车辆在当前时段的驾驶行为,利用预先构建的场景库匹配所述第二车辆在所述当前时段的驾驶场景。A scene matching unit, configured to match the driving scene of the second vehicle in the current period by using a pre-built scene library according to the driving behavior of the second vehicle in the current period.
  9. 根据权利要求8所述的装置,其特征在于,所述相似度计算单元还用于:The device according to claim 8, wherein the similarity calculation unit is further configured to:
    识别所述第一车辆行驶的区域;identifying the area in which the first vehicle travels;
    在数据库中提取所述第一车辆行驶的区域对应的所述预设的驾驶行为基准序列,所述数据库包括不同区域对应的驾驶行为基准序列。The preset driving behavior reference sequence corresponding to the area where the first vehicle travels is extracted from a database, where the database includes driving behavior reference sequences corresponding to different areas.
  10. 根据权利要求8或9所述的装置,其特征在于,所述特征获取单元还用于:The device according to claim 8 or 9, wherein the feature acquisition unit is further configured to:
    对所述行驶数据做降噪处理和平滑处理。Noise reduction processing and smoothing processing are performed on the driving data.
  11. 根据权利要求8至10中任一项所述的装置,其特征在于,所述参考车道线为所述第一车辆行驶的车道的一个车道线。The device according to any one of claims 8 to 10, wherein the reference lane line is a lane line of a lane in which the first vehicle travels.
  12. 根据权利要求8至11中任一项所述的装置,其特征在于,所述第一车辆行驶的车道为曲线车道,所述装置还包括:The device according to any one of claims 8 to 11, wherein the lane in which the first vehicle travels is a curved lane, and the device further comprises:
    曲率补偿单元,用于按照所述曲线车道的曲率对所述距离序列进行曲率补偿。A curvature compensation unit, configured to perform curvature compensation on the distance sequence according to the curvature of the curved lane.
  13. 根据权利要求8至12中任一项所述的装置,其特征在于,所述行为识别单元具体用于:The device according to any one of claims 8 to 12, wherein the behavior recognition unit is specifically configured to:
    判断所述相似度是否满足预设条件;Judging whether the similarity satisfies a preset condition;
    将满足所述预设条件的相似度对应的驾驶行为基准序列表征的驾驶行为,作为所述第二车辆在所述当前时段的驾驶行为。The driving behavior represented by the driving behavior reference sequence corresponding to the degree of similarity satisfying the preset condition is used as the driving behavior of the second vehicle in the current period.
  14. 根据权利要求8至13中任一项所述的装置,其特征在于,还包括场景库构建单元, 用于:The device according to any one of claims 8 to 13, further comprising a scene library construction unit for:
    根据不同场景对应的历史行驶图像,获得所述不同场景对应的驾驶行为;obtaining driving behaviors corresponding to the different scenarios according to the historical driving images corresponding to the different scenarios;
    根据所述不同场景及所述不同场景对应的驾驶行为,构建所述场景库。The scene library is constructed according to the different scenes and the driving behaviors corresponding to the different scenes.
  15. 一种计算机存储介质,所述计算机存储介质中存储有指令,当所述指令在计算机上运行时,使得计算机执行如权利要求1-7任一项所述的方法。A computer storage medium in which instructions are stored, and when the instructions are executed on a computer, cause the computer to execute the method according to any one of claims 1-7.
  16. 一种包含指令的计算机程序产品,当所述指令在计算机上运行时,使得所述计算机执行如权利要求1-7任一项所述的方法。A computer program product comprising instructions which, when executed on a computer, cause the computer to perform the method of any of claims 1-7.
PCT/CN2021/070939 2021-01-08 2021-01-08 Autonomous driving scenario identifying method and apparatus WO2022147785A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2021/070939 WO2022147785A1 (en) 2021-01-08 2021-01-08 Autonomous driving scenario identifying method and apparatus
CN202180000124.8A CN112805724B (en) 2021-01-08 2021-01-08 Vehicle driving scene recognition method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/070939 WO2022147785A1 (en) 2021-01-08 2021-01-08 Autonomous driving scenario identifying method and apparatus

Publications (1)

Publication Number Publication Date
WO2022147785A1 true WO2022147785A1 (en) 2022-07-14

Family

ID=75811472

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/070939 WO2022147785A1 (en) 2021-01-08 2021-01-08 Autonomous driving scenario identifying method and apparatus

Country Status (2)

Country Link
CN (1) CN112805724B (en)
WO (1) WO2022147785A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114047003B (en) * 2021-12-22 2023-07-14 吉林大学 Human-vehicle difference data trigger record control method based on dynamic time warping algorithm
CN115293301B (en) * 2022-10-09 2023-01-31 腾讯科技(深圳)有限公司 Estimation method and device for lane change direction of vehicle and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103996298A (en) * 2014-06-09 2014-08-20 百度在线网络技术(北京)有限公司 Driving behavior monitoring method and device
CN108229304A (en) * 2017-11-17 2018-06-29 清华大学 A kind of driving behavior recognition methods based on Clustering of systematization
CN109155107A (en) * 2016-03-22 2019-01-04 德尔福技术有限公司 Sensory perceptual system for automated vehicle scene perception
CN109878530A (en) * 2019-02-28 2019-06-14 中国第一汽车股份有限公司 Identify the method and system of the lateral driving cycle of vehicle
WO2020061603A1 (en) * 2018-09-24 2020-04-02 Avl List Gmbh Method and device for analyzing a sensor data stream and method for guiding a vehicle
US20200133269A1 (en) * 2018-10-30 2020-04-30 The Regents Of The University Of Michigan Unsurpervised classification of encountering scenarios using connected vehicle datasets

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110728842B (en) * 2019-10-23 2021-10-08 江苏智通交通科技有限公司 Abnormal driving early warning method based on reasonable driving range of vehicles at intersection

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103996298A (en) * 2014-06-09 2014-08-20 百度在线网络技术(北京)有限公司 Driving behavior monitoring method and device
CN109155107A (en) * 2016-03-22 2019-01-04 德尔福技术有限公司 Sensory perceptual system for automated vehicle scene perception
CN108229304A (en) * 2017-11-17 2018-06-29 清华大学 A kind of driving behavior recognition methods based on Clustering of systematization
WO2020061603A1 (en) * 2018-09-24 2020-04-02 Avl List Gmbh Method and device for analyzing a sensor data stream and method for guiding a vehicle
US20200133269A1 (en) * 2018-10-30 2020-04-30 The Regents Of The University Of Michigan Unsurpervised classification of encountering scenarios using connected vehicle datasets
CN109878530A (en) * 2019-02-28 2019-06-14 中国第一汽车股份有限公司 Identify the method and system of the lateral driving cycle of vehicle

Also Published As

Publication number Publication date
CN112805724B (en) 2022-05-13
CN112805724A (en) 2021-05-14

Similar Documents

Publication Publication Date Title
WO2019006743A1 (en) Method and device for controlling travel of vehicle
JP2020115322A (en) System and method for vehicle position estimation
CN109828571A (en) Automatic driving vehicle, method and apparatus based on V2X
WO2022147785A1 (en) Autonomous driving scenario identifying method and apparatus
CN113064153B (en) Method and device for determining target object tracking threshold
JP6048246B2 (en) Inter-vehicle distance measuring device and inter-vehicle distance measuring method
CN111754798A (en) Method for realizing detection of vehicle and surrounding obstacles by fusing roadside laser radar and video
US20220207997A1 (en) Traffic Flow Estimation Apparatus, Traffic Flow Estimation Method, Traffic Flow Estimation Program, And Storage Medium Storing Traffic Flow Estimation Program
CN111516690A (en) Control method and device of intelligent automobile and storage medium
CN114935334B (en) Construction method and device of lane topological relation, vehicle, medium and chip
JP2024026588A (en) Image recognition device and image recognition method
US20210048819A1 (en) Apparatus and method for determining junction
CN111650604A (en) Method for realizing accurate detection of self-vehicle and peripheral obstacles by using accurate positioning
CN115170630B (en) Map generation method, map generation device, electronic equipment, vehicle and storage medium
CN114842455B (en) Obstacle detection method, device, equipment, medium, chip and vehicle
CN113614732A (en) Information processing apparatus and information processing method
CN114771539B (en) Vehicle lane change decision method and device, storage medium and vehicle
CN115223122A (en) Method and device for determining three-dimensional information of object, vehicle and storage medium
CN114973178A (en) Model training method, object recognition method, device, vehicle and storage medium
JP2011214961A (en) Reference pattern information generating device, method, program and general vehicle position specifying device
CN111161540A (en) Driving guide method and device of intelligent automobile, terminal and storage medium
US20180362050A1 (en) Mobile object management apparatus, mobile object management method, and storage medium
CN115115822B (en) Vehicle-end image processing method and device, vehicle, storage medium and chip
CN111932883B (en) Method for guiding unmanned driving by utilizing broadcast communication of road side equipment
CN114821511B (en) Rod body detection method and device, vehicle, storage medium and chip

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21916840

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21916840

Country of ref document: EP

Kind code of ref document: A1