WO2021057134A1 - Scenario identification method and computing device - Google Patents

Scenario identification method and computing device

Info

Publication number
WO2021057134A1
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
state
state sequence
sequence
driving
Prior art date
Application number
PCT/CN2020/097886
Other languages
French (fr)
Chinese (zh)
Inventor
李登宇
Original Assignee
Huawei Technologies Co., Ltd.
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2021057134A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle

Definitions

  • This application relates to the field of autonomous driving, and more specifically, to a method and computing device for scene recognition.
  • The industry usually uses simulation testing to verify the functions of an autonomous driving system.
  • By using simulation software, the real traffic environment is generated or reproduced in simulated form, so as to test whether the automatic driving system can correctly identify the surrounding environment, respond to it in a timely and accurate manner, and take appropriate driving behaviors.
  • the data required to build a simulation scene in the simulation software consists of high-precision map data and simulated traffic flow data.
  • The high-precision map data provides roads, static traffic information (for example, traffic lights and road signs), and static object models (for example, buildings and trees), while the simulated traffic flow data provides dynamic traffic flow information (for example, traffic participants such as vehicles and pedestrians).
  • By loading and running this information, the simulation software projects the real world into the virtual world and reproduces real autonomous driving scenes in the simulation software.
  • Real vehicle driving data and road information related to vehicle driving data are one of the main sources of data required to build a simulation scene.
  • Road information related to vehicle driving data includes, for example, lane line information, traffic sign information, traffic light information, and static object information.
  • Vehicle driving data is restored to simulated traffic flow data, so that the vehicle driving data and the road information related to it are restored to the data needed to build a simulation scene.
  • the driving scene needs to be recognized first.
  • the present application provides a method and computing device for scene recognition, which can automatically recognize a driving scene according to the driving data of a vehicle.
  • In a first aspect, a method for scene recognition is provided, including: determining a first state sequence according to driving data of a vehicle, the first state sequence representing the first state of the vehicle at different times; detecting the first state sequence using a recognition rule, the recognition rule being determined according to a target scene; and determining, according to the detection result, whether the driving scene corresponding to the driving data includes the target scene.
  • In this way, the first state sequence is acquired from the driving data of the vehicle, the first state sequence is detected using the recognition rule determined according to the target scene to be recognized, and finally whether the driving scene corresponding to the driving data includes the target scene is determined according to the detection result, thereby automatically identifying the driving scene according to the driving data of the vehicle.
  • the determining the first state sequence according to the driving data of the vehicle includes: determining the first state sequence according to the driving data of the vehicle and the target scene.
  • In this way, the first state sequence is determined according to two factors, the driving data of the vehicle and the target scene to be recognized, so that the determined first state sequence better matches the target scene, thereby improving the efficiency of recognizing the target scene according to the first state sequence.
  • The determining, according to the detection result, whether the driving scene corresponding to the driving data includes the target scene includes: if the first state sequence includes a first subsequence, determining that the driving scene corresponding to the driving data includes the target scene; or, if the first state sequence does not include the first subsequence, determining that the driving scene corresponding to the driving data does not include the target scene, where the first subsequence is a subsequence that satisfies the recognition rule.
  • In this way, the first subsequence is determined according to the recognition rule, and whether the first state sequence satisfies the recognition rule is determined by detecting whether the first state sequence includes the first subsequence, so as to determine whether the driving scene corresponding to the driving data includes the target scene, thereby automatically recognizing the driving scene according to the driving data of the vehicle.
  • The determining the first state sequence according to the driving data of the vehicle and the target scene includes: determining a second state sequence and a third state sequence according to the driving data of the vehicle and the target scene, the second state sequence representing the second state of the vehicle at different times, and the third state sequence representing the third state of the vehicle at different times; and generating the first state sequence according to the second state sequence and the third state sequence.
  • In this way, at least two state sequences can be determined first, and then the first state sequence is generated according to the at least two state sequences.
  • the second state sequence and the third state sequence may be state sequences that describe different information.
  • For example, if the target scene is a left turn at a traffic light intersection, the second state sequence may describe whether the vehicle is within the traffic light intersection at different times, and the third state sequence may describe the turning state of the vehicle at different times.
  • The first state sequence is an m×n matrix, where the element in the i-th row and j-th column represents the first state of the vehicle with index i at time j, m is an integer greater than or equal to 1, n is an integer greater than or equal to 2, i is an integer greater than or equal to 1 and less than or equal to m, and j is an integer greater than or equal to 1 and less than or equal to n.
  • If the driving scene corresponding to the driving data includes the target scene, the method further includes: determining a second subsequence in a fourth state sequence according to the time corresponding to the first subsequence, the fourth state sequence representing the associated state of the vehicle and being determined according to the driving data of the vehicle; and determining the complexity of the target scene according to the second subsequence.
  • The determining the complexity of the target scene according to the second subsequence includes: determining the complexity of the associated state of the vehicle according to the second subsequence; and performing a weighted operation on the complexity of the associated state of the vehicle to determine the complexity of the target scene.
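As an illustrative sketch of the weighted operation described above (the state names, scores, and weights are assumptions, not values from the application), the complexity of the target scene can be computed as a weighted sum of the complexities of the associated states:

```python
# Hypothetical sketch: each associated state of the vehicle (e.g. number of
# surrounding vehicles, road curvature, weather) is assumed to have already
# been assigned a complexity score; the target-scene complexity is their
# weighted sum. Scores and weights below are purely illustrative.
def scene_complexity(state_scores, weights):
    assert len(state_scores) == len(weights)
    return sum(s * w for s, w in zip(state_scores, weights))

scores = [0.8, 0.3, 0.5]    # complexities of three assumed associated states
weights = [0.5, 0.2, 0.3]   # relative importance of each state
value = scene_complexity(scores, weights)  # ≈ 0.61
```

The weights would in practice reflect how strongly each associated state contributes to the difficulty of the scene.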
  • The method further includes: determining a third subsequence in a fifth state sequence according to the time corresponding to the first subsequence, the fifth state sequence representing the position information of the vehicle at different times and being determined according to the driving data of the vehicle. The determining, according to the detection result, whether the driving scene corresponding to the driving data includes the target scene includes: determining, according to the detection result and the third subsequence, whether the driving scene corresponding to the driving data includes the target scene.
  • In a second aspect, the present application provides a computing device, including: a determining module configured to determine a first state sequence according to the driving data of the vehicle, the first state sequence representing the first state of the vehicle at different times; and a processing module configured to detect the first state sequence using a recognition rule, the recognition rule being determined according to the target scene; the determining module is further configured to determine, according to the detection result, whether the driving scene corresponding to the driving data includes the target scene.
  • When the first state sequence is determined according to the driving data of the vehicle, the determining module is specifically configured to determine the first state sequence according to the driving data of the vehicle and the target scene.
  • When determining, according to the detection result, whether the driving scene corresponding to the driving data includes the target scene, the determining module is specifically configured to: if the first state sequence includes a first subsequence, determine that the driving scene corresponding to the driving data includes the target scene; or, if the first state sequence does not include the first subsequence, determine that the driving scene corresponding to the driving data does not include the target scene, where the first subsequence is a subsequence that satisfies the recognition rule.
  • When the first state sequence is determined according to the driving data of the vehicle and the target scene, the determining module is specifically configured to: determine a second state sequence and a third state sequence according to the driving data of the vehicle and the target scene, the second state sequence representing the second state of the vehicle at different times, and the third state sequence representing the third state of the vehicle at different times; and generate the first state sequence according to the second state sequence and the third state sequence.
  • The first state sequence is an m×n matrix, where the element in the i-th row and j-th column represents the first state of the vehicle with index i at time j, m is an integer greater than or equal to 1, n is an integer greater than or equal to 2, i is an integer greater than or equal to 1 and less than or equal to m, and j is an integer greater than or equal to 1 and less than or equal to n.
  • The determining module is further configured to: determine a second subsequence in a fourth state sequence according to the time corresponding to the first subsequence, the fourth state sequence representing the associated state of the vehicle and being determined according to the driving data of the vehicle; and determine the complexity of the target scene according to the second subsequence.
  • The determining module is specifically configured to: determine the complexity of the associated state of the vehicle according to the second subsequence; and perform a weighted operation on the complexity of the associated state of the vehicle to determine the complexity of the target scene.
  • The determining module is further configured to: determine a third subsequence in a fifth state sequence according to the time corresponding to the first subsequence, the fifth state sequence representing the position information of the vehicle at different times and being determined according to the driving data of the vehicle. When determining, according to the detection result, whether the driving scene corresponding to the driving data includes the target scene, the determining module is specifically configured to determine, according to the detection result and the third subsequence, whether the driving scene corresponding to the driving data includes the target scene.
  • In a third aspect, a computing device is provided, including a processor and a memory. The memory is used to store computer-executable instructions, and the processor executes the computer-executable instructions in the memory, so that the computing device performs the method steps in the first aspect and any one of the possible implementations of the first aspect.
  • In a fourth aspect, a non-transitory readable storage medium is provided, including program instructions. When the program instructions are executed by a computing device, the computing device performs the method in the first aspect and any one of the possible implementations of the first aspect.
  • In a fifth aspect, a computer program product is provided, including program instructions. When the program instructions are executed by a computing device, the computing device performs the method in the first aspect and any one of the possible implementations of the first aspect.
  • FIG. 1 is a schematic block diagram of a scene recognition system provided by an embodiment of the present application
  • FIG. 2 is a schematic flowchart of a method for scene recognition provided by an embodiment of the present application
  • FIG. 3 is a schematic diagram of the driving route of the vehicle in scene #2;
  • FIG. 4 is a schematic diagram of the driving route of the vehicle in scene #4;
  • FIG. 5 is a schematic structural diagram of a computing device 500 provided by an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of a computing device 600 provided by an embodiment of the present application.
  • Since the functions of the automatic driving system correspond to driving scenes, the corresponding driving scene needs to be used when verifying a function of the automatic driving system.
  • For example, the automatic driving system provides the Autonomous Emergency Braking (AEB) function.
  • In this way, the driving data of the vehicle and the road information related to the driving data are used to build a simulation scene, so that the function of the automatic driving system can be verified in the simulation scene in the simulation software.
  • Such a method collects road data, analyzes the collected data, and uses semi-automatic labeling to recognize scenes. For example, most scenes can only be recognized by manually observing video data, so the recognition of driving scenes cannot be fully automated.
  • The video data here and below may refer to video data acquired by cameras installed on the vehicle while the vehicle is driving.
  • the present application provides a method for scene recognition, which can automatically recognize the driving scene according to the driving data of the vehicle.
  • the following describes in detail the method for scene recognition provided in the present application with reference to FIGS. 1 to 4.
  • FIG. 1 is a schematic block diagram of a scene recognition system 100 provided by the present application.
  • the system 100 may include a collection device 101 and a computing device 102.
  • the collection device 101 is mainly responsible for collection functions.
  • the collection device 101 may be a vehicle or an urban traffic monitoring device, and the vehicle here may be a vehicle equipped with an automatic driving system.
  • Road collection data refers to the data obtained during the driving of the vehicle, and urban traffic flow monitoring data refers to the road traffic data obtained by the urban traffic monitoring device.
  • sensors can be installed on the vehicle.
  • This application does not specifically limit the sensors installed on the vehicle, which may include but are not limited to: several cameras, at least one radar, at least one positioning system, and at least one inertial measurement unit (IMU).
  • At least one camera can be deployed around the vehicle to collect environmental parameters around the vehicle. For example, cameras may be installed on the front and rear bumpers, side-view mirrors, and windshield of the vehicle.
  • the radar may include at least one of ultrasonic radar, laser radar, and millimeter wave radar.
  • The radar can measure parameters such as distance and speed. The radar can also use radio signals to sense objects in the surrounding environment of the vehicle. Optionally, in some embodiments, in addition to sensing an object, the radar can also be used to sense the heading of the object.
  • The positioning system may be the Global Positioning System (GPS), the BeiDou system, or another positioning system, which is used to receive satellite signals and locate the current position of the vehicle.
  • the IMU can sense the position and orientation changes of the vehicle based on the inertial acceleration.
  • the IMU may be a combination of an accelerometer and a gyroscope, and is used to measure the angular velocity and acceleration of the vehicle.
  • the collection device 101 and the computing device 102 can communicate with each other through a network or storage medium.
  • the collection device 101 can transmit road collection data and/or urban traffic flow monitoring data to the computing device 102 through a transmission method such as a network or storage medium.
  • the computing device 102 recognizes the driving scene based on road acquisition data and/or urban traffic flow monitoring data.
  • Road collection data can include, but is not limited to: data from sensors such as cameras, millimeter-wave radar, lidar, ultrasonic radar, GPS, and IMU; high-precision map data; algorithm output data; vehicle-to-everything (V2X) data; vehicle control data; and other data that can be collected.
  • Urban traffic flow monitoring data can include, but is not limited to: vehicle trajectory information and vehicle information in the traffic flow.
  • Fig. 2 is a schematic flowchart of a method for scene recognition provided by the present application. The method includes steps 210-230, and steps 210-230 are described in detail below.
  • Step 210 Determine the state sequence #1 (ie, an example of the first state sequence) according to the driving data of the vehicle.
  • the state sequence #1 represents the state #1 of the vehicle at different times (ie, an example of the first state).
  • the driving data of the vehicle here can be road collection data and/or urban traffic flow monitoring data.
  • After the computing device 102 obtains the driving data of the vehicle, it can determine the state #1 of the vehicle at different times according to the driving data of the vehicle.
  • For example, the state #1 may be the driving speed of the vehicle, and the driving speeds of the vehicle at different times constitute the state sequence #1.
  • the driving data of the vehicle may include the positioning information of the vehicle obtained by the positioning system on the vehicle, and the computing device 102 may calculate the driving speed of the vehicle according to the positioning information of the vehicle.
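A minimal sketch of computing the driving speed from positioning information, assuming each positioning fix is a (timestamp, latitude, longitude) tuple and using the haversine distance between consecutive fixes (the function names and fix format are assumptions, not from the application):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points (degrees)."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def speed_kmh(fix_a, fix_b):
    """Estimate speed from two consecutive fixes (timestamp_s, lat_deg, lon_deg)."""
    dt = fix_b[0] - fix_a[0]
    dist = haversine_m(fix_a[1], fix_a[2], fix_b[1], fix_b[2])
    return dist / dt * 3.6  # m/s -> km/h

# Two illustrative fixes 1 s apart; the small latitude change corresponds
# to roughly 14.5 m, i.e. a speed on the order of 50 km/h.
v = speed_kmh((0.0, 31.2304, 121.4737), (1.0, 31.23053, 121.4737))
```

In practice the raw speeds would be smoothed over several fixes before being used to build a state sequence.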
  • Step 220 Use the recognition rule to detect the state sequence #1, and the recognition rule is determined according to the target scene.
  • the computing device 102 may determine a recognition rule according to the target scene to be recognized, and use the recognition rule to detect the state sequence #1.
  • Step 230 According to the detection result, it is determined whether the driving scene corresponding to the driving data includes the target scene.
  • the computing device 102 may determine whether the target scene is included in the driving scene corresponding to the driving data according to the detection result obtained after detecting the state sequence #1 using the recognition rule.
  • step 210 can also be replaced with: determining the state sequence #1 according to the driving data of the vehicle and the target scene.
  • When the computing device 102 acquires the state #1 of the vehicle at different times, it can do so in combination with the target scene.
  • For example, the target scene may be a scene in which a vehicle ahead on the left cuts in. The computing device can obtain the positions of other vehicles relative to the own vehicle from the driving data of the other vehicles and the driving data of the own vehicle, use the obtained relative positions as the state #1 of the other vehicles at different times, and determine the state sequence #1 according to the states #1 of the other vehicles at different times.
  • Step 230 may be specifically implemented in the following manner: if the state sequence #1 includes a subsequence #1, it is determined that the driving scene corresponding to the driving data includes the target scene; otherwise, it is determined that it does not. Subsequence #1 is a subsequence that satisfies the recognition rule.
  • For example, if the target scene is a scene in which the driving speed of the vehicle is greater than or equal to 30 km/h, and the driving speed of the vehicle at different times is recorded in the state sequence #1, then the recognition rule corresponding to the target scene can be that the state sequence #1 includes continuously occurring driving speeds greater than or equal to 30 km/h; that is, subsequence #1 consists of continuously occurring driving speeds greater than or equal to 30 km/h. If the computing device 102 detects continuously occurring driving speeds greater than or equal to 30 km/h in the state sequence #1, it can determine that the driving scene corresponding to the driving data includes the target scene; otherwise, it determines that the driving scene corresponding to the driving data does not include the target scene.
  • An identification can also be used to indicate the speed of the vehicle at different times. For example, "1" means that the speed of the vehicle at a certain moment is greater than or equal to 30 km/h, and "0" means other conditions, for example, that the speed of the vehicle at a certain moment is less than 30 km/h or that the vehicle is parked at that moment.
  • In this case, the recognition rule corresponding to the target scene can be that the state sequence #1 includes consecutive "1"s; that is, subsequence #1 can be consecutively occurring "1"s. If the computing device 102 detects consecutively occurring 1s in the state sequence #1, it can determine that the driving scene corresponding to the driving data includes the target scene; otherwise, it determines that the driving scene corresponding to the driving data does not include the target scene.
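The recognition rule above can be sketched as follows: binarize the recorded speeds against the 30 km/h threshold to form state sequence #1, then scan for consecutively occurring "1"s (the minimum run length and the example speeds are assumptions for illustration):

```python
# Minimal sketch of the rule: a run of at least `min_len` consecutive 1s
# in the state sequence counts as subsequence #1.
def has_consecutive_ones(seq, min_len=2):
    run = 0
    for s in seq:
        run = run + 1 if s == 1 else 0
        if run >= min_len:
            return True
    return False

speeds = [12, 28, 33, 41, 35, 9]                 # km/h, illustrative values
states = [1 if v >= 30 else 0 for v in speeds]   # state sequence #1
print(states)                     # [0, 0, 1, 1, 1, 0]
print(has_consecutive_ones(states))  # True
```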
  • the state sequence can be stored in the form of a matrix (hereinafter referred to as “state matrix” for short).
  • The state matrix is a matrix with m rows and n columns (i.e., m×n), where the element in the i-th row and j-th column can represent the state of the vehicle with index i (hereinafter referred to as "vehicle #i") at time j, m is an integer greater than or equal to 1, n is an integer greater than or equal to 2, i is an integer greater than or equal to 1 and less than or equal to m, and j is an integer greater than or equal to 1 and less than or equal to n.
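As an illustrative sketch (names, sizes, and values assumed, not from the application), such a state matrix can be represented with NumPy, where element [i, j] holds the state of vehicle #i at time j:

```python
import numpy as np

# Hypothetical state matrix for m vehicles over n recorded moments.
# 1 means the tracked condition holds for that vehicle at that time, 0 otherwise.
m, n = 3, 6
state_matrix = np.zeros((m, n), dtype=int)

# Mark vehicle 0 as satisfying the condition at times 3-5 (1-based),
# i.e. columns 2..4 in 0-based indexing.
state_matrix[0, 2:5] = 1
print(state_matrix[0])   # [0 0 1 1 1 0]
```

Storing the sequences as a matrix lets one recognition rule be applied to all recorded vehicles at once by scanning rows.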
  • Taking the state matrix as an example, the following describes the scene recognition method provided in this application with reference to several specific scenes.
  • Scene #1: a scene in which the vehicle drives at a speed greater than or equal to 50 km/h.
  • the driving speed of the own vehicle may be obtained first.
  • the computing device 102 may calculate the driving speed of the own vehicle according to the positioning information of the vehicle obtained by the positioning system on the own vehicle.
  • the state matrix #1 (corresponding to the state sequence #1) may be a matrix with a size of 1 ⁇ n, and n represents the total recorded time length for the own vehicle in the state matrix #1.
  • The element e1,j of the state matrix #1 indicates the driving speed of the own vehicle at time j. For example, "1" means that the driving speed of the own vehicle at time j is greater than or equal to 50 km/h, and "0" means other conditions. The value of e1,j can be expressed as follows:
  • e1,j = 1 if the driving speed of the own vehicle at time j is greater than or equal to 50 km/h; e1,j = 0 otherwise.
  • The other conditions above may mean that the driving speed of the own vehicle at time j is less than 50 km/h or that the own vehicle is parked at time j.
  • For scene #1, the computing device 102 determines the corresponding recognition rule as: identifying all consecutive "1"s in the state matrix #1. The computing device 102 then uses the recognition rule to detect the state matrix #1.
  • state matrix #1 can be:
  • The computing device 102 can identify 3 occurrences of scene #1 from the state matrix #1, where the corresponding times are: time #5-time #15, time #18-time #40, and time #44-time #47. The computing device 102 may associate the driving data of the own vehicle corresponding to these time ranges with scene #1, so that in the simulation software, the driving data of the own vehicle corresponding to time #5-time #15, time #18-time #40, and time #44-time #47 and the road information related to the driving data are used to build scene #1, and the function of the autonomous driving system of the vehicle in scene #1 is verified in the simulation software.
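The extraction of the three occurrences of scene #1 can be sketched as follows; the example row of state matrix #1 is hypothetical, constructed so that its runs of 1s cover time #5-time #15, time #18-time #40, and time #44-time #47 (times 1-based as in the text):

```python
# Return the (start, end) time spans (1-based, inclusive) of all runs of 1s
# in one row of the state matrix.
def one_runs(row):
    runs, start = [], None
    for t, v in enumerate(row, start=1):
        if v == 1 and start is None:
            start = t
        elif v != 1 and start is not None:
            runs.append((start, t - 1))
            start = None
    if start is not None:
        runs.append((start, len(row)))
    return runs

# Hypothetical 50-moment row with 1s exactly at the spans from the text.
row = [0] * 50
for a, b in [(5, 15), (18, 40), (44, 47)]:
    row[a - 1:b] = [1] * (b - a + 1)
print(one_runs(row))  # [(5, 15), (18, 40), (44, 47)]
```

Each returned span then indexes the slice of driving data to associate with scene #1.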
  • the computing device 102 needs to determine the location of the own vehicle and the location of other vehicles, so as to determine the location of other vehicles relative to the own vehicle.
  • The location of the own vehicle can be obtained based on the location information acquired by the positioning system or IMU on the own vehicle, and the locations of other vehicles can be obtained from the video data or from the scanning information of the radar.
  • The computing device 102 can calculate the position of the other vehicles relative to the own vehicle at each time according to the position of the own vehicle and the positions of the other vehicles at each time, and generate a state matrix #1 of size m×n according to the positions of the other vehicles relative to the own vehicle at each time, where m represents the total number of other vehicles recorded in the state matrix #1, and n represents the total recorded time length for each vehicle in the state matrix #1.
  • The element pi,j of the state matrix #1 represents the position of vehicle #i (that is, an example of the other vehicles) relative to the own vehicle at time j. For example, "1" indicates that vehicle #i is located to the front left of the own vehicle at time j, "2" indicates that vehicle #i is located directly ahead of the own vehicle at time j, and "3" indicates that vehicle #i is located to the front right of the own vehicle at time j. For brevity, the remaining values are not listed one by one here; the values of pi,j can be expressed as follows:
  • For scene #2, the recognition rule determined by the computing device 102 is: an element of the state matrix #1 changes from "1" to "2". The computing device 102 then uses the recognition rule to detect the state matrix #1.
  • state matrix #1 can be:
  • The computing device 102 can recognize scene #2 from the state matrix #1, and may associate the driving data of the own vehicle and the other vehicles corresponding to time #6-time #7 with scene #2, so that in the simulation software, the driving data of the own vehicle, the road information related to that driving data, the driving data of the other vehicles, and the road information related to the driving data of the other vehicles corresponding to time #6-time #7 are used to build scene #2, and the function of the vehicle's automatic driving system in scene #2 (for example, the deceleration function) is verified in the simulation software.
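The scene #2 rule can be sketched as detecting an element of a row of state matrix #1 changing from "1" (front left) to "2" (directly ahead) between adjacent times; the example row of relative positions is hypothetical, arranged so the cut-in falls at time #6-time #7 as in the text:

```python
# Return the 1-based times j at which the relative position changes from
# 1 (front left) at time j to 2 (directly ahead) at time j+1, i.e. a cut-in.
def find_cut_ins(row):
    return [j + 1 for j in range(len(row) - 1) if row[j] == 1 and row[j + 1] == 2]

# Hypothetical relative positions of vehicle #i over times #1..#8.
row = [3, 1, 1, 1, 1, 1, 2, 2]
print(find_cut_ins(row))  # [6]  -> cut-in between time #6 and time #7
```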
  • The computing device 102 may first determine whether the own vehicle is within a traffic light intersection at different times (that is, an example of the second state), and determine the steering state of the vehicle at different times (that is, an example of the third state).
  • The computing device 102 can generate state sequence #2 (that is, an example of the second state sequence) according to the multiple states of whether the vehicle is within the traffic light intersection at different times, generate state sequence #3 (that is, an example of the third state sequence) according to the multiple steering states of the vehicle at different times, and generate state sequence #1 according to state sequence #2 and state sequence #3.
  • the state sequence #2 and the state sequence #3 are both stored in the form of a matrix, that is, the state sequence #2 corresponds to the state matrix #2, and the state sequence #3 corresponds to the state matrix #3.
  • the computing device 102 generates a state matrix #2 of size m×n according to the multiple states of whether the own vehicle is within a traffic light intersection at different times, where m represents the total number of vehicles recorded in the state matrix #2, and n represents the total number of time instants recorded in the state matrix #2 for each vehicle.
  • the element r i,j of the state matrix #2 indicates whether vehicle #i (that is, an example of the own vehicle) is within a traffic light intersection at time j. For example, "1" indicates that vehicle #i is within a traffic light intersection at time j, and "0" indicates that vehicle #i is in another state at time j. The value of r i,j can then be expressed as follows:
  • the computing device 102 generates a state matrix #3 of size m×n according to the multiple steering states of the vehicle at different times, where m represents the total number of vehicles recorded in the state matrix #3, and n represents the total number of time instants recorded in the state matrix #3 for each vehicle.
  • the element s i,j of the state matrix #3 represents the steering state of vehicle #i at time j. For example, "1" indicates that vehicle #i is turning left at time j, "2" indicates that vehicle #i is turning right at time j, and "0" indicates that vehicle #i is in another state at time j, for example, that vehicle #i did not perform a steering operation at time j. The value of s i,j can then be expressed as follows:
  • the computing device 102 may generate the state matrix #1 according to the state matrix #2 and the state matrix #3.
  • the element t i,j of the state matrix #1 represents the steering state of vehicle #i at a traffic light intersection at time j. For example, "1" indicates that vehicle #i turns left at a traffic light intersection at time j, "2" indicates that vehicle #i turns right at a traffic light intersection at time j, and "0" indicates that vehicle #i is in another state at time j. The value of t i,j can then be expressed as follows:
  • the recognition rule determined by the computing device 102 for scene #3 is: detect whether the state matrix #1 includes the element "1". When the state matrix #1 includes "1", the computing device 102 can recognize scene #3 from the state matrix, and the computing device 102 can associate the driving data of the host vehicle at the time corresponding to that element with scene #3, so that scene #3 can be built in the simulation software from the driving data of the host vehicle at the time corresponding to that element and the road information related to that driving data, and the function of the vehicle's automatic driving system under scene #3 can be verified in the simulation software.
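The generation of state matrix #1 from state matrices #2 and #3, and the scene #3 rule, can be sketched as follows (a hypothetical illustration with invented names and data; element-wise multiplication realizes the encoding here because r i,j is either 0 or 1):

```python
# Hypothetical sketch: combine state matrix #2 (r[i][j] == 1 when vehicle #i
# is within a traffic light intersection at time j) with state matrix #3
# (s[i][j] == 1 for a left turn, 2 for a right turn, 0 otherwise) into state
# matrix #1, where t[i][j] == 1 means "left turn at a traffic light
# intersection".

def combine(r, s):
    """Element-wise product of two m x n state matrices."""
    return [[ri * si for ri, si in zip(r_row, s_row)]
            for r_row, s_row in zip(r, s)]

def detect_scene3(t_matrix):
    """Scene #3 rule: state matrix #1 includes the element 1."""
    return [(i, j) for i, row in enumerate(t_matrix)
            for j, v in enumerate(row) if v == 1]

r = [[0, 1, 1, 0]]  # within the intersection at times 1-2
s = [[0, 1, 0, 1]]  # left turn at times 1 and 3
t = combine(r, s)
print(t)                 # [[0, 1, 0, 0]]
print(detect_scene3(t))  # [(0, 1)]: vehicle #0 turns left in the intersection at time 1
```

The left turn at time 3 is discarded because it does not happen inside the intersection, which is exactly the filtering the combination is meant to achieve.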
  • the above takes state sequences stored in the form of a matrix as an example to introduce scenes #1 to #3 exemplarily, but this application is not limited thereto.
  • the state sequence can also be stored in other forms, for example, in the form of a list. Moreover, any other storage form that can reflect the state of the vehicle at different times shall fall within the protection scope of this application.
  • the recognition rules in this application can also be described by regular expressions.
  • the recognition rule in scene #1 can be described by the regular expression "1+", where "1+" represents one or more consecutive "1"s; the recognition rule in scene #2 can be described by the regular expression "12", where "12" represents a change from "1" to "2".
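A minimal sketch of applying such regular-expression rules (hypothetical; it assumes each state fits in a single character once the state row is serialized to a string):

```python
# Hypothetical sketch: serialize a state row to a string, then apply the
# recognition rules "1+" (scene #1) and "12" (scene #2) as regular expressions.
import re

def row_to_string(row):
    return "".join(str(v) for v in row)

speed_states = row_to_string([0, 1, 1, 1, 0])  # scene #1: 1 = speed >= threshold
positions    = row_to_string([1, 1, 2, 2, 3])  # scene #2: front-left -> front

m1 = re.search(r"1+", speed_states)  # "1+" = one or more consecutive "1"s
m2 = re.search(r"12", positions)     # "12" = change from "1" to "2"
print(m1.span())  # (1, 4): the rule holds at times 1-3
print(m2.span())  # (1, 3): the change happens between times 1 and 2
```

The matched span directly gives the time interval whose driving data should be associated with the recognized scene.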
  • the foregoing specifically explains how the computing device 102 recognizes a scene.
  • the complexity of the scene can also be determined for the recognized scene, which will be described in detail below.
  • the method 200 may further include: the computing device 102 determines the state sequence #4 (that is, an example of the fourth state sequence) according to the driving data of the vehicle, and the state sequence #4 may represent the associated state of the own vehicle.
  • the associated state of the own vehicle may include at least one of the road environment state, the state of the own vehicle, and the state of other vehicles; therefore, the state sequence #4 may include at least one of the road environment state sequence, the state sequence of the own vehicle, and the state sequence of other vehicles.
  • the computing device 102 determines the subsequence #2 (ie, an example of the second subsequence) in the state sequence #4 according to the time corresponding to the subsequence #1, and determines the complexity of the target scene according to the subsequence #2.
  • the road environment state sequence can describe the road environment where the vehicle is located at different moments.
  • the road environment state sequence can describe the vehicle driving on a curved road at the previous time and driving on a straight road at the next time.
  • the state sequence of the own vehicle can describe the speed of the own vehicle at different times, and the state sequence of other vehicles can describe the distance between other vehicles and the own vehicle at different times.
  • when the computing device 102 determines the complexity of the target scene, it can, according to the time corresponding to the subsequence #1 of the target scene in the state matrix #1, determine the elements at the same time in the state matrix #4 (corresponding to the state sequence #4) as the subsequence #2, and determine the complexity of the target scene according to the subsequence #2.
  • the traveling speed of the own vehicle at time #44-time #47 is greater than or equal to 50 km/h.
  • the computing device 102 may determine the elements corresponding to time #44-time #47 in the state sequence #4 as the subsequence #2.
  • the state sequence #4 includes the road environment state sequence, the state sequence of the own vehicle, and the state sequence of other vehicles.
  • the computing device 102 may determine the elements corresponding to time #44-time #47 in the road environment state sequence of the own vehicle as the sub-sequence #2 corresponding to the road environment state sequence, determine the elements corresponding to time #44-time #47 in the state sequence of the own vehicle as the sub-sequence #2 corresponding to the state sequence of the own vehicle, and determine the elements corresponding to time #44-time #47 in the state sequence of other vehicles as the sub-sequence #2 corresponding to the state sequence of other vehicles.
  • the computing device 102 may determine the complexity of the associated state of the own vehicle according to at least one of the sub-sequence #2 corresponding to the road environment state sequence, the sub-sequence #2 corresponding to the state sequence of the own vehicle, and the sub-sequence #2 corresponding to the state sequence of other vehicles, and determine the complexity of scene #1 according to the complexity of the associated state of the own vehicle.
  • the complexity of the associated state of the host vehicle includes at least one of the complexity of the road environment, the complexity of the host vehicle, and the complexity of other vehicles.
  • the computing device 102 may determine the complexity of the road environment according to the sub-sequence #2 corresponding to the road environment state sequence, determine the complexity of the own vehicle according to the sub-sequence #2 corresponding to the state sequence of the own vehicle, and determine the complexity of other vehicles according to the sub-sequence #2 corresponding to the state sequence of other vehicles.
  • the computing device 102 may perform a weighted operation on at least one of the complexity of the road environment, the complexity of the own vehicle, and the complexity of other vehicles, so as to determine the complexity of the scene #1.
  • for example, the computing device 102 respectively performs a weighted calculation on the road environment complexity, the complexity of the own vehicle, and the complexity of other vehicles corresponding to scene #1 and scene #2, and the resulting complexities of scene #1 and scene #2 are shown in the following table:
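The weighted operation can be sketched as follows (the weights and component scores are invented for illustration; the embodiment does not prescribe particular values):

```python
# Hypothetical sketch: the scene complexity is a weighted sum of the road
# environment complexity, the own-vehicle complexity, and the other-vehicle
# complexity. Weights and scores below are made-up example values.

def scene_complexity(road, ego, others, weights=(0.4, 0.3, 0.3)):
    w_road, w_ego, w_others = weights
    return w_road * road + w_ego * ego + w_others * others

print(round(scene_complexity(road=0.8, ego=0.5, others=0.6), 2))  # 0.65
```

Scenes can then be ranked by this score, so that simulation effort is spent first on the most complex recognized scenes.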
  • the computing device 102 may obtain map information, and determine multiple roads, multiple lanes, and multiple junctions according to the map information.
  • road network topology information can be constructed from the location information of the multiple roads, multiple lanes, and multiple junctions; the road network topology information can be used to determine which lane a vehicle is located in, as well as the spatial relationship between each road, lane, and junction.
  • the location information of multiple roads, multiple lanes, and multiple intersections may be the coordinates of multiple roads, multiple lanes, and multiple intersections in the map information.
  • after the computing device 102 obtains the driving data of the vehicle, it can determine the position information of the vehicle at different times according to the driving data of the vehicle, and then, according to the position information of the vehicle at different times combined with the road network topology information, determine at least one of the road, lane, and intersection on which the vehicle is located.
  • a comprehensive judgment is then made according to the above detection result and at least one of the road, lane, and intersection on which the vehicle is located, so as to improve the accuracy of scene recognition.
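A toy sketch of such a road-network lookup (hypothetical: real road network topology information would use map geometries, while here each lane is reduced to an axis-aligned rectangle):

```python
# Hypothetical sketch: map a vehicle position to a lane using a toy road
# network in which each lane is an axis-aligned rectangle
# (x_min, x_max, y_min, y_max). Lane names are invented for illustration.

LANES = {
    "road#1/lane#1": (0.0, 3.5, 0.0, 100.0),
    "road#1/lane#2": (3.5, 7.0, 0.0, 100.0),
}

def lane_of(x, y):
    for lane, (x0, x1, y0, y1) in LANES.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return lane
    return None  # not on any known lane, e.g. inside a junction

# Positions of a vehicle at two consecutive times: it moves to the right lane,
# which corroborates a cut-in detected from the relative-position states.
print(lane_of(2.0, 40.0))  # road#1/lane#1
print(lane_of(5.0, 42.0))  # road#1/lane#2
```

Cross-checking the lane result against the state-sequence detection is what the comprehensive judgment above refers to.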
  • the method 200 may further include:
  • Step 230 can be replaced with:
  • according to the detection result and the sub-sequence #3 (that is, an example of the third sub-sequence), determine whether the driving scene corresponding to the driving data includes the target scene.
  • when the computing device 102 recognizes that vehicle #2 has moved from the front-left direction of the host vehicle to the directly-front direction of the host vehicle at time #6-time #7, it can also determine the position information of vehicle #2 at time #6-time #7 (that is, an example of sub-sequence #3) from the state sequence #5, and determine, based on the position information of vehicle #2 at time #6-time #7 and the road network topology information, which lane vehicle #2 is in at time #6-time #7. The state sequence #5 includes the location information of vehicle #2 at different times, where the location information of vehicle #2 at different times can be the coordinates, in the map information, of the position of vehicle #2 at different times.
  • for example, the computing device 102 determines, based on the location information of vehicle #2 at time #6-time #7 and the road network topology information, that vehicle #2 changed from its original lane to the lane on the right of the original lane at time #6-time #7. The computing device 102 can then determine, according to the two detection results that vehicle #2 moved from the front-left direction of the own vehicle to the directly-front direction of the own vehicle at time #6-time #7 and that vehicle #2 changed from its original lane to the lane on its right at time #6-time #7, that vehicle #2 cuts in from the front left of the own vehicle at time #6-time #7, thereby completing the recognition of scene #2 more accurately.
  • when the computing device 102 recognizes that vehicle #i turns left at a traffic light intersection at time j, it can also determine the position information of vehicle #i at time j (that is, another example of sub-sequence #3) from the state sequence #5, and determine, according to the position information of vehicle #i at time j and the road network topology information, that vehicle #i is located within the intersection at time j. The computing device 102 can then determine, according to the two detection results that vehicle #i turns left at a traffic light intersection at time j and that vehicle #i is located within the intersection at time j, that vehicle #i turns left at the traffic light intersection at time j, thereby completing the recognition of scene #3 more accurately.
  • the state sequence #5 includes the position information of vehicle #i at different times, where the position information of vehicle #i at different times may be the coordinates, in the map information, of the position of vehicle #i at different times.
  • the following introduces scene #4 provided in the embodiments of this application.
  • Scene #4: the own vehicle goes straight through a traffic light intersection, and the target vehicle (that is, an example of other vehicles) turns right.
  • the computing device 102 can first determine, according to the method described in scene #3, that the own vehicle goes straight at the traffic light intersection at time #t1 to time #t2, and can further determine the location information of the own vehicle at time #t1 to time #t2 (that is, another example of sub-sequence #3) from the state sequence #5; the road on which the own vehicle is located at time #t1 to time #t2 is then determined according to the location information of the own vehicle at time #t1 to time #t2 and the road network topology information. The state sequence #5 includes the location information of the own vehicle at different times, where the location information of the own vehicle at different times may be the coordinates, in the map information, of the position of the own vehicle at different times.
  • for example, the computing device 102 determines, according to the location information of the own vehicle at time #t1 to time #t2 and the road network topology information, that the own vehicle passes from road #1 through the traffic light intersection to reach road #3 at time #t1 to time #t2.
  • the computing device 102 can determine, according to the method described in scene #3, that the target vehicle turns right at the traffic light intersection at time #t1 to time #t2. After that, the computing device 102 can determine the location information of the target vehicle at time #t1 to time #t2 (that is, another example of sub-sequence #3) from the state sequence #5, and determine the road on which the target vehicle is located at time #t1 to time #t2 based on the location information of the target vehicle at time #t1 to time #t2 and the road network topology information. The state sequence #5 includes the location information of the target vehicle at different times, where the location information of the target vehicle at different times can be the coordinates, in the map information, of the position of the target vehicle at different times.
  • for example, the computing device 102 determines, according to the location information of the target vehicle at time #t1 to time #t2 and the road network topology information, that the target vehicle passes from road #2 through the traffic light intersection to reach road #3 at time #t1 to time #t2.
  • the computing device 102 may then determine, according to the three detection results that the own vehicle goes straight at the traffic light intersection at time #t1 to time #t2, that the target vehicle turns right at the traffic light intersection at time #t1 to time #t2, and that the target vehicle passes from road #2 through the traffic light intersection to reach road #3 at time #t1 to time #t2, that the own vehicle goes straight at the traffic light intersection while the target vehicle turns right at the traffic light intersection at time #t1 to time #t2, thereby completing the recognition of scene #4 more accurately.
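The conjunction of the three detection results for scene #4 can be sketched as follows (hypothetical names and route encoding; the point is only that all cues must hold over the same time interval):

```python
# Hypothetical sketch: scene #4 is recognized only when all three detection
# results hold over the same interval, which guards against false positives
# from any single cue.

def recognize_scene4(ego_straight, target_right_turn, target_route):
    return (ego_straight
            and target_right_turn
            and target_route == ("road#2", "intersection", "road#3"))

print(recognize_scene4(True, True, ("road#2", "intersection", "road#3")))   # True
print(recognize_scene4(True, False, ("road#2", "intersection", "road#3")))  # False
```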
  • the size of the sequence numbers of the above-mentioned processes does not mean the order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
  • FIG. 5 is a schematic structural diagram of a computing device 500 provided by an embodiment of the present application.
  • the computing device 500 includes:
  • the determining module 510 is configured to determine a first state sequence according to the driving data of the vehicle, where the first state sequence represents the first state of the vehicle at different times;
  • the processing module 520 is configured to detect the first state sequence using a recognition rule, the recognition rule being determined according to a target scene;
  • the determining module 510 is further configured to determine whether the target scene is included in the driving scene corresponding to the driving data according to the detection result.
  • the computing device 500 may be implemented by an application-specific integrated circuit (ASIC) or a programmable logic device (PLD), where the PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), or generic array logic (GAL).
  • when the first state sequence is determined according to the driving data of the vehicle, the determining module 510 is specifically configured to: determine the first state sequence according to the driving data of the vehicle and the target scene.
  • the determining module 510 when determining whether the target scene is included in the driving scene corresponding to the driving data according to the detection result, is specifically configured to:
  • the first state sequence includes a first subsequence, it is determined that the driving scene corresponding to the driving data includes the target scene, and the first subsequence is a subsequence that satisfies the recognition rule; or,
  • the first state sequence does not include the first subsequence, it is determined that the driving scene corresponding to the driving data does not include the target scene, and the first subsequence is a subsequence that satisfies the recognition rule.
  • the determining module 510 when the first state sequence is determined according to the driving data of the vehicle and the target scene, is specifically configured to:
  • a second state sequence and a third state sequence are determined, where the second state sequence represents the second state of the vehicle at different moments, and the third state sequence represents the third state of the vehicle at different moments;
  • the first state sequence is generated according to the second state sequence and the third state sequence.
  • the first state sequence is a matrix of size m ⁇ n, and the element in the i-th row and j-th column in the matrix represents the first state of the vehicle with index i at time j, m is an integer greater than or equal to 1, n is an integer greater than or equal to 2, i is an integer greater than or equal to 1 and less than m, and j is an integer greater than or equal to 1 and less than n.
  • the determining module 510 is further configured to:
  • a second sub-sequence is determined in the fourth state sequence, where the fourth state sequence represents the associated state of the vehicle, and the fourth state sequence is determined according to the driving data of the vehicle;
  • the complexity of the target scene is determined according to the second sub-sequence.
  • the determining module 510 when determining the complexity of the target scene according to the second subsequence, is specifically configured to:
  • the determining module 510 is further configured to:
  • a third sub-sequence is determined in the fifth state sequence, where the fifth state sequence represents the position information of the vehicle at different moments, and the fifth state sequence is determined according to the driving data of the vehicle;
  • the determining module is specifically configured to:
  • according to the detection result and the third sub-sequence, it is determined whether the driving scene corresponding to the driving data includes the target scene.
  • the computing device 500 may correspond to executing the method described in the embodiments of the present application, and the foregoing and other operations and/or functions of each unit in the computing device 500 are respectively intended to implement the corresponding processes of the method in FIG. 2; for the sake of brevity, details are not repeated here.
  • FIG. 6 is a schematic structural diagram of a computing device 600 provided by an embodiment of the present application.
  • the computing device 600 includes a processor 610, a memory 620, a communication interface 630, and a bus 650.
  • the processor 610 in the computing device 600 shown in FIG. 6 may correspond to the determining module 510 and the processing module 520 of the computing device 500 in FIG. 5, and the communication interface 630 in the computing device 600 may be used to communicate with other devices.
  • the processor 610 may be connected to the memory 620.
  • the memory 620 can be used to store program code and data. The memory 620 may be a storage unit inside the processor 610, or an external storage unit independent of the processor 610, or may include both a storage unit inside the processor 610 and an external storage unit independent of the processor 610.
  • the computing device 600 may further include a bus 650.
  • the memory 620 and the communication interface 630 may be connected to the processor 610 through the bus 650.
  • the bus 650 may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus or the like.
  • the bus 650 can be divided into an address bus, a data bus, a control bus, and the like. For ease of representation, only one line is used in FIG. 6, but it does not mean that there is only one bus or one type of bus.
  • the processor 610 may adopt a central processing unit (CPU).
  • the processor can also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the processor 610 adopts one or more integrated circuits to execute related programs to implement the technical solutions provided in the embodiments of the present application.
  • the memory 620 may include a read-only memory and a random access memory, and provides instructions and data to the processor 610.
  • a part of the processor 610 may also include a non-volatile random access memory.
  • the processor 610 may also store device type information.
  • the processor 610 executes the computer-executable instructions in the memory 620 to execute the operation steps of the foregoing method.
  • the computing device 600 may correspond to the corresponding main body executing the method shown in FIG. 2 according to the embodiment of the present application, and the foregoing and other operations and/or functions of each module in the computing device 600 are respectively In order to realize the corresponding process of the method in FIG. 2, for the sake of brevity, details are not repeated here.
  • the disclosed system, device, and method may be implemented in other ways.
  • the device embodiments described above are merely illustrative; for example, the division of the units is only a logical function division, and there may be other divisions in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • if the function is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
  • the technical solutions of the present application essentially, or the part contributing to the prior art, or a part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for instructing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present application.
  • the aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or other media that can store program code.

Abstract

The present application provides a scenario identification method. The method comprises: using an identification rule to perform detection on a state sequence that is determined according to driving data of a vehicle, the identification rule being determined according to a target scenario; and determining, according to a detection result, whether or not driving scenarios corresponding to the driving data comprise the target scenario. The application provides a solution in which driving scenarios can be automatically identified according to driving data of vehicles.

Description

Scene recognition method and computing device
This application claims priority to Chinese Patent Application No. 201910927376.X, filed with the Chinese Patent Office on September 27, 2019 and entitled "Scene Recognition Method and Computing Device", which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the field of autonomous driving, and more specifically, to a scene recognition method and a computing device.
Background
As society's demands on the intelligence, economy, and safety of driving increase, autonomous driving technology has become one of the key development directions of the automotive industry and has received increasing attention from Internet companies.
At present, the industry usually uses simulation testing to verify the functions of an autonomous driving system: the real traffic environment is generated or reproduced in simulation software in simulated form, so as to test whether the autonomous driving system can correctly identify the surrounding environment, respond to it in a timely and accurate manner, and take appropriate driving behaviors.
The data required to build a simulation scene in the simulation software consists of high-precision map data and simulated traffic flow data. The high-precision map data provides information such as roads, static traffic information (for example, traffic lights and road signs), and static object models (for example, buildings and trees), while the simulated traffic flow data provides dynamic traffic flow information (for example, traffic participants such as vehicles and pedestrians). By loading and running this information, the simulation software projects the real world into the virtual world and copies real autonomous driving scenes into the simulation software.
Real vehicle driving data and the road information related to it are one of the main sources of the data required to build a simulation scene. The road information related to the vehicle's driving data (for example, lane line information, traffic sign information, traffic light information, and static object information) is restored to high-precision map data, and the vehicle's driving data is restored to simulated traffic flow data, so that the vehicle's driving data and the related road information are restored to the data required to build a simulation scene. Before that, the driving scene needs to be recognized first.
Summary
The present application provides a scene recognition method and a computing device, which can automatically recognize a driving scene according to the driving data of a vehicle.
In a first aspect, a scene recognition method is provided, including: determining a first state sequence according to driving data of a vehicle, the first state sequence representing the first state of the vehicle at different times; detecting the first state sequence using a recognition rule, the recognition rule being determined according to a target scene; and determining, according to a detection result, whether the driving scene corresponding to the driving data includes the target scene.
Based on the above technical solution, the first state sequence is obtained from the driving data of the vehicle, the first state sequence is detected using the recognition rule determined according to the target scene to be recognized, and finally, whether the driving scene corresponding to the driving data includes the target scene is determined according to the detection result, thereby achieving the purpose of automatically recognizing the driving scene according to the driving data of the vehicle.
In a possible implementation, the determining the first state sequence according to the driving data of the vehicle includes: determining the first state sequence according to the driving data of the vehicle and the target scene.
Based on the above technical solution, the first state sequence is determined according to the two factors of the driving data of the vehicle and the target scene to be recognized, so that the determined first state sequence can better match the target scene, thereby improving the efficiency of recognizing the target scene according to the first state sequence.
在一种可能的实现方式中,所述根据检测结果,确定所述行驶数据对应的驾驶场景中是否包括所述目标场景,包括:如果所述第一状态序列中包括第一子序列,则确定所述行驶数据对应的驾驶场景中包括所述目标场景,所述第一子序列为满足所述识别规则的子序列;或,如果所述第一状态序列中不包括第一子序列,则确定所述行驶数据对应的驾驶场景中不包括所述目标场景,所述第一子序列为满足所述识别规则的子序列。In a possible implementation manner, the determining whether the driving scene corresponding to the driving data includes the target scene according to the detection result includes: if the first state sequence includes a first subsequence, determining The driving scene corresponding to the driving data includes the target scene, and the first subsequence is a subsequence that satisfies the recognition rule; or, if the first state sequence does not include the first subsequence, it is determined The driving scene corresponding to the driving data does not include the target scene, and the first subsequence is a subsequence that satisfies the recognition rule.
基于上述技术方案,通过根据识别规则确定第一子序列,从而通过检测第一状态序列中是否包括第一子序列来确定第一状态序列是否满足识别规则,从而确定行驶数据对应的驾驶场景中是否包括目标场景,进而实现根据车辆的行驶数据自动识别驾驶场景。Based on the above technical solution, the first sub-sequence is determined according to the recognition rule, thereby determining whether the first state sequence meets the recognition rule by detecting whether the first sub-sequence is included in the first state sequence, so as to determine whether the driving scene corresponding to the driving data is Including the target scene, and then realize the driving scene is automatically recognized according to the driving data of the vehicle.
在一种可能的实现方式中,所述根据车辆的行驶数据与所述目标场景,确定所述第一状态序列,包括:根据所述车辆的行驶数据与所述目标场景,确定第二状态序列与第三状态序列,所述第二状态序列表示所述车辆在不同时刻的第二状态,所述第三状态序列表示所述车辆在不同时刻的第三状态;根据所述第二状态序列与所述第三状态序列,生成所述第一状态序列。In a possible implementation manner, the determining the first state sequence according to the driving data of the vehicle and the target scene includes: determining the second state sequence according to the driving data of the vehicle and the target scene Unlike the third state sequence, the second state sequence represents the second state of the vehicle at a different time, and the third state sequence represents the third state of the vehicle at a different time; according to the second state sequence and The third state sequence generates the first state sequence.
基于上述技术方案,对于某些较复杂目标场景,可以首先确定至少两个状态序列(例如,第二状态序列与第三状态序列),再根据该至少两个状态序列生成第一状态序列,即,通过根据至少两个状态序列生成第一状态序列,对第一状态序列使用识别规则进行检测,以确定行驶数据对应的驾驶场景中是否包括目标场景,从而实现对较复杂目标场景的识别。Based on the above technical solution, for some more complex target scenarios, at least two state sequences (for example, the second state sequence and the third state sequence) can be determined first, and then the first state sequence is generated according to the at least two state sequences, namely , By generating a first state sequence according to at least two state sequences, and detecting the first state sequence using a recognition rule to determine whether the driving scene corresponding to the driving data includes the target scene, thereby realizing the recognition of the more complex target scene.
应理解,在具体实现时,第二状态序列与第三状态序列可以是描述不同信息的状态序列,例如,目标场景为红绿灯路口左转场景,第二状态序列可以描述车辆在不同时刻是否在红绿灯路口内,第三状态序列可以描述车辆在不同时刻的转向状态。It should be understood that in specific implementation, the second state sequence and the third state sequence may be state sequences that describe different information. For example, the target scene is a left-turning scene at a traffic light intersection, and the second state sequence may describe whether the vehicle is at a traffic light at different times. In the intersection, the third state sequence can describe the turning state of the vehicle at different moments.
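As an illustrative sketch only (this application does not mandate a particular combination rule), generating the first state sequence from the two sequences in the left-turn example could be an element-wise AND of two per-time binary flags, where the flag names below are hypothetical:

```python
def combine_states(in_intersection, turning_left):
    """Generate a first state sequence from two per-time binary state
    sequences: 1 only at times when both conditions hold."""
    assert len(in_intersection) == len(turning_left)
    return [a & b for a, b in zip(in_intersection, turning_left)]

# Hypothetical per-time flags for one vehicle:
in_intersection = [0, 0, 1, 1, 1, 1, 0]
turning_left    = [0, 0, 0, 1, 1, 0, 0]
first_sequence = combine_states(in_intersection, turning_left)
# first_sequence marks the times at which the vehicle is turning left
# while inside the intersection.
```

The recognition rule for the left-turn scene could then operate on `first_sequence` alone, rather than on the two underlying sequences.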
In a possible implementation, the first state sequence is an m×n matrix, where the element in row i and column j of the matrix represents the first state, at time j, of the vehicle whose index is i, m is an integer greater than or equal to 1, n is an integer greater than or equal to 2, i is an integer greater than or equal to 1 and less than or equal to m, and j is an integer greater than or equal to 1 and less than or equal to n.
In a possible implementation, the driving scene corresponding to the driving data includes the target scene, and the method further includes: determining a second subsequence in a fourth state sequence according to the times corresponding to the first subsequence, where the fourth state sequence represents an associated state of the vehicle and is determined according to the driving data of the vehicle; and determining the complexity of the target scene according to the second subsequence.
In a possible implementation, the determining the complexity of the target scene according to the second subsequence includes: determining the complexity of the associated state of the vehicle according to the second subsequence; and performing a weighted operation on the complexity of the associated state of the vehicle to determine the complexity of the target scene.
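A minimal sketch of such a weighted operation, assuming hypothetical per-state complexity scores and weights (neither the associated states nor the weight values are specified by this application):

```python
def scene_complexity(state_complexities, weights):
    """Weighted sum of the complexities of the vehicle's associated states."""
    assert len(state_complexities) == len(weights)
    return sum(w * c for w, c in zip(weights, state_complexities))

# Hypothetical complexities for, e.g., surrounding-traffic density,
# weather, and road curvature, with assumed weights:
complexity = scene_complexity([0.8, 0.3, 0.5], [0.5, 0.2, 0.3])
```

Any monotone aggregation would serve the same purpose; a weighted sum is shown only because the text describes a weighted operation.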
In a possible implementation, the method further includes: determining a third subsequence in a fifth state sequence according to the times corresponding to the first subsequence, where the fifth state sequence represents position information of the vehicle at different times and is determined according to the driving data of the vehicle; and the determining, according to a detection result, whether a driving scene corresponding to the driving data includes the target scene includes: determining, according to the detection result and the third subsequence, whether the driving scene corresponding to the driving data includes the target scene.
Based on the foregoing technical solution, to further improve the accuracy of scene recognition, position information of the vehicle at different times (that is, an example of the third subsequence) may be obtained; at least one of the road, the lane, and the intersection where the vehicle is located at different times is determined according to the position information of the vehicle at different times and road network topology information; and finally, whether the driving scene corresponding to the driving data includes the target scene is determined comprehensively according to the foregoing detection result and the at least one of the road, the lane, and the intersection where the vehicle is located at different times, thereby improving the accuracy of scene recognition.
According to a second aspect, this application provides a computing device, including: a determining module, configured to determine a first state sequence according to driving data of a vehicle, where the first state sequence represents first states of the vehicle at different times; and a processing module, configured to detect the first state sequence by using a recognition rule, where the recognition rule is determined according to a target scene; where the determining module is further configured to determine, according to a detection result, whether a driving scene corresponding to the driving data includes the target scene.
In a possible implementation, when determining the first state sequence according to the driving data of the vehicle, the determining module is specifically configured to determine the first state sequence according to the driving data of the vehicle and the target scene.
In a possible implementation, when determining, according to the detection result, whether the driving scene corresponding to the driving data includes the target scene, the determining module is specifically configured to: if the first state sequence includes a first subsequence, determine that the driving scene corresponding to the driving data includes the target scene, where the first subsequence is a subsequence that satisfies the recognition rule; or, if the first state sequence does not include the first subsequence, determine that the driving scene corresponding to the driving data does not include the target scene, where the first subsequence is a subsequence that satisfies the recognition rule.
In a possible implementation, when determining the first state sequence according to the driving data of the vehicle and the target scene, the determining module is specifically configured to: determine a second state sequence and a third state sequence according to the driving data of the vehicle and the target scene, where the second state sequence represents second states of the vehicle at different times, and the third state sequence represents third states of the vehicle at different times; and generate the first state sequence according to the second state sequence and the third state sequence.
In a possible implementation, the first state sequence is an m×n matrix, where the element in row i and column j of the matrix represents the first state, at time j, of the vehicle whose index is i, m is an integer greater than or equal to 1, n is an integer greater than or equal to 2, i is an integer greater than or equal to 1 and less than or equal to m, and j is an integer greater than or equal to 1 and less than or equal to n.
In a possible implementation, when the driving scene corresponding to the driving data includes the target scene, the determining module is further configured to: determine a second subsequence in a fourth state sequence according to the times corresponding to the first subsequence, where the fourth state sequence represents an associated state of the vehicle and is determined according to the driving data of the vehicle; and determine the complexity of the target scene according to the second subsequence.
In a possible implementation, when determining the complexity of the target scene according to the second subsequence, the determining module is specifically configured to: determine the complexity of the associated state of the vehicle according to the second subsequence; and perform a weighted operation on the complexity of the associated state of the vehicle to determine the complexity of the target scene.
In a possible implementation, the determining module is further configured to determine a third subsequence in a fifth state sequence according to the times corresponding to the first subsequence, where the fifth state sequence represents position information of the vehicle at different times and is determined according to the driving data of the vehicle; and when determining, according to the detection result, whether the driving scene corresponding to the driving data includes the target scene, the determining module is specifically configured to determine, according to the detection result and the third subsequence, whether the driving scene corresponding to the driving data includes the target scene.
According to a third aspect, a computing device is provided. The computing device includes a processor and a memory, where the memory is configured to store computer-executable instructions, and when the computing device runs, the processor executes the computer-executable instructions in the memory, so that the computing device performs the method steps in the first aspect or any possible implementation of the first aspect.
According to a fourth aspect, a non-transitory readable storage medium is provided, including program instructions. When the program instructions are run by a computing device, the computing device performs the method in the first aspect or any possible implementation of the first aspect.
According to a fifth aspect, a computer program product is provided, including program instructions. When the program instructions are run by a computing device, the computing device performs the method in the first aspect or any possible implementation of the first aspect.
Based on the implementations provided in the foregoing aspects, this application may be further combined to provide more implementations.
Description of the drawings
FIG. 1 is a schematic block diagram of a scene recognition system according to an embodiment of this application;
FIG. 2 is a schematic flowchart of a method for scene recognition according to an embodiment of this application;
FIG. 3 is a schematic diagram of the driving route of the vehicle in scene #2;
FIG. 4 is a schematic diagram of the driving route of the vehicle in scene #4;
FIG. 5 is a schematic structural diagram of a computing device 500 according to an embodiment of this application;
FIG. 6 is a schematic structural diagram of a computing device 600 according to an embodiment of this application.
Detailed description
The technical solutions in this application are described below with reference to the accompanying drawings.
At present, the industry usually uses simulation testing to verify the functions of an autonomous driving system. By using simulation software, a real traffic environment is generated or reproduced in simulated form in the simulation software, so as to test whether the autonomous driving system can correctly recognize the surrounding environment, respond to it in a timely and accurate manner, and take appropriate driving behaviors.
The data required to build a simulation scene in the simulation software consists of high-precision map data and simulated traffic flow data. The high-precision map data provides information such as roads, static traffic information (for example, traffic lights and road signs), and static object models (for example, buildings and trees); the simulated traffic flow data provides dynamic traffic flow information (for example, traffic participants such as vehicles and pedestrians). By loading and running this information, the simulation software projects the real world into the virtual world and replicates real driving scenes in the simulation software.
Real vehicle driving data and the road information related to that driving data are one of the main sources of the data required to build a simulation scene. The road information related to the driving data of the vehicle (for example, lane line information, traffic sign information, traffic light information, and static object information) is restored into high-precision map data, and the driving data of the vehicle is restored into simulated traffic flow data, so that the driving data of the vehicle and the related road information are restored into the data required to build a simulation scene.
Because the functions of an autonomous driving system correspond to driving scenes, a corresponding driving scene needs to be used when a function of the autonomous driving system is verified. For example, if the autonomous driving system provides an autonomous emergency braking (AEB) function, that function needs to be verified in an AEB scene. It can therefore be seen that, before the driving data of the vehicle and the related road information are restored into the data required to build a simulation scene, the driving scene first needs to be recognized, so that a simulation scene can be built according to the driving data of the vehicle and the related road information associated with the recognized driving scene, and the function of the autonomous driving system in that simulation scene can then be verified in the simulation software.
At present, a known scene recognition method collects road-test data, analyzes the collected data, and recognizes scenes in a semi-automatic labeling manner. Most scenes can be recognized only through manual observation of video data, so driving scenes cannot be recognized fully automatically. The video data mentioned here and below may refer to video data captured by cameras installed on a vehicle while the vehicle is driving.
Therefore, this application provides a method for scene recognition that can automatically recognize driving scenes according to the driving data of a vehicle. The method for scene recognition provided in this application is described in detail below with reference to FIG. 1 to FIG. 4.
FIG. 1 is a schematic block diagram of a scene recognition system 100 provided in this application. The system 100 may include a collection device 101 and a computing device 102.
The collection device 101 is mainly responsible for data collection. In a specific implementation, the collection device 101 may be a vehicle or an urban traffic monitoring device, where the vehicle may be a vehicle equipped with an autonomous driving system. In the following, the data obtained while the vehicle is driving is referred to as road-test data, and the road traffic data obtained by the urban traffic monitoring device is referred to as urban traffic flow monitoring data.
A variety of sensors may be installed on the vehicle. This application does not specifically limit the sensors installed on the vehicle, which may include but are not limited to: several cameras, at least one radar, at least one positioning system, and at least one inertial measurement unit (IMU).
The cameras may be deployed around the vehicle to collect parameters of the environment around the vehicle. For example, at least one camera may be installed on each of the front and rear bumpers, the side-view mirrors, and the windshield of the vehicle.
The radar may include at least one of an ultrasonic radar, a lidar, and a millimeter-wave radar, and can measure parameter information such as the distance and speed of a vehicle. The radar may also use radio signals to sense objects in the surrounding environment of the vehicle. Optionally, in some embodiments, in addition to sensing an object, the radar may be used to sense the heading of the object.
The positioning system may be the global positioning system (GPS), the BeiDou system, or another positioning system, and is configured to receive satellite signals and locate the current position of the vehicle.
The IMU may sense position and orientation changes of the vehicle based on inertial acceleration. Optionally, in an embodiment, the IMU may be a combination of an accelerometer and a gyroscope, and is configured to measure the angular velocity and acceleration of the vehicle.
The collection device 101 and the computing device 102 may communicate through a network or a storage medium. For example, the collection device 101 may transmit road-test data and/or urban traffic flow monitoring data to the computing device 102 through a network, a storage medium, or another transmission means, and the computing device 102 recognizes driving scenes according to the road-test data and/or the urban traffic flow monitoring data.
The road-test data may include but is not limited to all data that can be collected, such as sensor data from cameras, millimeter-wave radar, lidar, ultrasonic radar, GPS, and IMU, high-precision map data, algorithm output data, vehicle-to-everything (V2X) data, and vehicle control data; the urban traffic flow monitoring data may include but is not limited to vehicle trajectory information and vehicle information in the traffic flow.
FIG. 2 is a schematic flowchart of a method for scene recognition provided in this application. The method includes steps 210 to 230, which are described in detail below.
Step 210: Determine state sequence #1 (that is, an example of the first state sequence) according to the driving data of the vehicle, where state sequence #1 represents state #1 of the vehicle at different times (that is, an example of the first state). The driving data of the vehicle here may be road-test data and/or urban traffic flow monitoring data.
After obtaining the driving data of the vehicle, the computing device 102 may determine state #1 of the vehicle at different times according to the driving data. For example, state #1 may be the driving speed of the vehicle, and the driving speeds of the vehicle at different times constitute state sequence #1. The driving data of the vehicle may include positioning information of the vehicle obtained by the positioning system on the vehicle, and the computing device 102 may calculate the driving speed of the vehicle according to the positioning information.
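As an illustrative sketch of how speed might be derived from positioning information (this application does not prescribe a method), consecutive position fixes can be differenced, assuming positions have already been projected to planar coordinates in meters and are sampled at a fixed interval:

```python
import math

def speeds_from_positions(positions, dt):
    """Estimate speed (m/s) between consecutive (x, y) position fixes
    sampled every dt seconds."""
    return [
        math.dist(positions[k], positions[k + 1]) / dt
        for k in range(len(positions) - 1)
    ]

# Hypothetical 1 Hz position fixes, in meters:
track = [(0.0, 0.0), (14.0, 0.0), (28.0, 0.0)]
v = speeds_from_positions(track, dt=1.0)  # about 14 m/s, roughly 50 km/h
```

In practice the raw GPS fixes would first be converted from latitude/longitude to a planar projection and filtered for noise; both steps are omitted here.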
Step 220: Detect state sequence #1 by using a recognition rule, where the recognition rule is determined according to the target scene.
The computing device 102 may determine the recognition rule according to the target scene to be recognized, and use the recognition rule to detect state sequence #1.
Step 230: Determine, according to the detection result, whether the driving scene corresponding to the driving data includes the target scene.
The computing device 102 may determine, according to the detection result obtained after detecting state sequence #1 by using the recognition rule, whether the driving scene corresponding to the driving data includes the target scene.
Optionally, step 210 may be replaced with: determining state sequence #1 according to the driving data of the vehicle and the target scene.
When obtaining state #1 of the vehicle at different times, the computing device 102 may do so in combination with the target scene. For example, the target scene may be a cut-in by a vehicle ahead on the left. In this case, the computing device may, according to the target scene, obtain the positions of other vehicles relative to the host vehicle from the driving data of the other vehicles and the driving data of the host vehicle, use the obtained relative positions as state #1 of the other vehicles at different times, and determine state sequence #1 according to state #1 of the other vehicles at different times.
Optionally, step 230 may be specifically implemented in the following manner:
For example, if state sequence #1 includes subsequence #1 (that is, an example of the first subsequence), it is determined that the driving scene corresponding to the driving data includes the target scene; or, if state sequence #1 does not include subsequence #1, it is determined that the driving scene corresponding to the driving data does not include the target scene. Subsequence #1 is a subsequence that satisfies the recognition rule.
For example, the target scene is a scene in which the driving speed of the vehicle is greater than or equal to 30 km/h, and state sequence #1 records the driving speeds of the same vehicle at different times. In this case, the recognition rule corresponding to the target scene may be that state sequence #1 includes consecutively occurring driving speeds greater than or equal to 30 km/h, that is, subsequence #1 consists of consecutively occurring driving speeds greater than or equal to 30 km/h. If the computing device 102 can detect consecutively occurring driving speeds greater than or equal to 30 km/h in state sequence #1, it can determine that the driving scene corresponding to the driving data includes the target scene; otherwise, it determines that the driving scene corresponding to the driving data does not include the target scene.
In addition, flags may be used to represent the driving speed of the vehicle at different times. For example, "1" indicates that the driving speed of the vehicle at a certain time is greater than or equal to 30 km/h, and "0" indicates other cases; for example, "0" indicates that the driving speed of the vehicle at a certain time is less than 30 km/h or that the vehicle is parked at that time. In this case, state sequence #1 records flags that reflect the driving speeds of the same vehicle at different times, and the recognition rule corresponding to the target scene may be that state sequence #1 includes consecutively occurring "1"s, that is, subsequence #1 may be a run of consecutive "1"s. If the computing device 102 can detect consecutively occurring "1"s in state sequence #1, it can determine that the driving scene corresponding to the driving data includes the target scene; otherwise, it determines that the driving scene corresponding to the driving data does not include the target scene.
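The flag encoding described above amounts to a simple thresholding step. A minimal sketch, assuming per-time speeds are available in km/h:

```python
def encode_flags(speeds_kmh, threshold=30.0):
    """Map per-time driving speeds to the "1"/"0" flags described above:
    1 when the speed meets or exceeds the threshold, 0 otherwise."""
    return [1 if v >= threshold else 0 for v in speeds_kmh]

flags = encode_flags([12.0, 31.5, 40.2, 29.9, 35.0])
# The recognition rule then only needs to look for runs of consecutive 1s
# in the resulting flag sequence.
```

The "parked" case folds into the 0 flag automatically, since a parked vehicle's speed is below any positive threshold.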
In this application, a state sequence may be stored in the form of a matrix (hereinafter referred to as a "state matrix"). For example, a state sequence may be stored as a state matrix with m rows and n columns (that is, m×n), where the element in row i and column j may represent the state, at time j, of the vehicle whose index is i (hereinafter referred to as "vehicle #i"), m is an integer greater than or equal to 1, n is an integer greater than or equal to 2, i is an integer greater than or equal to 1 and less than or equal to m, and j is an integer greater than or equal to 1 and less than or equal to n.
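For illustration only (the storage layout is not mandated beyond the row/column convention above), such a state matrix can be held as a plain nested list, with one row per tracked vehicle:

```python
# A 3x5 state matrix: rows are vehicles #1..#3, columns are times 1..5.
state_matrix = [
    [0, 1, 1, 1, 0],  # vehicle #1
    [0, 0, 0, 1, 1],  # vehicle #2
    [1, 1, 0, 0, 0],  # vehicle #3
]

def state_of(matrix, i, j):
    """State of vehicle #i at time j (both 1-indexed, per the convention above)."""
    return matrix[i - 1][j - 1]

# Vehicle #2 at time 4:
assert state_of(state_matrix, 2, 4) == 1
```

A single-vehicle state sequence is then simply the 1×n special case of this layout.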
The following uses the state matrix as an example and illustrates the scene recognition method provided in this application with reference to several specific scenes.
Scene #1: a scene in which the host vehicle drives at a speed greater than or equal to 50 km/h.
To recognize scene #1, the driving speed of the host vehicle may first be obtained. For example, the computing device 102 may calculate the driving speed of the host vehicle according to the positioning information of the vehicle obtained by the positioning system on the host vehicle.
In this case, state matrix #1 (corresponding to state sequence #1) may be a 1×n matrix, where n represents the total duration recorded for the host vehicle in state matrix #1. The element e_{1,j} of state matrix #1 represents the driving speed of the host vehicle at time j. For example, "1" indicates that the driving speed of the host vehicle at a certain time is greater than or equal to 50 km/h, and "0" indicates other cases; the value of e_{1,j} may be expressed as follows:
e_{1,j} = 1, if the driving speed of the host vehicle at time j is greater than or equal to 50 km/h;
e_{1,j} = 0, otherwise.
The other case above may indicate that the driving speed of the host vehicle at time j is less than 50 km/h, or that the host vehicle is stopped at that time.
The recognition rule determined by the computing device 102 for scenario #1 is: identify all consecutive runs of "1" in state matrix #1. The computing device 102 then uses this recognition rule to examine state matrix #1. For example, state matrix #1 may be:
0000111111111110011111111111111111111111100011110 0
As can be seen from state matrix #1, the driving speed of the host vehicle is greater than or equal to 50 km/h at time #5 to time #15, time #18 to time #40, and time #44 to time #47. Therefore, the computing device 102 can identify three instances of scenario #1 from state matrix #1, corresponding to time #5 to time #15, time #18 to time #40, and time #44 to time #47 respectively. The computing device 102 can associate the driving data of the host vehicle corresponding to these intervals with scenario #1. In the simulation software, scenario #1 is then built from the driving data of the host vehicle for time #5 to time #15, time #18 to time #40, and time #44 to time #47 together with the road information related to that driving data, so that the functions of the automatic driving system of the host vehicle under scenario #1 are verified in the simulation software.
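The run-detection described above can be sketched as follows. This is an illustrative example, not the patent's implementation: the 1×n state matrix is encoded as a string of '0'/'1' characters, and every consecutive run of '1' is located with the regular expression "1+".

```python
import re

# State string for the host vehicle: '1' means speed >= 50 km/h at that time.
# Times are 1-based, matching the intervals #5-#15, #18-#40 and #44-#47
# described in the text.
states = "0000" + "1" * 11 + "00" + "1" * 23 + "000" + "1" * 4 + "00"

# Convert each regex match into a 1-based (start_time, end_time) interval.
intervals = [(m.start() + 1, m.end()) for m in re.finditer("1+", states)]
print(intervals)  # [(5, 15), (18, 40), (44, 47)]
```

Each interval marks one instance of scenario #1 whose driving data can then be associated with the scenario.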
Scenario #2: a scenario in which a vehicle ahead and to the left cuts in.
To identify scenario #2, the computing device 102 needs to determine the position of the host vehicle and the positions of the other vehicles, so as to determine the positions of the other vehicles relative to the host vehicle. For example, the position of the host vehicle may be obtained from positioning information of the vehicle acquired by the positioning system or the IMU on the host vehicle, and the positions of the other vehicles may be obtained from video data or from radar scanning information.
The computing device 102 can calculate the position of each other vehicle relative to the host vehicle at each time from the position of the host vehicle and the positions of the other vehicles at that time, and generate a state matrix #1 of size m×n from the relative positions of the other vehicles at each time, where m represents the total number of other vehicles recorded in state matrix #1, and n represents the total recorded duration for each vehicle in state matrix #1.
The element p_{i,j} of state matrix #1 represents the position of vehicle #i (that is, an example of another vehicle) relative to the host vehicle at time j. For example, "1" indicates that vehicle #i is ahead of and to the left of the host vehicle at time j, "2" indicates that vehicle #i is directly ahead of the host vehicle at time j, and "3" indicates that vehicle #i is ahead of and to the right of the host vehicle at time j. For brevity, the remaining cases are not listed one by one here. The value of p_{i,j} can then be expressed as follows:
p_{i,j} = 1, if vehicle #i is ahead of and to the left of the host vehicle at time j;
p_{i,j} = 2, if vehicle #i is directly ahead of the host vehicle at time j;
p_{i,j} = 3, if vehicle #i is ahead of and to the right of the host vehicle at time j;
...
The recognition rule determined by the computing device 102 for scenario #2 is: an element of state matrix #1 changes from "1" to "2". The computing device 102 then uses this recognition rule to examine state matrix #1. For example, state matrix #1 may be:
[State matrix #1 is shown here as a figure in the original document; in it, the row corresponding to vehicle #2 contains "1" at time #6 followed by "2" at time #7.]
As can be seen from state matrix #1, vehicle #2 cuts in from the front left of the host vehicle at time #6 to time #7. Therefore, the computing device 102 can recognize scenario #2 from state matrix #1 and associate the driving data of the host vehicle and the other vehicles corresponding to time #6 to time #7 with scenario #2. In the simulation software, scenario #2 is then built from the driving data of the host vehicle for time #6 to time #7, the road information related to that driving data, the driving data of the other vehicles, and the road information related to the driving data of the other vehicles, so that the functions of the automatic driving system of the host vehicle under scenario #2 (for example, a deceleration function) are verified in the simulation software.
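The transition-detection described above can be sketched as follows; the row contents are invented for illustration and the rule "1" followed by "2" is applied per vehicle row.

```python
import re

# Hypothetical rows of state matrix #1: row i holds the relative position of
# vehicle #i over time ('1' = ahead-left, '2' = directly ahead,
# '3' = ahead-right); the data below is invented for illustration.
rows = {
    1: "1111111111",  # vehicle #1 stays ahead-left: no cut-in
    2: "1111112222",  # vehicle #2: '1' -> '2' between time #6 and time #7
}

cut_ins = {}
for vehicle_id, row in rows.items():
    match = re.search("12", row)
    if match:
        # 1-based times: the last '1' is at match.start() + 1,
        # the first '2' at match.start() + 2.
        cut_ins[vehicle_id] = (match.start() + 1, match.start() + 2)

print(cut_ins)  # {2: (6, 7)}
```

Only the row for vehicle #2 satisfies the rule, so scenario #2 is recognized for the interval time #6 to time #7.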
Scenario #3: a scenario of turning left at a traffic-light intersection.
To identify scenario #3, the computing device 102 may first determine whether the host vehicle is within the traffic-light intersection at different times (that is, an example of the second state) and determine the turning state of the same vehicle at different times (that is, an example of the third state). The computing device 102 may generate state sequence #2 (that is, an example of the second state sequence) from the states indicating whether the host vehicle is within the traffic-light intersection at different times, generate state sequence #3 (that is, an example of the third state sequence) from the turning states of the same vehicle at different times, and generate state sequence #1 from state sequence #2 and state sequence #3. Both state sequence #2 and state sequence #3 are stored in the form of matrices, that is, state sequence #2 corresponds to state matrix #2 and state sequence #3 corresponds to state matrix #3.
The computing device 102 generates a state matrix #2 of size m×n from the states indicating whether the host vehicle is within the traffic-light intersection at different times, where m represents the total number of vehicles recorded in state matrix #2, and n represents the total recorded duration for each vehicle in state matrix #2.
The element r_{i,j} of state matrix #2 indicates whether vehicle #i (that is, an example of the host vehicle) is within the traffic-light intersection at time j. For example, "1" indicates that vehicle #i is within the traffic-light intersection at time j, and "0" indicates that vehicle #i is in any other state at time j. The value of r_{i,j} can then be expressed as follows:
r_{i,j} = 1, if vehicle #i is within the traffic-light intersection at time j;
r_{i,j} = 0, otherwise.
The computing device 102 generates a state matrix #3 of size m×n from the turning states of the host vehicle at different times, where m represents the total number of vehicles recorded in state matrix #3, and n represents the total recorded duration for each vehicle in state matrix #3.
The element s_{i,j} of state matrix #3 represents the turning state of vehicle #i at time j. For example, "1" indicates that vehicle #i is turning left at time j, "2" indicates that vehicle #i is turning right at time j, and "0" indicates that vehicle #i is in any other state at time j, for example, that vehicle #i performs no turning operation at time j. The value of s_{i,j} can then be expressed as follows:
s_{i,j} = 1, if vehicle #i is turning left at time j;
s_{i,j} = 2, if vehicle #i is turning right at time j;
s_{i,j} = 0, otherwise.
The computing device 102 can generate state matrix #1 from state matrix #2 and state matrix #3. The element t_{i,j} of state matrix #1 represents the turning state of vehicle #i at the traffic-light intersection at time j. For example, "1" indicates that vehicle #i turns left at the traffic-light intersection at time j, "2" indicates that vehicle #i turns right at the traffic-light intersection at time j, and "0" indicates that vehicle #i is in any other state at time j. The value of t_{i,j} can then be expressed as follows:
t_{i,j} = 1, if vehicle #i turns left at the traffic-light intersection at time j (r_{i,j} = 1 and s_{i,j} = 1);
t_{i,j} = 2, if vehicle #i turns right at the traffic-light intersection at time j (r_{i,j} = 1 and s_{i,j} = 2);
t_{i,j} = 0, otherwise.
When the recognition rule determined by the computing device 102 for scenario #3 is: check whether state matrix #1 contains "1", then, when state matrix #1 contains "1", the computing device 102 can recognize scenario #3 from the state matrix. The computing device 102 can associate the driving data of the host vehicle at the times corresponding to the element "1" with scenario #3. In the simulation software, scenario #3 is then built from the driving data of the host vehicle at those times and the road information related to that driving data, so that the functions of the automatic driving system of the host vehicle under scenario #3 are verified in the simulation software.
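The combination of the two matrices can be sketched as follows. The combination rule used here (t equals s wherever r is 1, and 0 elsewhere) is inferred from the element definitions above rather than stated verbatim in the text, and the data is invented for illustration.

```python
# Inferred combination: t[i][j] = s[i][j] when r[i][j] == 1 (the vehicle is
# inside the traffic-light intersection), and 0 otherwise.
r = [[0, 0, 1, 1, 1, 0]]  # vehicle #1: inside the intersection at times #3-#5
s = [[0, 0, 0, 1, 1, 0]]  # vehicle #1: turning left (1) at times #4-#5

t = [[sij if rij == 1 else 0 for rij, sij in zip(ri, si)]
     for ri, si in zip(r, s)]
print(t)  # [[0, 0, 0, 1, 1, 0]]

# Recognition rule for scenario #3: state matrix #1 contains a "1".
scenario_3_detected = any(1 in row for row in t)
left_turn_times = [j + 1 for j, tij in enumerate(t[0]) if tij == 1]
print(scenario_3_detected, left_turn_times)  # True [4, 5]
```

The times at which t contains "1" are the times whose driving data would be associated with scenario #3.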
It should be understood that the foregoing merely uses the storage of the state sequence in the form of a matrix as an example to introduce scenarios #1 to #3, but this application is not limited thereto. The state sequence may also be stored in other forms, for example, in the form of a list; moreover, any other storage form capable of reflecting the states of a vehicle at different times shall fall within the protection scope of this application.
The recognition rules in this application may also be described by regular expressions. For example, the recognition rule in scenario #1 may be described by the regular expression "1+", where "1+" denotes one or more consecutive "1"s, and the recognition rule in scenario #2 may be described by the regular expression "12", where "12" denotes a change from "1" to "2".
The foregoing specifically describes how the computing device 102 recognizes a scenario. In this application, the complexity of a recognized scenario may also be determined, as described in detail below.
In this case, the method 200 may further include: the computing device 102 determines state sequence #4 (that is, an example of the fourth state sequence) from the driving data of the vehicle, where state sequence #4 may represent an associated state of the host vehicle. For example, the associated state of the host vehicle may include at least one of a road environment state, a state of the host vehicle, and states of the other vehicles; accordingly, state sequence #4 may include at least one of a road environment state sequence, a state sequence of the host vehicle, and state sequences of the other vehicles. The computing device 102 determines subsequence #2 (that is, an example of the second subsequence) in state sequence #4 according to the times corresponding to subsequence #1, and determines the complexity of the target scenario according to subsequence #2.
The road environment state sequence can describe the road environment in which the vehicle is located at different times; for example, it can describe the host vehicle driving on a curved road at one time and on a straight road at the next. The state sequence of the host vehicle can describe the driving speed of the host vehicle at different times, and the state sequences of the other vehicles can describe the distances between the other vehicles and the host vehicle at different times.
When determining the complexity of the target scenario, the computing device 102 can, according to the times corresponding to subsequence #1 of the target scenario in state matrix #1, determine the elements at the same times in state matrix #4 (corresponding to state sequence #4) as subsequence #2, and determine the complexity of the target scenario according to subsequence #2.
For example, in scenario #1 above, the driving speed of the host vehicle at time #44 to time #47 is greater than or equal to 50 km/h. When determining the complexity of scenario #1, the computing device 102 may determine the elements corresponding to time #44 to time #47 in state sequence #4 as subsequence #2.
For example, state sequence #4 includes the road environment state sequence, the state sequence of the host vehicle, and the state sequences of the other vehicles. The computing device 102 may determine the elements corresponding to time #44 to time #47 in the road environment state sequence of the host vehicle as the subsequence #2 corresponding to the road environment state sequence, determine the elements corresponding to time #44 to time #47 in the state sequence of the host vehicle as the subsequence #2 corresponding to the state sequence of the host vehicle, and determine the elements corresponding to time #44 to time #47 in the state sequences of the other vehicles as the subsequence #2 corresponding to the state sequences of the other vehicles.
The computing device 102 may determine the complexity of the associated state of the host vehicle according to at least one of the subsequence #2 corresponding to the road environment state sequence, the subsequence #2 corresponding to the state sequence of the host vehicle, and the subsequence #2 corresponding to the state sequences of the other vehicles, and determine the complexity of scenario #1 according to the complexity of the associated state of the host vehicle. The complexity of the associated state of the host vehicle includes at least one of road environment complexity, host-vehicle complexity, and other-vehicle complexity.
For example, the computing device 102 may determine the road environment complexity according to the subsequence #2 corresponding to the road environment state sequence, determine the host-vehicle complexity according to the subsequence #2 corresponding to the state sequence of the host vehicle, and determine the other-vehicle complexity according to the subsequence #2 corresponding to the state sequences of the other vehicles.
The computing device 102 may perform a weighted operation on at least one of the road environment complexity, the host-vehicle complexity, and the other-vehicle complexity to determine the complexity of scenario #1.
For example, for scenario #1 and scenario #2, the computing device 102 performs weighted operations on the road environment complexity, host-vehicle complexity, and other-vehicle complexity corresponding to scenario #1 and scenario #2 respectively, and the resulting complexities of scenario #1 and scenario #2 are shown in the following table:
Table 1
Scenario | Road environment complexity | Other-vehicle complexity | Host-vehicle complexity | Scenario complexity
1        | 0.13                        | 0.00                     | 0.05                    | 0.02
2        | 0.13                        | 0.30                     | 0.05                    | 0.18
It should be understood that the foregoing merely uses a weighted operation on at least one of the road environment complexity, the host-vehicle complexity, and the other-vehicle complexity as an example of determining the complexity of a scenario; this does not specially limit the application, and other methods of determining the complexity of a scenario based on at least one of the road environment complexity, the host-vehicle complexity, and the other-vehicle complexity shall all fall within the protection scope of this application.
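The weighted operation can be sketched as follows. The weights below are assumptions for demonstration; the text does not disclose the actual weights used to produce Table 1, and the outputs here are not intended to reproduce the Table 1 values.

```python
# Illustrative sketch only: the weights below are assumed, not the values
# used to produce Table 1 in the text.
WEIGHTS = {"road": 0.2, "other": 0.5, "ego": 0.3}

def scene_complexity(road, other, ego, weights=WEIGHTS):
    """Combine road-environment, other-vehicle and host-vehicle complexity
    into a single scene complexity by a weighted sum."""
    return (weights["road"] * road
            + weights["other"] * other
            + weights["ego"] * ego)

# Complexity inputs taken from the Table 1 rows for scenario 1 and scenario 2.
print(round(scene_complexity(0.13, 0.00, 0.05), 3))  # 0.041
print(round(scene_complexity(0.13, 0.30, 0.05), 3))  # 0.191
```

With these assumed weights, a scenario with higher other-vehicle complexity (a cut-in, for example) scores higher than one involving only the host vehicle, consistent with the ordering in Table 1.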
In the embodiments of this application, to further improve the accuracy of scene recognition, the computing device 102 may obtain map information, determine position information of multiple roads, multiple lanes, and multiple junctions from the map information, and construct road network topology information from the position information of the multiple roads, multiple lanes, and multiple junctions. The road network topology information can be used to determine on which lane of a road or junction a vehicle is located, as well as the spatial relationships between the roads, lanes, and junctions. The position information of the multiple roads, multiple lanes, and multiple junctions may be their coordinates in the map information.
After obtaining the driving data of the vehicle, the computing device 102 can determine the position information of the vehicle at different times from the driving data and, combining the position information of the vehicle at different times with the road network topology information, determine at least one of the road, lane, and junction in which the vehicle is located.
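The lane lookup described above can be sketched as follows. The data model is an assumption for illustration: lanes are represented by axis-aligned extents, whereas a real road network topology would use lane geometries and the road/lane/junction relations built from the map information.

```python
# Assumed, simplified data model: each lane is an axis-aligned rectangle
# ((min_x, min_y), (max_x, max_y)) in map coordinates.
LANES = {
    "lane_1": ((0.0, 0.0), (100.0, 3.5)),
    "lane_2": ((0.0, 3.5), (100.0, 7.0)),
}

def locate_lane(x, y):
    """Return the id of the first lane whose extent contains (x, y)."""
    for lane_id, ((x0, y0), (x1, y1)) in LANES.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return lane_id
    return None

print(locate_lane(50.0, 5.0))  # lane_2
print(locate_lane(50.0, 1.0))  # lane_1
```

Applying such a lookup to the vehicle's coordinates at each time yields a per-time lane assignment that can be compared against the relative-position detections below.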
When determining whether the driving scene corresponding to the driving data includes the target scenario, a comprehensive judgment is made based on the detection result described above and at least one of the road, lane, and junction in which the vehicle is located, thereby improving the accuracy of scene recognition.
In this case, the method 200 may further include:
determining subsequence #3 (that is, an example of the third subsequence) in state sequence #5 (that is, an example of the fifth state sequence) according to the times corresponding to subsequence #1, where state sequence #5 represents the position information of the vehicle at different times and is determined from the driving data of the vehicle. Step 230 may be replaced with:
determining, according to the detection result and subsequence #3 (that is, an example of the third subsequence), whether the driving scene corresponding to the driving data includes the target scenario.
Scenario #2 and scenario #3 above are further described below in conjunction with this method.
In scenario #2, when the computing device 102 recognizes that vehicle #2 moved from ahead-left of the host vehicle to directly ahead of the host vehicle at time #6 to time #7, it can also determine the position information of vehicle #2 at time #6 to time #7 from state sequence #5 (that is, an example of subsequence #3), and determine, from the position information of vehicle #2 at time #6 to time #7 and the road network topology information, the lane in which vehicle #2 is located at time #6 to time #7. State sequence #5 includes the position information of vehicle #2 at different times, which may be the coordinates of the positions of vehicle #2 at different times in the map information.
For example, as shown in FIG. 3, the computing device 102 determines, from the position information of vehicle #2 at time #6 to time #7 and the road network topology information, that vehicle #2 changed from its original lane to the lane to the right of its original lane at time #6 to time #7. The computing device 102 can then determine, from the two detection results (vehicle #2 moved from ahead-left of the host vehicle to directly ahead of the host vehicle at time #6 to time #7, and vehicle #2 changed from its original lane to the lane to the right of its original lane at time #6 to time #7), that vehicle #2 cut in from the front left of the host vehicle at time #6 to time #7, thereby recognizing scenario #2 more accurately.
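The combined judgment above can be sketched as follows; the interval values are invented, and the "overlap" criterion is an illustrative way of requiring both detections to hold over the same times.

```python
# Illustrative sketch: confirm scenario #2 only when the two detection
# results overlap in time, i.e. the relative position changed from
# ahead-left to directly ahead AND the map-matched lane changed to the
# right neighbour of the original lane.
def overlap(a, b):
    """Return the overlap of two (start, end) time intervals, or None."""
    start, end = max(a[0], b[0]), min(a[1], b[1])
    return (start, end) if start <= end else None

position_change = (6, 7)  # '1' -> '2' in the relative-position matrix
lane_change = (6, 7)      # original lane -> right neighbour (map matching)

cut_in = overlap(position_change, lane_change)
print(cut_in)  # (6, 7): scenario #2 confirmed for time #6 to time #7
```

If the two intervals did not overlap, the cut-in would not be confirmed, which is how the map-based check suppresses false positives from the relative-position matrix alone.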
In scenario #3, when the computing device 102 recognizes that vehicle #i turns left at the traffic-light intersection at time j, it can also determine the position information of vehicle #i at time j from state sequence #5 (that is, another example of subsequence #3), and determine, from the position information of vehicle #i at time j and the road network topology information, that vehicle #i is located within a junction at time j. The computing device 102 can then determine, from the two detection results (vehicle #i turns left at the traffic-light intersection at time j, and vehicle #i is located within a junction at time j), that vehicle #i turns left at the traffic-light intersection at time j, thereby recognizing scenario #3 more accurately. State sequence #5 includes the position information of vehicle #i at different times, which may be the coordinates of the positions of vehicle #i at different times in the map information. A detailed description is given below in conjunction with scenario #4 provided in the embodiments of this application.
Scenario #4: a scenario in which the host vehicle goes straight through a traffic-light intersection and a target vehicle (that is, an example of another vehicle) turns right.
To identify scenario #4, the computing device 102 may first determine, according to the method described for scenario #3, that the host vehicle goes straight through the traffic-light intersection at time #t_1 to time #t_2. It can further determine the position information of the host vehicle at time #t_1 to time #t_2 from state sequence #5 (that is, another example of subsequence #3), and determine, from the position information of the host vehicle at time #t_1 to time #t_2 and the road network topology information, the road on which the host vehicle is located at time #t_1 to time #t_2. State sequence #5 includes the position information of the host vehicle at different times, which may be the coordinates of the positions of the host vehicle at different times in the map information.
For example, as shown in FIG. 4, the computing device 102 determines, from the position information of the host vehicle at time #t_1 to time #t_2 and the road network topology information, that the host vehicle travels from road #1 through the traffic-light intersection to road #3 at time #t_1 to time #t_2.
The computing device 102 can determine, according to the method described for scenario #3, that the target vehicle turns right at the traffic-light intersection at time #t_1 to time #t_2. After that, the computing device 102 can determine the position information of the target vehicle at time #t_1 to time #t_2 from state sequence #5 (that is, another example of subsequence #3), and determine, from the position information of the target vehicle at time #t_1 to time #t_2 and the road network topology information, the road on which the target vehicle is located at time #t_1 to time #t_2. State sequence #5 includes the position information of the target vehicle at different times, which may be the coordinates of the positions of the target vehicle at different times in the map information.
For example, the computing device 102 determines, from the position information of the target vehicle at time #t_1 to time #t_2 and the road network topology information, that the target vehicle travels from road #2 through the traffic-light intersection to road #3 at time #t_1 to time #t_2.
The computing device 102 can then determine, from the three detection results (the host vehicle goes straight through the traffic-light intersection at time #t_1 to time #t_2, the target vehicle turns right at the traffic-light intersection at time #t_1 to time #t_2, and the target vehicle travels from road #2 through the traffic-light intersection to road #3 at time #t_1 to time #t_2), that the host vehicle goes straight through the traffic-light intersection and the target vehicle turns right at the traffic-light intersection at time #t_1 to time #t_2, thereby recognizing scenario #4 more accurately.
It should be understood that, in the various embodiments of this application, the magnitudes of the sequence numbers of the foregoing processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and shall not constitute any limitation on the implementation of the embodiments of this application.
The foregoing describes in detail a scene recognition method provided by the embodiments of this application with reference to FIG. 1 to FIG. 4. The following describes apparatus embodiments of this application in detail with reference to FIG. 5 and FIG. 6. It should be understood that the descriptions of the method embodiments correspond to the descriptions of the apparatus embodiments; therefore, for parts not described in detail, reference may be made to the foregoing method embodiments.
FIG. 5 is a schematic structural diagram of a computing device 500 provided by an embodiment of this application. The computing device 500 includes:
a determining module 510, configured to determine a first state sequence according to driving data of a vehicle, where the first state sequence represents first states of the vehicle at different times; and
a processing module 520, configured to detect the first state sequence using a recognition rule, where the recognition rule is determined according to a target scenario;
where the determining module 510 is further configured to determine, according to a detection result, whether the target scenario is included in a driving scene corresponding to the driving data.
It should be understood that the computing device 500 provided in the embodiments of this application may be implemented by an application-specific integrated circuit (ASIC) or a programmable logic device (PLD), where the PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof. When the scene recognition method shown in FIG. 2 is implemented by software, the modules of the computing device 500 may also be software modules.
Optionally, in some embodiments, when determining the first state sequence according to the driving data of the vehicle, the determining module 510 is specifically configured to determine the first state sequence according to the driving data of the vehicle and the target scene.
Optionally, in some embodiments, when determining, according to the detection result, whether the driving scene corresponding to the driving data includes the target scene, the determining module 510 is specifically configured to:
if the first state sequence includes a first subsequence, determine that the driving scene corresponding to the driving data includes the target scene, where the first subsequence is a subsequence that satisfies the recognition rule; or
if the first state sequence does not include a first subsequence, determine that the driving scene corresponding to the driving data does not include the target scene, where the first subsequence is a subsequence that satisfies the recognition rule.
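Both branches above reduce to a single membership test on the first state sequence. The sketch below is hypothetical (the "braking run" rule is invented for illustration); it also records the time indices of the matching first subsequence, which later embodiments use to slice other state sequences over the same interval:

```python
def find_first_subsequence(states, satisfies):
    """Return (start, end) of the first contiguous subsequence satisfying the
    recognition rule, or None when no such subsequence exists."""
    for start in range(len(states)):
        for end in range(start + 1, len(states) + 1):
            if satisfies(states[start:end]):
                return start, end
    return None

# Illustrative rule: at least two consecutive 'braking' (-1) states.
rule = lambda sub: len(sub) >= 2 and all(s == -1 for s in sub)

match = find_first_subsequence([0, -1, -1, 0], rule)
scene_present = match is not None  # True here; the match spans times 1..2
```

If `match` is not `None` the driving scene is deemed to include the target scene; otherwise it is deemed not to.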
Optionally, in some embodiments, when determining the first state sequence according to the driving data of the vehicle and the target scene, the determining module 510 is specifically configured to:
determine a second state sequence and a third state sequence according to the driving data of the vehicle and the target scene, where the second state sequence represents second states of the vehicle at different times, and the third state sequence represents third states of the vehicle at different times; and
generate the first state sequence according to the second state sequence and the third state sequence.
Optionally, in some embodiments, the first state sequence is an m×n matrix, where the element in row i and column j of the matrix represents the first state, at time j, of the vehicle with index i; m is an integer greater than or equal to 1, n is an integer greater than or equal to 2, i is an integer greater than or equal to 1 and less than m, and j is an integer greater than or equal to 1 and less than n.
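For illustration only, such an m×n first-state matrix can be held as nested lists in Python; the state values below are made up:

```python
# Rows: vehicles indexed by i; columns: time steps indexed by j.
# first_state[i][j] is the first state of vehicle i at time j (values illustrative).
m, n = 3, 4  # 3 vehicles observed over 4 time steps
first_state = [
    [0, 0, 1, 1],  # vehicle 0
    [1, 1, 1, 1],  # vehicle 1
    [0, 1, 0, 1],  # vehicle 2
]

def state_at(matrix, i, j):
    """First state of the vehicle with index i at time j."""
    return matrix[i][j]

def vehicle_row(matrix, i):
    """Per-vehicle state sequence: one row of the matrix."""
    return matrix[i]
```

Each row is then a per-vehicle state sequence that can be scanned independently by the recognition rule.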
Optionally, in some embodiments, when the driving scene corresponding to the driving data includes the target scene, the determining module 510 is further configured to:
determine a second subsequence in a fourth state sequence according to the times corresponding to the first subsequence, where the fourth state sequence represents associated states of the vehicle and is determined according to the driving data of the vehicle; and
determine the complexity of the target scene according to the second subsequence.
Optionally, in some embodiments, when determining the complexity of the target scene according to the second subsequence, the determining module 510 is specifically configured to:
determine the complexity of the associated states of the vehicle according to the second subsequence; and
perform a weighted operation on the complexity of the associated states of the vehicle to determine the complexity of the target scene.
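The weighted operation above may, for example, be a normalised weighted sum over the complexities of the individual associated states; the scores and weights below are invented for illustration:

```python
def scene_complexity(assoc_complexities, weights):
    """Weighted combination of the complexities of the vehicle's associated
    states (e.g. speed variation, nearby-vehicle count; both hypothetical)."""
    assert len(assoc_complexities) == len(weights)
    total = sum(w * c for w, c in zip(weights, assoc_complexities))
    return total / sum(weights)  # normalised, so weights need not sum to 1

# Two hypothetical associated states with complexities 0.2 and 0.8,
# the second weighted three times as heavily as the first.
print(scene_complexity([0.2, 0.8], [1, 3]))  # 0.65
```

The choice of weights would reflect how strongly each associated state is considered to contribute to the difficulty of the scene for an autonomous driving system.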
Optionally, in some embodiments, the determining module 510 is further configured to:
determine a third subsequence in a fifth state sequence according to the times corresponding to the first subsequence, where the fifth state sequence represents position information of the vehicle at different times and is determined according to the driving data of the vehicle; and
when determining, according to the detection result, whether the driving scene corresponding to the driving data includes the target scene, the determining module is specifically configured to:
determine, according to the detection result and the third subsequence, whether the driving scene corresponding to the driving data includes the target scene.
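As a hypothetical sketch, such a combined check may require both that the recognition rule matched and that the position information (the fifth state sequence) over the matched interval satisfies a location constraint; the region test below is invented for illustration:

```python
def scene_in_region(match, positions, in_region):
    """match: (start, end) time indices from rule detection, or None.
    positions: per-time-step (x, y) position information of the vehicle.
    The scene counts only if every position in the matched interval
    lies inside the region of interest."""
    if match is None:
        return False
    start, end = match
    return all(in_region(p) for p in positions[start:end])

# Hypothetical region of interest: a unit square around the origin.
inside = lambda p: abs(p[0]) <= 1.0 and abs(p[1]) <= 1.0

positions = [(0.0, 0.0), (0.5, 0.2), (0.9, 0.9), (5.0, 5.0)]
print(scene_in_region((1, 3), positions, inside))  # True
print(scene_in_region((2, 4), positions, inside))  # False: (5.0, 5.0) outside
```

In practice the constraint could instead test map features (e.g. whether the matched interval occurs at an intersection), but the structure of the check is the same.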
The computing device 500 according to the embodiments of this application may correspondingly perform the methods described in the embodiments of this application, and the foregoing and other operations and/or functions of the units of the computing device 500 are respectively intended to implement the corresponding procedures of the method in FIG. 2. For brevity, details are not repeated here.
FIG. 6 is a schematic structural diagram of a computing device 600 provided by an embodiment of this application. The computing device 600 includes a processor 610, a memory 620, a communication interface 630, and a bus 650.
It should be understood that the processor 610 in the computing device 600 shown in FIG. 6 may correspond to the determining module 510 and the processing module 520 of the computing device 500 in FIG. 5, and the communication interface 630 in the computing device 600 may be used to communicate with other devices.
The processor 610 may be connected to the memory 620. The memory 620 may be configured to store program code and data. Accordingly, the memory 620 may be a storage unit inside the processor 610, an external storage unit independent of the processor 610, or a component that includes both a storage unit inside the processor 610 and an external storage unit independent of the processor 610.
Optionally, the computing device 600 may further include a bus 650, through which the memory 620 and the communication interface 630 may be connected to the processor 610. The bus 650 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus 650 may be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one line is used in FIG. 6, but this does not mean that there is only one bus or only one type of bus.
It should be understood that, in the embodiments of this application, the processor 610 may be a central processing unit (CPU). The processor may alternatively be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor. Alternatively, the processor 610 may use one or more integrated circuits to execute related programs, so as to implement the technical solutions provided in the embodiments of this application.
The memory 620 may include a read-only memory and a random access memory, and provides instructions and data to the processor 610. A part of the processor 610 may also include a non-volatile random access memory. For example, the processor 610 may also store device type information.
When the computing device 600 runs, the processor 610 executes the computer-executable instructions in the memory 620 to perform the operation steps of the foregoing method.
It should be understood that the computing device 600 according to the embodiments of this application may correspond to the execution body of the method shown in FIG. 2, and the foregoing and other operations and/or functions of the modules of the computing device 600 are respectively intended to implement the corresponding procedures of the method in FIG. 2. For brevity, details are not repeated here.
A person of ordinary skill in the art may realize that the units and algorithm steps of the examples described with reference to the embodiments disclosed herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the specific application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each specific application, but such implementation shall not be considered beyond the scope of this application.
A person skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; the division into units is merely a division by logical function, and in actual implementation there may be other divisions. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be in electrical, mechanical, or other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of a software functional unit and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of this application essentially, or the part contributing to the prior art, or a part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for instructing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of this application. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The foregoing descriptions are merely specific implementations of this application, but the protection scope of this application is not limited thereto. Any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (17)

  1. A scene recognition method, comprising:
    determining a first state sequence according to driving data of a vehicle, wherein the first state sequence represents first states of the vehicle at different times;
    detecting the first state sequence by using a recognition rule, wherein the recognition rule is determined according to a target scene; and
    determining, according to a detection result, whether a driving scene corresponding to the driving data comprises the target scene.
  2. The method according to claim 1, wherein the determining a first state sequence according to driving data of a vehicle comprises:
    determining the first state sequence according to the driving data of the vehicle and the target scene.
  3. The method according to claim 1 or 2, wherein the determining, according to a detection result, whether a driving scene corresponding to the driving data comprises the target scene comprises:
    if the first state sequence comprises a first subsequence, determining that the driving scene corresponding to the driving data comprises the target scene, wherein the first subsequence is a subsequence that satisfies the recognition rule; or
    if the first state sequence does not comprise a first subsequence, determining that the driving scene corresponding to the driving data does not comprise the target scene, wherein the first subsequence is a subsequence that satisfies the recognition rule.
  4. The method according to claim 2 or 3, wherein the determining the first state sequence according to the driving data of the vehicle and the target scene comprises:
    determining a second state sequence and a third state sequence according to the driving data of the vehicle and the target scene, wherein the second state sequence represents second states of the vehicle at different times, and the third state sequence represents third states of the vehicle at different times; and
    generating the first state sequence according to the second state sequence and the third state sequence.
  5. The method according to any one of claims 1 to 4, wherein the first state sequence is an m×n matrix, an element in row i and column j of the matrix represents the first state, at time j, of the vehicle with index i, m is an integer greater than or equal to 1, n is an integer greater than or equal to 2, i is an integer greater than or equal to 1 and less than m, and j is an integer greater than or equal to 1 and less than n.
  6. The method according to any one of claims 3 to 5, wherein the driving scene corresponding to the driving data comprises the target scene, and the method further comprises:
    determining a second subsequence in a fourth state sequence according to times corresponding to the first subsequence, wherein the fourth state sequence represents associated states of the vehicle and is determined according to the driving data of the vehicle; and
    determining a complexity of the target scene according to the second subsequence.
  7. The method according to claim 6, wherein the determining a complexity of the target scene according to the second subsequence comprises:
    determining a complexity of the associated states of the vehicle according to the second subsequence; and
    performing a weighted operation on the complexity of the associated states of the vehicle to determine the complexity of the target scene.
  8. The method according to any one of claims 3 to 7, further comprising:
    determining a third subsequence in a fifth state sequence according to times corresponding to the first subsequence, wherein the fifth state sequence represents position information of the vehicle at different times and is determined according to the driving data of the vehicle; and
    the determining, according to a detection result, whether a driving scene corresponding to the driving data comprises the target scene comprises:
    determining, according to the detection result and the third subsequence, whether the driving scene corresponding to the driving data comprises the target scene.
  9. A computing device, comprising:
    a determining module, configured to determine a first state sequence according to driving data of a vehicle, wherein the first state sequence represents first states of the vehicle at different times; and
    a processing module, configured to detect the first state sequence by using a recognition rule, wherein the recognition rule is determined according to a target scene,
    wherein the determining module is further configured to determine, according to a detection result, whether a driving scene corresponding to the driving data comprises the target scene.
  10. The computing device according to claim 9, wherein, when determining the first state sequence according to the driving data of the vehicle, the determining module is specifically configured to:
    determine the first state sequence according to the driving data of the vehicle and the target scene.
  11. The computing device according to claim 9 or 10, wherein, when determining, according to the detection result, whether the driving scene corresponding to the driving data comprises the target scene, the determining module is specifically configured to:
    if the first state sequence comprises a first subsequence, determine that the driving scene corresponding to the driving data comprises the target scene, wherein the first subsequence is a subsequence that satisfies the recognition rule; or
    if the first state sequence does not comprise a first subsequence, determine that the driving scene corresponding to the driving data does not comprise the target scene, wherein the first subsequence is a subsequence that satisfies the recognition rule.
  12. The computing device according to claim 10 or 11, wherein, when determining the first state sequence according to the driving data of the vehicle and the target scene, the determining module is specifically configured to:
    determine a second state sequence and a third state sequence according to the driving data of the vehicle and the target scene, wherein the second state sequence represents second states of the vehicle at different times, and the third state sequence represents third states of the vehicle at different times; and
    generate the first state sequence according to the second state sequence and the third state sequence.
  13. The computing device according to any one of claims 9 to 12, wherein the first state sequence is an m×n matrix, an element in row i and column j of the matrix represents the first state, at time j, of the vehicle with index i, m is an integer greater than or equal to 1, n is an integer greater than or equal to 2, i is an integer greater than or equal to 1 and less than m, and j is an integer greater than or equal to 1 and less than n.
  14. The computing device according to any one of claims 11 to 13, wherein, when the driving scene corresponding to the driving data comprises the target scene, the determining module is further configured to:
    determine a second subsequence in a fourth state sequence according to times corresponding to the first subsequence, wherein the fourth state sequence represents associated states of the vehicle and is determined according to the driving data of the vehicle; and
    determine a complexity of the target scene according to the second subsequence.
  15. The computing device according to claim 14, wherein, when determining the complexity of the target scene according to the second subsequence, the determining module is specifically configured to:
    determine a complexity of the associated states of the vehicle according to the second subsequence; and
    perform a weighted operation on the complexity of the associated states of the vehicle to determine the complexity of the target scene.
  16. The computing device according to any one of claims 11 to 15, wherein the determining module is further configured to:
    determine a third subsequence in a fifth state sequence according to times corresponding to the first subsequence, wherein the fifth state sequence represents position information of the vehicle at different times and is determined according to the driving data of the vehicle; and
    when determining, according to the detection result, whether the driving scene corresponding to the driving data comprises the target scene, the determining module is specifically configured to:
    determine, according to the detection result and the third subsequence, whether the driving scene corresponding to the driving data comprises the target scene.
  17. A computing device, comprising a processor and a memory, wherein the memory is configured to store computer-executable instructions, and when the computing device runs, the processor executes the computer-executable instructions in the memory, so that the computing device performs the operation steps of the method according to any one of claims 1 to 8.
PCT/CN2020/097886 2019-09-27 2020-06-24 Scenario identification method and computing device WO2021057134A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910927376.XA CN110796007B (en) 2019-09-27 2019-09-27 Scene recognition method and computing device
CN201910927376.X 2019-09-27

Publications (1)

Publication Number Publication Date
WO2021057134A1 true WO2021057134A1 (en) 2021-04-01

Family

ID=69438671

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/097886 WO2021057134A1 (en) 2019-09-27 2020-06-24 Scenario identification method and computing device

Country Status (2)

Country Link
CN (1) CN110796007B (en)
WO (1) WO2021057134A1 (en)


Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110796007B (en) * 2019-09-27 2023-03-03 华为技术有限公司 Scene recognition method and computing device
CN111582018B (en) * 2020-03-24 2024-02-09 北京掌行通信息技术有限公司 Unmanned vehicle dynamic interaction scene judging method, unmanned vehicle dynamic interaction scene judging system, unmanned vehicle dynamic interaction scene judging terminal and storage medium
CN112017438B (en) * 2020-10-16 2021-08-27 宁波均联智行科技股份有限公司 Driving decision generation method and system
CN112380137A (en) * 2020-12-04 2021-02-19 清华大学苏州汽车研究院(吴江) Method, device and equipment for determining automatic driving scene and storage medium
CN112565468B (en) * 2021-02-22 2021-08-31 华为技术有限公司 Driving scene recognition method and system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105247492A (en) * 2013-04-02 2016-01-13 西部数据技术公司 Detection of user behavior using time series modeling
CN105954040A (en) * 2016-04-22 2016-09-21 百度在线网络技术(北京)有限公司 Testing method and device for driverless automobiles
US20180188733A1 (en) * 2016-12-29 2018-07-05 DeepScale, Inc. Multi-channel sensor simulation for autonomous control systems
CN109278758A (en) * 2018-08-28 2019-01-29 武汉理工大学 A kind of intelligent vehicle personalized driving learning system based on smart phone
CN110187639A (en) * 2019-06-27 2019-08-30 吉林大学 A kind of trajectory planning control method based on Parameter Decision Making frame
CN110796007A (en) * 2019-09-27 2020-02-14 华为技术有限公司 Scene recognition method and computing device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6693321B2 (en) * 2016-07-26 2020-05-13 株式会社デンソー Ability evaluation system
US10712746B2 (en) * 2016-08-29 2020-07-14 Baidu Usa Llc Method and system to construct surrounding environment for autonomous vehicles to make driving decisions
CN109211575B (en) * 2017-07-05 2020-11-20 百度在线网络技术(北京)有限公司 Unmanned vehicle and site testing method, device and readable medium thereof
CN108334055B (en) * 2018-01-30 2021-10-15 赵兴华 Method, device and equipment for checking vehicle automatic driving algorithm and storage medium
CN109520744B (en) * 2018-11-12 2020-04-21 百度在线网络技术(北京)有限公司 Driving performance testing method and device for automatic driving vehicle


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113859264A (en) * 2021-09-17 2021-12-31 阿波罗智联(北京)科技有限公司 Vehicle control method, device, electronic device and storage medium
CN113859264B (en) * 2021-09-17 2023-12-22 阿波罗智联(北京)科技有限公司 Vehicle control method, device, electronic equipment and storage medium
CN113838358A (en) * 2021-10-29 2021-12-24 郑州信大捷安信息技术股份有限公司 Method and system for realizing intelligent traffic sand table
CN114348001A (en) * 2022-01-06 2022-04-15 腾讯科技(深圳)有限公司 Traffic simulation method, device, equipment and storage medium
CN114348001B (en) * 2022-01-06 2024-04-26 腾讯科技(深圳)有限公司 Traffic simulation method, device, equipment and storage medium
CN114724370A (en) * 2022-03-31 2022-07-08 阿波罗智联(北京)科技有限公司 Traffic data processing method, traffic data processing device, electronic equipment and medium
CN117422808A (en) * 2023-12-19 2024-01-19 Zhongbei Shuke (Hebei) Technology Co., Ltd. Three-dimensional scene data loading method and electronic device
CN117422808B (en) * 2023-12-19 2024-03-19 Zhongbei Shuke (Hebei) Technology Co., Ltd. Three-dimensional scene data loading method and electronic device

Also Published As

Publication number Publication date
CN110796007A (en) 2020-02-14
CN110796007B (en) 2023-03-03

Similar Documents

Publication Publication Date Title
WO2021057134A1 (en) Scenario identification method and computing device
JP6714513B2 (en) An in-vehicle device that informs the navigation module of the vehicle of the presence of an object
CN108508881B (en) Automatic driving control strategy adjusting method, device, equipment and storage medium
GB2555214A (en) Depth map estimation with stereo images
JP7220169B2 (en) Information processing method, device, storage medium, and program
US11091161B2 (en) Apparatus for controlling lane change of autonomous vehicle and method thereof
Zhao et al. A cooperative vehicle-infrastructure based urban driving environment perception method using a DS theory-based credibility map
US11562556B1 (en) Prediction error scenario mining for machine learning models
US11299169B2 (en) Vehicle neural network training
US20230046410A1 (en) Semantic annotation of sensor data using unreliable map annotation inputs
KR20220054743A (en) Metric back-propagation for subsystem performance evaluation
US20230360379A1 (en) Track segment cleaning of tracked objects
CN117056153A (en) Methods, systems, and computer program products for calibrating and verifying driver assistance systems and/or autopilot systems
US11454977B2 (en) Information processing method and information processing device
CN110784680B (en) Vehicle positioning method and device, vehicle and storage medium
CN117079238A (en) Road edge detection method, device, equipment and storage medium
US11400958B1 (en) Learning to identify safety-critical scenarios for an autonomous vehicle
Ravishankaran Impact on how AI in automobile industry has affected the type approval process at RDW
US20230024799A1 (en) Method, system and computer program product for the automated locating of a vehicle
US20230331256A1 (en) Discerning fault for rule violations of autonomous vehicles for data processing
US20220309693A1 (en) Adversarial Approach to Usage of Lidar Supervision to Image Depth Estimation
US20230339517A1 (en) Autonomous driving evaluation system
Bassett et al. Infrastructure-based Detection and Localization of Road Users for Cooperative Autonomous Driving
CN117008574A (en) Test platform for intelligent connected vehicle advanced driver-assistance systems and autonomous driving systems
WO2023158580A1 (en) Prediction error scenario mining for machine learning models

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 20868622
    Country of ref document: EP
    Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 20868622
    Country of ref document: EP
    Kind code of ref document: A1