WO2022174838A1 - Driving scene recognition method and system thereof - Google Patents

Driving scene recognition method and system thereof

Info

Publication number
WO2022174838A1
Authority
WO
WIPO (PCT)
Prior art keywords
sub
driving
scene
identification
driving scene
Prior art date
Application number
PCT/CN2022/077235
Other languages
English (en)
French (fr)
Inventor
夏子为
朱杰
刘新春
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司
Publication of WO2022174838A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/12 Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/94 Hardware or software architectures specially adapted for image or video understanding
    • G06V10/95 Hardware or software architectures specially adapted for image or video understanding structured as a network, e.g. client-server architectures
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/0104 Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0125 Traffic data processing
    • G08G1/0137 Measuring and analyzing of parameters relative to traffic conditions for specific applications
    • G08G1/09 Arrangements for giving variable traffic instructions
    • G08G1/0962 Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G08G1/0967 Systems involving transmission of highway information, e.g. weather, speed limits
    • G08G1/096708 Systems involving transmission of highway information, e.g. weather, speed limits, where the received information might be used to generate an automatic action on the vehicle control
    • G08G1/096725 Systems involving transmission of highway information, e.g. weather, speed limits, where the received information generates an automatic action on the vehicle control

Definitions

  • the present application relates to the technical field of automatic driving, and in particular, to a driving scene recognition method and system thereof.
  • scenario-based virtual testing offers flexible scenario configuration, high testing efficiency, strong repeatability, a safe testing process, and low cost; it enables automated and accelerated testing, saving substantial manpower and material resources. Therefore, scenario-based virtual testing has become an important part of autonomous vehicle testing.
  • the data sources of autonomous driving test scenarios mainly include real data, simulated data or expert experience.
  • the real data sources mainly include natural driving data, accident data, roadside unit monitoring data, etc.
  • Simulation data sources mainly include driving simulator data and simulation data.
  • Expert experience data refers to the scenario element information obtained by summarizing the experience and knowledge of previous tests. Standard and regulatory test scenarios are typical data sources of expert experience scenarios.
  • test scenarios obtained through real data restoration are not only efficient, but also have high data authenticity in terms of road conditions and vehicle running trajectories, and can obtain test scenarios that are closer to the real situation. Therefore, natural scenes based on real data have become an important source of automatic driving simulation test scene library, and how to extract natural scenes has become an important research direction in the industry.
  • the embodiments of the present application provide a driving scene recognition method and a system thereof, which occupy fewer storage and computing resources during driving scene recognition, thereby reducing resource waste.
  • a first aspect of the embodiments of the present application provides a driving scene recognition method, which can be applied to a cloud service system, a vehicle-mounted system, or a combined system composed of a cloud service system and a vehicle-mounted system; any of the foregoing systems includes a service unit and a computing unit. The method includes:
  • a third identification instruction can be input to the service unit, and the third identification instruction carries the target data collected by the vehicle within a certain period of time.
  • the target data includes data of various driving scenarios experienced by the vehicle during the time period, for example, images of various driving scenarios experienced by the vehicle, and the like.
  • the user can also input the first identification instruction to the service unit, and the first identification instruction is used to indicate the sub-driving scenes (that is, the sub-driving scenes to be identified) that constitute the driving scene specified by the user.
  • the service unit can thus obtain the target data based on the third identification instruction, and determine the sub-driving scenes to be identified based on the first identification instruction.
  • for example, if the driving scene to be identified (that is, the driving scene specified by the user) is a turning scene on a sunny day, the sub-driving scenes to be identified include a sunny scene and a turning scene. It can be seen that the driving scene to be identified is constructed from the multiple sub-driving scenes to be identified.
  • after the service unit obtains the target data and the sub-driving scenes to be identified, it sends to the computing unit an identification request (which may include the first identification instruction and the third identification instruction) generated based on them.
  • after receiving the identification request, the computing unit obtains the scene identification files corresponding to the sub-driving scenes to be identified based on the identification request; a scene identification file is used to define the identification rules of a sub-driving scene.
  • the computing unit identifies the target data according to this part of the scene identification files, and obtains the identification results of the sub-driving scenes.
  • for example, after the service unit obtains the target data and the sunny and turning scenes to be identified, it generates an identification request based on this information and sends it to the computing unit. Based on the identification request, the computing unit can obtain the scene identification files corresponding to the sunny scene and the turning scene, use them to identify the target data, and obtain the identification result of the sunny scene and the identification result of the turning scene, that is, the vehicle is in a sunny scene during one time period and in a turning scene during another time period.
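  • As an illustration only (the field names below are assumptions, not part of the patent), such an identification request and the returned per-scene results might be shaped as follows, in Python:

```python
# Hypothetical shape of the identification request the service unit sends to
# the computing unit: the target data reference plus the sub-driving scenes
# to identify (here, the sunny and turning scenes from the example above).
identification_request = {
    "target_data_url": "https://example.invalid/drive-logs/trip-001.bin",
    "sub_scenes": ["sunny", "turning"],
}

# Hypothetical shape of the identification results returned per sub-scene:
# each result names the sub-driving scene and the time span it covers.
sub_scene_results = [
    {"sub_scene": "sunny",   "start": "10:00", "end": "11:00"},
    {"sub_scene": "turning", "start": "10:20", "end": "10:30"},
]
```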
  • the computing unit sends the recognition result of the sub-driving scene to the service unit. So far, the computing unit has completed the identification of the sub-driving scene.
  • since the service unit notifies the computing unit of the sub-driving scenes specified by the user, the computing unit only needs to load the scene recognition files corresponding to those sub-driving scenes, rather than presetting the scene recognition files corresponding to all sub-driving scenes in advance, and it uses this part of the scene recognition files, loaded in real time, to recognize the target data, obtain the recognition results of the sub-driving scenes, and complete the recognition of the sub-driving scenes.
  • in this way, the recognition process occupies fewer storage and computing resources of the computing unit, thereby reducing resource waste.
  • the method further includes: the service unit obtains a second identification instruction, where the second identification instruction is used to indicate a driving scene to be identified, and the driving scene to be identified is constructed based on a plurality of sub-driving scenes to be identified; and the service unit determines the identification result of the driving scene according to the identification results of the sub-driving scenes.
  • the user can also input a second identification instruction to the service unit, where the second identification instruction is used to indicate the driving scene to be identified; the service unit can thereby determine the driving scene to be identified (i.e., the driving scene specified by the user), which is constructed based on the multiple sub-driving scenes to be identified.
  • the service unit determines the recognition results of the driving scenarios according to the recognition results of the sub-driving scenarios.
  • the service unit can convert the target data into driving scenes for testing based on the driving scene recognition results.
  • the method further includes: determining resource requirements according to target data and sub-driving scenarios; and selecting computing units that match the resource requirements in the computing unit cluster.
  • the computing unit cluster includes multiple computing units, and different computing units are allocated with different computing resources and storage resources.
  • the service unit can determine a resource requirement based on the target data and the sub-driving scenes to be identified; the resource requirement indicates the amount of resources needed to identify the sub-driving scenes. The service unit can then select a computing unit that matches the resource requirement from the computing unit cluster. In this way, since the selected computing unit has sufficient computing and storage resources, the identification of the sub-driving scenes can be completed successfully.
  • the method further includes: determining resource requirements according to target data and sub-driving scenarios; and creating a computing unit matching the resource requirements.
  • the service unit may determine a resource requirement based on the target data and the sub-driving scenes to be identified; the resource requirement indicates the amount of resources needed to identify the sub-driving scenes. The service unit can then create a computing unit that matches the resource requirement, for example by directly invoking a cloud service (i.e., virtualization technology). In this way, since the created computing unit has sufficient computing and storage resources, the identification of the sub-driving scenes can be completed successfully.
  • identifying a driving scene directly requires developing scene recognition files corresponding to various driving scenes, and each driving scene is usually constructed from multiple sub-driving scenes, that is, each driving scene has high complexity; developing the scene recognition file corresponding to a driving scene therefore requires considerable development cost.
  • in contrast, the computing unit only uses the scene recognition files corresponding to some sub-driving scenes to recognize the target data; since the complexity of a sub-driving scene is low, the development cost of its scene recognition file is also lower. The service unit may then divide the target data into multiple sub-data by time, and mark the sub-driving scenes contained in each sub-data based on the recognition results of the multiple sub-driving scenes.
  • the service unit can determine which sub-data contains the driving scene specified by the user according to the labeling result, obtain the driving scene recognition result, and complete the driving scene recognition.
  • if the driving scene is a Boolean retrieval formula constructed based on sub-driving scenes, the service unit detecting whether each sub-data contains the driving scene according to the identification results of the multiple sub-driving scenes includes: checking, according to those identification results, whether each sub-data conforms to the Boolean retrieval formula, to obtain the identification result of the driving scene.
  • the driving scene to be identified is represented in the form of a Boolean retrieval formula, so the service unit can determine, according to the labeling result, which sub-data conforms to the Boolean retrieval formula, and thereby quickly and accurately determine which sub-data contains the driving scene specified by the user, so as to obtain the identification result of the driving scene.
  • acquiring the scene recognition file corresponding to the sub-driving scene by the computing unit includes: the computing unit loads the scene recognition file corresponding to the sub-driving scene in the database into the memory of the computing unit.
  • the computing unit may determine the sub-driving scenes specified by the user based on the identification request. The computing unit then loads the scene identification files corresponding to these sub-driving scenes into its local memory, so that it can call them to identify the target data and obtain the identification results of the sub-driving scenes. It can be seen that the computing unit dynamically loads only the scene recognition files corresponding to the sub-driving scenes to be recognized into local memory; it therefore does not need to preset the scene recognition files corresponding to all sub-driving scenes in local memory, which effectively saves its storage resources.
  • the identification result of the sub-driving scene includes the sub-driving scene, the start time of the sub-driving scene, and the end time of the sub-driving scene.
  • the recognition result of the driving scene includes the driving scene, the start time of the driving scene, and the end time of the driving scene.
  • the service unit is a virtual machine, a container or a bare metal server
  • the computing unit is a virtual machine, a container or a bare metal server.
  • a second aspect of the embodiments of the present application provides a driving scene recognition system, the system including a service unit and a computing unit. The computing unit is configured to acquire a first recognition instruction from the service unit, where the first recognition instruction is used to indicate a sub-driving scene; the computing unit is configured to obtain, according to the first recognition instruction, the scene recognition file corresponding to the sub-driving scene, where the scene recognition file is used to define the recognition rules of the sub-driving scene; and the computing unit is further configured to recognize the target data according to the scene recognition file, to obtain the recognition result of the sub-driving scene.
  • after the service unit determines the sub-driving scene to be recognized according to the first recognition instruction input by the user, it can notify the computing unit of that sub-driving scene. The computing unit therefore only needs to load the corresponding scene identification file and use it to identify the target data, so as to obtain the identification result of the sub-driving scene. The computing unit then sends the identification result of the sub-driving scene to the service unit, completing the identification of the sub-driving scene.
  • since the service unit notifies the computing unit of the sub-driving scenes specified by the user, the computing unit only needs to load the scene recognition files corresponding to those sub-driving scenes, rather than presetting the scene recognition files corresponding to all sub-driving scenes in advance, and it uses this part of the scene recognition files, loaded in real time, to recognize the target data, obtain the recognition results of the sub-driving scenes, and complete the recognition of the sub-driving scenes.
  • in this way, the recognition process occupies fewer storage and computing resources of the computing unit, thereby reducing resource waste.
  • the service unit is further configured to: determine resource requirements according to target data and sub-driving scenarios; and select a computing unit that matches the resource requirements in a computing unit cluster.
  • the service unit is further configured to: determine resource requirements according to target data and sub-driving scenarios; and create a computing unit that matches the resource requirements.
  • the service unit is further configured to: acquire a second identification instruction, where the second identification instruction is used to indicate a driving scene, and the driving scene is constructed based on a plurality of sub-driving scenes; and determine the recognition result of the driving scene according to the identification results of the plurality of sub-driving scenes.
  • the service unit is specifically configured to: divide the target data into multiple sub-data in units of time; and determine, among the multiple sub-data and according to the recognition results of the multiple sub-driving scenes, whether each sub-data contains the driving scene, to obtain the recognition result of the driving scene.
  • the computing unit is specifically configured to load the scene recognition file corresponding to the sub-driving scene in the database into the memory of the computing unit.
  • the identification result of the sub-driving scene includes the sub-driving scene, the start time of the sub-driving scene, and the end time of the sub-driving scene.
  • the recognition result of the driving scene includes the driving scene, the start time of the driving scene, and the end time of the driving scene.
  • if the driving scene is a Boolean retrieval formula constructed based on sub-driving scenes, the service unit is specifically configured to detect, according to the identification results of the multiple sub-driving scenes, whether each sub-data conforms to the Boolean retrieval formula, to obtain the identification result of the driving scene.
  • the service unit is a virtual machine, a container or a bare metal server
  • the computing unit is a virtual machine, a container or a bare metal server.
  • a third aspect of the embodiments of the present application provides an apparatus, the apparatus includes: a processor and a memory;
  • the memory is used to store programs and scene recognition files
  • the processor is configured to execute the program stored in the memory, so that the apparatus implements the method as described in the first aspect or any possible implementation manner of the first aspect.
  • a fourth aspect of the embodiments of the present application provides a vehicle networking communication system, where the system includes an in-vehicle system and the device according to the third aspect, wherein the in-vehicle system is communicatively connected to the device.
  • a fifth aspect of the embodiments of the present application provides a computer storage medium including computer-readable instructions that, when executed, implement the method described in the first aspect or any possible implementation manner of the first aspect.
  • a sixth aspect of the embodiments of the present application provides a computer program product including instructions which, when run on a computer, cause the computer to execute the method described in the first aspect or any possible implementation manner of the first aspect.
  • a seventh aspect of the embodiments of the present application provides a chip system, where the chip system includes a processor configured to invoke a computer program or computer instructions stored in a memory, so that the processor executes the method described in the first aspect or any possible implementation manner of the first aspect.
  • the processor is coupled to the memory through an interface.
  • the chip system further includes a memory, and the memory stores computer programs or computer instructions.
  • an eighth aspect of the embodiments of the present application provides a processor, where the processor is configured to invoke a computer program or computer instructions stored in a memory, so that the processor executes the method described in the first aspect or any possible implementation manner of the first aspect.
  • the service unit may notify the computing unit of the sub-driving scene to be identified. The computing unit therefore only needs to load the corresponding scene identification file and use it to identify the target data, so as to obtain the identification result of the sub-driving scene. The computing unit then sends the identification result of the sub-driving scene to the service unit, completing the identification of the sub-driving scene.
  • since the service unit notifies the computing unit of the sub-driving scenes specified by the user, the computing unit only needs to load the scene recognition files corresponding to those sub-driving scenes, rather than presetting the scene recognition files corresponding to all sub-driving scenes in advance, and it uses this part of the scene recognition files, loaded in real time, to recognize the target data, obtain the recognition results of the sub-driving scenes, and complete the recognition of the sub-driving scenes.
  • in this way, the recognition process occupies fewer storage and computing resources of the computing unit, thereby reducing resource waste.
  • FIG. 1 is a schematic structural diagram of a vehicle-connected communication system.
  • FIG. 2 is a schematic structural diagram of a cloud service system provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of parallel execution of multiple recognition tasks provided by an embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of a vehicle-mounted system provided by an embodiment of the present application.
  • FIG. 5 is a schematic flowchart of a driving scene identification method provided by an embodiment of the present application.
  • FIG. 6 is another schematic flowchart of the driving scene recognition method provided by the embodiment of the present application.
  • FIG. 7 is a schematic diagram of an identification operation performed by a computing unit provided in an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of a driving scene recognition device provided by an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of an apparatus provided by an embodiment of the present application.
  • the embodiments of the present application provide a driving scene recognition method and a system thereof, which occupy fewer storage and computing resources during driving scene recognition, thereby reducing resource waste.
  • a connection, coupling or communication in this application may be a direct connection, coupling or communication between related objects, or an indirect one through other devices; in addition, the connection, coupling or communication between objects may be electrical or take other similar forms, which are not limited in this application.
  • Modules or sub-modules described independently may or may not be physically separated; they may be implemented by software or by hardware, and some modules or sub-modules may be implemented by software and called by the processor.
  • software realizes the functions of some of the modules or sub-modules, and the other modules or sub-modules are realized by hardware, for example, by hardware circuits.
  • some or all of the modules can be selected according to actual needs to achieve the purpose of the solution of the present application.
  • “Plural” means two or more. "And/or” describes the association relationship between associated objects, indicating that there can be three kinds of relationships, for example, A and/or B, which can mean that A exists alone, A and B exist at the same time, and B exists alone.
  • autonomous driving technology has received extensive attention.
  • the data collected by the vehicle in the actual environment can be used to construct driving scenarios for autonomous driving tests.
  • FIG. 1 is a schematic structural diagram of a vehicle-connected communication system.
  • the car-connected communication system includes an in-vehicle system and a cloud service system.
  • the on-board system is deployed on the vehicle to collect data of various scenarios that the vehicle passes through during driving, and upload the data to the cloud service system.
  • the cloud service system includes a scene recognition system, which is used to perform scene recognition on the data uploaded by the in-vehicle system, and build a driving scene based on the recognition results. Then, the scene recognition system uses the constructed driving scene to conduct an automatic driving test, thereby updating the automatic driving data, and finally sends the updated automatic driving data (for example, the automatic driving program or the automatic driving file, etc.) to the in-vehicle system.
  • the real data may contain various scenarios, such as weather scenarios, road scenarios, behavior scenarios, etc.
  • in the related art, scene identification files corresponding to all scenarios are preset in the system to identify scenarios in the real data. Therefore, after receiving the real data, the above scene recognition system uses all the scene recognition files to identify the target data, which occupies a lot of storage and computing resources and results in a certain waste of resources.
  • FIG. 2 is a schematic structural diagram of a cloud service system provided by an embodiment of the present application.
  • the cloud service system includes: a service unit, a computing unit cluster, and a database. The following briefly describes the service unit, the computing unit, and the database, respectively.
  • the service unit can centrally manage the cloud service system.
  • a service unit can be deployed in various ways.
  • a service unit can be a separate physical machine, or a service unit can also be a virtual machine (VM) loaded on a physical server.
  • a service unit can also be a container (docker) loaded on a physical server, and so on.
  • the service unit can receive the target data input by the user and the information of the sub-driving scenes and/or driving scenes to be identified that the user specifies. A driving scene can also be called a comprehensive scene, and a sub-driving scene can also be called a unit scene (for example, a sunny scene or a turning scene); a driving scene can be constructed based on multiple sub-driving scenes.
  • the service unit may call the computing unit to recognize the sub-driving scenarios on the target data (for example, the service unit sends an identification instruction to the computing unit to call the computing unit to recognize the sub-driving scenarios on the target data). And if there is a driving scene identification requirement, the service unit may further obtain the identification result of the sub-driving scene from the computing unit, and determine the identification result of the driving scene based on the identification result of the sub-driving scene, so as to complete the identification of the driving scene.
  • the computing unit can also be deployed in a variety of ways.
  • the computing unit can be a separate physical machine, a virtual machine loaded on a physical machine, or a container loaded on a physical machine, and so on.
  • both the computing unit and the service unit are containers (or virtual machines), they can be deployed on the same physical machine or on different physical machines, which is not limited here.
  • after receiving the identification instruction from the service unit, the computing unit can load, based on the identification instruction, the scene identification file corresponding to the sub-driving scene to be identified, and use the scene identification file to complete the identification of the sub-driving scene on the target data. Further, the computing unit may return the identification result of the sub-driving scene to the service unit, so that the service unit further completes the identification of the driving scene.
  • the database is used to store the files for scene recognition, for example, the scene recognition files (also called scene recognition algorithms) corresponding to various sub-driving scenes (for example, a side-vehicle cut-in scene, a sunny scene, or a continuous turning scene); it is therefore also called an algorithm warehouse.
  • each scene identification file can be configured with an identifier, such as the name or the identity document (ID) of the sub-driving scene that the file is used to identify.
  • the scene identification file and the sub-driving scene can be in one-to-one correspondence.
  • the file defines the identification rules of the sub-driving scene, for example, the parameter boundary and extraction standard of the sub-driving scene.
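  • The patent does not prescribe a programming interface for scene identification files, but a minimal sketch of one, with all names assumed, could look like this:

```python
from abc import ABC, abstractmethod

class SceneRecognizer(ABC):
    """Hypothetical base class that a scene identification file implements.

    Each recognizer encodes the identification rules of one sub-driving
    scene, e.g. its parameter boundaries and extraction standard.
    """

    scene_name: str  # identifier of the sub-driving scene (name or ID)

    @abstractmethod
    def identify(self, target_data) -> list[tuple[str, str]]:
        """Return (start_time, end_time) spans where the sub-scene occurs."""

class SunnyRecognizer(SceneRecognizer):
    """Illustrative rule only: frames brighter than a threshold are 'sunny'."""

    scene_name = "sunny"

    def identify(self, target_data) -> list[tuple[str, str]]:
        spans = [(f["time"], f["time"]) for f in target_data if f["brightness"] > 0.8]
        return spans  # a real file would merge consecutive frames into spans
```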
  • the computing unit can load one or some scene recognition files (referred to as target scene recognition files) from the algorithm warehouse to the local machine according to real-time requirements (for example, the sub-driving scenes specified by the user), so as to use the target scene recognition files to identify the sub-driving scenes on the target data.
  • the scene recognition files in the algorithm repository are modifiable (eg, updated, replaced, added or deleted, etc.).
  • a user can upload a scene recognition file corresponding to a new sunny day scene to the algorithm warehouse, thereby replacing the scene recognition file corresponding to the sunny day scene stored in the cloud database.
  • a user may upload the scene recognition file corresponding to the sidecar cut-in scene to the algorithm warehouse, so that the algorithm warehouse stores the scene recognition file corresponding to the side-vehicle cut-in scene.
  • different users can exchange and share various scene recognition files through the algorithm warehouse, which improves the richness of scene recognition files, adapts scene recognition to a richer set of scenes, and improves the driving experience.
  • the service unit may issue identification instructions to different computing units respectively, so that these computing units perform their respective identification tasks in parallel. For ease of understanding, the foregoing process is described below with reference to FIG. 3 .
  • FIG. 3 is a schematic diagram of parallel execution of multiple recognition tasks provided by an embodiment of the present application. As shown in FIG. 3, at a certain moment, the service unit delivers to one computing unit an identification instruction containing target data D and sub-driving scenes J, K, and L, so that the computing unit completes the identification of sub-driving scenes J, K, and L on target data D and uploads the identification results to the service unit.
  • at the same moment, the service unit may deliver to another computing unit an identification instruction containing target data E and sub-driving scenes X, Y, and Z, so that that computing unit completes the identification of sub-driving scenes X, Y, and Z on target data E and uploads the identification results to the service unit.
  • the identification operations of the two computing units may be performed in parallel.
  • the service unit can be an on-board unit (OBU) in the on-board system; the OBU can be presented in the form of a telematics box (T-BOX), and the T-BOX can support at least one communication interface, for example a direct-connected communication interface (PC5 interface) or a cellular communication interface (Uu interface), through which it can communicate with the user's mobile device, for example to receive the information of the sub-driving scenes and/or driving scenes to be identified input by the user, or information such as target data obtained from real data.
  • the computing unit can be an in-vehicle computing platform (IVCP) in the in-vehicle system.
  • an IVCP can be presented in the form of a computing platform in fields such as intelligent driving, intelligent cockpit, or intelligent vehicle control, so it can realize a series of complex data processing operations, for example, sub-driving scene recognition on target data.
  • the database may be a storage device called by the computing unit for storing information such as scene identification files, scene identification results, etc. corresponding to the sub-driving scenes uploaded by various users.
  • the cloud service system and the in-vehicle system can also form a combined system.
  • the combined system can be implemented in various ways.
  • for example, the combined system includes a service unit in the vehicle-mounted system (as shown in FIG. 4), a computing unit in the cloud service system (as shown in FIG. 2), and a cloud database in the cloud service system.
  • the combined system includes a service unit in the cloud service system, a computing unit in the vehicle-mounted system, and a database in the vehicle-mounted system.
  • the combined system includes the service unit of the in-vehicle system, the computing unit in the in-vehicle system, and the cloud database in the cloud service system, etc., which are not limited here.
  • FIG. 5 is a schematic flowchart of a driving scene recognition method provided by an embodiment of the present application. As shown in FIG. 5 , the method includes:
  • the first identification instruction is used to indicate the sub-driving scene to be identified, which can be designated by the user.
  • the user can specify multiple sub-driving scenarios, and the multiple sub-driving scenarios constitute the driving scenarios specified by the user.
  • the service unit can generate an identification instruction to instruct the computing unit to complete the identification of the corresponding sub-driving scenario.
  • a second identification instruction can be used to specify a driving scene, which is a comprehensive driving scene.
  • the service unit obtains, according to the driving scene to be identified, the multiple sub-driving scenes that constitute the driving scene, and generates an identification instruction for each sub-driving scene to instruct the computing unit to complete the identification of the corresponding sub-driving scene.
  • for example, if the driving scene to be identified (i.e., the driving scene specified by the user) is a turning scene on a sunny day, the sub-driving scenes to be identified include a sunny scene and a turning scene.
  • the user may input an instruction to the service unit, and the instruction may be used to indicate sub-driving scenarios and/or driving scenarios, and may also indicate target data.
  • the vehicle can collect real data during driving and upload it to the cloud service system or on-board system, and the cloud service system or on-board system saves it.
  • Users can specify target data, such as real data collected by the vehicle over a certain period of time.
  • the target data includes real data collected by the vehicle during this time period, for example, images of various driving scenes acquired by the vehicle or an index of the images, and so on.
  • the real data may contain a variety of sub-driving scenarios, and the driving scenarios at different times may also be different.
  • the existing technology will call all possible driving scene recognition algorithms for scene recognition, which takes up a lot of storage resources and computing resources.
  • the embodiment of the present application takes into account the flexibility of the automatic driving test and can extract test-related scenes from real data according to the user's testing needs, which reduces the waste of resources, makes the extraction more targeted to the test scenarios, and thereby improves the efficiency of testing and scene extraction.
  • the service unit can obtain the target data and the sub-driving scenes to be identified based on the instruction input by the user and, through the first identification instruction, instruct the computing unit to call some of the scene identification files to perform scene identification for the target sub-driving scenes, reducing the waste of resources.
  • after the service unit obtains the information of the target data and the sub-driving scenes to be identified, it sends to the computing unit an identification request generated based on this information; the identification request may include a first identification instruction and a second identification instruction, where the first identification instruction is used to indicate the sub-driving scenes to be identified, and the second identification instruction is used to indicate the target data.
  • the computing unit obtains, based on the first identification instruction, the scene identification files corresponding to the sub-driving scenes to be identified, where a scene identification file is used to define the identification rules of a sub-driving scene; the computing unit can retrieve the target data according to the second identification instruction, and identifies the target data according to this part of the scene identification files to obtain the identification results of the sub-driving scenes.
  • for example, after the service unit obtains the target data and the sunny and turning scenes to be identified, it generates an identification request based on this information and sends it to the computing unit. Based on the identification request, the computing unit can obtain the scene identification files corresponding to the sunny scene and the turning scene, use them to identify the target data, and obtain the identification result of the sunny scene and the identification result of the turning scene, that is, the time period in which the vehicle is in the sunny scene and the time period in which it is in the turning scene; the two time periods may be the same, different, or overlapping.
  • the service unit may notify the computing unit of the sub-driving scene to be recognized. Therefore, the computing unit only needs to load the scene identification file corresponding to the part of the sub-driving scene, and use the part of the scene identification file to identify the target data, so as to obtain the identification result of the sub-driving scene. Then, the computing unit sends the identification result of the sub-driving scene to the service unit, so as to complete the identification of the sub-driving scene.
  • since the service unit notifies the computing unit of the sub-driving scenes specified by the user, the computing unit only needs to load the scene identification files corresponding to those sub-driving scenes, and does not need to call preset scene identification files corresponding to all sub-driving scenes; it uses this part of the scene identification files, loaded in real time, to identify the target data, obtain the identification results of the sub-driving scenes, and complete the identification of the sub-driving scenes.
  • the identification process occupies fewer storage and computing resources of the computing unit, thereby reducing resource waste.
  • the computing unit may send the identification results of the sub-driving scenes to the service unit, so that the service unit completes the identification of the driving scene on the target data based on the identification results of the sub-driving scenes, so as to obtain the identification result of the driving scene.
  • FIG. 6 is another schematic flowchart of a driving scene recognition method provided by an embodiment of the present application. As shown in FIG. 6 , the method includes:
  • the service unit determines the target data, the driving scene to be identified, and the sub-driving scene to be identified, and the driving scene to be identified is constructed based on the sub-driving scene to be identified.
  • when the user wants to test the vehicle for autonomous driving, a driving scene needs to be built for the test. Specifically, the user can first obtain the target data (also referred to as automatic driving data) collected by the vehicle during the target time period, where the target data includes the data of various driving scenes experienced by the vehicle during the target time period, such as information on the driving status of the vehicle and vehicle dynamics data.
  • the target data may include images and videos of various driving scenarios experienced by the vehicle during a certain period of time, the driving path of the vehicle, the speed of the vehicle, the location of the vehicle, and so on.
  • driving scene recognition can be performed on the target data to determine which part of the target data corresponds to which driving scene. In this way, the corresponding driving scene can be directly constructed with specific data, thereby effectively reducing the workload required for constructing the driving scene.
  • the target data includes the data of all driving scenes experienced by the vehicle in the target time period, but the user does not necessarily pay attention to all driving scenes, so the user can specify one or some driving scenes to be recognized.
  • a driving scene can be constructed from multiple sub-driving scenes, or from a single sub-driving scene, in which case it can be understood as that sub-driving scene. Therefore, after the user specifies the driving scene to be recognized, it is equivalent to specifying the sub-driving scenes to be recognized. For example, if the driving scene to be recognized is a turning scene in sunny weather, the multiple sub-driving scenes to be recognized are a sunny weather scene and a turning scene.
  • for another example, if the driving scene to be identified is emergency braking when a side vehicle cuts in, the multiple sub-driving scenes to be identified are a side-vehicle cut-in scene and an emergency braking scene. For yet another example, if the driving scene to be identified is the following vehicle overtaking on a street while the ego vehicle turns left, the multiple sub-driving scenes to be identified are a street scene, a left-turn scene, a following-vehicle overtaking scene, and so on.
  • the target data, the driving scene to be identified and/or the sub-driving scene to be identified can be sent or input to the service unit.
  • the service unit may provide the user with an interactive interface for receiving information of target data input by the user and instructions for indicating sub-driving scenarios and/or driving scenarios.
  • the interactive interface may be a visual interface that provides an input field for target data, an input field for sub-driving scenes (or a list of sub-driving scenes for the user to select) and/or an input field for driving scenes (or a list of driving scenes for the user to select); the user can fill in each input field displayed on the visual interface manually or through a terminal device, inputting the first instruction and/or the second instruction, and the third instruction, where the first instruction is used to indicate the sub-driving scenes to be identified, the second instruction is used to indicate the driving scene to be identified, and the third instruction is used to indicate the target data.
  • the first instruction, the second instruction and the third instruction input by the user should meet the specifications of the cloud service system (for example, format and content definitions), where the specifications include, but are not limited to: the directory structure of the target data, the name, unit of measure, benchmark value, data collection frequency, data arrangement order, and so on.
  • after receiving the first instruction and/or the second instruction, and the third instruction, the service unit can determine the multiple sub-driving scenes to be identified based on the first instruction, determine the driving scene to be identified based on the second instruction, and obtain the target data based on the third instruction.
  • the user can also input only the first instruction and the third instruction to the service unit; the service unit can then determine the multiple sub-driving scenes to be identified based on the first instruction, and synthesize the driving scene to be identified from the multiple sub-driving scenes according to preset scene synthesis rules.
  • alternatively, the user may input only the second instruction and the third instruction to the service unit; the service unit may then determine the driving scene to be identified based on the second instruction, and split the driving scene into multiple sub-driving scenes to be identified according to preset scene splitting rules, which is not limited here.
  • the service unit determines resource requirements according to the target data and the sub-driving scenarios.
  • specifically, the service unit determines the resource requirement according to the target data and the sub-driving scenes to be identified, where the resource requirement indicates the resources (including storage resources and/or computing resources) needed to identify the sub-driving scenes.
  • computing resources include, for example, central processing unit (central processing unit, CPU) resources
  • storage resources include, for example, memory and/or hard disk resources.
  • generally, the number of CPU cores and the capacity of memory are positively related to the number and complexity of the sub-driving scenes, and the storage space of the hard disk is positively related to the amount of target data.
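  • A minimal sketch of such a sizing rule (the coefficients are assumptions for illustration, not values from the patent):

```python
def estimate_resources(num_scenes: int, avg_complexity: float, data_bytes: int) -> dict:
    """Derive a resource requirement from the target data and sub-scenes.

    CPU cores and memory grow with the number and complexity of the
    sub-driving scenes; hard-disk space grows with the target data volume,
    matching the positive correlations described above.
    """
    cpu_cores = max(1, round(num_scenes * avg_complexity))        # assumed factor
    memory_gb = max(1, round(0.5 * num_scenes * avg_complexity))  # assumed factor
    disk_gb = max(1, 2 * data_bytes // 10**9)                     # ~2x data, assumed
    return {"cpu_cores": cpu_cores, "memory_gb": memory_gb, "disk_gb": disk_gb}

# e.g. two sub-driving scenes of moderate complexity over 5 GB of target data
print(estimate_resources(num_scenes=2, avg_complexity=1.5, data_bytes=5 * 10**9))
# {'cpu_cores': 3, 'memory_gb': 2, 'disk_gb': 10}
```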
  • the service unit determines a computing unit that matches the resource requirement.
  • the service unit may invoke cloud services (ie, through virtualization technology) to create a computing unit that matches resource requirements.
  • the computing unit is usually a container or a virtual machine.
  • the service unit is preset with an image, which can be regarded as a special file system and encapsulates the program of the main process.
  • the main process can realize functions such as scene recognition file loading, target data download, parallel algorithm invocation, and algorithm result integration.
  • after the service unit obtains the resource requirement, it can determine the computing resources and storage resources corresponding to the resource requirement, and create a new computing unit (a new container) based on these resources and the preset image.
  • the new computing unit is the computing unit used to perform the subsequent identification operations.
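  • If the computing unit is a container, creating one that matches the resource requirement might look like the following sketch using the Docker SDK for Python; the image name and resource values are assumptions:

```python
import docker  # Docker SDK for Python (pip install docker)

client = docker.from_env()
requirement = {"cpu_cores": 2, "memory_gb": 4}  # e.g. from the sizing step above

# The preset image is assumed to encapsulate the main-process program
# (scene file loading, target data download, parallel invocation, result
# integration), as described above.
container = client.containers.run(
    image="scene-recognition-main:latest",       # assumed image name
    nano_cpus=requirement["cpu_cores"] * 10**9,  # CPU quota for the container
    mem_limit=f"{requirement['memory_gb']}g",    # memory limit
    detach=True,                                 # the new computing unit
)
```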
  • the service unit selects a computing unit that matches resource requirements in a computing unit cluster, and in this implementation, the computing unit is usually a physical machine or a virtual machine.
  • in this implementation, a computing unit cluster (i.e., a physical machine cluster or a virtual machine cluster) includes multiple computing units, and different computing units are created with resources of different sizes (including computing resources and/or storage resources). Therefore, after obtaining the resource requirement, the service unit can determine, among the multiple computing units in the cluster, a computing unit that matches the resource requirement; this computing unit is the one used for the subsequent identification operations.
  • the service unit sends the identification request generated based on the target data and the sub-driving scenario to the computing unit.
  • after the service unit determines the computing unit that matches the resource requirement, it sends to the computing unit an identification request (which may include a first identification instruction and a second identification instruction) generated based on the target data and the sub-driving scenes to be identified. Specifically, the service unit may generate the identification request based on the download address of the target data and a list of the sub-driving scenes to be identified, and send it to the computing unit. After receiving the identification request from the service unit, the computing unit starts the main process with the identification request as input; the computing unit completes the identification of the sub-driving scenes by running the main process.
  • the computing unit loads the scene identification file corresponding to the sub-driving scene, and identifies the target data according to the scene identification file to obtain the identification result of the sub-driving scene.
  • the main process can first parse the identification request to obtain the download address of the target data and the multiple sub-driving scenes to be identified. The main process then downloads the target data from the download address, and downloads the scene identification files corresponding to each sub-driving scene to be identified from the algorithm warehouse. Next, the main process starts a sub-process for each sub-driving scene to be identified; all sub-processes started by the main process can run in parallel. Each sub-process loads the scene recognition file corresponding to its sub-driving scene into local memory, and calls that scene identification file to identify the target data, so as to obtain the identification result of the sub-driving scene corresponding to the sub-process.
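  • A minimal sketch of this main process in Python, with the downloader and loader left as assumed helpers (the dynamic loading step is sketched separately below):

```python
from multiprocessing import Pool

def download(url: str):
    """Assumed helper: fetch the target data from its download address."""
    raise NotImplementedError

def load_scene_file(scene_name: str):
    """Assumed helper: load the scene identification file for one sub-scene."""
    raise NotImplementedError

def identify_one(args):
    """Body of one sub-process: recognize a single sub-driving scene."""
    scene_name, target_data = args
    recognizer = load_scene_file(scene_name)
    return scene_name, recognizer.identify(target_data)

def main_process(request: dict):
    """Parse the request, fetch inputs, and run one worker per sub-scene."""
    target_data = download(request["target_data_url"])
    jobs = [(scene, target_data) for scene in request["sub_scenes"]]
    with Pool(processes=len(jobs)) as pool:  # sub-processes run in parallel
        return pool.map(identify_one, jobs)
```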
  • FIG. 7 is a schematic diagram of an identification operation performed by a computing unit according to an embodiment of the present application.
  • n is an integer greater than or equal to 1.
  • the main process can start corresponding n sub-processes for the n sub-driving scenes. It can be understood that the n sub-processes correspond to the n scene recognition files one-to-one.
  • sub-process 1 loads scene recognition file 1 into memory through the dynamic loading interface corresponding to language A. Sub-process 1 then checks scene identification file 1 as follows: whether information such as the name and parameters of scene identification file 1 meets the requirements specified by the cloud service system, and/or whether the class written in scene identification file 1 has a class inheritance relationship with the class defined by the cloud service system. If the check passes, sub-process 1 determines that scene recognition file 1 loaded into memory is available.
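  • In Python, the "dynamic loading interface" and the checks described above could be sketched as follows; the module path convention, expected class name, and base class are assumptions:

```python
import importlib.util

def load_and_check(path: str, scene_name: str, base_class: type):
    """Dynamically load a scene identification file and verify it is usable.

    Mirrors the checks above: the file's name/parameters must meet the
    cloud service system's requirements, and its class must inherit from
    the base class the cloud service system defines.
    """
    spec = importlib.util.spec_from_file_location(scene_name, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)  # load the scene file into memory

    cls = getattr(module, "Recognizer", None)  # assumed class name convention
    if cls is None or not issubclass(cls, base_class):
        raise ValueError(f"{path}: no Recognizer subclass of the system base class")
    if getattr(cls, "scene_name", None) != scene_name:
        raise ValueError(f"{path}: scene name does not match {scene_name!r}")
    return cls()  # the check passed; the loaded file is available
```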
  • sub-process 1 can then convert the target data into the exclusive format that scene identification file 1 can recognize (because different scene identification files specify different data formats), and use scene identification file 1 to identify the target data, obtaining the recognition result of scene identification file 1.
  • the subprocess 1 can convert the format of the recognition result of the scene recognition file 1 into a general format recognized by the system.
  • the recognition result of the scene recognition file 1 after the format conversion includes the sub-driving scene 1, the start time of the sub-driving scene 1, and the end time of the sub-driving scene 1.
  • for example, the recognition result of scene recognition file 1 can be: the vehicle is in sub-driving scene 1 from 10:10 to 10:30.
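  • A small sketch of that general-format result as a data structure (field names assumed):

```python
from dataclasses import dataclass

@dataclass
class SubSceneResult:
    """General-format recognition result: scene plus start and end times."""
    sub_scene: str
    start_time: str
    end_time: str

# the example result above for sub-driving scene 1
result = SubSceneResult(sub_scene="sub-driving scene 1",
                        start_time="10:10", end_time="10:30")
```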
  • sub-process 2 can use its corresponding scene recognition file 2 (as shown in FIG. 7 , the programming language for writing scene recognition file 2 is language B) to recognize the target data, thereby obtaining the recognition result of sub-driving scene 2 .
  • for the running process of sub-process 2, reference may be made to the running process of sub-process 1 above, which is not repeated here; the remaining sub-processes run in the same way.
  • the main process may send the recognition results of the n sub-driving scenarios to the service unit. So far, the computing unit has completed the operation of the main process, that is, the computing unit has completed the identification of the sub-driving scene.
  • the cloud service system may further include a sub-driving scene library; the sub-driving scene library and the algorithm warehouse may be two different cloud databases, or may be deployed in the same cloud database, which is not limited here.
  • the sub-driving scene library is used to store the recognition results of the sub-driving scenes. Therefore, the computing unit sending the recognition results of the sub-driving scenes to the service unit specifically includes: after the computing unit obtains the recognition results of the sub-driving scenes, it uploads them to the sub-driving scene library; the service unit may then obtain the recognition results of the multiple sub-driving scenes from the sub-driving scene library, so as to determine the recognition result of the driving scene according to them.
  • In a possible implementation, when the computing unit is a container, the computing unit can also perform a container exit operation after completing the identification of the sub-driving scenes, thereby releasing the computing resources and storage resources it occupies.
  • The service unit receives the recognition results of the sub-driving scenes from the computing unit.
  • the service unit divides the target data into multiple sub-data in units of time.
  • The service unit determines, among the multiple sub-data and according to the recognition results of the sub-driving scenes, whether the driving scene exists in each sub-data, and obtains the recognition result of the driving scene.
  • After obtaining the recognition results of the multiple sub-driving scenes, the service unit can determine each sub-driving scene experienced by the vehicle based on those results. If what the user ultimately cares about is the driving scene constructed from these sub-driving scenes, the service unit can further perform scene fusion according to the recognition results of the sub-driving scenes, so as to obtain the recognition result of the driving scene specified by the user.
  • the process of scene fusion performed by the service unit may include:
  • the service unit divides the target data into multiple sub-data in units of time.
  • the target data is the data collected by the vehicle within the target time period, so the service unit may divide the target data equally by time to obtain multiple sub-data.
  • For example, the target data is the data collected by the vehicle during 10:00-11:00.
  • The service unit divides the target data into six sub-data at 10-minute intervals: the first sub-data corresponds to 10:00-10:10, the second to 10:10-10:20, the third to 10:20-10:30, the fourth to 10:30-10:40, the fifth to 10:40-10:50, and the sixth to 10:50-11:00.
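  • A sketch of this even, time-based split (the 10-minute interval and the timestamp type are illustrative choices, not values fixed by this application):

```python
from datetime import datetime, timedelta


def split_by_time(start, end, interval_minutes=10):
    # Divide the target time period evenly into sub-windows; each window
    # later corresponds to one piece of sub-data.
    windows = []
    step = timedelta(minutes=interval_minutes)
    t = start
    while t < end:
        windows.append((t, min(t + step, end)))
        t += step
    return windows


# Example: 10:00-11:00 is split into six 10-minute sub-windows.
windows = split_by_time(datetime(2021, 2, 22, 10, 0), datetime(2021, 2, 22, 11, 0))
```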
  • After obtaining the multiple sub-data, the service unit can detect, according to the recognition results of the multiple sub-driving scenes, whether the driving scene exists in each sub-data, so as to obtain the recognition result of the driving scene.
  • the driving scene to be identified specified by the user can be presented in the form of a Boolean search formula, which is constructed based on a plurality of sub-driving scenes to be identified.
  • Among the multiple sub-data, the service unit can detect, according to the recognition results of the multiple sub-driving scenes, whether each sub-data conforms to the Boolean retrieval formula, so as to obtain the recognition result of the driving scene. The recognition result of the driving scene includes the driving scene, the start time of the driving scene, and the end time of the driving scene.
  • For example, the Boolean retrieval formula sent by the user to the service unit is: (rear-vehicle overtaking OR front-vehicle cut-out) AND straight driving AND sunny weather.
  • This Boolean retrieval formula is constructed from four sub-driving scenes to be identified, so it can be regarded as equivalent to two driving scenes to be identified.
  • Driving scene A is: in sunny weather, a rear vehicle overtakes while the ego vehicle is driving in a straight line;
  • driving scene B is: in sunny weather, the front vehicle cuts out while the ego vehicle is driving in a straight line.
  • After the service unit obtains the recognition result of the rear-vehicle overtaking scene (that is, the vehicle is in the rear-vehicle overtaking scene during 10:00-10:10), the recognition result of the front-vehicle cut-out scene (that is, the vehicle is in the front-vehicle cut-out scene during 10:10-10:20), the recognition result of the straight-driving scene (that is, the vehicle is in the straight-driving scene during 10:00-10:40), and the recognition result of the sunny scene (that is, the vehicle is in the sunny scene during 10:00-11:00), it can mark, based on the recognition results of these four sub-driving scenes, the sub-driving scenes contained in each of the six sub-data.
  • The marking results are shown in Table 1:

Table 1

Sub-data (time window) | Rear-vehicle overtaking | Front-vehicle cut-out | Straight driving | Sunny weather
First (10:00-10:10)    | yes                     | no                    | yes              | yes
Second (10:10-10:20)   | no                      | yes                   | yes              | yes
Third (10:20-10:30)    | no                      | no                    | yes              | yes
Fourth (10:30-10:40)   | no                      | no                    | yes              | yes
Fifth (10:40-10:50)    | no                      | no                    | no               | yes
Sixth (10:50-11:00)    | no                      | no                    | no               | yes
  • For the service unit, rear-vehicle overtaking, front-vehicle cut-out, straight driving and sunny weather can be denoted V1, V2, V3 and V4 in turn, so the aforementioned Boolean retrieval formula is f(V1, V2, V3, V4) = (V1 OR V2) AND V3 AND V4. For any sub-driving scene i (i = 1, 2, 3 or 4), if sub-driving scene i exists in a sub-data segment, the service unit sets Vi to 1; if sub-driving scene i does not exist in that sub-data segment, the service unit sets Vi to 0. The service unit can then detect, according to the marking results shown in Table 1, whether each sub-data conforms to the aforementioned Boolean retrieval formula: for the first sub-data the formula evaluates to (1 OR 0) AND 1 AND 1 = 1, and for the second sub-data to (0 OR 1) AND 1 AND 1 = 1, so both conform; for the third to sixth sub-data the formula evaluates to 0, so none of them conforms.
  • The Boolean retrieval formula can be obtained based on the user's input. The service unit can therefore determine that the first sub-data and the second sub-data conform to the Boolean retrieval formula, that is, that the driving scene specified by the user exists in the first sub-data and the second sub-data, and thereby obtain the recognition result of the driving scene: during 10:00-10:10 the vehicle is in driving scene A (in sunny weather, a rear vehicle overtakes while the ego vehicle is driving in a straight line), and during 10:10-10:20 the vehicle is in driving scene B (in sunny weather, the front vehicle cuts out while the ego vehicle is driving in a straight line).
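  • As a sketch of this marking-and-evaluation step (using interval overlap as the presence test, one recognized interval per sub-scene, and hard-coding this example's formula are all simplifying assumptions):

```python
def overlaps(window, interval):
    # A sub-driving scene is marked as present in a sub-data window if its
    # recognized [start, end) interval overlaps that window.
    return interval[0] < window[1] and window[0] < interval[1]


def conforms(window, results):
    # results maps a sub-scene name to its recognized (start, end) interval.
    v = {scene: overlaps(window, interval) for scene, interval in results.items()}
    # Boolean retrieval formula of this example:
    # (rear-vehicle overtaking OR front-vehicle cut-out) AND straight AND sunny.
    return (v["rear_overtaking"] or v["front_cut_out"]) and v["straight"] and v["sunny"]


def fuse(windows, results):
    # Keep only the sub-data windows in which the specified driving scene exists.
    return [w for w in windows if conforms(w, results)]
```

  • Applied to the six windows produced above, only the first two windows would be kept, matching the recognition result described here.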
  • After obtaining the recognition result of the driving scene, the service unit can convert the target data into driving scenes for testing based on that result. Continuing the example above, based on the recognition result of the driving scene, the service unit determines that, within the target data, the first sub-data contains driving scene A and the second sub-data contains driving scene B. The service unit can therefore convert the first sub-data into driving scene A and the second sub-data into driving scene B, and then use driving scene A and driving scene B to conduct an automatic driving test of the vehicle.
  • In this embodiment of the application, after the service unit determines the sub-driving scenes to be identified according to the first identification instruction input by the user, it can notify the computing unit of those sub-driving scenes. The computing unit therefore only needs to load the scene recognition files corresponding to this part of the sub-driving scenes and use them to identify the target data, so as to obtain the recognition results of the sub-driving scenes. The computing unit then sends the recognition results of the sub-driving scenes to the service unit, thereby completing the identification of the sub-driving scenes.
  • In the foregoing process, because the service unit can notify the computing unit of the sub-driving scenes specified by the user, the computing unit only needs to load the scene recognition files corresponding to those sub-driving scenes, rather than presetting the scene recognition files corresponding to all sub-driving scenes in advance. It identifies the target data with this part of the scene recognition files, loaded in real time, obtains the recognition results of the sub-driving scenes, and completes their identification.
  • The recognition process therefore occupies less of the computing unit's storage resources and computing resources, reducing the waste of resources.
  • Furthermore, owing to the algorithm warehouse, the ability to develop scene recognition files can be opened up to users. Users can write custom scene recognition files corresponding to sub-driving scenes and upload them to the algorithm warehouse, thereby enriching the algorithms stored there. Moreover, different users can quickly exchange scene recognition files corresponding to sub-driving scenes through the algorithm warehouse: for example, when one user updates the scene recognition file corresponding to a sub-driving scene in the algorithm warehouse, other users can quickly obtain the latest scene recognition file from the algorithm warehouse, which reduces the cost of using and learning the algorithms.
  • Furthermore, when identifying driving scenes in the target data, because the target data contains many driving scenes, a conventional method is to identify the target data through the scene recognition file corresponding to each driving scene to obtain the recognition result of that driving scene.
  • That approach requires developing scene recognition files for the various driving scenes, and since each driving scene is usually constructed from multiple sub-driving scenes, every driving scene has high complexity, so its scene recognition file often requires considerable development cost.
  • In this embodiment of the application, only the scene recognition files corresponding to some sub-driving scenes are used to identify the target data; because a sub-driving scene has low complexity, the development cost of its scene recognition file is correspondingly low.
  • Then, the driving scene to be identified is represented in the form of a Boolean retrieval formula, and the target data is divided into multiple sub-data by time, so that whether each sub-data conforms to the Boolean retrieval formula can be judged based on the recognition results of the multiple sub-driving scenes, yielding the recognition result of the driving scene. This process does not require developing scene recognition files for the driving scenes themselves, so the development cost of scene recognition files can be effectively reduced.
  • FIG. 8 is a schematic structural diagram of a driving scene recognition system provided by an embodiment of the application. The system can be deployed in the aforementioned cloud service system, the vehicle-mounted system, or a combined system constructed from the cloud service system and the vehicle-mounted system. As shown in FIG. 8, the system includes a service unit 801 and a computing unit 802;
  • a service unit 801, configured to obtain a first identification instruction, where the first identification instruction is used to indicate a sub-driving scene;
  • the service unit 801 is further configured to send the first identification instruction to the computing unit 802;
  • the computing unit 802 is configured to obtain a scene identification file corresponding to the sub-driving scene according to the first identification instruction, and the scene identification file is used to define the identification rules of the sub-driving scene;
  • the computing unit 802 is further configured to identify the target data according to the scene identification file to obtain the identification result of the sub-driving scene.
  • In a possible implementation, the service unit 801 is further configured to: determine a resource requirement according to the target data and the sub-driving scenes; and select, from a cluster of computing units, a computing unit 802 that matches the resource requirement.
  • In a possible implementation, the service unit 801 is further configured to: determine a resource requirement according to the target data and the sub-driving scenes; and create a computing unit 802 that matches the resource requirement.
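  • Such resource matching might be sketched as follows (the sizing heuristics and the resource fields are assumptions for illustration, not values specified by this application):

```python
def resource_requirement(data_bytes, num_scenes):
    # Assumed heuristics: CPU cores and memory scale with the number of
    # sub-driving scenes; disk scales with the volume of the target data.
    return {
        "cpu_cores": num_scenes,
        "memory_gb": 2 * num_scenes,
        "disk_gb": data_bytes // 2**30 + 1,
    }


def select_computing_unit(cluster, need):
    # Pick any computing unit in the cluster whose free resources cover the
    # requirement; if none exists, a matching unit could be created instead
    # (for example, through a cloud service / virtualization API).
    for unit in cluster:
        if all(unit["free"][key] >= value for key, value in need.items()):
            return unit
    return None
```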
  • In a possible implementation, the service unit 801 is further configured to: obtain a second identification instruction, where the second identification instruction is used to indicate a driving scene, and the driving scene is constructed based on multiple sub-driving scenes; and determine the recognition result of the driving scene according to the recognition results of the multiple sub-driving scenes.
  • In a possible implementation, the service unit 801 is specifically configured to: divide the target data into multiple sub-data in units of time; and determine, among the multiple sub-data and according to the recognition results of the multiple sub-driving scenes, whether the driving scene exists in each sub-data, to obtain the recognition result of the driving scene.
  • In a possible implementation, the computing unit 802 is specifically configured to load the scene recognition file corresponding to the sub-driving scene from the database into the memory of the computing unit 802.
  • the identification result of the sub-driving scene includes the sub-driving scene, the start time of the sub-driving scene, and the end time of the sub-driving scene.
  • the recognition result of the driving scene includes the driving scene, the start time of the driving scene, and the end time of the driving scene.
  • In a possible implementation, the driving scene is a Boolean retrieval formula constructed based on sub-driving scenes, and the service unit 801 is specifically configured to detect, according to the recognition results of the multiple sub-driving scenes, whether each sub-data conforms to the Boolean retrieval formula, to obtain the recognition result of the driving scene.
  • In a possible implementation, the service unit 801 is a virtual machine, a container or a bare-metal server, and the computing unit 802 is a virtual machine, a container or a bare-metal server.
  • For the information exchange and execution processes between the modules/units of the above system, since they are based on the same concept as the method embodiments of this application, their technical effects are the same as those of the method embodiments; for details, reference may be made to the descriptions in the foregoing method embodiments, which are not repeated here.
  • FIG. 9 is a schematic structural diagram of an apparatus provided by an embodiment of the present application. As shown in FIG. 9, an embodiment of the apparatus may include one or more central processing units 901, a memory 902, an input/output interface 903, a wired or wireless network interface 904, and a power supply 905.
  • The memory 902 may be transient storage or persistent storage, and is used for storing programs and scene recognition files. Further, the central processing unit 901 may be configured to communicate with the memory 902 and execute, on the apparatus, a series of instruction operations stored in the memory 902.
  • In this embodiment, the central processing unit 901 may execute the method steps in the foregoing embodiments shown in FIG. 5 or FIG. 6, and details are not repeated here.
  • the division of specific functional modules in the central processing unit 901 may be similar to the division of service units, computing units and other unit modules described above in FIG. 7 , and details are not repeated here.
  • Embodiments of the present application also relate to a computer storage medium, including computer-readable instructions, when the computer-readable instructions are executed, the method described in FIG. 5 or FIG. 6 is implemented.
  • Embodiments of the present application also relate to a computer program product containing instructions, which, when run on a computer, cause the computer to execute the method described in FIG. 5 or FIG. 6 .
  • the embodiment of the present application also relates to a chip system, where the chip system includes a processor for invoking a computer program or computer instruction stored in a memory, so that the processor executes the method as shown in FIG. 5 or FIG. 6 .
  • the processor is coupled to the memory through an interface.
  • the chip system further includes a memory, and the memory stores computer programs or computer instructions.
  • the embodiments of the present application also relate to a processor, which is configured to invoke a computer program or computer instructions stored in a memory, so that the processor executes the method described in FIG. 5 or FIG. 6 .
  • The processor mentioned in any one of the above may be a general-purpose central processing unit, a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits used to control the execution of the program of the driving scene recognition method in the embodiment shown in FIG. 5 above.
  • The memory mentioned in any one of the above may be a read-only memory (ROM) or another type of static storage device that can store static information and instructions, a random access memory (RAM), or the like.
  • Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
  • In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are only illustrative. For example, the division of the units is only a logical function division; in actual implementation there may be other division manners. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be implemented through some interfaces, and the indirect coupling or communication connection between apparatuses or units may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units.
  • If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.

Abstract

This application discloses a driving scene recognition method and system that occupy fewer storage resources and computing resources during driving scene recognition, thereby reducing the waste of resources. The method of this application comprises: obtaining a first identification instruction, where the first identification instruction is used to indicate a sub-driving scene; obtaining a scene recognition file corresponding to the sub-driving scene indicated by the first identification instruction, where the scene recognition file is used to define the identification rules of the sub-driving scene; and identifying target data according to the scene recognition file to obtain a recognition result of the sub-driving scene.

Claims (17)

  1. A driving scene recognition method, characterized in that the method comprises:
    obtaining a first identification instruction, wherein the first identification instruction is used to indicate a sub-driving scene;
    obtaining a scene recognition file corresponding to the sub-driving scene indicated by the first identification instruction, wherein the scene recognition file is used to define identification rules of the sub-driving scene;
    identifying target data according to the scene recognition file to obtain a recognition result of the sub-driving scene.
  2. The method according to claim 1, characterized in that the method further comprises:
    obtaining a second identification instruction, wherein the second identification instruction is used to indicate a driving scene, and the driving scene is constructed based on a plurality of sub-driving scenes;
    determining a recognition result of the driving scene according to recognition results of the plurality of sub-driving scenes.
  3. The method according to claim 2, characterized in that determining the recognition result of the driving scene according to the recognition results of the plurality of sub-driving scenes comprises:
    dividing the target data into a plurality of sub-data in units of time;
    determining, among the plurality of sub-data and according to the recognition results of the plurality of sub-driving scenes, whether the driving scene exists in each sub-data, to obtain the recognition result of the driving scene.
  4. The method according to claim 3, characterized in that the recognition result of the driving scene comprises the driving scene, a start time of the driving scene and an end time of the driving scene.
  5. The method according to any one of claims 1-4, characterized in that the method further comprises:
    determining a resource requirement according to the target data and the sub-driving scene;
    selecting, from a cluster of computing units, a computing unit matching the resource requirement, wherein the computing unit is configured to identify the target data according to the scene recognition file to obtain the recognition result of the sub-driving scene.
  6. The method according to any one of claims 1-4, characterized in that the method further comprises:
    determining a resource requirement according to the target data and the sub-driving scene;
    creating a computing unit matching the resource requirement, wherein the computing unit is configured to identify the target data according to the scene recognition file to obtain the recognition result of the sub-driving scene.
  7. The method according to any one of claims 1 to 4, characterized in that the recognition result of the sub-driving scene comprises the sub-driving scene, a start time of the sub-driving scene and an end time of the sub-driving scene.
  8. A driving scene recognition system, characterized in that the system comprises a service unit and a computing unit;
    the computing unit is configured to obtain a first identification instruction from the service unit, wherein the first identification instruction is used to indicate a sub-driving scene;
    the computing unit is configured to obtain, according to the first identification instruction, a scene recognition file corresponding to the sub-driving scene, wherein the scene recognition file is used to define identification rules of the sub-driving scene;
    the computing unit is further configured to identify target data according to the scene recognition file to obtain a recognition result of the sub-driving scene.
  9. The system according to claim 8, characterized in that the service unit is further configured to:
    obtain a second identification instruction, wherein the second identification instruction is used to indicate a driving scene, and the driving scene is constructed based on a plurality of sub-driving scenes;
    determine a recognition result of the driving scene according to recognition results of the plurality of sub-driving scenes.
  10. The system according to claim 9, characterized in that the service unit is further configured to:
    divide the target data into a plurality of sub-data in units of time;
    determine, among the plurality of sub-data and according to the recognition results of the plurality of sub-driving scenes, whether the driving scene exists in each sub-data, to obtain the recognition result of the driving scene.
  11. The system according to claim 10, characterized in that the recognition result of the driving scene comprises the driving scene, a start time of the driving scene and an end time of the driving scene.
  12. The system according to any one of claims 8 to 11, characterized in that the service unit is further configured to:
    determine a resource requirement according to the target data and the sub-driving scene;
    select, from a cluster of computing units, the computing unit matching the resource requirement.
  13. The system according to any one of claims 8 to 11, characterized in that the service unit is further configured to:
    determine a resource requirement according to the target data and the sub-driving scene;
    create the computing unit matching the resource requirement.
  14. The system according to any one of claims 8 to 11, characterized in that the recognition result of the sub-driving scene comprises the sub-driving scene, a start time of the sub-driving scene and an end time of the sub-driving scene.
  15. An apparatus, characterized in that the apparatus comprises a processor and a memory;
    the memory is configured to store a program and scene recognition files;
    the processor is configured to execute the program stored in the memory, so that the apparatus implements the method according to any one of claims 1 to 7.
  16. A communication system, characterized in that the communication system comprises a vehicle-mounted system and the driving scene recognition system according to any one of claims 8-14, wherein the vehicle-mounted system is communicatively connected to the driving scene recognition system to receive update data from the driving scene recognition system, and the update data is obtained according to the sub-driving scenes or driving scenes recognized by the driving scene recognition system.
  17. A computer storage medium, characterized by comprising computer-readable instructions that, when executed, implement the method according to any one of claims 1 to 7.
PCT/CN2022/077235 2021-02-22 2022-02-22 Driving scene recognition method and system WO2022174838A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110196810.9A CN112565468B (zh) 2021-02-22 2021-02-22 Driving scene recognition method and system
CN202110196810.9 2021-02-22

Publications (1)

Publication Number Publication Date
WO2022174838A1 true WO2022174838A1 (zh) 2022-08-25

Family

ID=75034461

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/077235 WO2022174838A1 (zh) 2021-02-22 2022-02-22 Driving scene recognition method and system

Country Status (2)

Country Link
CN (1) CN112565468B (zh)
WO (1) WO2022174838A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112565468B (zh) * 2021-02-22 2021-08-31 华为技术有限公司 一种驾驶场景识别方法及其系统
CN115249408A (zh) * 2022-06-21 2022-10-28 重庆长安汽车股份有限公司 一种自动驾驶测试数据的场景分类提取方法

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180011494A1 (en) * 2016-07-05 2018-01-11 Baidu Usa Llc Standard scene-based planning control methods for operating autonomous vehicles
CN108921200A (zh) * 2018-06-11 2018-11-30 百度在线网络技术(北京)有限公司 Method, apparatus, device and medium for classifying driving scene data
CN110304068A (zh) * 2019-06-24 2019-10-08 中国第一汽车股份有限公司 Method, apparatus, device and storage medium for collecting vehicle driving environment information
CN110796007A (zh) * 2019-09-27 2020-02-14 华为技术有限公司 Scene recognition method and computing device
CN112565468A (zh) * 2021-02-22 2021-03-26 华为技术有限公司 Driving scene recognition method and system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108773373B (zh) * 2016-09-14 2020-04-24 北京百度网讯科技有限公司 Method and apparatus for operating an autonomous vehicle
CN110232335A (zh) * 2019-05-24 2019-09-13 国汽(北京)智能网联汽车研究院有限公司 Driving scene classification method and electronic device
CN110765661A (zh) * 2019-11-22 2020-02-07 北京京东乾石科技有限公司 Automatic driving simulation scene generation method and apparatus, electronic device, and storage medium
CN111619482A (зh) * 2020-06-08 2020-09-04 武汉光庭信息技术股份有限公司 Vehicle driving data collection and processing system and method
CN111694973B (zh) * 2020-06-09 2023-10-13 阿波罗智能技术(北京)有限公司 Model training method and apparatus for automatic driving scenes, and electronic device

Also Published As

Publication number Publication date
CN112565468B (zh) 2021-08-31
CN112565468A (zh) 2021-03-26

Similar Documents

Publication Publication Date Title
WO2022174838A1 (zh) Driving scene recognition method and system
CN112199385B (zh) Processing method and apparatus for artificial intelligence (AI), electronic device and storage medium
CN110569298B (zh) Data docking and visualization method and system
JP7012689B2 (ja) Command execution method and device
CN112148494B (zh) Processing method and apparatus for operator services, intelligent workstation and electronic device
CN110276074B (zh) Distributed training method, apparatus, device and storage medium for natural language processing
CN111581231A (zh) Query method and apparatus based on heterogeneous databases
CN109727376B (zh) Method and apparatus for generating configuration files, and vending device
CN110674052B (zh) Memory management method, server and readable storage medium
CN113779681A (зh) Building model establishment method and related apparatus
CN112269588B (zh) Algorithm upgrading method, apparatus, terminal and computer-readable storage medium
CN107391728B (zh) Data mining method and data mining apparatus
CN111124382A (zh) Attribute assignment method, apparatus and server in Java
CN115543809A (zh) Method and apparatus for constructing a test scene library for automatic driving functions
CN112948054A (zh) Public transport hybrid cloud platform system based on virtualization technology
CN114064449A (zh) Simulation test report generation method, apparatus, electronic device and storage medium
CN112612514B (zh) Program development method and apparatus, storage medium and electronic apparatus
CN114637564B (zh) Data visualization method, apparatus, electronic device and storage medium
CN112380829B (zh) Document generation method and apparatus
CN116028233B (zh) Digital object organization and sharing method and apparatus for AI computing resources
CN115174703B (zh) Device driver processing method, device communication method, processing system and electronic device
CN109213602A (zh) Method and apparatus for application service requests
CN113590841B (zh) Intelligent rapid order review and knowledge-graph-based intelligent early-warning system and method
CN113052675B (zh) Data display method and apparatus
CN117809062B (zh) Landmark recognition method, apparatus, device, storage medium and computer program product

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22755611

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22755611

Country of ref document: EP

Kind code of ref document: A1