WO2023087182A1 - Information interaction method, device and system - Google Patents

Information interaction method, device and system

Info

Publication number
WO2023087182A1
Authority
WO
WIPO (PCT)
Prior art keywords
gesture
information
parameter
test data
virtual
Prior art date
Application number
PCT/CN2021/131251
Other languages
English (en)
French (fr)
Inventor
甘国栋
马莎
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to CN202180100917.7A (publication CN117716401A)
Priority to PCT/CN2021/131251
Publication of WO2023087182A1

Definitions

  • the present application relates to the field of smart cars, and in particular to an information interaction method, device, and system.
  • Embodiments of the present application provide an information interaction method, device, and system for testing the ability of an automatic driving system to recognize and respond to traffic police gestures, so as to improve the safety and intelligence of automatic driving.
  • an information interaction method which can be applied to a first device, where the first device can be any device with an automatic driving function or an assisted driving function, such as an automatic driving system (supporting automatic driving or assisted driving function) or a device or equipment carrying an automatic driving system.
  • the method includes: receiving at least one piece of test data, where the at least one test data includes gesture information, the gesture information includes a first parameter, the first parameter is used to indicate a first gesture, and the first parameter belongs to a predefined gesture parameter set; and outputting, according to the at least one test data, a first control signal, where the first control signal is used to control a first virtual terminal.
  • the automatic driving system can receive at least one piece of test data and output the first control signal according to the at least one test data, thereby testing the automatic driving system's ability to recognize and respond to gestures, which can improve the safety and intelligence of the vehicle's automatic driving.
  • the gesture parameter set includes multiple parameters, and the multiple parameters are used to indicate any number of the following gestures: stop gesture, go-straight gesture, left-turn gesture, left-turn waiting gesture, right-turn gesture, right-turn waiting gesture, lane-change gesture, slow-down gesture, pull-over gesture, or unknown gesture.
  • the first parameter is an RGB value, and the RGB value is used to represent an image corresponding to the first gesture.
  • the test data can be in image format.
  • the automatic driving system can simulate the effect of recognizing the first gesture based on image technology and making a response.
  • the first parameter is a true value, and the true value is used to indicate the first gesture.
  • the first parameter is a true value agreed in the OSI standard.
  • the gesture information further includes: a second parameter, where the second parameter is used to indicate information about the lane where the first virtual terminal is located.
  • the information of the lane where the first virtual terminal is located can be provided to the automatic driving system as part of the gesture information, so that the automatic driving system can refer to the information of the lane where the first virtual terminal is located when recognizing and responding to the first gesture, which can improve the reliability of the solution.
  • the gesture information further includes: a third parameter, where the third parameter is used to indicate whether the first parameter is valid.
  • At least one test data also includes one or more of the following information:
  • reference information of the subject performing the first gesture;
  • driver information of the subject performing the first gesture.
  • information related to the execution subject corresponding to the first gesture can be provided to the automatic driving system, which can further improve the reliability of the test solution.
  • At least one piece of test data also includes traffic signal light information.
  • At least one piece of test data further includes obstruction information.
  • At least one piece of test data further includes perception data from the second virtual terminal, the perception data includes reference gesture information, and the reference gesture information is used to indicate the first gesture.
  • At least one piece of test data includes gesture information and traffic light information.
  • outputting the first control signal according to the at least one test data may include: outputting the first control signal according to information with a higher priority among the traffic signal light information and the gesture information.
  • when the automatic driving system receives traffic signal light information and gesture information at the same time, it can output the first control signal based on the information with the higher priority, which can improve the robustness of the automatic driving system.
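The priority-based arbitration described above can be sketched as follows. This is an illustrative example only: the function names, the dictionary-based input format, and the assumed ordering (gesture information over traffic signal light information) are assumptions, not something specified by the application.

```python
# Illustrative sketch (not from the patent): arbitrating between traffic-light
# information and traffic-police gesture information by priority.

def arbitrate(inputs, priority=("gesture", "traffic_light")):
    """Return the command from the highest-priority input that is present."""
    for source in priority:
        if source in inputs and inputs[source] is not None:
            return inputs[source]
    return "maintain"  # no relevant input: keep current behavior

# A gesture (e.g. pull over) overrides a green light under this ordering.
command = arbitrate({"traffic_light": "go", "gesture": "pull_over"})
```

The same arbitration function degrades gracefully when only one of the two inputs is present, which matches the robustness goal stated above.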
  • an information interaction method is provided, which can be applied to a second device, and the second device can be any device with a simulation function, such as a simulation system.
  • the method includes: outputting at least one test data to the first device; wherein the at least one test data includes gesture information, the gesture information includes a first parameter, the first parameter is used to indicate the first gesture, and the first parameter belongs to a predefined gesture parameter set ; Receive a first control signal from the first device; control the first virtual terminal according to the first control signal.
  • the gesture parameter set includes multiple parameters, and the multiple parameters are used to indicate any number of the following gestures: stop gesture, go-straight gesture, left-turn gesture, left-turn waiting gesture, right-turn gesture, right-turn waiting gesture, lane-change gesture, slow-down gesture, pull-over gesture, or unknown gesture.
  • the first parameter is an RGB value, and the RGB value is used to represent an image corresponding to the first gesture; or, the first parameter is a true value, and the true value is used to indicate the first gesture.
  • the gesture information further includes: a second parameter, where the second parameter is used to indicate information about the lane where the first virtual terminal is located; and/or a third parameter, where the third parameter is used to indicate whether the first parameter is valid.
  • At least one test data also includes one or more of the following information:
  • reference information of the subject performing the first gesture;
  • driver information of the subject performing the first gesture.
  • At least one piece of test data further includes at least one of the following information: information on traffic lights; or information on obstructions.
  • At least one piece of test data further includes perception data from the second virtual terminal, the perception data includes reference gesture information, and the reference gesture information is used to indicate the first gesture.
  • when the first event is detected, the virtual subject is controlled to perform the first gesture.
  • the first event may include one or more of the following: the distance between the virtual subject and the first virtual terminal is less than a preset distance, traffic jam, or traffic accident. It should be understood that the above items are only examples rather than limitations.
  • This design drives the virtual subject to perform gestures through events, which is closer to the gesture triggering method in a real scene and can improve the reliability of the test.
  • the virtual subject is controlled to perform the same gesture as the gesture in the road survey data according to the road survey data, wherein the road survey data indicates that the real subject performs the first gesture in a real road scene.
  • This design uses real gestures to drive the virtual subject to perform the same gestures, which is closer to the gesture variation patterns in a real scene and can improve the reliability of the test.
  • the virtual subject is controlled to perform a set gesture at a set time, where the set gesture includes the first gesture.
  • This design drives the virtual subject to perform gestures through preset rules, which can reduce the complexity of the scheme.
  • an information interaction device including a module/unit/technical means for executing the method described in the first aspect or any possible design of the first aspect.
  • the device includes: a transceiver module, configured to receive at least one piece of test data, where the at least one test data includes gesture information, the gesture information includes a first parameter, the first parameter is used to indicate the first gesture, and the first parameter belongs to a predefined gesture parameter set; a processing module, configured to generate a first control signal according to the at least one test data, where the first control signal is used to control the first virtual terminal; and the transceiver module is further configured to output the first control signal.
  • an information interaction device including a module/unit/technical means for executing the method described in the second aspect or any possible design of the second aspect.
  • the device includes: a transceiver module, configured to output at least one piece of test data to the first device, where the at least one test data includes gesture information, the gesture information includes a first parameter, the first parameter is used to indicate the first gesture, and the first parameter belongs to a predefined gesture parameter set; the transceiver module is further configured to receive a first control signal from the first device; and a processing module, configured to control the first virtual terminal according to the first control signal.
  • an information interaction device, where the device includes a processor and an interface circuit; the interface circuit is used to receive signals from devices other than the device and transmit them to the processor, or to send signals from the processor to devices other than the device; and the processor is used to implement, through a logic circuit or by executing code instructions, the method described in the first aspect or any possible design of the first aspect.
  • an information interaction device, where the device includes a processor and an interface circuit; the interface circuit is used to receive signals from devices other than the device and transmit them to the processor, or to send signals from the processor to devices other than the device; and the processor is used to implement, through a logic circuit or by executing code instructions, the method described in the second aspect or any possible design of the second aspect.
  • an information interaction system, including the device described in the third aspect and the device described in the fourth aspect; or including the device described in the fifth aspect and the device described in the sixth aspect.
  • a computer-readable storage medium including programs or instructions.
  • when the programs or instructions are run on a computer, the method described in the first aspect, or any possible design of the first aspect, or the second aspect, or any possible design of the second aspect, is executed.
  • a computer program product containing instructions; when it is run on a computer, the computer executes the method described in the first aspect, or any possible design of the first aspect, or the second aspect, or any possible design of the second aspect.
  • a vehicle, including the device according to the third aspect, or the device according to the fifth aspect, or the computer-readable storage medium according to the eighth aspect.
  • FIG. 1 is a schematic diagram of an information interaction system applicable to an embodiment of the present application
  • FIG. 2 is a schematic diagram of a possible virtual traffic scene provided by an embodiment of the present application.
  • FIG. 3 is a flowchart of an information interaction method provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of possible test data provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of virtual subject images under different virtual terminal perspectives provided by the embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of an information interaction device provided in an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of another information interaction device provided by an embodiment of the present application.
  • the embodiments of the present application can be applied to any scenario where the capability of the driving system can be tested.
  • FIG. 1 it is a schematic structural diagram of an information interaction system applicable to an embodiment of the present application, and the system includes a second device and a first device.
  • the second device is used to generate at least one test data, and output at least one test data to the first device;
  • the first device is (or is deployed with) the object to be tested; the first device is used to receive the at least one test data, process the at least one test data to generate a test result (such as a first control signal), and output the test result to the second device.
  • the first device may be an automatic driving system, or any device integrated with an automatic driving system.
  • the specific form of the device may be a chip (such as a controller, processor, integrated circuit, etc.), or a terminal (such as a vehicle-mounted terminal), etc., which is not limited in this application.
  • the automatic driving system in the embodiments of this application includes but is not limited to the following systems:
  • Emergency assistance system: cannot continuously perform vehicle lateral or longitudinal motion control in dynamic driving tasks, but has the ability to continuously perform some target and event detection and response in dynamic driving tasks;
  • Partial driver assistance system: continuously executes vehicle lateral or longitudinal motion control in dynamic driving tasks under its designed operating conditions, and has target and event detection and response capabilities adapted to the executed vehicle lateral or longitudinal motion control;
  • Combined driver assistance system: continuously executes vehicle lateral and longitudinal motion control in dynamic driving tasks under its designed operating conditions, and has target and event detection and response capabilities adapted to the executed vehicle lateral and longitudinal motion control;
  • Conditionally automated driving system: continuously performs all dynamic driving tasks under its designed operating conditions.
  • the second device can be a simulation system, or any device (such as a chip, a terminal, etc.) integrated with a simulation system; the second device is used to generate test data related to the traffic scene, and the test data is used to test the ability of the driving system to recognize and respond to traffic scenes.
  • the second device includes at least the following three functional modules:
  • the scene building module is used to build virtual traffic scenes, such as building environment components (such as roads, buildings, etc.), traffic flow (such as vehicles, pedestrians, etc.), signal lights (such as traffic lights, turning lights, etc.), virtual subjects (such as traffic police model), and so on.
  • the traffic flow includes at least one virtual terminal, such as the first virtual terminal.
  • FIG. 2 is a schematic diagram of a possible virtual traffic scene provided by an embodiment of the present application.
  • the data parsing module is used to obtain the scene data generated by the scene building module, parse the scene data, and obtain test data meeting the preset standard.
  • test data satisfying the preset standard can be identified and processed by the first device.
  • the present application does not specifically limit the preset standard.
  • the preset standard may be an Open Simulation Interface (OSI) standard formulated by the Association for Standardization of Automation and Measuring Systems (ASAM).
  • the OSI standard defines a common interface to connect the development of autonomous driving functions and various driving simulation frameworks for compatibility. It enables any autonomous driving function to be connected with any simulation tool, and at the same time can integrate various sensor models.
  • the vehicle dynamics model is used to control the first virtual terminal in the virtual traffic scene according to the control signal sent by the first device. Specifically, the vehicle dynamics model obtains the dynamic response of the first virtual terminal to the virtual traffic scene according to the first control signal sent by the first device and the current pose information of the first virtual terminal, and then updates the pose information of the first virtual terminal accordingly, realizing the effect that the automatic driving system controls the automatic driving of the first virtual terminal.
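As a rough illustration of how a vehicle dynamics model can turn a control signal plus the current pose into updated pose information, consider the following sketch. The unicycle-style update, the tuple-based pose format, and all names here are illustrative assumptions, not the model used in the application.

```python
import math

# Minimal sketch (assumed, not from the patent): a toy dynamics step that
# updates the first virtual terminal's pose (x, y, heading) from a control
# signal (speed, yaw rate) over one simulation step of length dt.

def update_pose(pose, control, dt=0.1):
    """pose = (x, y, heading_rad); control = (speed_mps, yaw_rate_rps)."""
    x, y, heading = pose
    speed, yaw_rate = control
    heading += yaw_rate * dt
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    return (x, y, heading)

# Driving straight ahead at 10 m/s for one 0.1 s step moves the terminal
# 1 m along the x axis.
pose = update_pose((0.0, 0.0, 0.0), (10.0, 0.0))
```

A real vehicle dynamics model would of course account for mass, tire forces, and actuator delays; the point here is only the control-signal-in, updated-pose-out interface described above.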
  • the first device includes at least a processing module; the processing module outputs a first control signal for controlling the first virtual terminal by computing on the at least one test data, thereby simulating the effect of the automatic driving system identifying events that occur in the scene and controlling the vehicle's response to them.
  • the processing module can run instructions or data corresponding to the automatic driving algorithm, so as to calculate at least one test data and output the first control signal.
  • The first device and the second device may be deployed in the same physical entity, or may be deployed in different physical entities, which is not limited in this application.
  • FIG. 1 is only an example rather than a specific limitation, and there may be other deformations in practical applications.
  • FIG. 3 is a flowchart of an information interaction method provided in an embodiment of the present application. Taking the method applied to the system shown in FIG. 1 as an example, the method includes:
  • the second device generates at least one piece of test data
  • the second device builds a virtual traffic scene.
  • the scene building module in the second device can call the data in the scene database to build a virtual traffic scene.
  • the scene database can provide a large number of representative scenes and meet the test requirements.
  • Data in the scene database may include static scene data and dynamic scene data.
  • Static scene data is used to describe static components in traffic scenes, such as environmental components (such as roads, etc.).
  • Dynamic scene data is used to describe dynamic components in traffic scenes, such as movable traffic flow (such as vehicles, pedestrians, etc.), variable signal lights (such as traffic lights, turn signals, etc.), virtual subjects that can make gestures, and so on.
  • the scenario database may be stored in the second device, or in other devices than the second device, which is not limited in this application.
  • the constructed virtual traffic scene may be a three-dimensional virtual traffic scene model, and the virtual traffic scene model includes a three-dimensional model of static components and a three-dimensional model of dynamic components.
  • Figure 2 it is a schematic diagram of a virtual traffic scene, including roads, vehicles, pedestrians, traffic police, traffic lights, and the like.
  • the virtual traffic scene includes at least the first virtual terminal and the virtual subject.
  • the first virtual terminal is a virtual model of a terminal equipped with a test object (such as an automatic driving system), for example, a virtual model of an automatic driving vehicle.
  • the virtual subject can simulate a real person making gestures. For example, the virtual subject can be a virtual traffic police model that can make the gestures of a traffic policeman, or a virtual traffic coordinator model that can make the gestures of a traffic coordinator.
  • The following takes the case where the virtual subject is a virtual traffic police model and the first virtual terminal is a vehicle as an example.
  • the scene building module can control (or drive) the virtual subject to perform the first gesture.
  • the scene building module can control (or drive) the virtual subject to perform gestures through events, where different events can correspond to the same or different gestures.
  • the event includes but is not limited to that the distance between the virtual subject and the first virtual terminal is less than a preset distance, traffic jam, or traffic accident, and so on.
  • For example, the virtual subject is controlled to perform a slow-down gesture.
  • the virtual subject is controlled to perform a pull over gesture.
  • the scene construction module controls the virtual subject to perform a lane-changing gesture.
  • the scene building module controls the virtual subject to perform the first gesture.
  • Driving the virtual subject to perform gestures through events is closer to the gesture triggering method in a real scene, which can improve the reliability of the test.
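The event-driven triggering above can be sketched as a simple event-to-gesture lookup. The specific mapping (traffic jam to slow-down, traffic accident to pull-over, close distance to a stop gesture), the preset distance, and all names are assumptions for illustration; the application leaves the event-to-gesture correspondence open.

```python
# Illustrative sketch (assumed mapping): event-driven gesture selection for
# the virtual subject in the scene building module.

EVENT_TO_GESTURE = {
    "traffic_jam": "gesture_slow_down",      # assumed pairing
    "traffic_accident": "gesture_pull_over", # assumed pairing
}

def select_gesture(events, distance_m, preset_distance_m=30.0):
    """Pick the gesture triggered by the current events and ego distance."""
    for event in events:
        if event in EVENT_TO_GESTURE:
            return EVENT_TO_GESTURE[event]
    # Distance event: virtual subject is near the first virtual terminal.
    if distance_m < preset_distance_m:
        return "gesture_stop"  # assumed gesture for the distance event
    return "gesture_unknown"
```

Different events can map to the same or different gestures simply by editing the table, which mirrors the flexibility described above.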
  • the scene construction module can control (or drive) the virtual subject to perform the same gestures as the gestures in the road survey data, where the road survey data indicates the gestures performed by a real subject in a real road scene.
  • For example, the road survey data is a video image.
  • the duration of the video is 3 minutes, wherein the video of the first minute indicates a real traffic policeman performing a stop gesture, the video of the second minute indicates a real traffic policeman performing a left-turn gesture, and the video of the third minute indicates a real traffic policeman performing a right-turn gesture.
  • the scene construction module can control the virtual subject to perform a stop gesture, a left-turn gesture, and a right-turn gesture in sequence within three minutes.
  • the scene building module controls the virtual subject to perform the first gesture according to the first gesture performed by the real traffic policeman in the road survey data.
  • Driving the virtual subject to perform the same gestures via real gestures is closer to the gesture variation patterns in a real scene, which can improve the reliability of the test.
  • the scene building module can control (or drive) the virtual subject to perform the set gestures at the set time according to the preset rules.
  • the duration of a test is 10 minutes
  • the preset rule is to perform the ten gestures shown in Table 1 in sequence
  • the execution duration of each gesture is 1 minute.
  • the set gestures include the first gesture.
  • the complexity of the scheme can be reduced by driving the virtual subject to perform gestures through preset rules.
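The rule-driven design above (a 10-minute test performing the ten gestures of Table 1 in sequence, one minute each) can be sketched as a fixed schedule. The list contents follow Table 1, while the function and variable names, and the behavior after the last slot, are assumptions for illustration.

```python
# Sketch of rule-driven gesture scheduling (assumed names): the virtual
# subject performs ten preset gestures in sequence, one per 60-second slot.

SCHEDULE = [
    "gesture_unknown", "gesture_stop", "gesture_straight_ahead",
    "gesture_left", "gesture_left_wait", "gesture_right",
    "gesture_right_wait", "gesture_lane_change", "gesture_slow_down",
    "gesture_pull_over",
]

def gesture_at(elapsed_s, slot_s=60):
    """Return the gesture scheduled at `elapsed_s` seconds into the test."""
    index = min(int(elapsed_s // slot_s), len(SCHEDULE) - 1)
    return SCHEDULE[index]
```

Because the schedule is fixed ahead of time, the expected gesture at any moment is known, which is what makes this design the simplest of the three.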
  • the data parsing module can obtain the scene data of the virtual traffic scene. It can be understood that the scene data acquired by the data parsing module at least includes relevant data of the virtual subject.
  • one or more sensor models are integrated in the data parsing module, for example including but not limited to image sensor models, radar models and the like.
  • the data parsing module can acquire the perception data of the virtual subject by the first virtual terminal based on the sensor model, such as images, point clouds, etc. corresponding to the virtual subject. In this way, the scene data obtained by the data parsing module is closer to the perception data of real vehicles in real life.
  • the data parsing module can also directly obtain the data in the scene database used by the scene building module to build the virtual traffic scene, and use the data as the perception data of the virtual subject by the first virtual terminal. In this way, the scene data more accurately describe the virtual traffic scene.
  • after the data parsing module obtains the scene data of the virtual traffic scene, if the scene data is in a preset format, the scene data can be directly used as the at least one test data; if the scene data is not in a preset format, the data parsing module parses the scene data and converts it into at least one test data in the preset format.
  • the preset format is a data format meeting the OSI standard. In this way, the standardized connection between the second device and the automatic driving system in the first device can be realized.
  • At least one piece of test data includes at least gesture information, the gesture information includes a first parameter, the first parameter is used to indicate the first gesture, and the first parameter belongs to a predefined gesture parameter set. At least one test data is used to test the ability of the automatic driving system in the first device to recognize and respond to gestures.
  • for example, when the virtual subject in the virtual traffic scene is a virtual traffic police model, the gesture parameter set includes multiple parameters, each of which is used to indicate a corresponding traffic police gesture, and different parameters indicate different traffic police gestures.
  • the multiple parameters are used to indicate any number of the following gestures: stop gesture, go-straight gesture, left-turn gesture, left-turn waiting gesture, right-turn gesture, right-turn waiting gesture, lane-change gesture, slow-down gesture, pull-over gesture, or unknown gesture.
  • the definitions of traffic police gestures may be different; therefore, in practical applications, the parameters included in the gesture parameter set can be flexibly designed as needed.
  • the first parameter may be a red-green-blue (RGB) value, and the RGB value is used to represent the image corresponding to the first gesture.
  • the data parsing module acquires a two-dimensional or three-dimensional image of the virtual subject in the virtual traffic scene from the perspective of the first virtual terminal based on the sensor model, simulating the effect of a vehicle capturing a traffic police image with an image sensor in the real world. Since the OSI standard supports data in an image format, the data parsing module can directly send the obtained image data (RGB values) to the first device as test data.
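Image-format test data of this kind can be pictured as a small RGB frame wrapped in a message. The toy resolution, the nested-list pixel layout, and the dictionary field names below are illustrative assumptions; a real pipeline would carry an actual rendered camera image in an OSI-compatible message.

```python
# Sketch (assumed shapes and names): image-format test data where the first
# parameter is an RGB array rendered from the virtual subject as seen from
# the first virtual terminal's viewpoint.

WIDTH, HEIGHT = 4, 3  # toy resolution for illustration

def blank_frame(rgb=(0, 0, 0)):
    """Row-major frame of RGB triples standing in for a rendered image."""
    return [[tuple(rgb) for _ in range(WIDTH)] for _ in range(HEIGHT)]

frame = blank_frame((128, 128, 128))
test_data = {"format": "rgb", "width": WIDTH, "height": HEIGHT,
             "pixels": frame}
```

The receiving automatic driving system would then run its image-based gesture recognition on `test_data["pixels"]`, mirroring the real-world camera path described above.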
  • the first parameter may be a true value, and the true value is used to indicate the first gesture.
  • See Table 1 for an example of a gesture parameter set. In this example, the multiple parameters included in the gesture parameter set are all true values.
  • 0 gesture_unknown (unknown gesture)
  • 1 gesture_stop (stop gesture)
  • 2 gesture_straight_ahead (go-straight gesture)
  • 3 gesture_left (left-turn gesture)
  • 4 gesture_left_wait (left-turn waiting gesture)
  • 5 gesture_right (right-turn gesture)
  • 6 gesture_right_wait (right-turn waiting gesture)
  • 7 gesture_lane_change (lane-change gesture)
  • 8 gesture_slow_down (slow-down gesture)
  • 9 gesture_pull_over (pull-over gesture)
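A true-value parameter set like Table 1 maps naturally onto a small enumeration. The numeric values follow the table above; the class name, member names, and the surrounding message fields (`valid`, `lane_id`, echoing the second and third parameters) are assumptions for illustration.

```python
from enum import IntEnum

# Sketch of the true-value gesture parameter set from Table 1 (names assumed).

class Gesture(IntEnum):
    UNKNOWN = 0
    STOP = 1
    STRAIGHT_AHEAD = 2
    LEFT = 3
    LEFT_WAIT = 4
    RIGHT = 5
    RIGHT_WAIT = 6
    LANE_CHANGE = 7
    SLOW_DOWN = 8
    PULL_OVER = 9

# A test-data message can then carry the gesture as a single integer, with a
# validity flag and lane information alongside it (hypothetical field names).
gesture_info = {"gesture": int(Gesture.LEFT), "valid": True, "lane_id": 2}
```

Encoding the gesture as one integer is what makes the true-value format cheaper to transmit and parse than the RGB image format described earlier.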
  • the first parameter may also be in other data formats, which are determined by the data formats supported by the first device.
  • the second device outputs at least one test data to the first device, and correspondingly, the first device receives at least one test data;
  • the data parsing module sends at least one piece of test data to the first device.
  • the first device generates a first control signal according to at least one test data
  • the processing module in the first device identifies events occurring in the virtual traffic scene according to the at least one test data, and then generates a first control signal according to the events, wherein the first control signal is used to control the first virtual terminal so that the first virtual terminal responds to the events.
  • the processing module in the first device recognizes the gesture (such as the first gesture) performed by the virtual subject in the virtual traffic scene according to the gesture information (such as the first parameter) contained in at least one test data, A first control signal is then generated according to the first gesture.
  • the processing module can integrate one or more of the following functions: perception (for example, based on test data to perceive events that have occurred in the virtual traffic scene, such as the first gesture), prediction (for example, to predict the future in the virtual traffic scene based on the perception results) event to occur), planning (for example, planning the driving trajectory of the first virtual terminal according to the sensing result and/or prediction result), controlling (for example, generating a control signal for controlling the driving trajectory of the first virtual terminal), and so on.
  • For example, if the first gesture performed by the virtual subject at the intersection is a left-turn gesture, the processing module in the first device generates a first control signal for controlling the first virtual terminal to turn left.
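The gesture-to-control step of the processing module can be pictured as a lookup from recognized gesture to control command. The mapping entries and names below are illustrative assumptions; only the left-turn example comes from the text above, and a real processing module would emit actuator-level signals rather than strings.

```python
# Minimal sketch (assumed mapping): turning a recognized gesture into a
# first control signal for the first virtual terminal.

GESTURE_TO_CONTROL = {
    "gesture_stop": "brake",
    "gesture_left": "turn_left",       # from the left-turn example above
    "gesture_right": "turn_right",
    "gesture_slow_down": "decelerate",
    "gesture_pull_over": "pull_over",
}

def first_control_signal(recognized_gesture):
    """Map a recognized gesture to a control command; default to maintain."""
    return GESTURE_TO_CONTROL.get(recognized_gesture, "maintain")
```

The `maintain` default covers unknown or unsupported gestures, so the module always outputs some control signal for the second device to act on.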
  • the first device outputs the first control signal to the second device, and correspondingly, the second device receives the first control signal;
  • the second device controls the first virtual terminal according to the first control signal.
  • the vehicle dynamics model obtains, based on the first control signal (a left-turn signal) and the current pose information of the first virtual terminal (such as the lane in which the first virtual terminal is located and the distance between the first virtual terminal and the intersection), the dynamic response of the first virtual terminal to the virtual traffic scene (such as the driving speed and direction required for the first virtual terminal to turn left), and then updates the pose information of the first virtual terminal according to the dynamic response (such as controlling the first virtual terminal to turn left).
  • after the second device controls the first virtual terminal, it can determine the ability of the automatic driving system to recognize and respond to the first gesture by detecting the state of the first virtual terminal. For example, after the above-mentioned vehicle dynamics model updates the pose information of the first virtual terminal, the second device determines from the updated pose information whether the automatic driving system accurately recognized and responded to the first gesture, and thereby determines the ability of the automatic driving system in the first device to recognize and respond to gestures.
  • the gesture can be changed continuously to test the automatic driving system multiple times.
  • the algorithm of the automatic driving system can also be continuously adjusted according to the test results to improve the ability of the automatic driving system to recognize and respond to gestures.
  • the second device can generate test data, and perform a closed-loop test on the ability of the automatic driving system to recognize and respond to traffic police gestures based on the test data, thereby improving the safety and intelligence of the automatic driving of the vehicle.
  • the gestures performed by the traffic policeman may be targeted (or directional).
  • the gestures performed by the traffic police are used to direct the vehicles facing the traffic police; in other words, a traffic-police gesture is a valid gesture for vehicles facing the traffic police and an invalid gesture for vehicles facing away from the traffic police. Therefore, in the embodiment of the present application, the data transmission module may send at least one test data to the first device when the first gesture is valid for the first virtual terminal (for example, the virtual subject is face to face with the first virtual terminal); and/or, when the processing module in the first device determines that the first gesture is valid for the first virtual terminal (for example, the virtual subject is facing the first virtual terminal), it outputs the first control signal.
  • the judgment condition that the first gesture is valid for the first virtual terminal may include other judgment conditions besides the relative position of the virtual subject and the first virtual terminal, which is not limited in this application.
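As a concrete illustration of the directionality check described above, the following Python sketch treats a gesture as valid for the ego vehicle only when the vehicle lies within a frontal cone of the gesturing subject. The function name, the cone half-angle, and the 2-D coordinate convention are all hypothetical; the application itself leaves the exact validity condition open.

```python
import math

def gesture_valid_for_vehicle(subject_pos, subject_heading_deg, vehicle_pos,
                              half_fov_deg=60.0):
    """Treat the gesture as valid only when the vehicle lies within a
    frontal cone of the gesturing subject (hypothetical condition)."""
    dx = vehicle_pos[0] - subject_pos[0]
    dy = vehicle_pos[1] - subject_pos[1]
    # Direction from the subject to the vehicle, in degrees.
    bearing = math.degrees(math.atan2(dy, dx))
    # Smallest angular difference between that direction and the subject's heading.
    diff = abs((bearing - subject_heading_deg + 180.0) % 360.0 - 180.0)
    return diff <= half_fov_deg
```

For a subject at the origin facing along +x, a vehicle ahead of it is inside the cone, while a vehicle behind it is not.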
  • the above gesture information may further include a second parameter, and the second parameter is used to indicate information about the lane where the first virtual terminal is located.
  • the information about the lane where the first virtual terminal is located includes, for example, the identifier (assigned_lane_id) of the lane where the first virtual terminal is located.
  • the information of the lane where the first virtual terminal is located can be provided to the automatic driving system as part of the gesture information, so that the automatic driving system can refer to that lane information when recognizing and responding to the first gesture, which can improve the reliability of the scheme.
  • the above gesture information may further include: a third parameter, where the third parameter is used to indicate whether the first parameter is valid (is_out_of_service).
  • the first device may generate the first control signal according to the first gesture.
  • the first device may output a control signal according to other information.
  • the above at least one test data also includes one or more of the following information:
  • the subject identification information of the first gesture, that is, the identification information of the subject performing the first gesture, for example, the identification information of the virtual subject shown in FIG. 2;
  • the subject position information of the first gesture (police_base), that is, the position information of the subject performing the first gesture, for example, the position information of the virtual subject shown in FIG. 2, where the position information can be latitude and longitude or coordinates, which is not limited in this application;
  • the subject reference information of the first gesture (model reference), that is, the source information of the subject performing the first gesture, for example, the source information of the virtual subject shown in FIG. 2; optionally, the subject reference information may be a link used to indicate the address of the file storing the 3D model data of the virtual subject;
  • the subject driving information of the first gesture (source reference), that is, the information used to drive the virtual subject to perform the first gesture; for example, when the first gesture is driven by the first event, the subject driving information of the first gesture can be the first event; when the first gesture is driven by road sampling data, it can be the road sampling data; when the first gesture is driven by a preset rule, it can be the preset rule; and so on.
  • the second device can provide the first device with information related to the execution subject corresponding to the first gesture, so that the first device can recognize more information related to the first gesture, which can further improve the reliability of the test solution.
  • the vehicle may also perceive other information used to indicate traffic rules, such as traffic signal information or road sign information.
  • at least one piece of test data may also include traffic signal light information, road sign information, and the like. The information may match or conflict with the gesture information, which is not limited in this application.
  • the first device may output the first control signal according to any of the above information.
  • alternatively, the first device may output the first control signal according to the priorities of the above information.
  • At least one piece of test data includes gesture information and traffic signal information
  • the first device outputs a first control signal according to information with higher priority among the traffic signal information and gesture information.
  • when the priority of the gesture information is higher than that of the traffic signal light information, the first device generates the first control signal according to the first gesture. For example, if the traffic signal light information indicates going straight and the gesture information indicates a left turn, the first control signal output by the first device is used to control the first virtual terminal to turn left.
  • when the priority of the gesture information is lower than that of the traffic signal light information, the first device generates the first control signal according to the traffic signal light information. For example, if the traffic signal light information indicates going straight and the gesture information indicates a left turn, the first control signal output by the first device is used to control the first virtual terminal to go straight.
  • the priorities of various types of information in at least one test data can be determined according to local traffic regulations.
  • the second device can support conflicting configurations between gesture information and other information, can test the ability of the automatic driving system to process conflicting information, and can improve the robustness of the automatic driving system.
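The priority-based arbitration described in these items can be sketched roughly as follows; the ranking scheme (smaller rank = higher priority) and all names are illustrative, since the application only states that the priorities may follow local traffic regulations.

```python
def select_control_source(infos, priority):
    """Pick, among the available pieces of test information, the type whose
    configured priority rank is highest (smaller rank = higher priority)."""
    available = [k for k in infos if infos[k] is not None]
    return min(available, key=lambda k: priority[k])

# Example configuration: gesture information outranks the traffic light,
# so a left-turn gesture overrides a go-straight light.
priority = {"gesture": 0, "traffic_light": 1}
infos = {"gesture": "turn_left", "traffic_light": "go_straight"}
```

When the gesture information is absent (or marked invalid), the arbitration naturally falls back to the traffic-light information.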
  • At least one piece of test data may also include interference information.
  • the interference information includes obstruction information.
  • the occluder information is the information of the object (i.e. the occluder) that blocks the first virtual terminal's line of sight to the virtual subject, including but not limited to the shape, color, outline, or spatial position of the occluder. It should be understood that in an occlusion scene, gesture information may be partially missing. For example, taking image-format test data as an example, see FIG. 4: in the image received by the first device, part of the virtual subject is blocked by another vehicle, resulting in incomplete gesture information of the virtual subject.
  • the interference information is not limited to the information of the obstruction, and may also be other information.
  • the second device may also simulate the interference caused by the hardware performance of the image sensor in real life by reducing parameters such as brightness and resolution of the image.
  • the second device can support the configuration of interference information, can test the ability of the automatic driving system to process the interference information, and can improve the robustness of the automatic driving system.
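As a toy illustration of the interference mentioned above (reduced brightness and resolution), the sketch below degrades a nested-list "image"; a real simulator would operate on sensor-model output, and the function and parameter names are invented for this example.

```python
def degrade_image(pixels, brightness=1.0, keep_every=1):
    """Crudely simulate image-sensor limits: scale pixel brightness and
    drop rows/columns to lower the effective resolution."""
    dimmed = [[min(255, int(p * brightness)) for p in row] for row in pixels]
    # Keep only every keep_every-th row and column.
    return [row[::keep_every] for row in dimmed[::keep_every]]
```

Dimming halves the pixel values here, while `keep_every=2` discards every other row and column.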
  • At least one piece of test data further includes perception data from the second virtual terminal, the perception data includes reference gesture information, and the reference gesture information is used to indicate the first gesture.
  • the relative position of the second virtual terminal and the virtual subject is different from the relative position of the first virtual terminal and the virtual subject.
  • the image of the virtual subject from the perspective of the first virtual terminal is different from the image of the virtual subject from the perspective of the second virtual terminal: the line of sight of the first virtual terminal is blocked by the second virtual terminal, so only a partial image of the virtual subject is obtained, while the line of sight of the second virtual terminal is not blocked by any object, so a complete image of the virtual subject can be obtained.
  • the first device may simultaneously refer to the image of the virtual subject under the perspective of the first virtual terminal and the image of the virtual subject under the perspective of the second virtual terminal to recognize the gesture performed by the virtual subject.
  • At least one piece of test data may also include data from other devices, such as image data collected by a roadside camera, in addition to perception data from the second virtual terminal, and the present application does not limit this.
  • the second device can provide the first device with gesture information of the virtual subject under different viewing angles, and can test the ability of the automatic driving system to recognize gestures based on the Vehicle to Everything (V2X) communication function.
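One simple way to combine the ego view with the second virtual terminal's reference gesture information is a confidence-weighted vote over the gesture labels reported from each viewpoint. This fusion rule is a hypothetical sketch, not something prescribed by the application.

```python
def fuse_gesture_observations(observations):
    """Merge gesture labels perceived from different viewpoints (ego vehicle,
    other virtual terminals, roadside cameras) by confidence-weighted vote.

    observations: list of (label, confidence) pairs."""
    scores = {}
    for label, confidence in observations:
        scores[label] = scores.get(label, 0.0) + confidence
    return max(scores, key=scores.get)
```

An occluded ego view that only yields a low-confidence "unknown" label is outvoted by clear observations from the unblocked viewpoint.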
  • an embodiment of the present application provides an information interaction device, which includes a module/unit/means for executing the method performed by the second device and/or the first device in the above method embodiment.
  • the module/unit/means can be realized by software, or by hardware, or can be realized by executing corresponding software by hardware.
  • the device may include a transceiver module 201 and a processing module 202, and the device is configured to realize the functions of the above-mentioned second device or the first device.
  • when the device is used in the first device, the transceiver module 201 is configured to receive at least one test data, the at least one test data includes gesture information, the gesture information includes a first parameter, the first parameter is used to indicate the first gesture, and the first parameter belongs to a predefined set of gesture parameters; the processing module 202 is used to generate a first control signal according to the at least one test data, where the first control signal is used to control the first virtual terminal; the transceiver module 201 is also used to output the first control signal.
  • the gesture parameter set includes multiple parameters, and the multiple parameters are used to indicate any number of the following gestures: a stop gesture, a go-straight gesture, a left-turn gesture, a left-turn-waiting gesture, a right-turn gesture, a right-turn-waiting gesture, a lane-change gesture, a slow-down gesture, a pull-over gesture, or an unknown gesture.
  • the first parameter is an RGB value, and the RGB value is used to represent an image corresponding to the first gesture; or, the first parameter is a true value, and the true value is used to indicate the first gesture.
  • the gesture information further includes: a second parameter, the second parameter is used to indicate information about the lane where the first virtual terminal is located; and/or, a third parameter, the third parameter is used to indicate whether the first parameter is valid.
  • At least one test data also includes one or more of the following information: the subject identification information of the first gesture; the subject position information of the first gesture; the subject reference information of the first gesture; or the subject driving information of the first gesture.
  • At least one test data also includes at least one of the following information: traffic signal light information; or obstruction information.
  • At least one piece of test data further includes perception data from the second virtual terminal, the perception data includes reference gesture information, and the reference gesture information is used to indicate the first gesture.
  • At least one piece of test data includes gesture information and traffic signal light information; the processing module 202 is configured to generate the first control signal according to whichever of the traffic signal light information and the gesture information has the higher priority.
  • when the device is used in the second device, the transceiver module 201 is configured to output at least one test data to the first device, where the at least one test data includes gesture information, the gesture information includes a first parameter, the first parameter is used to indicate the first gesture, and the first parameter belongs to a predefined set of gesture parameters; the transceiver module 201 is also used to receive a first control signal from the first device; the processing module 202 is used to control the first virtual terminal according to the first control signal.
  • the gesture parameter set contains multiple parameters, and the multiple parameters are used to indicate any number of the following gestures: a stop gesture, a go-straight gesture, a left-turn gesture, a left-turn-waiting gesture, a right-turn gesture, a right-turn-waiting gesture, a lane-change gesture, a slow-down gesture, a pull-over gesture, or an unknown gesture.
  • the first parameter is an RGB value, and the RGB value is used to represent an image corresponding to the first gesture; or, the first parameter is a true value, and the true value is used to indicate the first gesture.
  • the gesture information further includes: a second parameter, the second parameter is used to indicate information about the lane where the first virtual terminal is located; and/or, a third parameter, the third parameter is used to indicate whether the first parameter is valid.
  • At least one test data also includes one or more of the following information: the subject identification information of the first gesture; the subject position information of the first gesture; the subject reference information of the first gesture; or the subject driving information of the first gesture.
  • At least one test data also includes at least one of the following information: traffic signal light information; or obstruction information.
  • At least one piece of test data further includes perception data from the second virtual terminal, the perception data includes reference gesture information, and the reference gesture information is used to indicate the first gesture.
  • the processing module 202 is further configured to: when the first event is detected, control the virtual subject to perform the first gesture; or, control the virtual subject, according to road survey data, to perform the same gesture as the gesture in the road survey data, where the road survey data indicates that a real subject performs the first gesture in a real road scene; or, control the virtual subject to perform a set gesture at a set time, where the set gesture includes the first gesture.
  • the first event includes one or more of the following: the distance between the virtual subject and the first virtual terminal is less than a preset distance, traffic jam, or traffic accident.
  • the above-mentioned device may have various product forms, and several possible product forms are introduced below.
  • an embodiment of the present application also provides an information interaction device, which includes at least one processor 301 and an interface circuit 302; the interface circuit 302 is used to receive signals from devices other than the device and transmit them to the processor 301, or to send signals from the processor 301 to communication devices other than the device; the processor 301 implements the method performed by the second device or the first device through logic circuits or by executing code instructions.
  • the processor mentioned in the embodiments of the present application may be implemented by hardware or by software.
  • when implemented by hardware, the processor may be a logic circuit, an integrated circuit, or the like.
  • when implemented by software, the processor may be a general-purpose processor implemented by reading software codes stored in a memory.
  • the processor can be a central processing unit (Central Processing Unit, CPU), or other general-purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field programmable gate arrays (Field Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc.
  • a general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
  • the memory mentioned in the embodiments of the present application may be a volatile memory or a nonvolatile memory, or may include both volatile memory and nonvolatile memory.
  • the non-volatile memory can be read-only memory (Read-Only Memory, ROM), programmable read-only memory (Programmable ROM, PROM), erasable programmable read-only memory (Erasable PROM, EPROM), electronically programmable Erase Programmable Read-Only Memory (Electrically EPROM, EEPROM) or Flash.
  • the volatile memory can be Random Access Memory (RAM), which acts as external cache memory.
  • by way of example but not limitation, many forms of RAM are available, such as Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), Double Data Rate Synchronous Dynamic Random Access Memory (DDR SDRAM), Enhanced Synchronous Dynamic Random Access Memory (ESDRAM), Synchlink Dynamic Random Access Memory (SLDRAM), and Direct Rambus Random Access Memory (DR RAM).
  • when the processor is a general-purpose processor, DSP, ASIC, FPGA or other programmable logic device, discrete gate or transistor logic device, or discrete hardware component, the memory (storage module) may be integrated in the processor.
  • an embodiment of the present application also provides a computer-readable storage medium, including a program or instructions; when the program or instructions are run on a computer, the method performed by the above-mentioned second device or first device is executed.
  • an embodiment of the present application also provides a computer program product containing instructions; when it is run on a computer, the method performed by the above-mentioned second device or first device is executed.
  • an embodiment of the present application further provides a vehicle, including the device shown in FIG. 6 or FIG. 7 .
  • the embodiments of the present application may be provided as methods, systems, or computer program products. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
  • these computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to operate in a specific manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising instruction means, and the instruction means implement the functions specified in one or more procedures of the flowchart and/or one or more blocks of the block diagram.

Landscapes

  • Traffic Control Systems (AREA)

Abstract

The present application discloses an information interaction method, apparatus, and system. The method includes: receiving at least one piece of test data, where the at least one piece of test data contains gesture information, the gesture information contains a first parameter, the first parameter is used to indicate a first gesture, and the first parameter belongs to a predefined gesture parameter set; and outputting, according to the at least one piece of test data, a first control signal, where the first control signal is used to control a first virtual terminal. In this way, the ability of an automated driving system to recognize and respond to traffic-police gestures can be tested, improving the safety and intelligence of automated driving.

Description

Information Interaction Method, Apparatus, and System

Technical Field

The present application relates to the field of intelligent vehicles, and in particular to an information interaction method, apparatus, and system.

Background

In recent years, automated driving has become an important direction in automotive research and development. To improve the intelligence of automated driving, the capabilities of the automated driving system must be verified under a wide range of operating conditions. Testing of an automated driving system should fully account for diverse environmental conditions and traffic situations, including, for example, pedestrians, other vehicles, signal lights, and road infrastructure.

At present, in scenarios such as traffic-light failures, traffic-accident handling, or temporary traffic control, traffic police coordinate and direct traffic. Recognizing and responding to traffic-police gestures is therefore very important for automated driving. However, research on how automated driving systems recognize and respond to traffic-police gestures remains limited, and a scheme for testing this capability is urgently needed.
Summary

Embodiments of the present application provide an information interaction method, apparatus, and system for testing the ability of an automated driving system to recognize and respond to traffic-police gestures, so as to improve the safety and intelligence of automated driving.

According to a first aspect, an information interaction method is provided. The method may be applied to a first apparatus, where the first apparatus may be any apparatus with an automated driving function or an assisted driving function, for example an automated driving system (supporting an automated or assisted driving function) or a device carrying an automated driving system. The method includes: receiving at least one piece of test data, where the at least one piece of test data contains gesture information, the gesture information contains a first parameter, the first parameter is used to indicate a first gesture, and the first parameter belongs to a predefined gesture parameter set; and outputting, according to the at least one piece of test data, a first control signal, where the first control signal is used to control a first virtual terminal.

With the above scheme, the automated driving system can receive at least one piece of test data and output a first control signal according to it, so that the system's ability to recognize and respond to gestures can be tested, improving the safety and intelligence of automated driving.
In one possible design, the gesture parameter set contains multiple parameters, and the multiple parameters are used to indicate any number of the following gestures: a stop gesture, a go-straight gesture, a left-turn gesture, a left-turn-waiting gesture, a right-turn gesture, a right-turn-waiting gesture, a lane-change gesture, a slow-down gesture, a pull-over gesture, or an unknown gesture.

It should be understood that the above are merely examples rather than specific limitations.

In one possible design, the first parameter is an RGB value, and the RGB value is used to represent the image corresponding to the first gesture. In other words, the test data may be in image format.

In this way, the effect of the automated driving system recognizing and responding to the first gesture based on image technology can be simulated.

In one possible design, the first parameter is a truth value, and the truth value is used to indicate the first gesture. Illustratively, the first parameter is a truth value agreed in the OSI standard.

In this way, the effect of the automated driving system recognizing and responding to the first gesture based on a truth value can be simulated.

In one possible design, the gesture information further contains a second parameter, and the second parameter is used to indicate information about the lane in which the first virtual terminal is located.

In this way, the lane information of the first virtual terminal can be provided to the automated driving system as part of the gesture information, so that the system can recognize and respond to the first gesture with reference to that lane information, improving the reliability of the scheme.

In one possible design, the gesture information further contains a third parameter, and the third parameter is used to indicate whether the first parameter is valid.

In this way, whether to test the automated driving system's ability to recognize and respond to gestures can be decided flexibly as needed, saving the automated driving system's computing resources.
In one possible design, the at least one piece of test data further contains one or more of the following:

subject identification information of the first gesture;

subject position information of the first gesture;

subject reference information of the first gesture;

subject driving information of the first gesture.

In this way, information about the subject performing the first gesture can be provided to the automated driving system, further improving the reliability of the test scheme.
In one possible design, the at least one piece of test data further includes traffic signal light information.

In this way, the automated driving system's ability to handle conflicting information (for example, gesture information conflicting with traffic signal light information) can be tested, improving the robustness of the automated driving system.

In one possible design, the at least one piece of test data further includes obstruction information.

In this way, the automated driving system's ability to handle interference information can be tested, improving the robustness of the automated driving system.

In one possible design, the at least one piece of test data further contains perception data from a second virtual terminal, the perception data contains reference gesture information, and the reference gesture information is used to indicate the first gesture.

In this way, the automated driving system's ability to recognize gestures based on the V2X communication function can be tested.

In one possible design, the at least one piece of test data contains gesture information and traffic signal light information. Correspondingly, outputting the first control signal according to the at least one piece of test data may include: outputting the first control signal according to whichever of the traffic signal light information and the gesture information has the higher priority.

In this way, when the automated driving system receives traffic signal light information and gesture information at the same time, it can output the first control signal based on the higher-priority information, improving the robustness of the automated driving system.
According to a second aspect, an information interaction method is provided. The method may be applied to a second apparatus, where the second apparatus may be any apparatus with a simulation function, for example a simulation system. The method includes: outputting at least one piece of test data to a first apparatus, where the at least one piece of test data contains gesture information, the gesture information contains a first parameter, the first parameter is used to indicate a first gesture, and the first parameter belongs to a predefined gesture parameter set; receiving a first control signal from the first apparatus; and controlling a first virtual terminal according to the first control signal.

In one possible design, the gesture parameter set contains multiple parameters, and the multiple parameters are used to indicate any number of the following gestures: a stop gesture, a go-straight gesture, a left-turn gesture, a left-turn-waiting gesture, a right-turn gesture, a right-turn-waiting gesture, a lane-change gesture, a slow-down gesture, a pull-over gesture, or an unknown gesture.

In one possible design, the first parameter is an RGB value, and the RGB value is used to represent the image corresponding to the first gesture; or the first parameter is a truth value, and the truth value is used to indicate the first gesture.

In one possible design, the gesture information further contains: a second parameter, used to indicate information about the lane in which the first virtual terminal is located; and/or a third parameter, used to indicate whether the first parameter is valid.

In one possible design, the at least one piece of test data further contains one or more of the following:

subject identification information of the first gesture;

subject position information of the first gesture;

subject reference information of the first gesture;

subject driving information of the first gesture.

In one possible design, the at least one piece of test data further includes at least one of the following: traffic signal light information; or obstruction information.

In one possible design, the at least one piece of test data further contains perception data from a second virtual terminal, the perception data contains reference gesture information, and the reference gesture information is used to indicate the first gesture.

For the technical effects of the designs of the second aspect, reference may be made to the technical effects of the corresponding designs of the first aspect, which are not repeated here.
In one possible design, when a first event is detected, the virtual subject is controlled to perform the first gesture. For example, the first event may include one or more of the following: the distance between the virtual subject and the first virtual terminal being less than a preset distance, traffic congestion, or a traffic accident. It should be understood that these are merely examples rather than limitations.

This design drives the virtual subject's gestures through events, which closely matches how gestures are triggered in real scenarios and can improve test reliability.

In one possible design, the virtual subject is controlled, according to road-acquisition data, to perform the same gesture as the gesture in the road-acquisition data, where the road-acquisition data indicates a real subject performing the first gesture in a real road scenario.

This design drives the virtual subject to perform the same gesture as a real gesture, which closely matches how gestures vary in real scenarios and can improve test reliability.

In one possible design, the virtual subject is controlled to perform a set gesture at a set time, where the set gesture includes the first gesture.

This design drives the virtual subject's gestures through preset rules, which can reduce the complexity of the scheme.
According to a third aspect, an information interaction apparatus is provided, including modules/units/means for performing the method of the first aspect or any possible design of the first aspect.

Illustratively, the apparatus includes: a transceiver module, configured to receive at least one piece of test data, where the at least one piece of test data contains gesture information, the gesture information contains a first parameter, the first parameter is used to indicate a first gesture, and the first parameter belongs to a predefined gesture parameter set; and a processing module, configured to generate a first control signal according to the at least one piece of test data, where the first control signal is used to control a first virtual terminal; the transceiver module is further configured to output the first control signal.

According to a fourth aspect, an information interaction apparatus is provided, including modules/units/means for performing the method of the second aspect or any possible design of the second aspect.

Illustratively, the apparatus includes: a transceiver module, configured to output at least one piece of test data to a first apparatus, where the at least one piece of test data contains gesture information, the gesture information contains a first parameter, the first parameter is used to indicate a first gesture, and the first parameter belongs to a predefined gesture parameter set; the transceiver module is further configured to receive a first control signal from the first apparatus; and a processing module, configured to control a first virtual terminal according to the first control signal.

According to a fifth aspect, an information interaction apparatus is provided, including a processor and an interface circuit, where the interface circuit is configured to receive signals from apparatuses other than the apparatus and transmit them to the processor, or to send signals from the processor to communication apparatuses other than the apparatus, and the processor is configured to implement, through logic circuits or by executing code instructions, the method of the first aspect or any possible design of the first aspect.

According to a sixth aspect, an information interaction apparatus is provided, including a processor and an interface circuit, where the interface circuit is configured to receive signals from apparatuses other than the apparatus and transmit them to the processor, or to send signals from the processor to communication apparatuses other than the apparatus, and the processor is configured to implement, through logic circuits or by executing code instructions, the method of the second aspect or any possible design of the second aspect.

According to a seventh aspect, an information interaction system is provided, including the apparatus of the third aspect and the apparatus of the fourth aspect; or including the apparatus of the fifth aspect and the apparatus of the sixth aspect.

According to an eighth aspect, a computer-readable storage medium is provided, including a program or instructions which, when run on a computer, cause the method of the first aspect, any possible design of the first aspect, the second aspect, or any possible design of the second aspect to be performed.

According to a ninth aspect, a computer program product containing instructions is provided; when the instructions are run on a computer, the computer is caused to perform the method of the first aspect, any possible design of the first aspect, the second aspect, or any possible design of the second aspect.

According to a tenth aspect, a vehicle is provided, including the apparatus of the third aspect, the apparatus of the fifth aspect, or the computer-readable storage medium of the eighth aspect.
Brief Description of the Drawings

FIG. 1 is a schematic architecture diagram of an information interaction system to which embodiments of the present application are applicable;

FIG. 2 is a schematic diagram of a possible virtual traffic scenario according to an embodiment of the present application;

FIG. 3 is a flowchart of an information interaction method according to an embodiment of the present application;

FIG. 4 is a schematic diagram of possible test data according to an embodiment of the present application;

FIG. 5 is a schematic diagram of images of the virtual subject from the perspectives of different virtual terminals according to an embodiment of the present application;

FIG. 6 is a schematic structural diagram of an information interaction apparatus according to an embodiment of the present application;

FIG. 7 is a schematic structural diagram of another information interaction apparatus according to an embodiment of the present application.
Detailed Description

Embodiments of the present application are described in detail below with reference to the accompanying drawings.

Embodiments of the present application may be applied to any scenario in which the capabilities of a driving system can be tested.

Illustratively, referring to FIG. 1, a schematic architecture diagram of an information interaction system to which embodiments of the present application are applicable, the system includes a second apparatus and a first apparatus. The second apparatus is configured to generate at least one piece of test data and output the at least one piece of test data to the first apparatus. The first apparatus is (or hosts) the object under test; it is configured to receive the at least one piece of test data, process it to generate a test result (such as a first control signal), and output the test result to the second apparatus.

In one possible design, the first apparatus may be an automated driving system, or any apparatus integrating an automated driving system. Its specific form may be a chip (for example a controller, a processor, or an integrated circuit), a terminal (for example an in-vehicle terminal), and so on, which is not limited in this application.

Illustratively, automated driving systems in the implementation of the present application include, but are not limited to, the following systems:

1) an emergency assistance system: a system that cannot continuously perform lateral or longitudinal vehicle motion control in the dynamic driving task, but has the capability of continuously performing part of the object and event detection and response in the dynamic driving task;

2) a partial driver assistance system: a system that, within its designed operating conditions, continuously performs lateral or longitudinal vehicle motion control in the dynamic driving task, and has object and event detection and response capabilities matching the lateral or longitudinal motion control performed;

3) a combined driver assistance system: a system that, within its designed operating conditions, continuously performs both lateral and longitudinal vehicle motion control in the dynamic driving task, and has object and event detection and response capabilities matching the lateral and longitudinal motion control performed;

4) a conditionally automated driving system: a system that continuously performs the entire dynamic driving task within its designed operating conditions;

5) a highly automated driving system: a system that continuously performs the entire dynamic driving task within its designed operating conditions and automatically executes the minimal-risk strategy;

6) a fully automated driving system: a system that continuously performs the entire dynamic driving task under any drivable conditions and automatically executes the minimal-risk strategy.

It should be understood that the above are merely examples rather than specific limitations. The automated driving system in embodiments of the present application is not limited by country or level; any system with an assisted driving function or an automated driving function falls within the protection scope of the automated driving system of the present application.
In one possible design, the second apparatus may be a simulation system, or any apparatus integrating a simulation system (such as a chip or a terminal). The second apparatus is configured to generate test data related to a traffic scenario, and the test data related to the traffic scenario is used to test the driving system's ability to recognize the traffic scenario, its ability to respond to the traffic scenario, and so on.

In one possible design, referring to FIG. 1, the second apparatus includes at least the following three functional modules:

1) a scenario construction module, configured to build the virtual traffic scenario, for example to build environmental components (such as roads and buildings), traffic flow (such as vehicles and pedestrians), signal lights (such as traffic lights and turn lights), the virtual subject (such as a traffic-police model), and so on. The traffic flow contains at least one virtual terminal, such as the first virtual terminal. FIG. 2 is a schematic diagram of a possible virtual traffic scenario according to an embodiment of the present application.

2) a data parsing module, configured to obtain the scenario data generated by the scenario construction module and parse the scenario data to obtain test data meeting a preset standard.

It can be understood that test data meeting the preset standard can be recognized and processed by the first apparatus.

It can be understood that this application does not specifically limit the preset standard. For example, the preset standard may be the Open Simulation Interface (OSI) standard formulated by the Association for Standardisation of Automation and Measuring Systems (ASAM). The OSI standard defines a generic interface for connecting the development of automated driving functions with various driving-simulation frameworks to achieve compatibility; it allows any automated driving function to be connected to any simulation tool while integrating various sensor models.
3) a vehicle dynamics model, configured to control the first virtual terminal in the virtual traffic scenario according to the control signal sent by the first apparatus. For example, the vehicle dynamics model is configured to obtain, according to the first control signal sent by the first apparatus and the current pose information of the first virtual terminal, the first virtual terminal's dynamic response to the virtual traffic scenario, and then update the pose information of the first virtual terminal accordingly, achieving the effect of the automated driving system controlling the automated driving of the first virtual terminal.

In one possible design, referring to FIG. 1, the first apparatus includes at least a processing module. By computing on the at least one piece of test data, the processing module outputs the first control signal used to control the first virtual terminal, thereby simulating the effect of the automated driving system recognizing events occurring in the scenario and controlling the vehicle to respond to those events. For example, the processing module may run instructions or data corresponding to an automated driving algorithm, so as to compute on the at least one piece of test data and output the first control signal.

It should be understood that the first apparatus and the second apparatus may be deployed in the same physical entity or in different physical entities, which is not limited in this application.

It should be understood that the system architecture shown in FIG. 1 is merely an example rather than a specific limitation, and other variations are possible in practical applications.
Referring to FIG. 3, a flowchart of an information interaction method according to an embodiment of the present application, taking the method applied to the system shown in FIG. 1 as an example, the method includes:

S101. The second apparatus generates at least one piece of test data.

First, the second apparatus builds the virtual traffic scenario. Illustratively, the scenario construction module in the second apparatus may call data in a scenario database to build the virtual traffic scenario. The scenario database can provide a large number of representative scenarios and meet test requirements. The data in the scenario database may include static scenario data and dynamic scenario data. Static scenario data describes static components of the traffic scenario, for example environmental components (such as roads). Dynamic scenario data describes dynamic components of the traffic scenario, for example moving traffic flow (such as vehicles and pedestrians), changeable signal lights (such as traffic lights and turn signals), a virtual subject capable of making gestures, and so on. The scenario database may be stored in the second apparatus or in another apparatus outside the second apparatus, which is not limited in this application.

The constructed virtual traffic scenario may be a three-dimensional virtual traffic scenario model, which includes three-dimensional models of the static components and three-dimensional models of the dynamic components. FIG. 2 is a schematic diagram of such a virtual traffic scenario, including roads, vehicles, pedestrians, traffic police, traffic lights, and so on.

It can be understood that in embodiments of the present application the virtual traffic scenario contains at least the first virtual terminal and the virtual subject. The first virtual terminal is a virtual model of a terminal carrying the object under test (such as the automated driving system), for example a virtual model of an autonomous vehicle. The virtual subject can imitate the gestures of a real person; for example, the virtual subject may be a virtual traffic-police model capable of making traffic-police gestures, or a virtual traffic-warden model capable of making traffic-warden gestures, and so on. FIG. 2 takes the virtual subject being a virtual traffic-police model and the first virtual terminal being a vehicle as an example.
After the virtual traffic scenario is built, the scenario construction module can control (or drive) the virtual subject to perform the first gesture.

In one possible design, the scenario construction module may control (or drive) the virtual subject's gestures through events, where different events may correspond to the same or different gestures. Events include, but are not limited to, the distance between the virtual subject and the first virtual terminal being less than a preset distance, traffic congestion, a traffic accident, and so on. For example, when the distance between the virtual subject and the first virtual terminal is less than the preset distance, the virtual subject is controlled to perform a slow-down gesture; when a traffic accident occurs in the virtual traffic scenario, the virtual subject is controlled to perform a pull-over gesture; when traffic congestion occurs in the virtual traffic scenario, the scenario construction module controls the virtual subject to perform a lane-change gesture. Correspondingly, when or after the first event occurs, the scenario construction module controls the virtual subject to perform the first gesture. Driving the virtual subject's gestures through events closely matches how gestures are triggered in real scenarios and can improve test reliability.

In one possible design, the scenario construction module may control (or drive) the virtual subject, according to road-acquisition data, to perform the same gestures as the gestures in the road-acquisition data, where the road-acquisition data indicates gestures performed by a real subject in a real road scenario. For example, the road-acquisition data may be a video lasting three minutes, in which the first minute shows a real traffic-police officer performing a stop gesture, the second minute a left-turn gesture, and the third minute a right-turn gesture; correspondingly, the scenario construction module can control the virtual subject to perform the stop gesture, left-turn gesture, and right-turn gesture in sequence over three minutes. Likewise, the scenario construction module controls the virtual subject to perform the first gesture according to the first gesture performed by the real traffic-police officer in the road-acquisition data. Driving the virtual subject to perform the same gestures as real gestures closely matches how gestures vary in real scenarios and can improve test reliability.

In one possible design, the scenario construction module may, according to preset rules, control (or drive) the virtual subject to perform set gestures at set times. For example, one test lasts ten minutes, and the preset rule is to perform in sequence the ten gestures shown in Table 1, each for one minute. Correspondingly, the set gestures include the first gesture. Driving the virtual subject's gestures through preset rules can reduce the complexity of the scheme.
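The preset-rule driver in this design (a fixed schedule of gestures, one per time slot) can be sketched as follows; the slot length, schedule contents, and function name are illustrative.

```python
def gesture_at(t_seconds, schedule, slot_seconds=60):
    """Preset-rule driver: each gesture in the schedule is held for one
    fixed-length slot; outside the schedule the subject shows an
    unknown (no) gesture."""
    index = int(t_seconds // slot_seconds)
    if 0 <= index < len(schedule):
        return schedule[index]
    return "gesture_unknown"

# Illustrative three-minute schedule, one gesture per minute.
SCHEDULE = ["gesture_stop", "gesture_left", "gesture_right"]
```

At 65 seconds into the test, for instance, the subject is in the second slot and performs the left-turn gesture.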
While the virtual subject performs gestures, the data parsing module can obtain the scenario data of the virtual traffic scenario. It can be understood that the scenario data obtained by the data parsing module contains at least data related to the virtual subject.

In one possible design, one or more sensor models are integrated in the data parsing module, for example including but not limited to an image-sensor model, a radar model, and so on. The data parsing module can obtain, based on the sensor models, the first virtual terminal's perception data of the virtual subject, for example images and point clouds corresponding to the virtual subject. In this way, the scenario data obtained by the data parsing module is closer to the perception data of a real vehicle in real life.

In another possible design, the data parsing module may also directly obtain the data in the scenario database used by the scenario construction module to build the virtual traffic scenario, and use that data as the first virtual terminal's perception data of the virtual subject. In this way, the scenario data describes the virtual traffic scenario more accurately.

After obtaining the scenario data of the virtual traffic scenario, if the scenario data is already in the preset format, the data parsing module may directly use it as the at least one piece of test data; if it is not in the preset format, the data parsing module parses the scenario data and converts it into at least one piece of test data in the preset format. Optionally, the preset format is a data format complying with the OSI standard. In this way, a standardized connection between the second apparatus and the automated driving system in the first apparatus can be achieved.

It can be understood that in embodiments of the present application, the at least one piece of test data contains at least gesture information, the gesture information contains a first parameter, the first parameter is used to indicate a first gesture, and the first parameter belongs to a predefined gesture parameter set. The at least one piece of test data is used to test the ability of the automated driving system in the first apparatus to recognize and respond to gestures.

In one possible design, the virtual subject in the virtual traffic scenario is a virtual traffic-police model, and the gesture parameter set contains multiple parameters, each of which indicates a corresponding traffic-police gesture, with different parameters indicating different traffic-police gestures.

Illustratively, the multiple parameters are used to indicate any number of the following gestures: a stop gesture, a go-straight gesture, a left-turn gesture, a left-turn-waiting gesture, a right-turn gesture, a right-turn-waiting gesture, a lane-change gesture, a slow-down gesture, a pull-over gesture, or an unknown gesture.

It should be understood that traffic-police gestures may be defined differently in different countries or regions, so in practical applications the parameters contained in the gesture parameter set can be designed flexibly as needed.

In one possible design, taking the preset standard being the OSI standard as an example, the first parameter may be a Red Green Blue (RGB) value, and the RGB value is used to represent the image corresponding to the first gesture. For example, the data parsing module obtains, based on the sensor models, a two-dimensional or three-dimensional image of the virtual subject in the virtual traffic scenario from the perspective of the first virtual terminal, simulating the effect of a real-world vehicle capturing an image of traffic police with an image sensor. Since the OSI standard supports data in image format, the data parsing module can send the obtained image data (RGB values) directly to the first apparatus as test data.

In another possible design, still taking the preset standard being the OSI standard as an example, the first parameter may be a truth value, and the truth value is used to indicate the first gesture. Illustratively, Table 1 shows an example of the gesture parameter set, in which all of the parameters are truth values.
Table 1. Gesture parameter set

Parameter (truth value)   Gesture type
0                         gesture_unknown (unknown gesture)
1                         gesture_stop (stop gesture)
2                         gesture_straight_ahead (go-straight gesture)
3                         gesture_left (left-turn gesture)
4                         gesture_left_wait (left-turn-waiting gesture)
5                         gesture_right (right-turn gesture)
6                         gesture_right_wait (right-turn-waiting gesture)
7                         gesture_lane_change (lane-change gesture)
8                         gesture_slow_down (slow-down gesture)
9                         gesture_pull_over (pull-over gesture)
It should be understood that Table 1 above is merely an example rather than a specific limitation; in practical applications, the truth-value types and the gesture type corresponding to each truth value can be set flexibly as needed.

Of course, besides RGB values and truth values, the first parameter may also be in other data formats, which is determined by the data formats supported by the first apparatus.
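Under the assumption that the truth values follow Table 1, the parameter set could be written as a small enumeration; the enum member names simply mirror the table and are not mandated by the OSI standard.

```python
from enum import IntEnum

class Gesture(IntEnum):
    """Truth values mirroring Table 1's gesture parameter set."""
    UNKNOWN = 0
    STOP = 1
    STRAIGHT_AHEAD = 2
    LEFT = 3
    LEFT_WAIT = 4
    RIGHT = 5
    RIGHT_WAIT = 6
    LANE_CHANGE = 7
    SLOW_DOWN = 8
    PULL_OVER = 9
```

Using an `IntEnum` keeps the wire value (the raw truth value) and a readable name side by side, e.g. `Gesture(3)` is the left-turn gesture.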
S102,第二装置向第一装置输出至少一个测试数据,相应的,第一装置接收至少一个测试数据;
具体的,数据解析模块将至少一个测试数据发送至第一装置。
S103、第一装置根据至少一个测试数据生成第一控制信号;
具体的,第一装置中的处理模块根据至少一个测试数据识别虚拟交通场景中发生的事件,然后根据事件生成第一控制信号,其中第一控制信号用于控制第一虚拟终端,进而使得第一终端对事件做出响应行为。
一种可能的实施方式中,第一装置中的处理模块根据至少一个测试数据中所包含的手势信息(如第一参数)识别虚拟交通场景中虚拟主体所执行的手势(如第一手势),然后根据第一手势生成第一控制信号。
在具体实现时,处理模块可以集成以下一个或多个功能:感知(例如根据测试数据感知虚拟交通场景中已发生的事件,如第一手势)、预测(例如根据感知结果预测虚拟交通场景中未来要发生的事件)、规划(例如根据感知结果和/或预测结果规划第一虚拟终端的行驶轨迹)、控制(例如生成用于控制第一虚拟终端的行驶轨迹的控制信号),等等。
示例性的,参见图2,虚拟主体在十字路口执行的第一手势为左转弯手势,则第一装置中的处理模块生成第一控制信号用于控制第一虚拟终端左转弯行驶。
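"根据第一手势生成第一控制信号"这一步,可以用如下映射示意(仅为在假设条件下的草图,并非实际自动驾驶系统的实现;真值与表1对应,控制信号的字符串名称为假设):

```python
def generate_control_signal(gesture_param):
    """根据测试数据中的手势参数(真值)生成第一控制信号的示意。"""
    mapping = {
        1: "brake_stop",    # 停止手势 → 控制第一虚拟终端停车
        2: "go_straight",   # 直行手势 → 控制第一虚拟终端直行
        3: "turn_left",     # 左转弯手势 → 控制第一虚拟终端左转弯行驶
        5: "turn_right",    # 右转弯手势 → 控制第一虚拟终端右转弯行驶
    }
    # 未能识别的手势(如未知手势)保持当前行驶状态
    return mapping.get(gesture_param, "hold")
```

例如,虚拟主体在十字路口执行左转弯手势(真值3)时,生成的第一控制信号用于控制第一虚拟终端左转弯行驶。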
S104、第一装置向第二装置输出第一控制信号,相应的,第二装置接收第一控制信号;
S105、第二装置根据第一控制信号控制第一虚拟终端。
示例性的,以虚拟交通场景中发生的事件为左转弯手势以及第一控制信号用于控制第一虚拟终端左转弯为例,车辆动力学模型根据第一装置发来的第一控制信号(左转弯信号)和第一虚拟终端当前的位姿信息(例如第一虚拟终端所处的车道,第一虚拟终端与路口距离等),获得第一虚拟终端针对虚拟交通场景的动力学响应量(如实现第一虚拟终端左转弯所需要的行驶速度、方向等),进而根据动力学响应量更新第一虚拟终端的位姿信息(如控制第一虚拟终端左转弯行驶)。
可以理解的是,第二装置在控制第一虚拟终端之后,可以通过检测第一虚拟终端的状态来确定自动驾驶系统识别和响应第一手势的能力。例如,上述车辆动力学模型更新第一虚拟终端的位姿信息之后,第二装置根据第一虚拟终端更新后的位姿信息,确定自动驾驶系统是否准确识别和响应第一手势,进而确定第一装置中的自动驾驶系统识别和响应手势的能力。
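车辆动力学模型"根据控制信号和当前位姿获得动力学响应量、进而更新位姿"的过程,可用如下自行车模型式的极简草图示意(仅为假设条件下的示意,速度、横摆角速度等参数均为假设值,并非实际车辆动力学模型):

```python
import math

def update_pose(pose, control_signal, dt=0.1):
    """根据第一控制信号更新第一虚拟终端位姿 (x, y, 航向角/弧度) 的极简示意。"""
    x, y, heading = pose
    speed = 5.0  # 假设的行驶速度 (m/s)
    # 控制信号对应的动力学响应量:左/右转弯对应非零横摆角速度 (rad/s)
    yaw_rate = {"turn_left": 0.5, "turn_right": -0.5}.get(control_signal, 0.0)
    heading += yaw_rate * dt
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    return (x, y, heading)
```

第二装置检测更新后的位姿(例如航向角是否向左偏转)即可判断自动驾驶系统是否准确响应了左转弯手势。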
可以理解的是,以上是以第一手势为例,在实际测试过程中,可以不断变化手势以对自动驾驶系统进行多次测试。在测试过程中,还可以根据测试结果不断调整自动驾驶系统的算法,以提高自动驾驶系统识别和响应手势的能力。
根据以上S101~S105,可以实现第二装置生成测试数据,以及基于测试数据对自动驾驶系统识别和响应交警手势的能力进行闭环测试,进而提高车辆自动驾驶的安全性和智能性。
可以理解的是,在实际生活中,交警执行的手势可以具有针对性(或者说方向性)。例如,一般情况下,交警执行的手势都是用于指挥面向交警方位上的车辆行驶,换而言之,交警执行的手势对于面向交警方位上的车辆为有效手势,而对于背向交警方位上的车辆为无效手势。因此,在本申请实施例中,数据传输模块可以在第一手势对第一虚拟终端有效时(例如虚拟主体与第一虚拟终端面对面),向第一装置发送至少一个测试数据;和/或,第一装置中的处理模块确定第一手势对第一虚拟终端有效(例如虚拟主体与第一虚拟终端面对面)时,向第二装置输出第一控制信号。当然,第一手势对第一虚拟终端有效的判断条件除了虚拟主体与第一虚拟终端的相对位置之外,还可以包括其它判断条件,本申请不做限制。
一种可能的设计中,上述手势信息还可以包含第二参数,第二参数用于指示第一虚拟终端所在车道的信息。第一虚拟终端所在车道的信息例如包括第一虚拟终端所在车道的标识(assigned_lane_id)。换而言之,第一虚拟终端所在车道的信息可以作为手势信息的一部分,提供给自动驾驶系统,以便自动驾驶系统可以参考第一虚拟终端所在的车道的信息识别和响应第一手势,可以提高方案的可靠性。
一种可能的设计中,上述手势信息还可以包含:第三参数,第三参数用于指示第一参数是否有效(is_out_of_service)。相应的,第三参数指示第一参数有效时,第一装置可以根据第一手势生成第一控制信号。
可选的,若第三参数指示第一参数无效,且至少一个测试数据还包括手势信息之外的其它信息,则第一装置可以根据其它信息输出控制信号。
如此,可以设定手势信息是否有效,根据需要灵活决定是否对自动驾驶系统识别和响应手势的能力进行测试,可以节省第一装置的计算资源。
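上述"第三参数指示第一参数是否有效"的判断流程,可用如下草图示意(仅为假设条件下的示意,参数名借用了文中OSI风格的字段名 `is_out_of_service`,函数名与返回值结构均为假设):

```python
def handle_test_data(gesture_param, is_out_of_service, other_info=None):
    """根据第三参数(第一参数是否有效)决定控制信号依据的示意。"""
    if not is_out_of_service:
        # 第三参数指示第一参数有效:根据第一手势生成第一控制信号
        return ("gesture", gesture_param)
    if other_info is not None:
        # 第一参数无效且测试数据还包含其它信息:根据其它信息输出控制信号
        return ("other", other_info)
    # 无可用信息:保持当前状态,节省计算资源
    return ("hold", None)
```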
一种可能的设计中,上述至少一个测试数据还包含以下信息中的一个或多个:
1)、第一手势的主体标识信息,即执行第一手势的主体的标识信息,例如是图2所示的虚拟主体的标识信息;
2)、第一手势的主体位置信息(police_base),即执行第一手势的主体的位置信息,例如是图2所示的虚拟主体的位置信息,其中,位置信息可以是经纬度或坐标等,本申请不做限制;
3)、第一手势的主体参考信息(model reference),即执行第一手势的主体的来源信息,例如是图2所示的虚拟主体的来源信息。可选的,主体参考信息可以是一个链接,用于指示存储虚拟主体的三维模型数据的文件地址;
4)、第一手势的主体驱动信息(source reference),即用于驱动虚拟主体执行第一手势的信息。例如,第一手势由第一事件驱动时,则第一手势的主体驱动信息可以为第一事件;例如,第一手势由路采数据驱动时,则第一手势的主体驱动信息可以为路采数据;例如,第一手势由预设规则驱动时,则第一手势的主体驱动信息可以为预设规则,等等。
如此,第二装置可以提供第一手势对应的执行主体相关的信息给第一装置,使得第一装置可以识别出更多与第一手势相关的信息,可以进一步提高测试方案的可靠性。
考虑实际生活中,除了手势信息之外,车辆还可能同时感知到其它用于指示交通规则的信息,如交通信号灯信息、或路标信息等。鉴于此,一种可能的设计中,至少一个测试数据还可以包括交通信号灯信息、或路标信息等。该些信息可以与手势信息匹配,也可以与手势信息冲突,本申请不做限制。
若交通信号灯信息或路标信息等与手势信息匹配(例如手势信息指示左转弯手势,交通信号灯指示左转弯可通行或者第一虚拟终端所在的道路上有左转弯标识等)且第三参数指示第一参数有效,则第一装置可以根据以上任一信息输出第一控制信号。
若交通信号灯信息或路标信息等与第一手势冲突(例如第一手势为左转弯手势,交通信号灯指示直行可通行或者第一虚拟终端所在的道路上只有直行标识等)且第三参数指示第一参数有效,则第一装置可以参考以上各信息的优先级信息输出第一控制信号。
示例性的,至少一个测试数据包含手势信息和交通信号灯信息,则第一装置根据交通信号灯信息和手势信息中优先级较高的信息,输出第一控制信号。
相应的,在手势信息的优先级高于交通信号灯信息的优先级时,第一装置根据第一手势生成第一控制信号。例如,交通信号灯的优先级低于手势信息的优先级,交通信号灯信息指示直行,手势信息指示左转弯,则第一装置输出的第一控制信号用于控制第一虚拟终端左转弯。
相反的,若手势信息的优先级低于交通信号灯信息的优先级,则第一装置根据交通信号灯信息生成第一控制信号。例如,交通信号灯的优先级高于手势信息的优先级,交通信号灯信息指示直行,手势信息指示左转弯,则第一装置输出的第一控制信号用于控制第一虚拟终端直行。
可选的,至少一个测试数据中各类信息的优先级可以根据当地的交通法规确定。
如此,第二装置可以支持手势信息与其它信息的冲突配置,可以测试自动驾驶系统对冲突信息的处理能力,可提高自动驾驶系统的鲁棒性。
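上述按优先级取舍冲突信息的逻辑,可用如下草图示意(仅为假设条件下的示意,优先级顺序可按当地交通法规配置;函数名与信号名均为假设):

```python
def resolve_conflict(gesture_cmd, light_cmd, priority=("gesture", "light")):
    """手势信息与交通信号灯信息冲突时,按优先级较高的信息输出控制信号的示意。"""
    sources = {"gesture": gesture_cmd, "light": light_cmd}
    for src in priority:  # 依优先级从高到低遍历
        if sources[src] is not None:
            return sources[src]
    return None  # 两类信息均缺失
```

例如,手势信息指示左转弯、交通信号灯指示直行时,若手势信息优先级较高,则输出左转弯信号;若交换优先级,则输出直行信号。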
考虑实际生活中,车辆周围还存在其它交通参与者(例如其它车辆、行人、或者建筑物等等),会影响车辆对交警手势的感知。一种可能的设计中,至少一个测试数据还可以包括干扰信息。
示例性的,干扰信息包括遮挡物信息。其中,该遮挡物信息为遮挡第一虚拟终端感知虚拟主体的视线的物体(即遮挡物)的信息,包括但不限于遮挡物的形状、颜色、轮廓或空间位置等。应理解,在遮挡场景下,手势信息可以存在部分缺失。例如,以测试数据为图像为例,则参见图4,第一装置收到的图像中,虚拟主体的一部分被其它车辆遮挡,导致虚拟主体的手势信息不完整。
当然,干扰信息不限于是遮挡物的信息,还可以是其它信息。例如,测试数据为图像,则第二装置还可以通过降低图像的亮度、分辨率等参数来模拟实际生活中图像传感器的硬件性能带来的干扰。
如此,第二装置可以支持干扰信息的配置,可以测试自动驾驶系统对干扰信息的处理能力,可提高自动驾驶系统的鲁棒性。
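文中提到的"通过降低图像的亮度等参数模拟图像传感器硬件性能带来的干扰",可用如下草图示意(仅为假设条件下的示意,图像以 0~255 的嵌套列表表示,函数名为假设):

```python
def add_interference(image, brightness=1.0):
    """通过缩放像素值降低图像亮度,模拟传感器干扰的示意。

    image: 二维嵌套列表,像素取值 0~255;brightness: 亮度系数 (0~1 为降低亮度)。
    """
    return [[min(255, int(px * brightness)) for px in row] for row in image]
```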
一种可能的设计中,至少一个测试数据还包含来自第二虚拟终端的感知数据,感知数据包含参考手势信息,参考手势信息用于指示第一手势。其中,第二虚拟终端与虚拟主体的相对位置,不同于第一虚拟终端与虚拟主体的相对位置。
示例性的,以测试数据和感知数据均为图像为例,参见图5,第一虚拟终端视角下的虚拟主体图像与第二虚拟终端视角下的虚拟主体图像不同:第一虚拟终端的视线被第二虚拟终端遮挡,所以只能获得虚拟主体的部分图像;而第二虚拟终端的视线未被任何物体遮挡,所以能够获得虚拟主体的完整图像。
相应的,第一装置可以同时参考第一虚拟终端视角下的虚拟主体图像与第二虚拟终端视角下的虚拟主体图像识别虚拟主体执行的手势。
当然,至少一个测试数据除了包含来自第二虚拟终端的感知数据之外,还可以包含来自其他设备的数据,例如道路旁的摄像机采集的图像数据,等等,本申请对此不做限制。
如此,第二装置可以提供不同视角下的虚拟主体的手势信息给第一装置,可以测试自动驾驶系统基于车联网(Vehicle to Everything,V2X)通信功能识别手势的能力。
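基于不同视角的感知数据识别手势的融合逻辑,可用如下草图示意(仅为假设条件下的极简示意:自车因遮挡未识别出手势时,采用来自第二虚拟终端的参考手势信息;实际V2X融合策略会复杂得多):

```python
def fuse_gesture_info(ego_gesture, v2x_gesture):
    """融合自车感知与V2X参考手势信息的示意。

    ego_gesture: 第一虚拟终端自身识别出的手势真值,遮挡导致无法识别时为 None;
    v2x_gesture: 来自第二虚拟终端感知数据的参考手势信息。
    """
    return ego_gesture if ego_gesture is not None else v2x_gesture
```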
可以理解的是,以上各实施方式可以分别单独实施,也可以相互结合实施,本申请不做限制。
以上结合附图介绍了本申请实施例提供的方法,以下结合附图介绍本申请实施例提供的装置。
基于同一技术构思,本申请实施例提供一种信息交互装置,该装置包括用于执行上述方法实施例中第二装置和/或第一装置所执行的方法的模块/单元/手段。该模块/单元/手段可以通过软件实现,或者通过硬件实现,也可以通过硬件执行相应的软件实现。
示例性的,参见图6,该装置可以包括收发模块201和处理模块202,该装置用于实现上述第二装置或第一装置的功能。
例如,当该装置用于第一装置时,收发模块201,用于接收至少一个测试数据,至少一个测试数据包含手势信息,手势信息包含第一参数,第一参数用于指示第一手势,第一参数属于预先定义的手势参数集合;处理模块202,用于根据至少一个测试数据生成第一控制信号,第一控制信号用于控制第一虚拟终端;收发模块201,还用于输出第一控制信号。
可选的,手势参数集合包含多个参数,多个参数用于指示以下手势中的任意多个:停止手势、直行手势、左转弯手势、左转弯待转手势、右转弯手势、右转弯待转手势、变道手势、减速慢行手势、靠边停车手势、或未知手势。
可选的,第一参数为RGB值,RGB值用于表征第一手势对应的图像;或者,第一参数为真值,真值用于指示第一手势。
可选的,手势信息还包含:第二参数,第二参数用于指示第一虚拟终端所在车道的信息;和/或,第三参数,第三参数用于指示第一参数是否有效。
可选的,至少一个测试数据还包含以下信息中的一个或多个:
第一手势的主体标识信息;
第一手势的主体位置信息;
第一手势的主体参考信息;
第一手势的主体驱动信息。
可选的,至少一个测试数据还包括以下信息中的至少一个:
交通信号灯信息;或者,
遮挡物信息。
可选的,至少一个测试数据还包含来自第二虚拟终端的感知数据,感知数据包含参考手势信息,参考手势信息用于指示第一手势。
可选的,至少一个测试数据包含手势信息和交通信号灯信息;处理模块202用于:根据交通信号灯信息和手势信息中优先级较高的信息,生成第一控制信号。
例如,当该装置用于第二装置时,收发模块201,用于向第一装置输出至少一个测试数据;其中,至少一个测试数据包含手势信息,手势信息包含第一参数,第一参数用于指示第一手势,第一参数属于预先定义的手势参数集合;收发模块201,还用于接收来自第一装置的第一控制信号;处理模块202,用于根据第一控制信号控制第一虚拟终端。
可选的,手势参数集合包含多个参数,多个参数用于指示以下手势中的任意多个:
停止手势、直行手势、左转弯手势、左转弯待转手势、右转弯手势、右转弯待转手势、变道手势、减速慢行手势、靠边停车手势、或未知手势。
可选的,第一参数为RGB值,RGB值用于表征第一手势对应的图像;或者,第一参数为真值,真值用于指示第一手势。
可选的,手势信息还包含:第二参数,第二参数用于指示第一虚拟终端所在车道的信息;和/或,第三参数,第三参数用于指示第一参数是否有效。
可选的,至少一个测试数据还包含以下信息中的一个或多个:
第一手势的主体标识信息;
第一手势的主体位置信息;
第一手势的主体参考信息;
第一手势的主体驱动信息。
可选的,至少一个测试数据还包括以下信息中的至少一个:
交通信号灯信息;或者,
遮挡物信息。
可选的,至少一个测试数据还包含来自第二虚拟终端的感知数据,感知数据包含参考手势信息,参考手势信息用于指示第一手势。
可选的,处理模块202还用于:在检测到第一事件时,控制虚拟主体执行第一手势;或者,根据路采数据控制虚拟主体执行与路采数据中的手势相同的手势,其中路采数据指示了真实主体在真实道路场景中执行第一手势;或者,控制虚拟主体在设定的时间执行设定的手势,其中设定的手势包括第一手势。
可选的,第一事件包含以下一项或多项:虚拟主体与第一虚拟终端的距离小于预设距离、交通拥堵、或交通事故。
应理解,上述方法实施例涉及的各步骤的所有相关内容均可以援引到对应功能模块的功能描述,在此不再赘述。
在具体实施时,上述装置可以有多种产品形态,以下介绍几种可能的产品形态。
参见图7,本申请实施例还提供一种信息交互装置,该装置包括至少一个处理器301和接口电路302;接口电路302用于接收来自该装置之外的其它装置的信号并传输至处理器301或将来自处理器301的信号发送给该装置之外的其它通信装置,处理器301通过逻辑电路或执行代码指令用于实现上述第二装置或第一装置所执行的方法。
应理解,本申请实施例中提及的处理器可以通过硬件实现也可以通过软件实现。当通过硬件实现时,该处理器可以是逻辑电路、集成电路等。当通过软件实现时,该处理器可以是一个通用处理器,通过读取存储器中存储的软件代码来实现。
示例性的,处理器可以是中央处理单元(Central Processing Unit,CPU),还可以是其他通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现场可编程门阵列(Field Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。
应理解,本申请实施例中提及的存储器可以是易失性存储器或非易失性存储器,或可包括易失性和非易失性存储器两者。其中,非易失性存储器可以是只读存储器(Read-Only Memory,ROM)、可编程只读存储器(Programmable ROM,PROM)、可擦除可编程只读存储器(Erasable PROM,EPROM)、电可擦除可编程只读存储器(Electrically EPROM,EEPROM)或闪存。易失性存储器可以是随机存取存储器(Random Access Memory,RAM),其用作外部高速缓存。通过示例性但不是限制性说明,许多形式的RAM可用,例如静态随机存取存储器(Static RAM,SRAM)、动态随机存取存储器(Dynamic RAM,DRAM)、同步动态随机存取存储器(Synchronous DRAM,SDRAM)、双倍数据速率同步动态随机存取存储器(Double Data Rate SDRAM,DDR SDRAM)、增强型同步动态随机存取存储器(Enhanced SDRAM,ESDRAM)、同步连接动态随机存取存储器(Synchlink DRAM,SLDRAM)和直接内存总线随机存取存储器(Direct Rambus RAM,DR RAM)。
需要说明的是,当处理器为通用处理器、DSP、ASIC、FPGA或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件时,存储器(存储模块)可以集成在处理器中。
应注意,本文描述的存储器旨在包括但不限于这些和任意其它适合类型的存储器。
基于相同技术构思,本申请实施例还提供一种计算机可读存储介质,包括程序或指令,当所述程序或指令在计算机上运行时,使得如上述第二装置或第一装置所执行的方法被执行。
基于相同技术构思,本申请实施例还提供一种包含指令的计算机程序产品,该计算机程序产品中存储有指令,当其在计算机上运行时,使得上述第二装置或第一装置所执行的方法被执行。
基于相同技术构思,本申请实施例还提供一种车辆,包括图6或图7所示的装置。
本领域内的技术人员应明白,本申请的实施例可提供为方法、系统、或计算机程序产品。因此,本申请可采用完全硬件实施例、完全软件实施例、或结合软件和硬件方面的实施例的形式。而且,本申请可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。
本申请是参照根据本申请的方法、设备(系统)、和计算机程序产品的流程图和/或方框图来描述的。应理解可由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其他可编程数据处理设备的处理器以产生一个机器,使得通过计算机或其他可编程数据处理设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。
这些计算机程序指令也可存储在能引导计算机或其他可编程数据处理设备以特定方式工作的计算机可读存储器中,使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能。
这些计算机程序指令也可装载到计算机或其他可编程数据处理设备上,使得在计算机或其他可编程设备上执行一系列操作步骤以产生计算机实现的处理,从而在计算机或其他可编程设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。
显然,本领域的技术人员可以对本申请进行各种改动和变型而不脱离本申请的保护范围。这样,倘若本申请的这些修改和变型属于本申请权利要求及其等同技术的范围之内,则本申请也意图包含这些改动和变型在内。

Claims (24)

  1. 一种信息交互方法,其特征在于,所述方法包括:
    接收至少一个测试数据,所述至少一个测试数据包含手势信息,所述手势信息包含第一参数,所述第一参数用于指示第一手势,所述第一参数属于预先定义的手势参数集合;
    根据所述至少一个测试数据,输出第一控制信号,所述第一控制信号用于控制第一虚拟终端。
  2. 如权利要求1所述的方法,其特征在于,所述手势参数集合包含多个参数,所述多个参数用于指示以下手势中的任意多个:
    停止手势、直行手势、左转弯手势、左转弯待转手势、右转弯手势、右转弯待转手势、变道手势、减速慢行手势、靠边停车手势、或未知手势。
  3. 如权利要求1或2所述的方法,其特征在于,所述第一参数为RGB值,所述RGB值用于表征所述第一手势对应的图像;
    或者,所述第一参数为真值,所述真值用于指示所述第一手势。
  4. 如权利要求1-3任一项所述的方法,其特征在于,所述手势信息还包含:
    第二参数,所述第二参数用于指示所述第一虚拟终端所在车道的信息;和/或,
    第三参数,所述第三参数用于指示所述第一参数是否有效。
  5. 如权利要求1-4任一项所述的方法,其特征在于,所述至少一个测试数据还包含以下信息中的一个或多个:
    所述第一手势的主体标识信息;
    所述第一手势的主体位置信息;
    所述第一手势的主体参考信息;
    所述第一手势的主体驱动信息。
  6. 如权利要求1-5任一项所述的方法,其特征在于,至少一个测试数据还包括以下信息中的至少一个:
    交通信号灯信息;或者,
    遮挡物信息。
  7. 如权利要求1-6任一项所述的方法,其特征在于,所述至少一个测试数据还包含来自第二虚拟终端的感知数据,所述感知数据包含参考手势信息,所述参考手势信息用于指示所述第一手势。
  8. 如权利要求6所述的方法,其特征在于,所述至少一个测试数据包含所述手势信息和所述交通信号灯信息;
    所述根据所述至少一个测试数据,输出第一控制信号,包括:
    根据所述交通信号灯信息和所述手势信息中优先级较高的信息,输出所述第一控制信号。
  9. 一种信息交互方法,其特征在于,所述方法包括:
    向第一装置输出至少一个测试数据;其中,所述至少一个测试数据包含手势信息,所述手势信息包含第一参数,所述第一参数用于指示第一手势,所述第一参数属于预先定义的手势参数集合;
    接收来自所述第一装置的第一控制信号;根据所述第一控制信号控制第一虚拟终端。
  10. 如权利要求9所述的方法,其特征在于,所述手势参数集合包含多个参数,所述多个参数用于指示以下手势中的任意多个:
    停止手势、直行手势、左转弯手势、左转弯待转手势、右转弯手势、右转弯待转手势、变道手势、减速慢行手势、靠边停车手势、或未知手势。
  11. 如权利要求9或10所述的方法,其特征在于,所述第一参数为RGB值,所述RGB值用于表征所述第一手势对应的图像;
    或者,所述第一参数为真值,所述真值用于指示所述第一手势。
  12. 如权利要求9-11任一项所述的方法,其特征在于,所述手势信息还包含:
    第二参数,所述第二参数用于指示所述第一虚拟终端所在车道的信息;和/或,
    第三参数,所述第三参数用于指示所述第一参数是否有效。
  13. 如权利要求9-12任一项所述的方法,其特征在于,所述至少一个测试数据还包含以下信息中的一个或多个:
    所述第一手势的主体标识信息;
    所述第一手势的主体位置信息;
    所述第一手势的主体参考信息;
    所述第一手势的主体驱动信息。
  14. 如权利要求9-13任一项所述的方法,其特征在于,至少一个测试数据还包括以下信息中的至少一个:
    交通信号灯信息;或者,
    遮挡物信息。
  15. 如权利要求9-14任一项所述的方法,其特征在于,所述至少一个测试数据还包含来自第二虚拟终端的感知数据,所述感知数据包含参考手势信息,所述参考手势信息用于指示所述第一手势。
  16. 如权利要求9-15任一项所述的方法,其特征在于,所述方法包括:
    在检测到第一事件时,控制虚拟主体执行所述第一手势,其中所述第一事件包含以下一项或多项:所述虚拟主体与所述第一虚拟终端的距离小于预设距离、交通拥堵、或交通事故;或者,
    根据路采数据控制虚拟主体执行与路采数据中的手势相同的手势,其中所述路采数据指示了真实主体在真实道路场景中执行所述第一手势;或者,
    控制虚拟主体在设定的时间执行设定的手势,其中所述设定的手势包括所述第一手势。
  17. 一种信息交互装置,其特征在于,所述装置包括:
    收发模块,用于接收至少一个测试数据,所述至少一个测试数据包含手势信息,所述手势信息包含第一参数,所述第一参数用于指示第一手势,所述第一参数属于预先定义的手势参数集合;
    处理模块,用于根据所述至少一个测试数据生成第一控制信号,所述第一控制信号用于控制第一虚拟终端;
    所述收发模块,还用于输出所述第一控制信号。
  18. 一种信息交互装置,其特征在于,所述装置包括:
    收发模块,用于向第一装置输出至少一个测试数据;其中,所述至少一个测试数据包含手势信息,所述手势信息包含第一参数,所述第一参数用于指示第一手势,所述第一参数属于预先定义的手势参数集合;
    所述收发模块,还用于接收来自所述第一装置的第一控制信号;
    处理模块,用于根据所述第一控制信号控制第一虚拟终端。
  19. 一种信息交互装置,其特征在于,所述装置包括处理器和接口电路,所述接口电路用于接收来自所述装置之外的其它装置的信号并传输至所述处理器或将来自所述处理器的信号发送给所述装置之外的其它通信装置,所述处理器通过逻辑电路或执行代码指令用于实现如权利要求1至8中任一项所述的方法。
  20. 一种信息交互装置,其特征在于,所述装置包括处理器和接口电路,所述接口电路用于接收来自所述装置之外的其它装置的信号并传输至所述处理器或将来自所述处理器的信号发送给所述装置之外的其它通信装置,所述处理器通过逻辑电路或执行代码指令用于实现如权利要求9至16中任一项所述的方法。
  21. 一种计算机可读存储介质,其特征在于,包括程序或指令,当所述程序或指令在计算机上运行时,使得如权利要求1至8中任一项所述的方法被执行。
  22. 一种计算机可读存储介质,其特征在于,包括程序或指令,当所述程序或指令在计算机上运行时,使得如权利要求9至16中任一项所述的方法被执行。
  23. 一种信息交互系统,其特征在于,包括如权利要求17所述的装置和如权利要求18所述的装置;或者,包括如权利要求19所述的装置和如权利要求20所述的装置。
  24. 一种车辆,其特征在于,包括如权利要求17所述的装置或权利要求19所述的装置或权利要求21所述的计算机可读存储介质。
PCT/CN2021/131251 2021-11-17 2021-11-17 一种信息交互方法、装置及系统 WO2023087182A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202180100917.7A CN117716401A (zh) 2021-11-17 2021-11-17 一种信息交互方法、装置及系统
PCT/CN2021/131251 WO2023087182A1 (zh) 2021-11-17 2021-11-17 一种信息交互方法、装置及系统

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/131251 WO2023087182A1 (zh) 2021-11-17 2021-11-17 一种信息交互方法、装置及系统

Publications (1)

Publication Number Publication Date
WO2023087182A1 true WO2023087182A1 (zh) 2023-05-25

Family

ID=86396123

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/131251 WO2023087182A1 (zh) 2021-11-17 2021-11-17 一种信息交互方法、装置及系统

Country Status (2)

Country Link
CN (1) CN117716401A (zh)
WO (1) WO2023087182A1 (zh)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105313835A (zh) * 2014-05-29 2016-02-10 深圳市赛格导航技术有限公司 一种通过手势动作对车辆部件进行控制的方法及系统
CN109211582A (zh) * 2018-07-27 2019-01-15 山东省科学院自动化研究所 无人驾驶车辆认知交通指挥手势能力的测试系统及方法
US20190219997A1 (en) * 2018-01-12 2019-07-18 Superior Marine Products Llc Gesturing for control input for a vehicle
CN209657498U (zh) * 2019-03-19 2019-11-19 公安部交通管理科学研究所 一种用于交通指挥的模拟警察装置及系统

Also Published As

Publication number Publication date
CN117716401A (zh) 2024-03-15


Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 202180100917.7

Country of ref document: CN

WWE Wipo information: entry into national phase

Ref document number: 2021964345

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2021964345

Country of ref document: EP

Effective date: 20240515