CN117765793A - Takeover training method based on remote driving simulator and related device

Info

Publication number
CN117765793A
Authority: CN (China)
Prior art keywords: remote, data, driving, training, scene
Legal status: Pending (assumed; not a legal conclusion)
Application number: CN202311753798.2A
Other languages: Chinese (zh)
Inventors: 田宇, 孙庆瑞, 肖登宇, 夏黎明
Current assignee: Beijing Baidu Netcom Science and Technology Co Ltd
Original assignee: Beijing Baidu Netcom Science and Technology Co Ltd
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202311753798.2A
Publication of CN117765793A

Landscapes

  • Traffic Control Systems (AREA)

Abstract

The disclosure provides a takeover training method and related device based on a remote driving simulator, relating to the technical fields of automatic driving, remote assistance, simulation training, and the like. The method comprises the following steps: allocating a virtual vehicle in an idle state to a remote driver according to a training request initiated by the remote driver to a remote driving simulation platform through a remote cockpit; replaying, with the virtual vehicle as carrier, target real scene data corresponding to the training request, wherein the real scene data is extracted from real driving data of autonomous vehicles that historically initiated remote driving assistance requests, and the real driving data covers the period from a preset duration before the remote driving assistance request was initiated until the remote driving assistance ended; acquiring the remote driver's control behavior information for the virtual vehicle while the real scene data is replayed; and determining, according to the control behavior information, the remote driver's takeover training result for the driving event reflected by the target real scene data.

Description

Takeover training method based on remote driving simulator and related device
Technical Field
The disclosure relates to the technical field of data processing, in particular to the technical fields of automatic driving, remote assistance, simulation training and the like, and particularly relates to a take-over training method, device, electronic equipment, computer readable storage medium and computer program product based on a remote driving simulator.
Background
A remote driving safety operator can drive a vehicle remotely from a remote cockpit on the basis of vehicle-side state data and environment perception data, such as vehicle-side video, radar signals, speed and position, streamed back in real time, and can thereby take over the autonomous vehicle when it is at risk, free it when it is blocked, and so on.
With the rapid roll-out of fully driverless autonomous driving services, continuous "more regions, more vehicles" operation also places higher demands on the number of remote cockpits and of remote safety operators needed to match it. Large numbers of newly hired remote safety operators lack vehicle-control experience under a remote driving system, yet the traditional operator training and assessment approach requires real autonomous vehicles and real remote cockpit environments, which is costly and procedurally complex.
Disclosure of Invention
Embodiments of the present disclosure provide a take-over training method, apparatus, electronic device, computer readable storage medium and computer program product based on a remote driving simulator.
In a first aspect, an embodiment of the present disclosure proposes a takeover training method based on a remote driving simulator, including: according to a training request initiated by a remote driver to a remote driving simulation platform through a remote cockpit, distributing a virtual vehicle in an idle state for the remote driver; the target real scene data corresponding to the training request is replayed by taking the virtual vehicle as a carrier; the real scene data is extracted from real running data of the automatic driving vehicle which historically initiates the remote driving assistance request, wherein the real running data comprises running data of the automatic driving vehicle from a preset time before initiating the remote driving assistance request to a time when the remote driving assistance is finished; acquiring control behavior information of a remote driver on a virtual vehicle in the process of replaying real scene data; and determining the takeover training result of the driving event reflected by the target real scene data by the remote driver according to the control behavior information.
In a second aspect, an embodiment of the present disclosure proposes a takeover training device based on a remote driving simulator, including: the virtual vehicle distribution unit is configured to distribute the virtual vehicle in an idle state to the remote driver according to a training request initiated by the remote driver to the remote driving simulation platform through the remote cockpit; a scene data playback unit configured to play back target real scene data corresponding to the training request on a virtual vehicle as a carrier; the real scene data is extracted from real running data of the automatic driving vehicle which historically initiates the remote driving assistance request, wherein the real running data comprises running data of the automatic driving vehicle from a preset time before initiating the remote driving assistance request to a time when the remote driving assistance is finished; a control behavior information acquisition unit configured to acquire control behavior information of a remote driver on a virtual vehicle during playback of real scene data; and the takeover training result determining unit is configured to determine takeover training results of the driving event reflected by the target real scene data by the remote driver according to the control behavior information.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to implement the remote driving simulator based takeover training method as described in the first aspect when executed.
In a fourth aspect, embodiments of the present disclosure provide a non-transitory computer-readable storage medium storing computer instructions for enabling a computer to implement a remote driving simulator based takeover training method as described in the first aspect when executed.
In a fifth aspect, embodiments of the present disclosure provide a computer program product comprising a computer program which, when executed by a processor, is capable of implementing the steps of the remote driving simulator based takeover training method as described in the first aspect.
According to the takeover training scheme based on the remote driving simulator, real driving data collected in full before and after a real autonomous vehicle initiates a remote assistance request to the remote driving assistance platform is first used as real scene data, and a large amount of such real scene data is then used to build the remote driving simulation platform. When a remote driver is trained through the remote driving simulation platform, the target real scene data used for training can therefore be replayed on the virtual vehicle allocated to that remote driver, giving the remote driver the most realistic possible presentation of video images and sensor information. This makes it possible to accurately train whether and how the remote driver should take over the vehicle, improving the realism and effectiveness of remote driving training.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
Other features, objects and advantages of the present disclosure will become more apparent upon reading of the detailed description of non-limiting embodiments, made with reference to the following drawings:
FIG. 1 is an exemplary system architecture in which the present disclosure may be applied;
FIG. 2 is a flow chart of a method of takeover training based on a remote driving simulator provided in an embodiment of the present disclosure;
FIG. 3 is a flow chart of a method for determining to take over training results provided by an embodiment of the present disclosure;
FIG. 4 is a flowchart of a method for extracting real scene data according to an embodiment of the present disclosure;
FIGS. 5-1 to 5-5 are schematic views of different stages of a complete remote driving simulator-based takeover training scheme in a specific application scenario provided by embodiments of the present disclosure;
FIG. 6 is a block diagram of a remote driving simulator based take over training device provided in an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of an electronic device adapted to perform a remote driving simulator-based takeover training method according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness. It should be noted that, without conflict, the embodiments of the present disclosure and features of the embodiments may be combined with each other.
In the technical scheme of the disclosure, the collection, storage, use, processing, transmission, provision, disclosure and other handling of users' personal information comply with the relevant laws and regulations and do not violate public order and good customs.
FIG. 1 illustrates an exemplary system architecture 100 to which embodiments of remote driving simulator-based take over training methods, apparatus, electronic devices, and computer-readable storage media of the present disclosure may be applied.
As shown in fig. 1, the system architecture 100 may include an autonomous vehicle 101, a remote driving assistance platform 102, and a remote cockpit 103. The network is a medium to provide a communication link between the autonomous vehicle 101, the remote driving assistance platform 102 and the remote cockpit 103. The network may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The remote driver may use the remote cockpit 103 to provide remote driving assistance to the autonomous vehicle 101 initiating the remote driving assistance request through the remote driving assistance platform 102, while inexperienced remote drivers may also perform capability training through the remote cockpit 103 through a simulation platform provided by the remote driving assistance platform 102. Various applications for enabling information communication between the remote cockpit 103, the remote driving assistance platform 102, and the automated driving vehicle 101, such as a remote driving assistance type application, a data transmission type application, a remote driving training type application, and the like, may be installed on the remote cockpit.
The autonomous vehicle 101, the remote driving assistance platform 102 and the remote cockpit 103 are usually represented by different types of hardware devices, and in particular, may also be represented as software or software simulation products in a simulation environment. The remote driving assistance platform 102 is typically carried by a computing terminal with high computing power, such as a server cluster built up from a plurality of servers.
The remote assistance platform 102 may provide various services through various built-in applications, for example, a remote driving training class application that may perform training services for a remote driver requiring training, and the remote assistance platform 102 may achieve the following effects when running the remote driving training class application: firstly, a training request initiated by a remote driver to a remote driving assistance platform 102 through a remote cockpit 103 is received through a network, and a virtual vehicle in an idle state is distributed to the remote cockpit 103 corresponding to the remote driver according to the training request; then, replaying target real scene data corresponding to the training request by taking the virtual vehicle as a carrier, wherein the real scene data is extracted from real running data of an automatic driving vehicle which historically initiates a remote driving assistance request, and the real running data comprises running data of the automatic driving vehicle from a preset time period before initiating the remote driving assistance request to a remote driving assistance ending period; then, acquiring control behavior information issued by the remote driver to the virtual vehicle through the remote cockpit 103 in the process of replaying the real scene data; and finally, determining the takeover training result of the driving event reflected by the target real scene data by the remote driver according to the control behavior information.
The takeover training method based on the remote driving simulator provided in the subsequent embodiments of the present disclosure is generally performed by the remote assistance platform 102 with a relatively high computing capability and a relatively high computing resource, and accordingly, the takeover training device based on the remote driving simulator is also generally disposed in the remote assistance platform 102.
It should be understood that the number of autonomous vehicles, remote driving assistance platforms and remote cabs in fig. 1 is merely illustrative. There may be any number of autonomous vehicles, remote driving assistance platforms, and remote cabs, as desired for implementation.
Referring to fig. 2, fig. 2 is a flowchart of a method for taking over training based on a remote driving simulator according to an embodiment of the disclosure, wherein the flowchart 200 includes the following steps:
step 201: according to a training request initiated by a remote driver to a remote driving simulation platform through a remote cockpit, distributing a virtual vehicle in an idle state for the remote driver;
this step is intended to be performed by an executing body (e.g., the remote assistance platform 102 shown in fig. 1) of the remote driving simulator-based takeover training method, first receiving a training request initiated by a remote driver through a remote cockpit (e.g., the remote cockpit 103 shown in fig. 1), and then determining that the remote driver needs to perform training according to the training request, so as to allocate a virtual vehicle in an idle state to the remote cockpit used by the remote driver.
The virtual vehicle is obtained by virtualizing necessary functional modules of the real automatic driving vehicle, and can make a response consistent with the real automatic driving vehicle on a received control instruction aiming at the real automatic driving vehicle in a virtual simulation scene. This may be achieved in a number of ways, which are not listed here.
Step 202: the target real scene data corresponding to the training request is replayed by taking the virtual vehicle as a carrier;
on the basis of step 201, this step aims at reproducing, by the above-described execution subject, the target real scene data corresponding to the training request on the virtual vehicle as a carrier.
The training request is not only used for indicating that training is needed, but also carrying some specific requirement information of what training is needed, so that target real scene data matching the requirement of the training request can be determined from a plurality of alternative real scene data according to the requirement information, and then the target real scene data is replayed by taking the virtual vehicle as a carrier for presentation, namely, the target real scene data is presented to a remote driver and is the driving related data collected by the virtual vehicle in a real driving state, so that the authenticity of the scene data in the presenting process is improved.
Specifically, the real scene data is extracted from real driving data of an autonomous vehicle that has historically initiated a remote driving assistance request, and the real driving data includes at least driving data of the autonomous vehicle from a preset time period (e.g., 15 seconds) before initiating the remote driving assistance request to the end of the remote driving assistance, that is, driving related data should be included at least for a period of time before initiating the remote driving assistance request and throughout the duration of the remote driving assistance in order to sufficiently reflect the cause of initiating the remote driving assistance request and the specific manner and result of assistance.
Step 203: acquiring control behavior information of a remote driver on a virtual vehicle in the process of replaying real scene data;
On the basis of step 202, this step aims at acquiring, by the above-mentioned executing body, the control behavior information issued by the remote driver to the virtual vehicle through the remote cockpit during playback of the real scene data. The remote driver watches the real scene data on the display screen of the remote cockpit, judges the driving event it reflects, and issues control behaviors through the control modules of the remote cockpit to the executing body, which then maps the received control behaviors onto the corresponding control components of the virtual vehicle.
The control behavior information may include each specific control operation, the time point at which each operation was issued, and the timing relationship among the operations.
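As a purely illustrative sketch (the class and field names below are assumptions for illustration, not structures mandated by this disclosure), such control behavior information could be recorded in Python as a time-stamped sequence of control operations:

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ControlAction:
    # One concrete control operation issued from the remote cockpit.
    kind: str          # e.g. "brake", "throttle", "steer", "take_over"
    value: float       # pedal depth, steering angle, etc.
    timestamp: float   # seconds on the playback time axis

@dataclass
class ControlBehaviorInfo:
    # All operations for one training session, kept in issue order so the
    # timing relationship between operations is preserved.
    scene_id: str
    actions: List[ControlAction] = field(default_factory=list)

    def take_over_time(self) -> Optional[float]:
        # Time point of the first takeover action, or None if never taken over.
        for a in self.actions:
            if a.kind == "take_over":
                return a.timestamp
        return None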
Step 204: and determining the takeover training result of the driving event reflected by the target real scene data by the remote driver according to the control behavior information.
Based on step 203, this step aims at determining, by the executing body, from the control behavior information, a takeover training result of the driving event reflected by the target real scene data by the remote driver.
The takeover training result indicates whether the remote driver's takeover training for the driving event succeeded. For example, in takeover-timing training, performing the takeover operation at a proper takeover time can be regarded as a success; in takeover-effectiveness training, issuing effective takeover operations can be regarded as a success; otherwise, the training is regarded as unsuccessful.
According to the takeover training method based on the remote driving simulator provided by the embodiments of the present disclosure, real driving data collected in full before and after a real autonomous vehicle initiates a remote assistance request to the remote driving assistance platform is first used as real scene data, and a large amount of such real scene data is then used to build the remote driving simulation platform. When a remote driver is trained through the remote driving simulation platform, the target real scene data used for training can be replayed on the virtual vehicle allocated to that remote driver, giving the remote driver the most realistic possible presentation of video images and sensor information, so that whether and how the remote driver should take over the vehicle can be accurately trained, improving the realism and effectiveness of remote driving training.
Referring to fig. 3, fig. 3 is a flowchart of a method for determining to take over training results according to an embodiment of the disclosure, where the flowchart 300 includes the following steps:
step 301: determining an actual taking-over time point corresponding to taking-over control behaviors in the control behavior information;
step 302: determining that the taking-over time of the remote driver for the driving event reflected by the target real scene data accords with the training requirement in response to the fact that the actual taking-over time point is in an effective taking-over time range corresponding to the target real scene data;
Steps 301-302 provide a solution for judging whether the takeover timing meets the training requirement: first, the actual takeover time point corresponding to the takeover control behavior is determined from the control behavior information; then it is judged whether this actual takeover time point falls within the valid takeover time range matched with the target real scene data, and if so, it is determined that the remote driver's takeover timing for the driving event reflected by the target real scene data meets the training requirement.
Furthermore, the remote driver's takeover-timing training behavior can be scored according to which sub-interval of the valid takeover time range the actual takeover time point falls in, so as to quantify the result.
Step 303: responding to the fact that the taking over time of the driving event reflected by the target real scene data by the remote driver meets the training requirement, and determining the behavior effectiveness of each taking over control behavior in the effective taking over time range according to the control behavior information;
step 304: and responding to the effectiveness of each take-over control behavior, and determining that the effectiveness of the take-over behavior of the remote driver on the running event reflected by the target real scene data meets the training requirement.
Steps 303-304 build on steps 301-302 and provide a solution for judging whether the validity of the takeover operations meets the training requirement: on the basis that the takeover timing meets the training requirement, the behavior validity of each takeover control behavior within the valid takeover time range is further determined from the control behavior information, i.e. it is judged which takeover behaviors are valid and which are invalid; if every takeover control behavior is judged valid, it can be determined that the validity of the remote driver's takeover behavior for the driving event reflected by the target real scene data meets the training requirement.
Furthermore, the effectiveness training behavior of the take-over behavior of the remote driver can be scored according to the similarity degree between the specific take-over control behavior combination and the standard take-over control behavior combination, so as to perform specific quantitative processing.
It should be understood that steps 301-302 in this embodiment can stand entirely on their own as an embodiment for determining the takeover training result; the present embodiment, which additionally determines the validity of the takeover behaviors on that basis, exists only as a preferred embodiment.
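The evaluation logic of steps 301-304 can be sketched as follows; this is a minimal illustration that reuses the ControlBehaviorInfo sketch above, and the notion of a set of "standard" takeover control kinds is an assumption, not a requirement of this disclosure:

def evaluate_takeover(info, valid_start, valid_end, standard_kinds):
    # Return (timing_ok, validity_ok) for one replayed scene.
    # valid_start / valid_end: the valid takeover time range for this scene.
    # standard_kinds: control kinds regarded as valid takeover behaviors.
    t = info.take_over_time()
    timing_ok = t is not None and valid_start <= t <= valid_end
    if not timing_ok:
        return False, False
    # Only look at control behaviors issued inside the valid takeover window.
    in_window = [a for a in info.actions if valid_start <= a.timestamp <= valid_end]
    validity_ok = bool(in_window) and all(a.kind in standard_kinds for a in in_window)
    return timing_ok, validity_ok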
Further, if, for each scene type, the same remote driver's takeover training results for the driving events reflected by more than a preset number of real scene data all meet the training requirements, a capability tag representing the required remote-driving takeover capability can be attached to that remote driver, so that whether a given remote driver may respond to remote driving assistance requests initiated by real autonomous vehicles can be distinguished by whether the driver carries the capability tag.
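A hedged sketch of the capability-tag rule just described; the threshold value and data layout are assumptions chosen only for illustration:

def grant_capability_tag(results, required_passes=5):
    # results: list of (scene_type, passed) tuples for one remote driver.
    # The tag is granted only if, for every scene type the driver trained on,
    # the number of passed takeover trainings exceeds the preset threshold.
    passes = {}
    for scene_type, passed in results:
        passes.setdefault(scene_type, 0)
        if passed:
            passes[scene_type] += 1
    return bool(passes) and all(n > required_passes for n in passes.values())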
To aid understanding of how the real scene data used to train the remote driver is obtained, this embodiment also shows, with reference to fig. 4, a scheme for generating real scene data, whose flow 400 includes the following steps:
step 401: according to the received remote driving assistance request, determining a target automatic driving vehicle initiating the remote driving assistance request;
this step aims at determining, by the executing body, the target autonomous vehicle according to the received remote driving assistance request.
Step 402: acquiring first real driving data of a target automatic driving vehicle for a preset time period before a remote driving assistance request is initiated;
On the basis of step 401, this step aims to acquire, by the above-described executing body, the first real driving data of the target autonomous vehicle for a preset duration, for example the first 15 seconds, before the remote driving assistance request was initiated.
Step 403: acquiring second real driving data of the target automatic driving vehicle during remote assistance by a remote driver;
On the basis of step 401, this step aims at acquiring, by the above-described executing body, all real driving data of the target autonomous vehicle during the period in which the remote driver takes over and provides remote assistance.
Step 404: and generating real scene data corresponding to the remote driving assistance request according to the first real driving data and the second real driving data.
Based on steps 402 and 403, this step aims at combining, by the above-described executing body, the first real driving data and the second real driving data (e.g. by concatenating them in time series) to generate the real scene data corresponding to the remote driving assistance request.
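Under the assumption that the persisted driving data is available as time-stamped frames, flow 400 might be sketched as follows (the 15 s default mirrors the example given in step 402; the function and parameter names are illustrative):

def extract_real_scene_data(frames, request_time, assist_end_time, pre_window=15.0):
    # frames: iterable of (timestamp, frame_data) for one autonomous vehicle.
    # Returns the first real driving data (pre_window seconds before the remote
    # driving assistance request) concatenated, in time order, with the second
    # real driving data (everything until remote assistance ends).
    first = [(t, f) for t, f in frames if request_time - pre_window <= t < request_time]
    second = [(t, f) for t, f in frames if request_time <= t <= assist_end_time]
    return sorted(first + second, key=lambda tf: tf[0])   # time-series concatenation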
For a further understanding, the present disclosure also provides a complete set of implementation schemes in combination with a specific application scenario, please refer to fig. 5-1 to fig. 5-5:
In this embodiment, the high-performance computing framework Cybertron of the real vehicle, together with the remote-control vehicle-side modules Rmc-agent (which pushes the information stream) and Rcs-agent (which receives vehicle-control instructions), is first stripped out and abstracted into a virtual vehicle side, and driving and Grading modules are added, used respectively for playing back scenes and for takeover training evaluation. The virtual vehicle is deployed in the cloud, and any remote cockpit can bind a virtual vehicle for scene training and assessment just as it would bind a real vehicle. The key-scene data that is marked (dotted) and persisted to disk during real operation is processed and then imported into the virtual vehicle side of the driving training simulator. A remote safety operator can select a scene from the scene set for playback, or switch scenes, and decide whether to take over or to keep observing, so that the operator's ability to handle particular key scenes can be trained.
This embodiment divides the complete scheme into three parts: a scheme for automatic vehicle-side marking (dotting) and recording of key scene data; a scheme for processing the scene data and automatically importing it into the remote driving simulator; and a scheme for scene playback, switching and remote takeover training. These three parts are described in detail below:
1. Scheme for automatic vehicle-side marking (dotting) and recording of key scene data
The flow of automatic marking and key-scene recording at the vehicle side is shown in fig. 5-1. The autonomous vehicle side and the remote cockpit side log in, come online, and enter the cloud dispatching pool in an idle state. When the autonomous vehicle encounters certain scenarios, for example blocking events (construction occupying the road), safety risks (collision risk) or service events, the vehicle side reports a scene early-warning event, and recording of the key scene begins at that moment.
After the cloud receives the early-warning event, it generates a work order, matches an idle cockpit, binds the cockpit and the vehicle one-to-one by dispatching, and the remote cockpit enters the single-vehicle monitoring interface. After the remote safety operator confirms that there is a risk, the operator takes over the vehicle remotely by stepping on the brake and then handles the vehicle event, for example avoiding the risk by braking, or moving the vehicle after it has been blocked. After the handling is finished, control of the autonomous vehicle is returned, the single-vehicle binding is released, and at that moment recording of the key scene ends and the scene data is persisted to disk.
2. Scene data processing and automatic import remote driving simulator scheme
1) Scene classification
The key scenes recorded by marking are imported into the simulation platform for simulation analysis; the specific flow is shown in fig. 5-2. First, inference analysis is performed as to whether a collision is likely. If the simulation posterior shows a collision, the scene is marked as a collision scene; otherwise it is marked as a non-collision scene. Simulated collision scenes can be further classified by braking deceleration, for example a hard-braking scene has a braking deceleration greater than 4 m/s², and a gentle-braking scene has a deceleration less than 4 m/s². In addition, scene features can be classified by an image detection algorithm, for example rainy-day scenes, night scenes, foggy-day scenes and the like. Different scene sets can be constructed from these classifications, and a scene list is generated automatically.
In other words, a scene type corresponding to each real scene data may be determined, the scene type including: collision scene, non-collision scene, sudden braking scene, slow braking scene, rainy day scene, foggy day scene and sunny day scene; and then, grouping the real scene data according to the determined scene types to obtain a training real scene set composed of the real scene data under the scene types.
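A minimal sketch of this classification step, assuming the simulation posterior analysis already yields a collision flag and the peak braking deceleration, and that weather/night labels come from a separate image-detection model (stubbed here as an input):

def classify_scene(collides_in_simulation, max_brake_decel, weather_label=None):
    # Return the set of scene-type labels for one recorded key scene.
    labels = set()
    if collides_in_simulation:
        labels.add("collision")
        # Sub-classify collision scenes by braking deceleration (4 m/s² threshold
        # taken from the example above).
        labels.add("hard_braking" if max_brake_decel > 4.0 else "gentle_braking")
    else:
        labels.add("non_collision")
    if weather_label:                 # e.g. "rainy", "night", "foggy", "sunny"
        labels.add(weather_label)
    return labels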
The key description field of each scene in the scene list may be as follows:
scene_set_name:"dark_night_set"
scene_item{
scene_id"MRDR-12267426"
scene_file:"../data/MRDR-12267426.record"
scene_remote_time:1690894169
scene_start_time:1690894155
scene_crash_time:1690894255
scene_desc:crash_case1
scene_type:0
}
Here, scene_set_name is the name of the scene set; each scene set may contain several scene_item entries; and each scene consists of an id, a scene file path, a remote takeover time, a scene start time, a simulation posterior collision time, a scene description and a scene type field. Each scene set contains a certain number of key scenes.
2) Scene data processing (extraction, missing-data completion, cropping)
Processing the scene data mainly involves two parts: extracting the data of target channels (communication channels); and completing missing data and cropping images.
This can also be understood as performing, before the real scene data is persisted to disk, a preprocessing operation including at least one of the following:
trimming away real data content that is not needed for takeover training of the remote driver; re-parsing, from the raw data collected by the sensors on the autonomous vehicle, the portions of the data corrupted during storage; cropping and stitching the images captured by the optical cameras on the autonomous vehicle; and adjusting the resolution of the images.
The following will be described separately:
a) Extracting data of the target channels. Several hundred channels are involved in communication while the vehicle drives autonomously, and the data throughput is extremely high (for example, radar-related channels may transmit tens of frames per second). However, not all of this data is needed to support remote driving. For remote takeover training, only the data of certain necessary channels needs to be extracted (i.e. the content is reduced), for example the vehicle's driving parameters, planning-control information, chassis information, control-state switching data, the video data captured by the six cameras mounted on the vehicle, and the obstacle information perceived by the vehicle. The driving parameters include speed, brake, throttle, position and heading angle; the obstacle information includes the obstacle types and trajectories, the traffic-light state, and the like. These data are sufficient to replay the remote takeover scene of that time for the safety operator to train on.
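An illustrative sketch of the channel-extraction step; the channel names below are hypothetical placeholders rather than the actual channel identifiers of any particular autonomous-driving stack:

# Hypothetical whitelist of channels needed for remote takeover training.
TARGET_CHANNELS = {
    "/vehicle/chassis", "/planning/control", "/control/state_switch",
    "/camera/front", "/camera/front_left", "/camera/front_right",
    "/camera/rear", "/camera/left", "/camera/right",
    "/perception/obstacles", "/perception/traffic_light",
}

def extract_target_channels(messages):
    # messages: iterable of (channel_name, payload) read from the raw record.
    # Keeps only the channels needed to replay a remote takeover scene,
    # dropping the hundreds of other channels recorded during operation.
    return [(ch, payload) for ch, payload in messages if ch in TARGET_CHANNELS]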
b) Missing-data completion and image cropping. In some scenes, part of the data may be incomplete or corrupted for various reasons (failure of the data-persistence process, a full data disk, network problems, and so on). For example, the vehicle transmits six channels of vehicle-side video back to the remote cockpit over a 5G network, and some of this key video data may be lost. The missing channel data therefore needs to be completed. The key idea is to re-parse, from the raw sensor data persisted on disk, the data usable by the remote cockpit, and to use the processed channel data to replace the corrupted or missing channel data in the existing scene file. For video image data, the resolution must additionally be re-adjusted and the video image frames cropped and stitched, finally restoring the six stitched video views seen in the remote cockpit. For example, the H265-compressed image frames originally output by the cameras are first converted into RGB images, then resolution-adjusted, cropped and stitched, and then converted into H264-compressed video data, so that the remote cockpit can replay the scene.
One specific implementation may be: converting an original H265 image frame shot by an optical camera arranged on an automatic driving vehicle into an RGB image; then, adjusting the resolution of the RGB image to be suitable for the target resolution of a display component on the remote cockpit to obtain an adjusted RGB image; then, clipping and splicing the adjusted RGB images respectively corresponding to different optical cameras to obtain processed RGB images; finally, the processed RGB image is converted into H264 video data.
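A sketch of this image pipeline. The H265 decoding and H264 encoding steps are left as caller-supplied helper functions (a real implementation might rely on a video library such as FFmpeg or PyAV); only the resize, crop and stitch steps are shown concretely with OpenCV, and the crop box is an assumed parameter:

import cv2  # OpenCV, used here only for resizing, cropping and stitching

def restore_cockpit_view(h265_frames, target_size, crop_box,
                         decode_h265, encode_h264):
    # h265_frames: list of raw H265 frames, one per camera (e.g. six cameras).
    # decode_h265 / encode_h264 are assumed helpers supplied by the caller;
    # they are not defined by this disclosure.
    views = []
    x, y, w, h = crop_box
    for frame in h265_frames:
        rgb = decode_h265(frame)                 # H265 frame -> RGB image
        rgb = cv2.resize(rgb, target_size)       # match cockpit display resolution
        views.append(rgb[y:y + h, x:x + w])      # crop the region of interest
    stitched = cv2.hconcat(views)                # splice camera views side by side
    return encode_h264(stitched)                 # back to H264 video data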
3) Scene leading-in remote driving simulator
Based on the scene list generated in step 1) and the scene data files processed in step 2), a script can automatically turn the scene list into a configuration file and copy the scene data files, in order, to the remote driving simulator's path.
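A minimal sketch of such an import script, under the assumption that the simulator accepts a JSON configuration file and a flat scene-file directory (both are illustrative choices, not requirements of this disclosure):

import json
import shutil
from pathlib import Path

def import_scene_set(scene_list, simulator_dir):
    # scene_list: list of dicts with at least "scene_id" and "scene_file" keys.
    simulator_dir = Path(simulator_dir)
    simulator_dir.mkdir(parents=True, exist_ok=True)
    # 1) Turn the scene list into a configuration file for the simulator.
    with open(simulator_dir / "scene_config.json", "w") as f:
        json.dump(scene_list, f, indent=2)
    # 2) Copy each processed scene data file into the simulator's path, in order.
    for item in scene_list:
        shutil.copy(item["scene_file"], simulator_dir / Path(item["scene_file"]).name)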
3. Scene playing, switching and remote takeover training scheme
1) Basic principle of takeover training and result evaluation
As shown in fig. 5-3, for a scene whose simulation posterior marks it as a collision scene, assume the simulation posterior collision time point is t2; the time interval intercepted for scene playback is then [t2-15, t2+1] s, referenced to t2. After the remote safety operator starts the scene, there is a scene loading time of 5 s, at the end of which playback of the scene begins. From 5 s to 15 s, the main interface of the cockpit stays on the multi-vehicle monitoring interface. At time t1 (i.e. 15 s on the playback time axis), the virtual vehicle reports an early warning (for example a collision-risk warning) to the cloud, and the cloud dispatches and binds the cockpit and the vehicle one-to-one. From this point the remote safety operator can monitor the vehicle and, in case of danger, take over by stepping on the brake.
In this process, whether the safety operator passes the training for this scene can be evaluated by checking whether the operator's takeover time point lies within a target interval. The target takeover interval is a time window around the moment at which the remote takeover was actually performed when the scene originally occurred, chosen such that taking over anywhere within the interval can still avoid the collision. For example, in fig. 5-3, the target takeover interval is [remote_time - x, remote_time + y], where x and y are parameters of the takeover interval calculated from quantities such as the speed and acceleration of the ego vehicle and of the obstacle. The time required for braking is computed with a kinematic model, which determines a reasonable brake-takeover interval and guarantees that no collision occurs.
For a non-collision scene (one that does not require a brake takeover), playback starts 16 s before the remote takeover time point remote_time of the original scene. Likewise, after the remote safety operator starts the scene there is a 5 s scene loading time, followed by 10 s on the multi-vehicle monitoring interface. After the simulated early warning is reported, the cloud dispatches and binds the cockpit and the vehicle one-to-one. The training evaluation for such a scene is simple: if the remote safety operator takes over by stepping on the brake before scene playback finishes, the training fails; if the operator does not take over, it succeeds. The takeover training evaluation timeline for non-collision scenes (no brake takeover required) is shown in fig. 5-4.
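The kinematic reasoning behind the takeover interval can be sketched as below; a constant-deceleration model with a stationary obstacle and constant ego speed until braking is assumed, and the safety margin is an illustrative parameter:

def braking_time_and_distance(ego_speed, brake_decel):
    # Constant-deceleration kinematics: time and distance needed to stop.
    return ego_speed / brake_decel, ego_speed ** 2 / (2.0 * brake_decel)

def latest_takeover_offset(ego_speed, brake_decel, gap_to_obstacle, margin=0.5):
    # Seconds relative to remote_time by which the brake must be pressed so the
    # ego vehicle still stops before the obstacle (a negative value means the
    # takeover must happen earlier than the original remote takeover moment).
    _, braking_dist = braking_time_and_distance(ego_speed, brake_decel)
    return (gap_to_obstacle - braking_dist - margin) / max(ego_speed, 1e-6)

The earlier bound x of the interval would be derived analogously, for example from the moment the hazard becomes observable; the sketch only illustrates how a kinematic model turns speeds, decelerations and gaps into time bounds.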
2) Scene playing, switching and taking over training process
Scene play, switch and take over training flows and architectures can be seen in fig. 5-5.
As shown in fig. 5-5, the cockpit simulator is mainly divided into modules at three ends: the remote cockpit, the cloud, and the virtual vehicle. The specific flow is as follows. The remote safety operator first registers and logs in at the remote cockpit, enters scene training or assessment through the training software provided by the remote cockpit, selects a scene set (night, collision, general-ability mixed scene set, and so on), and sends the selected training/assessment scene ID down to the virtual vehicle. An idle virtual vehicle replays the scene corresponding to the received scene ID and returns the rendered scene view to the remote cockpit, so that the remote safety operator can watch the scene in real time and, according to the driving event the operator judges it to reflect, issue vehicle-control instructions to the virtual vehicle, where they act on the relevant control modules of the virtual vehicle. The vehicle-control instructions and the information on the target scene replayed on the virtual vehicle are transmitted to the cloud, which analyses them to determine whether the remote safety operator has passed the training for that scene and pushes the evaluation result to a browser page. The remote safety operator can check the result on the browser page; statistics over the training results of whole scene sets can also be computed automatically to characterize each remote safety operator's remote takeover capability.
The traditional remote driving safety training and checking method needs to be configured with the actual automatic driving vehicle and the remote cockpit environment, and has high cost and complicated flow. The 3D virtual scene driving training simulator based on the game engine and driving simulation engine technology cannot achieve the most realistic video picture and sensor information effect, and the reality and the effectiveness of remote driving training are poor.
Aimed at these technical deficiencies of the prior art, this embodiment provides a scheme for automatically importing real scene data into a remote driving simulator and for takeover training, which can automatically save, process and import the data of key scenes in real operation (such as collision risks, escaping from blockage, and the like) into the remote driving simulator for vehicle-control takeover training of remote driving safety operators. The simulator can judge whether the remote driver's takeover time falls within a reasonable takeover time interval and give a scene training result. Meanwhile, the operator's scene training and assessment information can be pushed to the cloud, which automatically aggregates and analyses the training results, thereby characterizing the remote safety operator's takeover capability and improving the operator's service level. Because the remote driving takeover training in this scheme is identical to remote driving in real operation, it can greatly improve the convenience of training remote safety operators, reduce training cost, and raise the operators' professional skill in remote takeover.
With further reference to fig. 6, as an implementation of the method shown in the foregoing figures, the present disclosure provides an embodiment of a remote driving simulator-based takeover training device, which corresponds to the method embodiment shown in fig. 2, and which is particularly applicable to various electronic apparatuses.
As shown in fig. 6, the remote driving simulator-based takeover training device 600 of the present embodiment may include: a virtual vehicle allocation unit 601, a scene data playback unit 602, a control behavior information acquisition unit 603, and a takeover training result determination unit 604. The virtual vehicle distribution unit 601 is configured to distribute a virtual vehicle in an idle state to a remote driver according to a training request initiated by the remote driver to a remote driving simulation platform through a remote cockpit; a scene data playback unit 602 configured to play back target real scene data corresponding to the training request on a virtual vehicle as a carrier; the real scene data is extracted from real running data of the automatic driving vehicle which historically initiates the remote driving assistance request, wherein the real running data comprises running data of the automatic driving vehicle from a preset time before initiating the remote driving assistance request to a time when the remote driving assistance is finished; a control behavior information acquisition unit 603 configured to acquire control behavior information of a remote driver on a virtual vehicle during playback of real scene data; the takeover training result determination unit 604 is configured to determine, according to the control behavior information, a takeover training result of the driving event reflected by the target real scene data by the remote driver.
In the present embodiment, in the takeover training apparatus 600 based on the remote driving simulator: the specific processing of the virtual vehicle allocation unit 601, the scene data playback unit 602, the control behavior information acquisition unit 603, and the takeover training result determination unit 604 and the technical effects thereof may refer to the relevant descriptions of steps 201 to 204 in the corresponding embodiment of fig. 2, and are not repeated herein.
In some optional implementations of the present embodiment, the takeover training result determination unit 604 may be further configured to:
determining an actual taking-over time point corresponding to taking-over control behaviors in the control behavior information;
and determining that the taking-over time of the remote driver for the driving event reflected by the target real scene data meets the training requirement according to the fact that the actual taking-over time point is in an effective taking-over time range corresponding to the target real scene data.
In some optional implementations of the present embodiment, the remote driving simulator-based takeover training device 600 may further include:
the behavior validity determining unit is configured to respond to the fact that the taking over time of the driving event reflected by the target real scene data by the remote driver meets the training requirement, and determine the behavior validity of each taking over control behavior within the valid taking over time range according to the control behavior information;
And the judging unit is configured to respond to the effectiveness of each take-over control behavior, and determine that the effectiveness of the take-over behavior of the remote driver on the running event reflected by the target real scene data meets the training requirement.
In some optional implementations of the present embodiment, the remote driving simulator-based takeover training device 600 may further include:
the capacity tag attaching unit is configured to attach a capacity tag representing the capacity of taking over the remote driving meeting the requirements to the remote driver in response to the fact that the taking over training results of the driving event respectively reflected by the real scene data exceeding the preset quantity under each scene type meet the training requirements.
In some optional implementations of the present embodiment, the remote driving simulator-based takeover training device 600 may further include: a real scene data extraction unit, the real scene data extraction unit being further configured to:
according to the received remote driving assistance request, determining a target automatic driving vehicle initiating the remote driving assistance request;
acquiring first real driving data of a target driving vehicle for a preset time period before a remote driving assistance request is initiated;
Acquiring second real driving data of the target driving vehicle during remote assistance by a remote driver;
and generating real scene data corresponding to the remote driving assistance request according to the first real driving data and the second real driving data.
In some optional implementations of the present embodiment, the remote driving simulator-based takeover training device 600 may further include:
a scene type determining unit configured to determine a scene type to which each real scene data corresponds respectively; wherein, scene type includes: collision scene, non-collision scene, sudden braking scene, slow braking scene, rainy day scene, foggy day scene and sunny day scene;
the grouping unit is configured to group the real scene data according to the determined scene types to obtain a training real scene set composed of the real scene data under the scene types.
In some optional implementations of the present embodiment, the remote driving simulator-based takeover training device 600 may further include:
a preprocessing unit configured to perform preprocessing operations including at least one of the following before the real scene data is subjected to the landing storage:
trimming away real data content that is not needed for takeover training of the remote driver; re-parsing, from the raw data collected by the sensors on the autonomous vehicle, the portions of the data corrupted during storage; cropping and stitching the images captured by the optical cameras on the autonomous vehicle; and adjusting the resolution of the images.
In some optional implementations of the present embodiment, the preprocessing unit may include a compaction subunit configured to compact real data content that is not required for takeover training as training of the remote driver, the compaction subunit may be further configured to:
only the following partial data content from the original, complete real scene data is retained:
driving parameters of the vehicle, planning control information, chassis information, control state switching data, video data captured by the six cameras arranged on the vehicle, and obstacle information perceived by the vehicle; wherein the driving parameters include speed, brake, throttle, position and heading angle, and the obstacle information includes obstacle type and trajectory, and traffic light status.
In some optional implementations of the present embodiment, the preprocessing unit includes an image processing subunit configured to crop, stitch, and adjust a resolution of an image captured by an optical camera disposed on the autonomous vehicle, the image processing subunit being further configured to:
converting an original H265 image frame shot by an optical camera into an RGB image;
The resolution of the RGB image is adjusted to be suitable for the target resolution of a display component on the remote cockpit, and an adjusted RGB image is obtained;
clipping and splicing the adjusted RGB images respectively corresponding to different optical cameras to obtain processed RGB images;
the processed RGB image is converted into H264 video data.
This embodiment exists as the device embodiment corresponding to the method embodiment above. The takeover training device based on the remote driving simulator provided by this embodiment first uses real driving data, collected in full before and after a real autonomous vehicle initiates a remote assistance request to the remote driving assistance platform, as real scene data, and then uses a large amount of such real scene data to build the remote driving simulation platform, so that when a remote driver is trained through the remote driving simulation platform, the target real scene data used for training can be replayed on the virtual vehicle allocated to that remote driver, giving the remote driver the most realistic possible presentation of video images and sensor information, and thereby accurately training whether and how the remote driver should take over the vehicle, improving the realism and effectiveness of remote driving training.
According to an embodiment of the present disclosure, the present disclosure further provides an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor to enable the at least one processor to implement the remote driving simulator-based takeover training method described in any of the embodiments above when executed by the at least one processor.
According to an embodiment of the present disclosure, there is also provided a readable storage medium storing computer instructions for enabling a computer to implement the remote driving simulator-based takeover training method described in any of the above embodiments when executed.
According to an embodiment of the present disclosure, the present disclosure further provides a computer program product, which, when executed by a processor, is capable of implementing the steps of the remote driving simulator based takeover training method described in any of the embodiments above.
Fig. 7 illustrates a schematic block diagram of an example electronic device 700 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 7, the apparatus 700 includes a computing unit 701 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 702 or a computer program loaded from a storage unit 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data required for the operation of the device 700 may also be stored. The computing unit 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
Various components in device 700 are connected to I/O interface 705, including: an input unit 706 such as a keyboard, a mouse, etc.; an output unit 707 such as various types of displays, speakers, and the like; a storage unit 708 such as a magnetic disk, an optical disk, or the like; and a communication unit 709 such as a network card, modem, wireless communication transceiver, etc. The communication unit 709 allows the device 700 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The computing unit 701 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 701 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 701 performs the various methods and processes described above, such as taking over a training method based on a remote driving simulator. For example, in some embodiments, the remote driving simulator based takeover training method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 708. In some embodiments, part or all of the computer program may be loaded and/or installed onto device 700 via ROM 702 and/or communication unit 709. When the computer program is loaded into RAM 703 and executed by computing unit 701, one or more of the steps of the remote driving simulator based take over training method described above may be performed. Alternatively, in other embodiments, the computing unit 701 may be configured to perform the remote driving simulator based takeover training method by any other suitable means (e.g. by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuit systems, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose, and which can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or a cloud host, which is a host product in a cloud computing service system and overcomes the drawbacks of high management difficulty and weak service scalability found in traditional physical host and Virtual Private Server (VPS) services.
According to the technical solution of the embodiments of the present disclosure, the real driving data collected before and after a real autonomous vehicle initiates a remote assistance request to the remote driving assistance platform is preserved in full as real scene data, and a large amount of such real scene data is then used to build the remote driving simulation platform. When a remote driver is trained through the remote driving simulation platform, the target real scene data selected for that driver can be replayed on the virtual vehicle allocated to the remote driver, presenting the remote driver with the most realistic possible video images and sensor information. In this way, both whether and how the remote driver should take over the vehicle can be trained accurately, improving the authenticity and effectiveness of remote driving training.
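For illustration only, the following minimal Python sketch shows one way the flow described above could be organized: allocating an idle virtual vehicle, replaying the recorded real scene data, and collecting the remote driver's control behavior. Every class, method, and field name here is a hypothetical placeholder, not the disclosed implementation.

```python
# Hypothetical sketch of the replay-based takeover training flow described above.
# All names are illustrative assumptions, not the disclosed code.

class VirtualVehicle:
    """Stand-in for a simulated vehicle that can replay recorded scene data."""

    def replay_and_record(self, scene, driver_id):
        # Replay the real scene data to the remote cockpit and collect the
        # remote driver's control actions; stubbed out in this sketch.
        return []  # list of (timestamp, kind, value) control actions


class RemoteDrivingSimPlatform:
    def __init__(self, idle_virtual_vehicles, scene_store):
        self.idle_vehicles = list(idle_virtual_vehicles)
        self.scene_store = scene_store      # real scene data extracted from road logs

    def handle_training_request(self, driver_id, scene_id):
        vehicle = self.idle_vehicles.pop()                      # allocate an idle virtual vehicle
        scene = self.scene_store[scene_id]                      # target real scene data
        actions = vehicle.replay_and_record(scene, driver_id)   # replay and record control behavior
        result = {"driver": driver_id, "scene": scene_id, "actions": actions}
        self.idle_vehicles.append(vehicle)                      # return the vehicle to the idle pool
        return result                                           # evaluated further as in claims 2-4
```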
It should be appreciated that steps may be reordered, added, or deleted in the various flows shown above. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, as long as the desired results of the technical solutions of the present disclosure are achieved; no limitation is imposed herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (21)

1. A takeover training method based on a remote driving simulator, comprising:
allocating a virtual vehicle in an idle state to a remote driver according to a training request initiated by the remote driver to a remote driving simulation platform through a remote cockpit;
replaying target real scene data corresponding to the training request with the virtual vehicle as a carrier, wherein the real scene data is extracted from real driving data of an autonomous vehicle that historically initiated a remote driving assistance request, and the real driving data comprises driving data of the autonomous vehicle from a preset duration before the remote driving assistance request is initiated until the remote driving assistance ends;
acquiring control behavior information of the remote driver on the virtual vehicle during the replay of the real scene data; and
determining, according to the control behavior information, a takeover training result of the remote driver for the driving event reflected by the target real scene data.
2. The method of claim 1, wherein the determining, according to the control behavior information, the takeover training result of the remote driver for the driving event reflected by the target real scene data comprises:
determining an actual takeover time point corresponding to a takeover control behavior in the control behavior information; and
determining, in response to the actual takeover time point falling within a valid takeover time range corresponding to the target real scene data, that the takeover timing of the remote driver for the driving event reflected by the target real scene data meets the training requirement.
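For illustration, a minimal sketch of the timing check in this claim, assuming each piece of target real scene data carries a valid takeover time window expressed in replay time and that control actions are recorded as (timestamp, kind, value) tuples; all names are hypothetical.

```python
def takeover_timing_meets_requirement(actions, valid_window):
    """Return True if the driver's first takeover action falls inside the valid window.

    actions: iterable of (timestamp, kind, value) tuples recorded during replay.
    valid_window: (t_start, t_end) attached to the target real scene data.
    """
    t_start, t_end = valid_window
    takeover_times = [t for t, kind, _ in actions if kind == "takeover"]
    if not takeover_times:
        return False                        # the driver never took over
    actual_takeover_time = min(takeover_times)
    return t_start <= actual_takeover_time <= t_end
```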
3. The method of claim 2, further comprising:
in response to the takeover timing of the remote driver for the driving event reflected by the target real scene data meeting the training requirement, determining, according to the control behavior information, the behavior validity of each takeover control behavior within the valid takeover time range; and
in response to each takeover control behavior being valid, determining that the takeover behavior validity of the remote driver for the driving event reflected by the target real scene data meets the training requirement.
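Building on the timing check, the behavior-validity gate of this claim can be sketched as follows; the per-action effectiveness rule is assumed to be supplied per scene (for example, brake input above a threshold in a sudden-braking scene), which is an assumption rather than something the claim fixes.

```python
def takeover_behavior_meets_requirement(actions, valid_window, is_effective):
    """Assuming the timing requirement (claim 2) is already met, check that every
    takeover control action inside the valid window is itself effective.

    actions: iterable of (timestamp, kind, value) tuples recorded during replay.
    is_effective: callable((timestamp, kind, value)) -> bool, a scene-specific rule.
    """
    t_start, t_end = valid_window
    in_window = [a for a in actions if t_start <= a[0] <= t_end]
    return bool(in_window) and all(is_effective(a) for a in in_window)
```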
4. The method of claim 1, further comprising:
in response to the takeover training results of the remote driver for the driving events reflected by more than a preset number of pieces of real scene data under each scene type all meeting the training requirement, attaching to the remote driver a capability label characterizing that the remote driver possesses the required remote driving takeover capability.
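One hedged reading of this labeling rule: per scene type, count the driver's passed training results and attach the capability label only when every scene type exceeds the preset quantity. The threshold value and data shapes below are assumptions made for illustration.

```python
from collections import Counter

def should_attach_capability_label(results, scene_types, preset_quantity=10):
    """results: iterable of (scene_type, passed) pairs for one remote driver."""
    passed = Counter(scene_type for scene_type, ok in results if ok)
    return all(passed[scene_type] > preset_quantity for scene_type in scene_types)
```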
5. The method according to any one of claims 1-4, wherein the extraction process of the real scene data comprises:
determining, according to a received remote driving assistance request, a target autonomous vehicle that initiated the remote driving assistance request;
acquiring first real driving data of the target autonomous vehicle within a preset duration before the remote driving assistance request was initiated;
acquiring second real driving data of the target autonomous vehicle during remote assistance by a remote driver; and
generating real scene data corresponding to the remote driving assistance request according to the first real driving data and the second real driving data.
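A simplified sketch of this extraction step: slice the vehicle's driving log from a preset duration before the assistance request until the assistance ends, and keep the two segments together as one piece of real scene data. The log format, field names, and default duration are assumptions.

```python
def extract_real_scene(driving_log, request_time, assist_end_time, preset_duration=30.0):
    """driving_log: time-ordered records, each a dict with a 'timestamp' in seconds."""
    first_real_driving_data = [r for r in driving_log
                               if request_time - preset_duration <= r["timestamp"] < request_time]
    second_real_driving_data = [r for r in driving_log
                                if request_time <= r["timestamp"] <= assist_end_time]
    return {"pre_request": first_real_driving_data,
            "remote_assist": second_real_driving_data}
```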
6. The method of claim 5, further comprising:
determining the scene type corresponding to each piece of real scene data, wherein the scene types include: a collision scene, a non-collision scene, a sudden braking scene, a slow braking scene, a rainy scene, a foggy scene, and a sunny scene; and
grouping the real scene data according to the determined scene types to obtain a training real scene set composed of the real scene data under each scene type.
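Grouping by scene type is essentially bucketing; a small sketch, with the type labels mirroring those listed in the claim and the labeling step itself assumed to happen elsewhere:

```python
from collections import defaultdict

def group_scenes_by_type(scenes):
    """scenes: iterable of dicts, each carrying a 'scene_type' label such as
    'collision', 'non_collision', 'sudden_braking', 'slow_braking',
    'rainy', 'foggy' or 'sunny'."""
    training_sets = defaultdict(list)
    for scene in scenes:
        training_sets[scene["scene_type"]].append(scene)
    return dict(training_sets)
```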
7. The method of claim 5, further comprising:
before the real scene data is persisted to storage, performing a preprocessing operation comprising at least one of the following:
simplifying real data content that is not needed for takeover training of the remote driver, re-parsing portions of raw sensor data collected by sensors arranged on the autonomous vehicle that were corrupted during storage, and cropping, stitching, and adjusting the resolution of images captured by optical cameras arranged on the autonomous vehicle.
8. The method of claim 7, wherein the simplifying of real data content that is not needed for takeover training of the remote driver comprises:
retaining only the following data content from the original, complete real scene data:
driving parameters of the vehicle, planning control information, chassis information, control state switching data, video data captured by six cameras arranged on the vehicle, and obstacle information perceived by the vehicle; wherein the driving parameters include speed, brake, throttle, position, and heading angle, and the obstacle information includes obstacle type and trajectory, and traffic light status.
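The data simplification described here amounts to whitelisting the fields needed for takeover training; a hedged sketch follows, with the field names invented purely for illustration.

```python
# Fields retained for takeover training; everything else in the raw log is dropped.
RETAINED_FIELDS = {
    "speed", "brake", "throttle", "position", "heading_angle",   # driving parameters
    "planning_control", "chassis", "control_state_switch",
    "camera_videos",                                              # six on-board camera streams
    "obstacles",                                                  # type and trajectory
    "traffic_light_state",
}

def simplify_record(record):
    """Keep only the whitelisted keys of one raw real-scene record."""
    return {key: value for key, value in record.items() if key in RETAINED_FIELDS}
```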
9. The method of claim 7, wherein the cropping, stitching, and adjusting the resolution of images captured by the optical cameras arranged on the autonomous vehicle comprises:
converting original H265 image frames captured by the optical cameras into RGB images;
adjusting the resolution of the RGB images to a target resolution suitable for a display component of the remote cockpit to obtain adjusted RGB images;
cropping and stitching the adjusted RGB images corresponding to different optical cameras to obtain processed RGB images; and
converting the processed RGB images into H264 video data.
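This claim describes a decode-resize-stitch-re-encode pipeline. The OpenCV-based sketch below is one possible realization, assuming the local OpenCV/FFmpeg build can decode H.265 and encode H.264 ('avc1'), which is not guaranteed on every platform; cropping of overlapping image edges is omitted for brevity, and OpenCV's native BGR channel order makes the RGB conversions explicit.

```python
import cv2
import numpy as np

def reencode_camera_views(h265_paths, target_size=(640, 480), out_path="stitched.mp4", fps=10):
    """Decode per-camera H.265 files, resize, stitch side by side, re-encode as H.264.

    Codec availability depends on the local OpenCV/FFmpeg build; this is a sketch,
    not a production pipeline.
    """
    captures = [cv2.VideoCapture(p) for p in h265_paths]
    writer = None
    while True:
        frames = []
        for cap in captures:
            ok, frame_bgr = cap.read()
            if not ok:
                frames = []
                break
            frame_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)   # decoded frame -> RGB
            frames.append(cv2.resize(frame_rgb, target_size))        # match cockpit display
        if not frames:
            break
        stitched = np.hstack(frames)                                  # simple horizontal stitch
        if writer is None:
            h, w = stitched.shape[:2]
            writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"avc1"), fps, (w, h))
        writer.write(cv2.cvtColor(stitched, cv2.COLOR_RGB2BGR))       # VideoWriter expects BGR
    for cap in captures:
        cap.release()
    if writer is not None:
        writer.release()
    return out_path
```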
10. A takeover training apparatus based on a remote driving simulator, comprising:
a virtual vehicle allocation unit configured to allocate a virtual vehicle in an idle state to a remote driver according to a training request initiated by the remote driver to a remote driving simulation platform through a remote cockpit;
a scene data playback unit configured to replay target real scene data corresponding to the training request with the virtual vehicle as a carrier, wherein the real scene data is extracted from real driving data of an autonomous vehicle that historically initiated a remote driving assistance request, and the real driving data comprises driving data of the autonomous vehicle from a preset duration before the remote driving assistance request is initiated until the remote driving assistance ends;
a control behavior information acquisition unit configured to acquire control behavior information of the remote driver on the virtual vehicle during the replay of the real scene data; and
a takeover training result determination unit configured to determine, according to the control behavior information, a takeover training result of the remote driver for the driving event reflected by the target real scene data.
11. The apparatus of claim 10, wherein the takeover training result determination unit is further configured to:
determine an actual takeover time point corresponding to a takeover control behavior in the control behavior information; and
determine, in response to the actual takeover time point falling within a valid takeover time range corresponding to the target real scene data, that the takeover timing of the remote driver for the driving event reflected by the target real scene data meets the training requirement.
12. The apparatus of claim 11, further comprising:
the behavior validity determining unit is configured to respond to the fact that the taking over time of the driving event reflected by the target real scene data by the remote driver meets the training requirement, and determine the behavior validity of each taking over control behavior within the valid taking over time range according to the control behavior information;
And the judging unit is configured to respond to the fact that each take-over control behavior has effectiveness, and determine that the effectiveness of the take-over behavior of the remote driver on the running event reflected by the target real scene data meets training requirements.
13. The apparatus of claim 10, further comprising:
a capability label attaching unit configured to attach to the remote driver a capability label characterizing that the remote driver possesses the required remote driving takeover capability, in response to the takeover training results of the remote driver for the driving events reflected by more than a preset number of pieces of real scene data under each scene type all meeting the training requirement.
14. The apparatus of any of claims 10-13, further comprising a real scene data extraction unit configured to:
determine, according to a received remote driving assistance request, a target autonomous vehicle that initiated the remote driving assistance request;
acquire first real driving data of the target autonomous vehicle within a preset duration before the remote driving assistance request was initiated;
acquire second real driving data of the target autonomous vehicle during remote assistance by a remote driver; and
generate real scene data corresponding to the remote driving assistance request according to the first real driving data and the second real driving data.
15. The apparatus of claim 14, further comprising:
a scene type determination unit configured to determine the scene type corresponding to each piece of real scene data, wherein the scene types include: a collision scene, a non-collision scene, a sudden braking scene, a slow braking scene, a rainy scene, a foggy scene, and a sunny scene; and
a grouping unit configured to group the real scene data according to the determined scene types to obtain a training real scene set composed of the real scene data under each scene type.
16. The apparatus of claim 14, further comprising:
a preprocessing unit configured to perform, before the real scene data is persisted to storage, a preprocessing operation comprising at least one of the following:
simplifying real data content that is not needed for takeover training of the remote driver, re-parsing portions of raw sensor data collected by sensors arranged on the autonomous vehicle that were corrupted during storage, and cropping, stitching, and adjusting the resolution of images captured by optical cameras arranged on the autonomous vehicle.
17. The apparatus of claim 16, wherein the preprocessing unit comprises a simplification subunit configured to simplify real data content that is not needed for takeover training of the remote driver, the simplification subunit being further configured to:
retain only the following data content from the original, complete real scene data:
driving parameters of the vehicle, planning control information, chassis information, control state switching data, video data captured by six cameras arranged on the vehicle, and obstacle information perceived by the vehicle; wherein the driving parameters include speed, brake, throttle, position, and heading angle, and the obstacle information includes obstacle type and trajectory, and traffic light status.
18. The apparatus of claim 16, wherein the preprocessing unit comprises an image processing subunit configured to crop, stitch, and adjust the resolution of images captured by the optical cameras arranged on the autonomous vehicle, the image processing subunit being further configured to:
convert original H265 image frames captured by the optical cameras into RGB images;
adjust the resolution of the RGB images to a target resolution suitable for a display component of the remote cockpit to obtain adjusted RGB images;
crop and stitch the adjusted RGB images corresponding to different optical cameras to obtain processed RGB images; and
convert the processed RGB images into H264 video data.
19. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the remote driving simulator-based takeover training method in accordance with any one of claims 1-9.
20. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the remote driving simulator-based takeover training method according to any one of claims 1-9.
21. A computer program product comprising a computer program which, when executed by a processor, implements the steps of the remote driving simulator-based takeover training method according to any of claims 1-9.
CN202311753798.2A 2023-12-19 2023-12-19 Takeover training method based on remote driving simulator and related device Pending CN117765793A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311753798.2A CN117765793A (en) 2023-12-19 2023-12-19 Takeover training method based on remote driving simulator and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311753798.2A CN117765793A (en) 2023-12-19 2023-12-19 Takeover training method based on remote driving simulator and related device

Publications (1)

Publication Number Publication Date
CN117765793A true CN117765793A (en) 2024-03-26

Family

ID=90315671

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311753798.2A Pending CN117765793A (en) 2023-12-19 2023-12-19 Takeover training method based on remote driving simulator and related device

Country Status (1)

Country Link
CN (1) CN117765793A (en)

Similar Documents

Publication Publication Date Title
CN103219030B (en) Method for synchronous acquisition and playback of vehicle-mounted video and bus data
CN109345829A (en) Monitoring method, device, equipment and the storage medium of unmanned vehicle
CN113343461A (en) Simulation method and device for automatic driving vehicle, electronic equipment and storage medium
CN110738251A (en) Image processing method, image processing apparatus, electronic device, and storage medium
CN113255439B (en) Obstacle identification method, device, system, terminal and cloud
WO2023185564A1 (en) Visual enhancement method and system based on multi-connected vehicle space alignment feature fusion
JP5962898B2 (en) Driving evaluation system, driving evaluation method, and driving evaluation program
CN112233428A (en) Traffic flow prediction method, traffic flow prediction device, storage medium and equipment
CN113342704A (en) Data processing method, data processing equipment and computer readable storage medium
CN114492022A (en) Road condition sensing data processing method, device, equipment, program and storage medium
CN114722631A (en) Vehicle test simulation scene generation method and device, electronic equipment and storage medium
CN114051116A (en) Video monitoring method, device and system for driving test vehicle
CN112182289B (en) Data deduplication method and device based on Flink frame
CN117765793A (en) Takeover training method based on remote driving simulator and related device
CN110853364B (en) Data monitoring method and device
CN112699754A (en) Signal lamp identification method, device, equipment and storage medium
CN115858456A (en) Data acquisition system and method for automatic driving vehicle
CN115891868A (en) Fault detection method, device, electronic apparatus, and medium for autonomous vehicle
CN113727215B (en) Data processing method, device, storage medium and equipment
CN117152945A (en) Method and system for handling traffic accidents and storage medium
CN113963310A (en) People flow detection method and device for bus station and electronic equipment
CN113962107A (en) Method and device for simulating driving road section, electronic equipment and storage medium
CN112560685A (en) Facial expression recognition method and device and storage medium
CN112348381A (en) Processing method and device for scheduling data of unmanned aerial vehicle equipment and server
CN109886234A (en) Object detection method, device, system, electronic equipment, storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination