CN108944945B - State prediction method and device for driving assistance, electronic equipment and vehicle - Google Patents

State prediction method and device for driving assistance

Info

Publication number
CN108944945B
Authority
CN
China
Prior art keywords
prediction
model
initial
training
predicted
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810749440.5A
Other languages
Chinese (zh)
Other versions
CN108944945A (en)
Inventor
刘景初
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Horizon Robotics Science and Technology Co Ltd
Original Assignee
Shenzhen Horizon Robotics Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Horizon Robotics Science and Technology Co Ltd filed Critical Shenzhen Horizon Robotics Science and Technology Co Ltd
Priority to CN201810749440.5A priority Critical patent/CN108944945B/en
Publication of CN108944945A publication Critical patent/CN108944945A/en
Application granted granted Critical
Publication of CN108944945B publication Critical patent/CN108944945B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00: Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/0097: Predicting future conditions
    • B60W2050/0001: Details of the control system
    • B60W2050/0019: Control system elements or transfer functions
    • B60W2050/0028: Mathematical models, e.g. for simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Human Computer Interaction (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Traffic Control Systems (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application relates to a state prediction method and device for driving assistance, an electronic device, and a vehicle. According to an embodiment, a state prediction method for driving assistance may include: acquiring an initial state quantity of a driving environment; generating a plurality of initial prediction results based on the initial state quantities using a prediction model; scoring the plurality of initial prediction results using a discriminative model; selecting a portion of the prediction results from the plurality of initial prediction results based on the scores; and providing the selected portion of the prediction results back to the prediction model for further prediction until a final prediction result is obtained. By scoring the prediction results generated by the prediction model with the discriminative model and optimizing the prediction model's outputs based on those scores, the accuracy of the state prediction can be improved.

Description

State prediction method and device for driving assistance, electronic equipment and vehicle
Technical Field
The present application relates generally to the field of advanced driver assistance systems (ADAS), and more particularly to a state prediction method, a state prediction apparatus, an electronic device, and a vehicle for driving assistance.
Background
In recent years, automated driving and advanced driver assistance systems (ADAS) have received extensive attention and intense research. An ADAS needs to sense various states of the vehicle itself and of the surrounding environment using vehicle-mounted sensors, collect data, identify, detect, and track static and dynamic entities, and perform systematic computation and analysis in combination with map data, so as to make driving policy decisions and ultimately realize an automated driving function.
In an autonomous driving scenario, dynamic prediction of entities in the environment is required. In this prediction task, a prediction model is typically employed to generate a probability distribution over one or more predicted outcomes for use by subsequent modules. However, because the driving environment is complex and diverse, a conventional prediction model cannot provide a complete constraint scheme, and its inherent inaccuracy may introduce errors into the prediction results. Such errors can accumulate and/or be amplified in subsequent processing steps, making the prediction results inaccurate and possibly even causing driving accidents.
Accordingly, there remains a need for improved state prediction schemes for driving assistance.
Disclosure of Invention
The present application is proposed to solve the above technical problems. Embodiments of the application provide a state prediction method, a state prediction apparatus, an electronic device, and a vehicle for driving assistance, in which the prediction results generated by a prediction model are scored by a discriminative model and the prediction model's outputs are optimized based on the scores, thereby improving the accuracy of state prediction.
According to an aspect of the present application, there is provided a state prediction method for driving assistance, including: acquiring an initial state quantity of a driving environment; generating a plurality of initial prediction results based on the initial state quantities using a prediction model; scoring the plurality of initial prediction results using a discriminative model; selecting a portion of the prediction results from the plurality of initial prediction results based on the scores; and providing the selected portion of the prediction results to the prediction model for further prediction until a final prediction result is obtained. The method may further comprise: expressing the structured state quantity of the entity to be predicted in the driving environment as unstructured data to serve as the initial state quantity; and extracting the structured state quantity of the entity to be predicted from the final prediction result.
In some examples, the method may further include deciding a driving strategy based on the final prediction.
In some examples, the entities to be predicted include one or more of vehicles, pedestrians, lane lines, road signs, buildings, road shoulders, roadside green belts, and road surface obstacles; the structured state quantities include one or more of position, velocity, acceleration, azimuth, contour line, contour bounding box, attribute, and category; and the unstructured data includes images.
In some examples, prior to generating a plurality of initial prediction results based on the initial state quantities using a prediction model, the method may further include training the discriminative model.
In some examples, training the discriminative model may include: providing a prediction result of the prediction model and a ground-truth value to the discriminative model as input; discriminating, using the discriminative model, whether the input is a prediction result or a ground-truth value; and optimizing parameters of the discriminative model to increase the probability of correct discrimination.
In some examples, prior to generating a plurality of initial prediction results based on the initial state quantities using a prediction model, the method may further comprise: performing adversarial training on the prediction model using the discriminative model.
In some examples, the adversarial training may include: generating prediction data based on training data using the prediction model; estimating, using the discriminative model, the probability that the prediction data is a ground-truth value; and optimizing parameters of the prediction model based on the estimated probability.
In some examples, training the discriminative model and adversarially training the prediction model may be performed alternately.
According to another aspect of the present application, there is provided a state prediction apparatus for driving assistance, including: an acquisition unit for acquiring an initial state quantity of a driving environment; a prediction unit for generating a plurality of initial prediction results based on the initial state quantities using a prediction model; a discrimination unit for scoring the plurality of initial prediction results using a discriminative model; and a selection unit for selecting a portion of the prediction results from the plurality of initial prediction results based on the scores and providing it to the prediction model for further prediction until a final prediction result is obtained. The state prediction apparatus for driving assistance further includes a conversion unit for expressing the structured state quantity of the entity to be predicted as unstructured data to serve as the initial state quantity for the prediction process, and an extraction unit for extracting the structured state quantity of the entity to be predicted from the final prediction result, which is unstructured data.
In some examples, the apparatus may further include: a decision unit for deciding a driving strategy based on the final prediction result.
In some examples, the apparatus may further include: a first training unit for training the discriminative model before the plurality of initial prediction results are generated based on the initial state quantities using the prediction model.
In some examples, the apparatus may further include: a second training unit for performing adversarial training on the prediction model using the discriminative model before the plurality of initial prediction results are generated based on the initial state quantities using the prediction model.
According to another aspect of the present application, there is provided an electronic apparatus, comprising: a processor; and a memory in which computer program instructions are stored, which, when executed by the processor, cause the processor to perform the above-described state prediction method for driving assistance.
According to another aspect of the present application, there is provided a vehicle including the above-described electronic apparatus.
According to another aspect of the present application, there is provided a computer readable medium having stored thereon computer program instructions which, when executed by a processor, cause the processor to perform the above-described state prediction method for driving assistance.
Compared with the prior art, the state prediction method, state prediction apparatus, electronic device, vehicle, and computer-readable medium for driving assistance according to the embodiments of the application can acquire an initial state quantity of the driving environment; generate a plurality of initial prediction results based on the initial state quantities using a prediction model; score the plurality of initial prediction results using a discriminative model; select a portion of the prediction results from the plurality of initial prediction results based on the scores; and provide the selected portion of the prediction results to the prediction model for further prediction until a final prediction result is obtained. The prediction results generated by the prediction model are thus scored by the discriminative model, and the prediction model's outputs are optimized based on the scores, so that the accuracy of state prediction is improved.
Drawings
The above and other objects, features and advantages of the present application will become more apparent by describing in more detail embodiments of the present application with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the principles of the application. In the drawings, like reference numbers generally represent like parts or steps.
Fig. 1 illustrates a schematic diagram of a system architecture to which a state prediction method for driving assistance according to an embodiment of the present application is applied.
Fig. 2 illustrates a flowchart of a state prediction method for driving assistance according to an embodiment of the present application.
Fig. 3 illustrates a schematic diagram of expressing structured state quantities as unstructured data according to an embodiment of the application.
Fig. 4 illustrates a schematic diagram of extracting structured state quantities from final predicted results according to an embodiment of the present application.
Fig. 5 illustrates a block diagram of a state prediction apparatus for driving assistance according to an embodiment of the present application.
FIG. 6 illustrates a block diagram of an electronic device in accordance with an embodiment of the present application.
Detailed Description
Hereinafter, example embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be understood that the described embodiments are only some embodiments of the present application and not all embodiments of the present application, and that the present application is not limited by the example embodiments described herein.
Summary of the application
As described above, in current autonomous driving scenarios, the diversity of driving environments means that a generative model used for dynamically predicting entities in the environment often cannot provide accurate prediction results. The resulting errors accumulate and/or are amplified in subsequent processing stages, leading to inaccurate predictions and possibly even driving accidents.
Existing prediction methods for autonomous driving scenarios are generative: typical approaches either give a prediction result directly according to a fixed formula, or learn the correspondence between historical information and the prediction output in a data-driven manner and then output the prediction result.
In such methods, optimizing the performance of the prediction model relies mainly on tuning model parameters before the generation process and on self-supervision based on the difference between the prediction result and the ground-truth value during model training. Once the generative model used for inference is fixed, no further improvement in generation accuracy can be made.
For prediction problems outside the autonomous driving scenario, it has been proposed to further improve the results of a generative model with the help of a discriminative model. One approach is to train the generative model adversarially against the discriminative model, improving the generative model's performance through the game played between the two models.
However, these methods are designed for predicting unstructured data, whereas the autonomous driving scenario requires precise generation of structured state quantities, which calls for additional processing.
In addition, post-screening the prediction results of a generative model can also improve prediction performance. For example, in the prediction of natural-language character sequences, post-screening methods such as beam search have been proposed. The scoring criteria used in such post-screening are generally derived from the generator itself (the single-step generation probability), without the aid of an external discriminative model.
In view of the above technical problems, a basic idea of the present application is to provide a state prediction method, a state prediction apparatus, an electronic device, a vehicle, and a computer-readable medium for driving assistance that generate a plurality of initial prediction results using a prediction model, score the plurality of initial prediction results using a discriminative model so as to select a portion of the prediction results, and provide that portion back to the prediction model for further prediction. The prediction results generated by the prediction model are thus scored by the discriminative model, and the prediction model's outputs are optimized based on the scores, so that the accuracy of state prediction is improved.
On the other hand, for structured data, the basic idea of the present application also includes expressing the structured data as unstructured data, so that an unstructured prediction model can be used to predict the structured data, and then extracting the predicted values of the structured data from the unstructured prediction result. In this way, more flexible predictions can be made for structured state quantities in complex driving environments.
Having described the general principles of the present application, various non-limiting embodiments of the present application will now be described with reference to the accompanying drawings.
Exemplary System
Fig. 1 illustrates a schematic diagram of a system architecture to which a state prediction method for driving assistance according to an embodiment of the present application is applied. As shown in Fig. 1, the system 100 includes a prediction model 110 and a discriminative model 120. The prediction model 110 receives initial state quantities of the driving environment; for example, the initial state quantities may be various state parameters of the current vehicle or of surrounding entities, such as position, speed, acceleration, and azimuth, obtained through various sensors. These state quantities are generally structured data, which can be constrained and predicted by, for example, fixed formulas. In a complex driving environment, however, the movement trends of these entities may change for various reasons, so accurate and comprehensive prediction is difficult to achieve with fixed formulas. Thus, in some embodiments, these initial state quantities are also expressed as unstructured data, e.g. image data, so that they can be predicted using an unstructured prediction model, e.g. an image prediction model. The prediction model 110, which may be, for example, the unstructured prediction model described above, generates a plurality of initial prediction results based on the received initial state quantities, and each result may have a corresponding probability. The discriminative model 120 receives the plurality of initial prediction results from the prediction model 110 and scores them; based on the scores, a portion of the prediction results is selected from the plurality of initial prediction results and returned to the prediction model 110. The prediction model 110 may then make further predictions based on the returned portion of the prediction results until a final prediction result is output.
Exemplary method
Fig. 2 illustrates a flowchart of a state prediction method 200 for driving assistance according to an embodiment of the present application. As shown in Fig. 2, the method 200 may include the following steps: step S210, acquiring an initial state quantity of a driving environment; step S220, generating a plurality of initial prediction results based on the initial state quantities using a prediction model; step S230, scoring the plurality of initial prediction results using a discriminative model; step S240, selecting a portion of the prediction results from the plurality of initial prediction results based on the scores; and step S250, providing the selected portion of the prediction results to the prediction model for further prediction until a final prediction result is obtained. These steps are described in detail below.
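To make the loop concrete, the following minimal Python sketch shows how steps S220 through S250 can be organized as an iterative predict-score-select cycle. The callables prediction_model, discriminative_model, and select_by_score, as well as the round and candidate counts, are hypothetical placeholders introduced here for illustration; they are not defined by the present application.

# Minimal sketch of the predict -> score -> select loop (steps S220-S250).
# prediction_model, discriminative_model and select_by_score are hypothetical
# callables standing in for trained models; they are not part of the patent text.

def iterative_state_prediction(initial_state, prediction_model,
                               discriminative_model, select_by_score,
                               num_rounds=5, num_candidates=8):
    """Roll the prediction forward, keeping only candidates that score well."""
    kept = [initial_state]                       # step S210: initial state quantity
    for _ in range(num_rounds):
        # Step S220: generate several candidate prediction results per kept state.
        candidates = [prediction_model(state) for state in kept
                      for _ in range(num_candidates)]
        # Step S230: score each candidate with the discriminative model,
        # e.g. as the probability that it resembles a ground-truth state.
        scores = [discriminative_model(candidate) for candidate in candidates]
        # Step S240: keep only a scored subset of the candidates.
        kept = select_by_score(candidates, scores)
        # Step S250: the kept candidates are fed back for further prediction.
    return kept                                  # final prediction result(s)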
The initial state quantity of the driving environment acquired in step S210 may be any of various state parameters of the current vehicle or of surrounding entities obtained through vehicle-mounted sensors. Here, the current vehicle and the surrounding entities may be collectively referred to as entities to be predicted, and the surrounding entities may include dynamic and static entities such as vehicles, pedestrians, lane lines, road signs, buildings, road shoulders, roadside green belts, road surface obstacles, and the like. Examples of the state parameters described herein include position, velocity, acceleration, azimuth, contour line, contour bounding box, attribute, category, and the like.
In some embodiments, the prediction model may be a structured prediction model, such as a motion model of the various entities, so that these state parameters can be predicted directly. In other embodiments, the prediction model may be an unstructured prediction model, such as an image prediction model. Benefiting from recent advances in neural networks and deep learning, image prediction technology has made great progress and is better suited to prediction tasks in complex driving scenes than conventional structured prediction models. Thus, in some embodiments, the structured state quantities may also be expressed as unstructured data, e.g. image data, which serves as the initial state quantity used by the image prediction model.
Fig. 3 illustrates a schematic diagram of expressing structured state quantities as unstructured data (here, image data) according to an embodiment of the present application. Fig. 3 shows only an example of expressing one known set of structured state quantities as one frame of image data; in practice, multiple known sets may be expressed as multiple frames, which form an image sequence used for prediction. It should also be noted that each structured state quantity may correspond to one image channel, also referred to as an image layer, so that the known values of multiple structured state quantities at the same time instant can be expressed as multiple layers of a single image frame.
Fig. 3 shows an example in which three state quantities, position, velocity, and azimuth, of two entities are expressed as image data, corresponding respectively to the position layer 10a, the velocity layer 10b, and the azimuth layer 10c of one image frame. In the position layer 10a, the pixels 11a and 12a represent the positions of the two entities and may take the value 1, while all other pixels in the position layer 10a may be zero. In the velocity layer 10b, the pixels 11b and 12b, corresponding to the pixels 11a and 12a respectively, represent the velocities of the two entities, and their values may be the velocity values of the two entities; all other pixels in the velocity layer 10b may be zero. In the azimuth layer 10c, the pixels 11c and 12c, corresponding to the pixels 11a and 12a respectively, represent the azimuth angles of the two entities, and their values may be the azimuth values of the two entities; all other pixels in the azimuth layer 10c may be zero.
It should be understood that Fig. 3 shows only an example; other structured state quantities may also be expressed as unstructured data according to the principles illustrated in Fig. 3.
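As a concrete illustration of the encoding in Fig. 3, the sketch below packs the position, velocity, and azimuth of a set of entities into the three layers of one image frame. The grid size, the quantization of coordinates to pixels, and the entity dictionary fields are assumptions made for this example only.

import numpy as np

def states_to_frame(entities, grid_size=64, cell_size=1.0):
    """Encode structured entity states as a (3, grid, grid) frame:
    channel 0 = position layer, 1 = velocity layer, 2 = azimuth layer."""
    frame = np.zeros((3, grid_size, grid_size), dtype=np.float32)
    for entity in entities:  # each entity: {'x', 'y', 'speed', 'azimuth'}
        col = int(entity["x"] / cell_size) % grid_size
        row = int(entity["y"] / cell_size) % grid_size
        frame[0, row, col] = 1.0                 # position layer: 1 at the entity pixel
        frame[1, row, col] = entity["speed"]     # velocity layer: speed at the same pixel
        frame[2, row, col] = entity["azimuth"]   # azimuth layer: azimuth at the same pixel
    return frame

# Known values at several consecutive time steps can be stacked into an image
# sequence for the image prediction model:
# sequence = np.stack([states_to_frame(observation) for observation in history])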
Next, referring back to Fig. 2, in step S220, a plurality of initial prediction results are generated based on the initial state quantities using the prediction model. Any suitable prediction model may be used, whether existing or developed in the future, and the prediction model should be suited to the initial state quantities to be predicted. For example, if the initial state quantity is in the form of a structured state quantity, a structured prediction model that gives the prediction result according to a fixed formula or model may be used; if the initial state quantity has been expressed in unstructured form, e.g. as an image, the prediction can be made using an unstructured prediction model, e.g. an image prediction model. The image prediction model may be trained by learning the correspondence between historical information and the prediction output in a data-driven manner, as described in detail below.
Then, in step S230, the plurality of initial prediction results generated by the prediction model may be scored using the discriminative model. Note that when a prediction model generates multiple possible prediction results, it generally also produces the probabilities of those results. However, because of the inherent inaccuracy of the prediction model itself, these predictions and their probabilities may differ from the actual possibilities. Therefore, a discriminative model is also employed to score the initial prediction results generated by the prediction model. Both the prediction model and the discriminative model may be trained models, and adversarial training may be employed to improve their accuracy, as described in detail below. In step S230, the score that the discriminative model assigns to each initial prediction result may be a value in the range 0 to 1, or it may be a parameterized score value.
Next, in step S240, a portion of the prediction results may be selected from the plurality of initial prediction results provided by the prediction model, based on the scores from the discriminative model. For example, the initial prediction results may be ranked by score and the one or several highest-scoring results may be selected. In other embodiments, e.g. to preserve diversity among the prediction results, the selection rule need not follow the score ranking strictly: the initial prediction results may instead be sampled or averaged according to their scores to output one or several of them.
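A small sketch of step S240 under these two selection rules is given below: strict ranking by score, or sampling in proportion to a softmax of the scores to preserve diversity. The temperature parameter and the fixed number k of kept results are illustrative assumptions, not requirements of the present application.

import numpy as np

def select_by_score(candidates, scores, k=3, sample=False, temperature=1.0):
    """Keep k of the scored candidates, either the top-k or a score-weighted sample."""
    scores = np.asarray(scores, dtype=np.float64)
    if not sample:
        chosen = np.argsort(scores)[::-1][:k]       # strict ranking by score
    else:
        logits = scores / temperature               # softmax over the scores
        weights = np.exp(logits - logits.max())
        weights /= weights.sum()
        chosen = np.random.choice(len(candidates), size=k, replace=False, p=weights)
    return [candidates[i] for i in chosen]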
The selected portion of the initial prediction results is then provided to the prediction model for further prediction, until a final prediction result is obtained in step S250. In this way, for the prediction problem in autonomous driving scenarios, the output of the prediction model is post-screened according to the scores of the discriminative model, improving prediction performance.
Additionally, in some embodiments, when the prediction and the post-screening by the discriminative model are performed on unstructured data, the method 200 may further include extracting the desired structured data from the unstructured prediction result. Fig. 4 illustrates this extraction process. As shown in Fig. 4, the structured state quantities of the entities to be predicted can be extracted from the unstructured final prediction result, i.e. the predicted image frame, by an estimator 13. The estimator 13 may output, for each image position, the probability that an entity is present there and, if so, the value of the target state quantity. In the example of Fig. 4, the predicted image frame again comprises a position layer 10a, a velocity layer 10b, and an azimuth layer 10c. The estimator 13 may detect the pixels 11a and 12a in the position layer 10a, which represent the positions of the two entities, and extract the predicted velocity and azimuth values of the two entities from the corresponding pixels of the velocity layer 10b and the azimuth layer 10c. The estimator 13 may be trained in a data-driven manner; during training of the prediction model, the estimator is supervised with the ground-truth target state quantities of the predicted entities.
In addition, in some simple implementations, when the unstructured prediction result cleanly decouples the representation of each predicted entity and its predicted target state quantities, the target state quantities can be estimated by rule-based computation over the channels and regions. For example, if the value regions corresponding to the target entities do not overlap in the image and can be separated from one another, and the predicted state quantities of the entities are encoded independently in different channels, the average of the values within an entity's region in the corresponding channel can be used as the estimate of that state quantity.
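The rule-based extraction described above can be sketched as follows: connected regions in the position channel are treated as entities, and each entity's velocity and azimuth are taken as the channel averages over its region. The detection threshold and the use of scipy.ndimage.label for connected components are illustrative assumptions, not requirements of the present application.

import numpy as np
from scipy import ndimage  # used here only for connected-component labelling

def extract_states(frame, threshold=0.5):
    """frame: (3, H, W) array with channels [position, velocity, azimuth]."""
    position, velocity, azimuth = frame
    labels, num_entities = ndimage.label(position > threshold)  # split entity regions
    entities = []
    for region_id in range(1, num_entities + 1):
        mask = labels == region_id
        rows, cols = np.nonzero(mask)
        entities.append({
            "position": (rows.mean(), cols.mean()),   # centroid of the entity region
            "speed": velocity[mask].mean(),           # channel average over the region
            "azimuth": azimuth[mask].mean(),
        })
    return entities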
After obtaining the predicted value of the state quantity, in some embodiments, the predicted value may also be provided to a driver assistance system of the vehicle to decide an appropriate driving strategy based on the prediction result.
As mentioned above, before prediction with the prediction model and post-screening with the discriminative model, the method may further include training the prediction model and the discriminative model. In some embodiments, this training may be adversarial, with the discriminative model and the prediction model trained alternately or simultaneously.
Training the discriminative model may include providing a prediction result of the prediction model and a ground-truth value to the discriminative model as input, discriminating with the discriminative model whether the input is a prediction result or a ground-truth value, and optimizing the parameters of the discriminative model to increase the probability of correct discrimination.
Specifically, prediction results of the prediction model and ground-truth values from the training data set may be provided as inputs to the discriminative model, which outputs a score according to some scoring criterion; the discriminative model is then trained in a data-driven manner to score the prediction results. For example, ground-truth values may be assigned a target output of 1 and prediction results of the prediction model a target output of 0, and the discriminative model may be trained as a binary classifier with a binary logistic loss. The training target of the discriminative model can then be interpreted as the probability that its input is a ground-truth value. Alternatively, the target output of the discriminative model need not be a value in the range 0 to 1; a parameterized scoring model may instead measure the error between the prediction result and the ground-truth value and use that error as the training target. Through such training, the probability of correct discrimination by the discriminative model is increased.
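For illustration, one discriminative-model update with a binary logistic loss might look like the following PyTorch-style sketch, where ground-truth frames are given the target 1 and predicted frames the target 0. The framework choice and the model interfaces are assumptions made for this example only.

import torch
import torch.nn.functional as F

def train_discriminator_step(discriminator, optimizer, predicted_frames, true_frames):
    """One gradient step that increases the probability of correct discrimination."""
    optimizer.zero_grad()
    logits_true = discriminator(true_frames)                 # target: 1 (ground truth)
    logits_pred = discriminator(predicted_frames.detach())   # target: 0 (prediction)
    loss = (F.binary_cross_entropy_with_logits(logits_true, torch.ones_like(logits_true))
            + F.binary_cross_entropy_with_logits(logits_pred, torch.zeros_like(logits_pred)))
    loss.backward()
    optimizer.step()
    return loss.item()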
The prediction model may then be trained adversarially using the discriminative model. Specifically, the adversarial training of the prediction model may include generating prediction data based on training data using the prediction model, estimating with the discriminative model the probability that the prediction data is a ground-truth value, and optimizing the parameters of the prediction model based on the estimated probability. In some examples, the training cost of the prediction model may include a term that is inversely related to the output of the discriminative model, so that minimizing this cost term during training drives the prediction model to modify its output so as to raise the discriminative model's score. Through such adversarial training, the accuracy of the prediction model can be improved. In some embodiments, the training of the discriminative model and the adversarial training of the prediction model are alternated, continuously improving the accuracy of both models until a mutually balanced state is reached.
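Correspondingly, one adversarial update of the prediction model might be sketched as below: its cost combines a supervised term against the ground-truth future frames with a term that decreases as the discriminative model's score of its output increases. The loss weighting lambda_adv and the mean-squared supervised term are assumptions introduced for this example.

import torch
import torch.nn.functional as F

def train_predictor_step(predictor, discriminator, optimizer,
                         history_frames, true_future_frames, lambda_adv=0.1):
    """One gradient step of adversarial training for the prediction model."""
    optimizer.zero_grad()
    predicted_frames = predictor(history_frames)
    # Supervised term: difference between prediction result and ground truth.
    supervised_loss = F.mse_loss(predicted_frames, true_future_frames)
    # Adversarial term: inversely related to the discriminative model's output;
    # minimizing it pushes the discriminator toward scoring the prediction as real.
    logits = discriminator(predicted_frames)
    adversarial_loss = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    loss = supervised_loss + lambda_adv * adversarial_loss
    loss.backward()
    optimizer.step()
    return loss.item()

# The discriminator step and this predictor step can be alternated until the
# two models reach a balanced state.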
Furthermore, in some embodiments, the discriminative model may not require training. For example, in the case of direct prediction of the structured state quantities, the discriminative model may be a deterministic parameterized formula, which needs no training.
Exemplary devices
Fig. 5 illustrates a functional block diagram of a state prediction apparatus 500 for driving assistance according to an embodiment of the present application. As shown in fig. 5, a state prediction apparatus 500 for driving assistance according to an embodiment of the present application may include an acquisition unit 510, a prediction unit 520, a discrimination unit 530, a selection unit 540, and an optional decision unit 550.
The acquisition unit 510 may be used to acquire an initial state quantity of the driving environment, for example various state quantities of the current vehicle or of surrounding entities acquired by vehicle-mounted sensors. In some embodiments, these initial state quantities may also be expressed in the form of unstructured data. Although not shown, in some examples the state prediction apparatus 500 for driving assistance may further include a conversion unit for expressing the structured state quantities of the entities to be predicted as unstructured data, to serve as the initial state quantity used in the prediction process, and an extraction unit for extracting the structured state quantities of the entities to be predicted from the final prediction result, which is unstructured data.
The prediction unit 520 may be configured to generate a plurality of initial prediction results based on the initial state quantities using a prediction model. The discrimination unit 530 may be configured to score the plurality of initial prediction results using a discriminative model. Based on the scores, the selection unit 540 may select a portion of the prediction results from the plurality of initial prediction results and provide it to the prediction model, so that the prediction model can make further predictions until a final prediction result is obtained. The decision unit 550 may use the final prediction result to decide a corresponding driving strategy.
Although not shown, in some examples the state prediction apparatus 500 for driving assistance may further include one or more training units. For example, a first training unit may be used to train the discriminative model, and a second training unit may be used to adversarially train the prediction model using the discriminative model. As described above, the training of the discriminative model and the prediction model may be performed alternately or simultaneously.
Here, it may be understood by those skilled in the art that the specific functions and operations of the respective units and modules in the driving-assistance state prediction apparatus 500 described above have been described in detail in the driving-assistance state prediction method described above with reference to fig. 1 to 4, and thus, a repetitive description thereof will be omitted.
As described above, the state prediction apparatus 500 for driving assistance according to the embodiment of the present application can be implemented in various terminal devices, for example, in-vehicle devices for driving assistance. In one example, the state prediction apparatus 500 for driving assistance according to the embodiment of the present application may be integrated into the terminal device as a software module and/or a hardware module. For example, the apparatus 500 may be a software module in an operating system of the terminal device, or may be an application program developed for the terminal device, which runs on a CPU (central processing unit) and/or a GPU (graphics processing unit), or runs on a dedicated hardware acceleration chip, such as a dedicated chip adapted to run a deep neural network; of course, the apparatus 500 may also be one of many hardware modules of the terminal device.
Alternatively, in another example, the driving assistance state prediction apparatus 500 and the terminal device may be separate devices, and the apparatus 500 may be connected to the terminal device through a wired and/or wireless network and transmit the interactive information according to an agreed data format.
Exemplary electronic device
Next, an electronic apparatus according to an embodiment of the present application is described with reference to fig. 6.
Fig. 6 illustrates a block diagram of an electronic device 600 according to an embodiment of the present application. As shown in fig. 6, the electronic device 600 includes one or more processors 610 and memory 620. The processor 610 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 600 to perform desired functions.
Memory 620 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 610 to implement the state prediction method for driving assistance of the various embodiments of the present application described above and/or other desired functions. Various contents such as an initial state quantity, an initial prediction result, a score, etc. may also be stored in the computer-readable storage medium.
In one example, the electronic device 600 may further include: an input device 630 and an output device 640, which are interconnected by a bus system and/or other form of connection mechanism (not shown).
For example, the input device 630 may include various input interfaces, which may be connected to appropriate external devices to receive inputs thereof. For example, the input device 630 may be connected to an onboard sensor or processing unit associated therewith to receive various state quantity data. Such onboard sensors may include cameras, lidar, ultrasonic radar, inertial measurement units, chassis odometers, and the like.
The output device 640 may output, for example, the prediction results to the outside, including the structured data form of the final prediction result, which may be used, for example, by a driver assistance system to make appropriate driving strategy decisions.
Of course, for simplicity, only some of the components of the electronic device 600 relevant to the present application are shown in fig. 6, and many other necessary components are omitted. In addition, electronic device 600 may include any other suitable components depending on the particular application.
It should be understood that the electronic device 600 described above may be implemented on, for example, a vehicle equipped with a driving assistance system. Accordingly, in some embodiments, a vehicle including the electronic device 600 is also provided.
Exemplary computer program product and computer-readable storage Medium
In addition to the above-described methods and apparatus, embodiments of the present application may also be a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform the steps in a state prediction method for driving assistance according to various embodiments of the present application described in the "exemplary methods" section of this specification, supra.
The computer program product may be written with program code for performing the operations of embodiments of the present application in any combination of one or more programming languages, including an object-oriented programming language such as Java, C++, or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present application may also be a computer-readable storage medium having stored thereon computer program instructions that, when executed by a processor, cause the processor to perform the steps in a state prediction method for assisted driving according to various embodiments of the present application described in the "exemplary methods" section above in this specification.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing describes the general principles of the present application in conjunction with specific embodiments, however, it is noted that the advantages, effects, etc. mentioned in the present application are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present application. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the foregoing disclosure is not intended to be exhaustive or to limit the disclosure to the precise details disclosed.
The block diagrams of devices, apparatuses, and systems referred to in this application are given only as illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. These devices, apparatuses, and systems may be connected, arranged, and configured in any manner, as will be appreciated by those skilled in the art. Words such as "including," "comprising," "having," and the like are open-ended words that mean "including, but not limited to," and are used interchangeably therewith. The words "or" and "and" as used herein mean, and are used interchangeably with, the word "and/or," unless the context clearly dictates otherwise. The word "such as" is used herein to mean, and is used interchangeably with, the phrase "such as but not limited to".
It should also be noted that in the devices, apparatuses, and methods of the present application, the components or steps may be decomposed and/or recombined. These decompositions and/or recombinations are to be considered as equivalents of the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, the description is not intended to limit embodiments of the application to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (15)

1. A state prediction method for driving assistance, comprising:
acquiring an initial state quantity of a driving environment;
generating a plurality of initial prediction results based on the initial state quantities using a prediction model;
scoring the plurality of initial prediction results using a discriminative model;
selecting a portion of the prediction results from the plurality of initial prediction results based on the score; and
providing the selected portion of the predicted outcome to the predictive model for further prediction based thereon until a final predicted outcome is obtained,
wherein the state prediction method for driving assistance further includes:
expressing the structured state quantity of the entity to be predicted in the driving environment as unstructured data to serve as the initial state quantity; and
extracting the structured state quantity of the entity to be predicted from the final prediction result.
2. The method of claim 1, further comprising:
deciding a driving strategy based on the final prediction result.
3. The method of claim 1, wherein,
the entity to be predicted comprises one or more of vehicles, pedestrians, lane lines, road signs, buildings, road shoulders, roadside green belts, and road surface obstacles,
the structured state quantity comprises one or more of position, velocity, acceleration, azimuth, contour line, contour bounding box, attribute, and category, and
the unstructured data comprises an image.
4. The method of claim 1, wherein prior to generating a plurality of initial prediction results based on the initial state quantities using a prediction model, further comprising: training the discriminative model.
5. The method of claim 4, wherein training the discriminative model comprises:
providing a prediction result of the prediction model and a ground-truth value to the discriminative model as input;
discriminating, using the discriminative model, whether the input is a prediction result or a ground-truth value; and
optimizing parameters of the discriminative model to increase the probability of correct discrimination.
6. The method of claim 4, wherein prior to generating a plurality of initial prediction results based on the initial state quantities using a prediction model, further comprising: performing adversarial training on the prediction model using the discriminative model.
7. The method of claim 6, wherein the adversarial training comprises:
generating prediction data based on training data using the prediction model;
estimating, using the discriminative model, a probability that the prediction data is a ground-truth value; and
optimizing parameters of the prediction model based on the estimated probability.
8. The method of claim 6, wherein training the discriminative model alternates with adversarially training the prediction model.
9. A state prediction apparatus for driving assistance, comprising:
an acquisition unit configured to acquire an initial state quantity of a driving environment;
a prediction unit for generating a plurality of initial prediction results based on the initial state quantities using a prediction model;
a discrimination unit for scoring the plurality of initial prediction results using a discriminative model;
a selection unit for selecting a part of the prediction results from the plurality of initial prediction results based on the score to provide to the prediction model for further prediction until a final prediction result is obtained,
wherein the state prediction apparatus for driving assistance further includes:
a conversion unit for expressing the structured state quantity of the entity to be predicted as unstructured data to serve as the initial state quantity used in the prediction process; and
an extraction unit for extracting the structured state quantity of the entity to be predicted from the final prediction result, which is unstructured data.
10. The apparatus of claim 9, further comprising:
a decision unit for deciding a driving strategy based on the final prediction result.
11. The apparatus of claim 9, further comprising:
a first training unit for training the discriminative model before the plurality of initial prediction results are generated based on the initial state quantities using the prediction model.
12. The apparatus of claim 11, further comprising:
a second training unit for performing adversarial training on the prediction model using the discriminative model before the plurality of initial prediction results are generated based on the initial state quantities using the prediction model.
13. An electronic device, comprising:
a processor; and
a memory having stored therein computer program instructions which, when executed by the processor, cause the processor to carry out a state prediction method for driving assistance as claimed in any one of claims 1-8.
14. A vehicle comprising the electronic device of claim 13.
15. A computer-readable medium having stored thereon computer program instructions which, when executed by a processor, cause the processor to carry out a state prediction method for driving assistance as claimed in any one of claims 1-8.
CN201810749440.5A 2018-07-10 2018-07-10 State prediction method and device for driving assistance, electronic equipment and vehicle Active CN108944945B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810749440.5A CN108944945B (en) 2018-07-10 2018-07-10 State prediction method and device for driving assistance, electronic equipment and vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810749440.5A CN108944945B (en) 2018-07-10 2018-07-10 State prediction method and device for driving assistance, electronic equipment and vehicle

Publications (2)

Publication Number Publication Date
CN108944945A CN108944945A (en) 2018-12-07
CN108944945B true CN108944945B (en) 2020-03-20

Family

ID=64482475

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810749440.5A Active CN108944945B (en) 2018-07-10 2018-07-10 State prediction method and device for driving assistance, electronic equipment and vehicle

Country Status (1)

Country Link
CN (1) CN108944945B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110991095B (en) * 2020-03-05 2020-07-03 北京三快在线科技有限公司 Training method and device for vehicle driving decision model

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104200267A (en) * 2014-09-23 2014-12-10 清华大学 Vehicle driving economy evaluation system and vehicle driving economy evaluation method
CN105143827A (en) * 2013-04-26 2015-12-09 罗伯特·博世有限公司 Method and apparatus for selecting a route to be travelled by a vehicle
CN105678077A (en) * 2016-01-07 2016-06-15 北京北交新能科技有限公司 Online prediction method of power performance of lithium ion battery for hybrid power vehicle
US9900747B1 (en) * 2017-05-16 2018-02-20 Cambridge Mobile Telematics, Inc. Using telematics data to identify a type of a trip

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10268200B2 (en) * 2016-12-21 2019-04-23 Baidu Usa Llc Method and system to predict one or more trajectories of a vehicle based on context surrounding the vehicle

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105143827A (en) * 2013-04-26 2015-12-09 罗伯特·博世有限公司 Method and apparatus for selecting a route to be travelled by a vehicle
CN104200267A (en) * 2014-09-23 2014-12-10 清华大学 Vehicle driving economy evaluation system and vehicle driving economy evaluation method
CN105678077A (en) * 2016-01-07 2016-06-15 北京北交新能科技有限公司 Online prediction method of power performance of lithium ion battery for hybrid power vehicle
US9900747B1 (en) * 2017-05-16 2018-02-20 Cambridge Mobile Telematics, Inc. Using telematics data to identify a type of a trip

Also Published As

Publication number Publication date
CN108944945A (en) 2018-12-07

Similar Documents

Publication Publication Date Title
US11360477B2 (en) Trajectory generation using temporal logic and tree search
US11216674B2 (en) Neural networks for object detection and characterization
US11783568B2 (en) Object classification using extra-regional context
JP2022516288A (en) Hierarchical machine learning network architecture
EP4066171A1 (en) Vehicle intent prediction neural network
US20210134002A1 (en) Variational 3d object detection
CN114061581A (en) Ranking agents in proximity to autonomous vehicles by mutual importance
US20240149906A1 (en) Agent trajectory prediction using target locations
EP4060626A1 (en) Agent trajectory prediction using context-sensitive fusion
US20210150349A1 (en) Multi object tracking using memory attention
CN108944945B (en) State prediction method and device for driving assistance, electronic equipment and vehicle
US20230082079A1 (en) Training agent trajectory prediction neural networks using distillation
Villagra et al. Motion prediction and risk assessment
CN116343169A (en) Path planning method, target object motion control device and electronic equipment
KR102602271B1 (en) Method and apparatus for determining the possibility of collision of a driving vehicle using an artificial neural network
CN108960160A (en) The method and apparatus of structural state amount are predicted based on unstructured prediction model
Dey et al. Machine learning based perception architecture design for semi-autonomous vehicles
CN113963027B (en) Uncertainty detection model training method and device, and uncertainty detection method and device
US20240051557A1 (en) Perception fields for autonomous driving
US20220155096A1 (en) Processing sparse top-down input representations of an environment using neural networks
US20240005794A1 (en) Adaptive perception affected by V2X signal
US20230365155A1 (en) Virtual fields driving related operations
US20230365156A1 (en) Virtual fields driving related operations
Dey et al. Machine Learning for Efficient Perception in Automotive Cyber-Physical Systems
CN112859849A (en) Crossing motion planning method and device of automatic driving equipment and electronic equipment

Legal Events

Code: Description
PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant