CN109711557B - Driving track prediction method, computer equipment and storage medium - Google Patents


Info

Publication number
CN109711557B
Authority
CN
China
Prior art keywords
driving
driver
state
vehicle
data
Prior art date
2018-12-28
Legal status
Active
Application number
CN201811653153.0A
Other languages
Chinese (zh)
Other versions
CN109711557A (en)
Inventor
周扬
王栋
崔静
Current Assignee
Dragon Totem Technology Hefei Co ltd
Original Assignee
Xian Aeronautical University
Priority date
2018-12-28
Filing date
2018-12-28
Application filed by Xian Aeronautical University
Priority to CN201811653153.0A
Publication of CN109711557A
Application granted
Publication of CN109711557B


Abstract

The invention relates to the field of driving management, and in particular to a driving track prediction method, computer equipment and a storage medium. The method comprises the following steps: acquiring a driver image and vehicle driving data; identifying the driver image through a driver state recognition model to obtain the driver's current driving state data; processing the driver's current driving state data through a driving strategy prediction model to obtain driving strategy data for the driver's current state; and processing the current vehicle state data and the driving strategy data for the driver's current state through a driving track prediction model to obtain future driving track information adapted to the driver's current state. The invention trains the driver state recognition model by transfer learning, achieving high recognition accuracy and strong generalization capability; the driving tracks of the driver in different states are predicted through the corresponding driving strategies, making the track prediction more accurate.

Description

Driving track prediction method, computer equipment and storage medium
Technical Field
The present invention relates to the field of vehicle management, and in particular, to a vehicle trajectory prediction method, a computer device, and a storage medium.
Background
With the popularization of mobile phones and the widespread installation of in-vehicle entertainment equipment such as electronic screens and audio systems, driver distraction during driving has become more common, and the traffic safety problems it causes have become more prominent.
Existing visual-distraction prevention and control methods mainly use machine learning to build a driver state classifier that judges the driver's current state. When a visually distracted state is detected, a voice warning is issued to help the driver regain concentration. However, visual distraction during driving does not necessarily lead to a traffic accident; sometimes the distraction is caused by situations such as traffic congestion or a very low vehicle speed, and automatic interventions such as braking may even aggravate the congestion. Misjudgments still occur in the prior art, and reminding the driver on the basis of a misjudgment can affect the driver's mood.
Therefore, the prior art still has many problems in preventing driving in a visually distracted state and needs to be improved.
Disclosure of Invention
In view of the above, it is desirable to provide a driving trajectory prediction method, a computer device and a storage medium.
In one embodiment, the present invention provides a driving trajectory prediction method, which comprises the following steps:
acquiring a driver image and vehicle driving data, wherein the vehicle driving data at least comprises current vehicle state data;
identifying the driver image through a driver state identification model to obtain current driving state data of a driver;
processing the current driving state data of the driver through a driving strategy prediction model to obtain the driving strategy data of the current state of the driver;
and processing the current vehicle state data and the current state driving strategy data of the driver through a driving track prediction model to obtain future driving track information adaptive to the current state of the driver.
In one embodiment, the present invention provides a driving trajectory prediction system, comprising:
the system comprises a driver image acquisition device, a vehicle driving data acquisition device and a controller, wherein the driver image acquisition device is used for acquiring a driver image when the vehicle runs and sending the driver image to the controller;
the vehicle driving data acquisition device is used for acquiring driving data of a vehicle and sending the driving data of the vehicle to the controller;
the controller is used for receiving the driver image and the vehicle running data, analyzing the driver image and the vehicle running data and predicting the running track of the vehicle.
In one embodiment, the present invention also provides a computer apparatus comprising: a memory and a processor, wherein the memory stores a computer program, and when the computer program is executed by the processor, the processor is caused to execute the method for predicting the driving trajectory according to the above embodiment.
In one embodiment, the present invention further provides a storage medium, where a computer program is stored, and when executed by a processor, the computer program causes the processor to execute a driving trajectory prediction method according to the above embodiment.
According to the driving track prediction method, system, computer equipment and storage medium, the driver state recognition model is trained by transfer learning, so the resulting model inherits the knowledge of a model pre-trained on a large-scale data set, giving high recognition accuracy and strong generalization capability; the driving strategies of the driver in different states are solved separately using maximum entropy inverse reinforcement learning; and the driving tracks of the driver in different states are predicted through these driving strategies, so that the predicted tracks are more accurate.
Drawings
FIG. 1 is a diagram of an application environment of a driving trajectory prediction method provided in an embodiment;
FIG. 2 is a diagram illustrating steps of a method for predicting a driving trajectory in an embodiment;
FIG. 3 is a schematic structural diagram of a trajectory prediction system provided in an embodiment;
fig. 4 is a schematic diagram of an internal structure of a computer device provided in an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
It will be understood that, as used herein, the terms "first," "second," and the like may be used herein to describe various elements, but these elements are not limited by these terms unless otherwise specified. These terms are only used to distinguish one element from another. For example, a first xx unit may be referred to as a second xx unit, and similarly, a second xx unit may be referred to as a first xx unit, without departing from the scope of the present application.
Fig. 1 is a diagram of an application environment of the driving track prediction method provided in an embodiment. As shown in fig. 1, the application environment includes a driver image acquisition device 110, a vehicle driving data acquisition device 120, and a controller 130.
The driver image capturing device 110 may be a fixed camera mounted on the vehicle, or may be an intelligent camera device such as a mobile phone and a tablet computer with a camera function.
The vehicle driving data collection device 120 is used for collecting various data during the driving process of the vehicle, such as data of a vehicle speed, an acceleration, and a driving direction of the vehicle, the vehicle driving data collection device 120 includes, but is not limited to, a speed sensor, an acceleration sensor, and a direction sensor, and of course, the vehicle driving data collection device may directly read the vehicle driving data in the central control of the vehicle.
The controller 130 is a central controller of the vehicle, and may be an independent physical server or terminal, or a server cluster formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as a cloud server, a cloud database, a cloud storage, and a Content Delivery Network (CDN).
Fig. 2 is a step diagram illustrating a method for predicting a driving trajectory of a vehicle according to an embodiment of the present invention, which is described in detail below with reference to the controller 130 as a main body.
In step S201, a driver image and vehicle travel data including at least vehicle current state data are acquired.
In the embodiment of the invention, the driver image is a face image when a driver is in a driving position when the driver drives a vehicle, and at least comprises clear face and upper half body limb information of the driver; the vehicle driving data refers to all data related to the driving and the vehicle state of the vehicle during the driving process of the vehicle, and in the embodiment of the present invention, at least the current vehicle state data should be included, wherein the current vehicle state data refers to the driving data of the vehicle when the driver image is captured, and includes, but is not limited to, the vehicle speed, the acceleration, and the driving direction of the vehicle.
According to the embodiment of the invention, the image of the driver while driving and the vehicle driving data at the moment corresponding to the image are acquired, wherein the vehicle driving data include but are not limited to the speed, the acceleration and the driving direction of the vehicle, and the moment of the driver image corresponds to the moment of the vehicle driving data, so that the accuracy and real-time performance of the subsequent track prediction are ensured.
In step S202, the driver image is recognized by the driver state recognition model, and the current driving state data of the driver is obtained.
In the embodiment of the invention, the driver state recognition model is a deep neural network model obtained by offline learning on a data set of historical driving pictures of drivers: the deep neural network model learns from the historical driving pictures to recognize the historical driving state corresponding to each picture, and can finally recognize the driver's current driving state from the current driving picture. The drivers need not be the same person; that is, the deep neural network model is able to recognize the driving states of different drivers from their driving images. The driver's current driving state data refers to the driver's current driving state, including but not limited to an attentive driving state and a visually distracted driving state.
As an embodiment of the present invention, after the driver image is acquired, it is input to the driver state recognition model. As a preferred embodiment, the driver state recognition model is stored in the central controller of the vehicle and the driver image acquisition device is a camera mounted on the vehicle; that is, after the camera acquires the driver image, the image is input to the central controller, which recognizes the driving state of the driver by running the driver state recognition model, obtaining the driver's current driving state data. As a further preferred embodiment, after the central controller recognizes the driving state, the current driving state of the driver may be output or not output as required; for example, a passenger riding in the rear seat of a taxi can learn the driver's state from the prompt of the central controller, which helps ensure riding safety.
According to the embodiment of the invention, the driver image is identified through the driver state identification model to obtain the current driving state of the driver, and the driver state identification model adopts the deep neural network model to ensure the accuracy of the driver image identification.
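To make the recognition step concrete, the following is a minimal, hypothetical sketch of how the controller might run such a model on a captured frame, assuming the trained recognizer has been exported as a Keras model with a two-class softmax output; the file name, class order and helper name are illustrative and not specified by the patent.

```python
# Hypothetical inference sketch: classify a driver image as attentive vs. visually distracted.
# Model path, class order and input size are assumptions, not taken from the patent.
import numpy as np
import tensorflow as tf

CLASS_NAMES = ["attentive_driving", "visual_distraction"]  # assumed label order

def recognize_driver_state(model: tf.keras.Model, frame: np.ndarray) -> str:
    """frame: HxWx3 RGB image captured by the in-vehicle camera."""
    x = tf.image.resize(tf.cast(frame, tf.float32), (224, 224))      # VGG-16 input resolution
    x = tf.keras.applications.vgg16.preprocess_input(x)
    probs = model(tf.expand_dims(x, 0), training=False).numpy()[0]   # softmax over 2 classes
    return CLASS_NAMES[int(np.argmax(probs))]

# model = tf.keras.models.load_model("driver_state_model.h5")  # stored in the controller
# state = recognize_driver_state(model, camera_frame)
```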
In step S203, the driving strategy prediction model is used to process the current driving state data of the driver, so as to obtain the driving strategy data of the current state of the driver.
In the embodiment of the present invention, the driving strategy prediction model is a model capable of predicting the future driving action of the driver on the vehicle according to the driving state of the driver and the current driving state of the vehicle, and is obtained by learning the historical driving track of the driver.
According to the embodiment of the invention, driving track data collected in the attentive driving state and the visually distracted state of the driver are stored in the controller and used to train the driver's driving strategy model offline, and the driving strategy model can then predict the driver's driving strategy according to the driver's driving state. As a preferred embodiment of the present invention, when the controller detects that the driving state of the driver corresponding to the driver image is the attentive driving state, the controller calls the driving strategy prediction model for the attentive driving state to predict the driver's driving strategy; and when the driving state of the driver in the driver image is detected to be visually distracted driving, the controller calls the driving strategy prediction model for the visually distracted state to predict the driver's driving strategy.
According to the embodiment of the invention, two driving states of the driver are separately identified, and different driving states adopt different driving strategy prediction models, so that the driving strategies of the driver in different states can be accurately predicted, and the accuracy of driving strategy prediction is ensured.
In step S204, the current vehicle state data and the current driver state driving strategy data are processed through a driving trajectory prediction model to obtain future driving trajectory information adapted to the current driver state.
In the embodiment of the invention, the driving track prediction model refers to a model capable of predicting the future driving track of the vehicle according to the current driving state of the vehicle and the driving strategy model of the driver, and the invention adopts an iterative model, namely, the next state of the vehicle is predicted according to the previous state of the vehicle and the driving strategy model of the driver in the previous state. The trajectory information includes at least data such as a traveling trajectory, a vehicle speed, and a traveling direction of the vehicle.
As an embodiment of the invention, a driver state recognition model integrated in a controller judges the current state of a driver according to a picture acquired by a vehicle-mounted camera in real time, and a driving track prediction model integrated in a vehicle-mounted industrial personal computer calls a driving strategy model of the driver stored in the controller according to the current state of the driver, namely when the driver state recognition model recognizes that the driver is in a visual distraction state at present, the driving strategy model stored in the controller under the visual distraction state of the driver is called immediately. The driving data acquisition system acquires the state of the current vehicle in real time, and the driving track prediction model iteratively calculates the driving track of the driver in a certain time period in the current state according to the current vehicle state and the current driving strategy model.
According to the embodiment of the invention, the driving strategy of the driver obtained according to the driving state of the driver, the current driving state of the vehicle and the driving track of the vehicle are combined through the driving track prediction model, so that the accuracy of vehicle driving track prediction is greatly improved.
The driving track prediction method provided by the embodiment of the invention further comprises the following steps:
Before the driver image is recognized by the driver state recognition model to obtain the driver's current driving state data, the method comprises:
acquiring historical images of a driver in the attentive driving state and in the visually distracted state to form a first data set;
training a pre-trained convolutional neural network model on the first data set by a transfer learning algorithm to obtain the driver state recognition model for recognizing the driving state of the driver.
In the embodiment of the invention, the historical driving image of the driver is obtained, the driving state of the driver in the driver image is classified into the concentration driving state and the vision distraction driving state, a data set is formed, and the pre-trained convolutional neural network model is trained to obtain the driver state recognition model.
As an embodiment of the invention, the camera acquires face and posture images of the driver in the attentive driving state and in the visually distracted state, where the visually distracted state is induced by visual-distraction subtasks such as having the driver send a short message or set the vehicle navigation while driving. The number of collected pictures is larger than 300, and the driver state categories corresponding to the pictures are balanced. The collected pictures are classified into two categories according to the driver state, visually distracted and attentive driving, and stored in the controller. The driving data acquisition system collects the driving track data of the driver in the attentive driving state and in the visually distracted state as a sequence of pairs (s_t, a_t), where s_t is the vehicle running state information, such as vehicle speed and lane position, collected at time t, and a_t is the driver control information, such as steering wheel angle and accelerator pedal position, collected at time t. The total duration of the driving track data is more than 100 min, and the driver state categories corresponding to the track data are balanced. The collected track data are segmented into 50 s intervals, giving a number of driving tracks m = 120, and the driving tracks are classified into two categories according to the driver state, visually distracted and attentive driving, and stored in a storage medium of the controller.
The driver state recognition model is trained by a transfer learning method. First the number of pictures in the controller is checked, and when the number of pictures is larger than 300, training of the model is carried out. Using a model pre-trained on a large data set can improve the accuracy of the task when the existing data set is small. In this example, the convolutional neural network model VGG-16 for image recognition, trained on ImageNet, the largest image recognition database in the world, is adopted as the migration source model. The main framework of the VGG-16 model is established in a Tensorflow environment, and the downloaded VGG-16 model parameters are loaded. The output layer of the VGG-16 fully connected layers is replaced by a softmax layer containing two neurons, which output the probabilities that a given driver picture belongs to the attentive driving category and the visually distracted category, respectively. The parameters of the two neurons in the output layer are updated by training on the collected pictures so as to recognize visual distraction of the driver accurately. As an embodiment of the invention, the specific training method is as follows: the neuron parameters of all layers except the last layer of the model are fixed, the collected pictures are input to the input layer of the model in batches, and the neuron parameters of the last layer are updated iteratively by the back-propagation algorithm and stochastic gradient descent so as to reduce the cross-entropy loss of the model, computed from the true category y_j of the j-th input picture and the corresponding model prediction ŷ_j. The neuron parameters are updated until the cross-entropy loss converges, yielding a driver visual-distraction state recognition model that can recognize the current state of the driver from driver pictures collected by the camera in real time. The trained driver visual-distraction state recognition model is stored in a storage medium of the controller.
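The transfer-learning step described above can be sketched as follows. This is an illustrative outline only: it uses the tf.keras API rather than the raw TensorFlow graph setup mentioned in the text, replaces the VGG-16 top with a new two-neuron softmax head (a simplification of swapping only the final layer), and the hyperparameters, dataset names and epoch count are assumptions.

```python
# A minimal transfer-learning sketch: freeze ImageNet-pretrained VGG-16 layers and
# train only a new two-class softmax output with cross-entropy loss and SGD.
import tensorflow as tf

def build_driver_state_model(num_classes: int = 2) -> tf.keras.Model:
    base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                       input_shape=(224, 224, 3))
    base.trainable = False                          # fix the pre-trained layers
    x = tf.keras.layers.Flatten()(base.output)
    out = tf.keras.layers.Dense(num_classes, activation="softmax")(x)  # new output layer
    model = tf.keras.Model(base.input, out)
    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-3),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

# train_ds: tf.data.Dataset of (image, label) pairs, labels 0 = attentive, 1 = distracted
# model = build_driver_state_model()
# model.fit(train_ds, epochs=20)   # iterate until the cross-entropy loss converges
```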
The embodiment of the invention realizes the training of the driver state recognition model by adopting transfer learning, and the obtained driver state recognition model can inherit the knowledge of the model which is pre-trained in a large-scale data set, so that the recognition accuracy is high and the generalization capability is strong.
The method for predicting the driving track provided by the embodiment of the invention further comprises the following steps:
the method for processing the current driving state data of the driver through the driving strategy prediction model comprises the following steps of:
acquiring historical driving data of a driver driving a vehicle; the historical driving data at least comprises historical driving state data of the driver and historical driving track data of the vehicle corresponding to the historical driving state;
simulating a driving process of a driver by adopting a Markov decision process; wherein, the five elements in the Markov decision process are respectively: the state of the vehicle, the driving action of the driver, a reward function, a vehicle motion dynamic equation and a discount factor; determining five elements of the Markov decision process according to the historical driving track data of the vehicle, and enabling the state of the vehicle to correspond to the driving action of the driver one by one to form a driving strategy distribution model of the driver;
and utilizing a maximum entropy inverse reinforcement learning algorithm to enable the historical driving state of the driver to correspond to the vehicle state and the driving action of the driver in the driving strategy distribution model one by one so as to form the driving strategy prediction model.
In the embodiment of the invention, the driving strategy model is obtained by training on preset data, namely historical driving data of the vehicle, which at least include the vehicle speed, acceleration, driving direction and the driving state at the corresponding time. The Markov decision process is an optimal decision process for a stochastic dynamic system based on Markov process theory and is described by a five-tuple. Applied to vehicle driving, the state of the vehicle, the driving action of the driver, the reward function, the vehicle state change equation and the discount factor correspond one to one to the five elements of the Markov decision process; these five elements and the driver's driving strategy distribution are then determined from the historical driving data of the vehicle, where the driving strategy distribution at least includes the probabilities of the driver's various operations on the vehicle when the vehicle is in any given driving state. The maximum entropy inverse reinforcement learning algorithm is used to solve for the driver's driving strategy, which cannot be specified explicitly in advance.
As an embodiment of the invention, reinforcement learning is generally described by a Markov decision process, which can be described by a five-tuple, i.e. {state space, action space, driver's driving action, dynamic equation, discount factor}. For a driving task, the state space, the action space, the vehicle motion dynamic equation and the discount factor can be determined, but the driving action of the driver is generally not explicit; the inverse reinforcement learning method can learn the driving action of the driver from the driver's demonstrations, namely the historical driving tracks, and simultaneously solve for the driver's strategy distribution. As a preferred embodiment of the invention, when the vehicle is in a certain state s during driving, it has features f(s) = [f_1(s), f_2(s), ..., f_k(s)], where each feature component f_i(s) may be the current vehicle speed, acceleration, lane position or another parameter. The driver's driving action in the current state can be defined as a linear combination of the feature components, i.e. r(s) = θ·f(s) = Σ_i θ_i f_i(s), where θ is the vector of feature weight coefficients. The goal of maximum entropy inverse reinforcement learning is to solve for θ and thereby obtain the driving action of the driver.
Since the behavior of a person is random, it can be represented by a probability distribution. The probability that a certain driving track ζ occurs during the driver's driving is denoted p(ζ). The probability of this random event is estimated by the maximum entropy principle, which guarantees that the estimate is unbiased subject to the known conditions, finally giving p(ζ) ∝ exp(Σ_{s∈ζ} θ·f(s)). θ is solved by maximizing the likelihood of the collected driver track data through maximum likelihood estimation. The objective function is established as L(θ) = (1/m) Σ_{j=1}^{m} log p(ζ_j), where m is the number of driving tracks. The gradient is obtained by taking the partial derivative of the objective function with respect to θ, and the optimum value of θ is finally obtained by gradient descent.
In one embodiment of the present invention, since the driving strategy distributions of the driver in the attentive driving state and in the visually distracted state are different, the two driving strategy distributions are determined separately. The following describes the steps of training and solving the driving strategy distribution in the visually distracted state using the driver's visual-distraction track data {(s_t, a_t)}, where s_t is the vehicle running state information, such as vehicle speed and lane position, collected at time t, and a_t is the driver control information, such as steering wheel angle and accelerator pedal position, collected at time t. The method of solving the strategy distribution in the attentive driving state is similar, except that the track data collected in the attentive driving state are used as training data. The controller checks the stored driver track data in real time, and when the total length of the stored tracks exceeds 100 min, training is started.
θ is initialized randomly; then r(s) = θ·f(s), where f_i(s) is a feature component of state s. Using r(s), the value function V(s) under the current driver action is determined by dynamic programming. The state-action value function Q(s, a) and the strategy π(a | s) (the probability of taking action a in state s) are iterated as follows:
Q(s, a) = r(s) + γ Σ_{s'} T(s, a, s') V(s'),
π(a | s) = exp(Q(s, a) − V(s)), with V(s) = log Σ_a exp(Q(s, a)),
where γ is the discount factor, 0 < γ < 1, and T is the state transition matrix, which gives the next state to which the vehicle transitions when action a is taken in state s; T can be given by the vehicle kinematics equations. After the value function V(s) satisfies the convergence condition, π(a | s) is obtained.
According to the current driving strategy π(a | s), the state visit probability D_s is determined by dynamic programming; the specific iteration step is
D_{s', t+1} = Σ_s Σ_a D_{s, t} π(a | s) T(s, a, s'),
where D_{s, 0} is given by the distribution of initial states in the collected tracks and D_s = Σ_t D_{s, t}. The above formula is iterated until D_s converges.
Then, from the stored driving track data of the driver in the visually distracted state, the empirical feature expectation f̃ = (1/m) Σ_{j=1}^{m} Σ_{s∈ζ_j} f(s) is computed, and the gradient of the objective function is obtained from the above quantities as ∇_θ L = f̃ − Σ_s D_s f(s). θ is updated by gradient descent, i.e. θ ← θ + α ∇_θ L, where α is the learning rate. When θ satisfies the convergence condition, the resulting θ simultaneously yields the driver's driving strategy π(a | s) in the visually distracted state, that is, the probability that the driver takes each action a when the vehicle is in state s.
Finally, according to the maximum entropy inverse reinforcement learning algorithm, the strategies in the corresponding driver states are obtained separately from the attentive-driving track data and the visual-distraction track data, π_attentive(a | s) and π_distracted(a | s), and stored in the controller.
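The maximum entropy inverse reinforcement learning procedure described above can be summarized in the sketch below for a discretised state/action space. It follows the standard soft-value-iteration / state-visitation / gradient-update recipe that the text outlines; the feature matrix, transition tensor, initial-state distribution, learning rate and convergence thresholds are illustrative assumptions rather than values taken from the patent.

```python
# Sketch of maximum entropy inverse reinforcement learning over discretised states.
import numpy as np

def _logsumexp(Q):
    m = Q.max(axis=1, keepdims=True)
    return m[:, 0] + np.log(np.exp(Q - m).sum(axis=1))

def soft_value_iteration(r, T, gamma=0.9, iters=100):
    """r: reward per state (S,); T: transition tensor (S, A, S'). Returns pi(a|s), shape (S, A)."""
    S, A, _ = T.shape
    V = np.zeros(S)
    for _ in range(iters):
        Q = r[:, None] + gamma * T.reshape(S * A, S).dot(V).reshape(S, A)  # Q(s,a) = r(s) + gamma*E[V(s')]
        V_new = _logsumexp(Q)                                              # soft maximum over actions
        if np.max(np.abs(V_new - V)) < 1e-4:
            break
        V = V_new
    Q = r[:, None] + gamma * T.reshape(S * A, S).dot(V).reshape(S, A)
    return np.exp(Q - _logsumexp(Q)[:, None])                              # pi(a|s) = exp(Q - V)

def state_visitation(policy, T, starts, horizon=50):
    """Expected state visit frequencies D_s under the current policy; starts: initial distribution."""
    S, A, _ = T.shape
    D = np.zeros((horizon, S))
    D[0] = starts
    for t in range(horizon - 1):
        D[t + 1] = np.einsum("s,sa,sap->p", D[t], policy, T)
    return D.sum(axis=0)

def max_ent_irl(feats, T, trajectories, starts, lr=0.05, epochs=100, gamma=0.9):
    """feats: (S, K) state features; trajectories: lists of visited state indices."""
    theta = np.random.uniform(-0.1, 0.1, feats.shape[1])
    # empirical feature expectation from the demonstrated (recorded) driving tracks
    f_emp = np.mean([feats[traj].sum(axis=0) for traj in trajectories], axis=0)
    for _ in range(epochs):
        policy = soft_value_iteration(feats.dot(theta), T, gamma)   # reward = theta . f(s)
        D = state_visitation(policy, T, starts)
        grad = f_emp - feats.T.dot(D)                               # gradient of the log-likelihood
        theta += lr * grad
        if np.linalg.norm(grad) < 1e-3:
            break
    return theta, soft_value_iteration(feats.dot(theta), T, gamma)

# theta_d, pi_distracted = max_ent_irl(feats, T, distracted_trajectories, start_dist)
# theta_a, pi_attentive  = max_ent_irl(feats, T, attentive_trajectories, start_dist)
```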
The embodiment of the invention takes into account that the driving strategies of the driver differ between the attentive driving state and the visually distracted state, so the driving strategies of the driver in the different states are solved separately using maximum entropy inverse reinforcement learning, and the driver's immediate action on the vehicle can then be predicted accurately according to the driver's current driving state.
The method for predicting the driving track provided by the embodiment of the invention further comprises the following steps:
the processing the current vehicle state data and the current state driving strategy data through the driving track prediction model to obtain future driving track information adaptive to the current state comprises the following steps:
according to the driving strategy and the vehicle driving data, iterating the future driving track of the vehicle through an iteration function within the duration T, and thereby predicting the driving track of the vehicle, wherein the iteration function is:
a_i = R(π_{c_i}(a | s_i)),
s_{i+1} = T(s_i, a_i),
wherein c_i is the driving state of the driver at time i, s_i is the running state of the vehicle at time i, a_i is the vehicle control action taken by the driver at time i, π_{c_i}(a | s) is the driving strategy corresponding to driver state c_i, R(·) is a random function that samples an action according to the strategy probabilities, and T(·,·) is the vehicle state transition given by the vehicle kinematics.
In the embodiment of the invention, the driving track prediction model continuously iterates the driving track of the vehicle according to the driving strategy of the driver predicted by the driving strategy prediction model and the current driving state of the vehicle, so as to simulate the driving process of the vehicle.
As an embodiment of the present invention, the driver state recognition model stored in the controller can determine the current state of the driver from the driver pictures collected by the vehicle-mounted camera in real time. Suppose that at time t the driver state recognition model detects, from the picture collected by the camera, that the driver state is c_t; the current vehicle state at time t is s_t, and the control action taken by the driver when the vehicle is in state s_t is a_t. According to the current driver state c_t, the driving track prediction model calls the driving strategy π_{c_t}(a | s) of the driver in the corresponding state stored in the controller. The predicted driving track duration is set to T, and the predicted driving track of the driver within the duration T is calculated iteratively in the following two steps:
(1) Determine the driver's vehicle control action at time i: a_i = R(π_{c_t}(a | s_i)), where π_{c_t}(a_1 | s_i), ..., π_{c_t}(a_n | s_i) are the probabilities of the n vehicle control actions, and the random function R(·) returns an action according to these probabilities; for example, for the n-th vehicle control action, the probability that the driver selects it is π_{c_t}(a_n | s_i), so the random function satisfies P(R(π_{c_t}(a | s_i)) = a_n) = π_{c_t}(a_n | s_i).
(2) Calculate the vehicle state at time i+1: s_{i+1} = T(s_i, a_i).
The iteration is repeated within the duration T to obtain the predicted driving track of the vehicle.
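Steps (1) and (2) amount to the rollout sketched below: sample the driver's control action from the strategy corresponding to the detected driver state, then advance the vehicle state with the kinematics model. The strategy arrays (pi_attentive, pi_distracted from the earlier sketch), the kinematics function and the number of iteration steps within T are hypothetical placeholders.

```python
# Iterative trajectory prediction: sample action from pi(a|s), then step the kinematics.
import numpy as np

def predict_trajectory(s0, policy, kinematics, horizon_steps):
    """s0: initial (discretised) vehicle state index; policy: pi(a|s) array of shape (S, A);
    kinematics: function (state, action) -> next state; horizon_steps: steps within duration T."""
    rng = np.random.default_rng()
    trajectory = [s0]
    s = s0
    for _ in range(horizon_steps):
        a = rng.choice(policy.shape[1], p=policy[s])   # random function R(pi(a|s_i))
        s = kinematics(s, a)                           # s_{i+1} = T(s_i, a_i)
        trajectory.append(s)
    return trajectory

# pi = pi_distracted if driver_state == "visual_distraction" else pi_attentive
# predicted = predict_trajectory(current_state, pi, vehicle_kinematics, horizon_steps=50)
```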
According to the embodiment of the invention, the driving strategy obtained from the driver's instantaneous driving state is iterated together with the current running state of the vehicle, so as to predict the driving track of the vehicle over a future period; the different operations the driver performs on the vehicle in different driving states are taken into account, and combining them with the current running state of the vehicle allows the driving track of the vehicle to be predicted more accurately.
The method for predicting the driving track provided by the embodiment of the invention further comprises the following steps:
and when the driving state of the driver is detected to be changed within the time T, stopping the current iteration process, and predicting the driving track of the vehicle again according to the driving state of the driver after the driving state is changed.
In the embodiment of the invention, the face image of the driver is acquired in real time by the first face image acquisition device, the face image is transmitted to the controller, the controller continuously identifies the face image, when the controller identifies that the driving state of the driver corresponding to the face image is changed, the iteration of the vehicle driving track is stopped, and the vehicle driving track is predicted again according to the driving state of the driver.
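A possible way to organize this re-prediction logic is sketched below; recognize_driver_state, predict_trajectory, capture_frame and read_vehicle_state are hypothetical helpers in the spirit of the earlier sketches, not functions defined by the patent.

```python
# Illustrative monitoring loop: restart the trajectory prediction whenever the
# recognized driver state changes within the prediction horizon T.
def monitor_and_predict(model, policies, kinematics, horizon_steps):
    last_state = None
    prediction = None
    while True:
        driver_state = recognize_driver_state(model, capture_frame())
        if driver_state != last_state:          # driving state changed: stop and re-predict
            last_state = driver_state
            prediction = predict_trajectory(read_vehicle_state(),
                                            policies[driver_state],
                                            kinematics, horizon_steps)
        yield prediction
```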
The method for predicting the driving track provided by the embodiment of the invention further comprises the following steps:
estimating the safety factor of the vehicle according to the future driving track information of the vehicle by combining the current driving road condition of the vehicle;
and carrying out safety reminding on the driver according to the safety factor.
In the embodiment of the invention, the safety factor indicates whether a traffic accident or another situation harmful to the vehicle or the driver would occur if the vehicle continued to run along the predicted track; when an unsafe situation is predicted, the vehicle gives an alarm to the driver.
As an embodiment of the present invention, the vehicle has a driving recording capability and can monitor the driving road condition and the positions of other vehicles or obstacles on the road in real time and send this information to the controller. The controller combines the road condition with the predicted driving track of the vehicle to judge whether a dangerous situation may occur. When a dangerous situation is predicted, the driver can be reminded; specific warning modes include sounding an alarm through a buzzer, broadcasting the possible dangerous situation through a voice announcer, or reminding the driver by means of lights. After the driver has been reminded and has operated the vehicle, the driving track is predicted again to judge whether the vehicle is still in danger.
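One simple way such a safety check could be realized is sketched below: compute the minimum clearance between the predicted trajectory and the reported obstacle positions, and trigger a warning when it falls below a threshold. The coordinate encoding, the 2 m threshold and the warning hook are assumptions for illustration.

```python
# Illustrative safety check against obstacles reported by the road-condition monitoring.
import numpy as np

def safety_factor(predicted_positions, obstacle_positions):
    """Both arguments: arrays of (x, y) points in a common road coordinate frame."""
    dists = np.linalg.norm(predicted_positions[:, None, :] - obstacle_positions[None, :, :], axis=-1)
    return float(dists.min())                # minimum predicted clearance in metres

def check_and_warn(predicted_positions, obstacle_positions, min_clearance=2.0, warn=print):
    if safety_factor(predicted_positions, obstacle_positions) < min_clearance:
        warn("Warning: predicted trajectory comes dangerously close to an obstacle")
        return True                          # danger predicted: trigger buzzer / voice / lights
    return False
```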
As a preferred embodiment of the invention, the safety factor of the vehicle is estimated according to the future running track information of the vehicle by combining the current running road condition of the vehicle;
and directly controlling the driving state of the vehicle according to the safety factor.
In the embodiment of the present invention, the driving state of the vehicle should include at least a driving speed, an acceleration, and a driving direction of the vehicle.
In combination with the above embodiment, when the controller predicts that the vehicle may be dangerous, the driving state of the vehicle may be directly controlled according to the driving road condition of the vehicle, for example, operations such as deceleration and turning are performed, so as to prevent the vehicle from being dangerous during driving.
According to the embodiment of the invention, the safety degree of the vehicle is judged by combining the driving road condition of the vehicle and the predicted driving track of the vehicle, and the driver is reminded or the driving state of the vehicle is directly controlled according to the future safety degree of the vehicle, so that the vehicle is ensured not to be dangerous, and the personal and property safety of the driver is ensured.
Fig. 3 shows a schematic structural diagram of a driving trajectory prediction system suitable for the embodiment of the present invention, which is detailed as follows:
the driver image acquisition device 310 is used for acquiring a driver image when the vehicle runs and sending the driver image to the controller.
In the embodiment of the invention, the driver image acquisition device can be a camera device installed on a vehicle, and can also be an intelligent device such as a mobile phone with a camera function, a computer and the like, the camera acquires the driver image when the driver drives the vehicle, the driver image refers to the face image when the driver is in the driving position when the driver drives the vehicle, and at least the clear face and upper body limb information of the driver are required to be included.
And a vehicle driving data collecting device 320 for collecting driving data of a vehicle and transmitting the driving data of the vehicle to the controller.
In the embodiment of the present invention, the vehicle driving data refers to all data related to the driving of the vehicle and the vehicle state during the driving of the vehicle, but in the embodiment of the present invention, at least the current vehicle state data shall be included, wherein the current vehicle state data refers to the driving data of the vehicle when the driver image is captured, and includes, but is not limited to, the vehicle speed, the acceleration of the vehicle, and the driving direction of the vehicle.
The controller 330 is configured to receive the driver image and the vehicle driving data, analyze the driver image and the vehicle driving data, and predict a driving trajectory of the vehicle.
Acquiring a driver image and vehicle running data, wherein the vehicle running data at least comprises current vehicle state data;
identifying the driver image through a driver state identification model to obtain current driving state data of a driver;
processing the current driving state data of the driver through a driving strategy prediction model to obtain the driving strategy data of the current state of the driver;
processing the current vehicle state data and the current driver state driving strategy data through a driving track prediction model to obtain future driving track information adaptive to the current driver state.
In the embodiment of the invention, the driver state identification model is a deep neural network model and is obtained by offline learning through a data set, wherein the data set is a historical driving picture of the driver, the deep neural network model continuously learns the historical driving picture of the driver to continuously identify the historical driving state corresponding to the historical driving picture of the driver, and finally the current driving state of the driver can be identified according to the current driving picture of the driver. Wherein the drivers do not require the same driver, that is, the deep neural network model has the capability of recognizing the driving states of different drivers according to the driving images of different drivers. The driver's current driving status data refers to the driver's current driving status, including but not limited to attentive driving status and visually distracted driving status.
As an embodiment of the present invention, after the driver image is acquired, the driver image is input to the driver state recognition model, as a preferred embodiment of the present invention, the driver state recognition model is stored in a central controller of the vehicle, the driver image acquisition device is a camera mounted on the vehicle, that is, after the camera acquires the driver image, the image is input to the central controller of the vehicle, and the central controller of the vehicle recognizes the driving state of the driver by operating the driver state recognition model, so as to obtain the current driving state of the driver. As a preferred embodiment of the present invention, after the central controller recognizes the driving state of the driver, the current driving state of the driver may be selectively output or not output according to the requirement, for example, when a passenger takes a rear row on a taxi, the driving state of the driver may be known according to the prompt of the central controller, so as to ensure the safety of the driver.
According to the embodiment of the invention, the driver image is identified through the driver state identification model to obtain the current driving state of the driver, and the driver state identification model adopts the deep neural network model to ensure the accuracy of the driver image identification.
In the embodiment of the present invention, the driving strategy prediction model is a model capable of predicting the future driving action of the driver on the vehicle according to the driving state of the driver and the current driving state of the vehicle, and is obtained by learning the historical driving track of the driver.
According to the embodiment of the invention, driving track data collected in the attentive driving state and the visually distracted state of the driver are stored in the controller and used to train the driver's driving strategy model offline, and the driving strategy model can then predict the driver's driving strategy according to the driver's driving state. As a preferred embodiment of the present invention, when the controller detects that the driving state of the driver in the driver image is the attentive driving state, the driving strategy prediction model for the attentive driving state is called to predict the driver's driving strategy; and when the driving state of the driver in the driver image is detected to be visually distracted driving, the driving strategy prediction model for the visually distracted state is called to predict the driver's driving strategy.
According to the embodiment of the invention, two driving states of the driver are separately identified, and different driving states adopt different driving strategy prediction models, so that the driving strategies of the driver in different states can be accurately predicted, and the accuracy of driving strategy prediction is ensured.
In the embodiment of the invention, the driving track prediction model is a model capable of predicting the future driving track of the vehicle according to the current driving state of the vehicle and the driving strategy model of the driver, and the invention adopts an iterative model, namely, the next state of the vehicle is predicted according to the previous state of the vehicle and the driving strategy model of the driver in the previous state. The trajectory information includes at least data such as a traveling trajectory, a vehicle speed, and a traveling direction of the vehicle.
As an embodiment of the invention, a driver state recognition model integrated in a controller judges the current state of a driver according to a picture acquired by a vehicle-mounted camera in real time, and a driving track prediction model integrated in a vehicle-mounted industrial personal computer calls a driving strategy model of the driver stored in the controller according to the current state of the driver, namely when the driver state recognition model recognizes that the driver is in a visual distraction state at present, the driving strategy model stored in the controller under the visual distraction state of the driver is called immediately. The driving data acquisition system acquires the current vehicle state in real time, and the driving track prediction model iteratively calculates the driving track of a driver in a certain time period in the current state according to the current vehicle state and the current driving strategy model.
According to the embodiment of the invention, the driving strategy of the driver obtained according to the driving state of the driver, the current driving state of the vehicle and the driving track of the vehicle are combined through the driving track prediction model, so that the accuracy of vehicle driving track prediction is greatly improved.
Fig. 4 shows a diagram of the internal structure of a computer device suitable for an embodiment of the present invention, including a memory 401, a processor 402, a communication module 403, and a user interface 404.
The memory 401 has stored therein an operating system 405 for processing various basic system services and programs for performing hardware-related tasks; application software 406 is also stored for implementing the steps of the driving trajectory prediction method in the embodiment of the present invention.
In embodiments of the present invention, memory 401 may be a high-speed random access memory such as DRAM, SRAM, or DDR RAM, or another random access solid-state memory device, or a non-volatile memory such as one or more hard disk storage devices, optical disk storage devices, or flash memory devices.
In an embodiment of the present invention, the processor 402 may receive and transmit data through the communication module 403 to implement a blockchain network communication or a local communication.
The user interface 404 may include one or more input devices 407, such as a keyboard, mouse, touch screen display, and the user interface 404 may also include one or more output devices 408, such as a display, microphone, and the like.
In addition, an embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the processor is enabled to execute the steps of the above-mentioned driving trajectory prediction method.
It should be understood that, although the steps in the flowcharts of the embodiments of the present invention are shown in sequence as indicated by the arrows, the steps are not necessarily performed in sequence as indicated by the arrows. The steps are not performed in the exact order shown and described, and may be performed in other orders, unless explicitly stated otherwise. Moreover, at least a portion of the steps in various embodiments may include multiple sub-steps or multiple stages that are not necessarily performed at the same time, but may be performed at different times, and the order of performance of the sub-steps or stages is not necessarily sequential, but may be performed in turn or alternately with other steps or at least a portion of the sub-steps or stages of other steps.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above may be implemented by a computer program, which may be stored in a non-volatile computer readable storage medium, and when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory, among others. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically Programmable ROM (EPROM), electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double Data Rate SDRAM (DDRSDRAM), enhanced SDRAM (ESDRAM), synchronous Link DRAM (SLDRAM), rambus (Rambus) direct RAM (RDRAM), direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM), among others.
All possible combinations of the technical features of the above embodiments may not be described for the sake of brevity, but should be considered as within the scope of the present disclosure as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present invention, and the description thereof is more specific and detailed, but not construed as limiting the scope of the present invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the inventive concept, which falls within the scope of the present invention. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (6)

1. A method for predicting a driving trajectory, the method comprising:
acquiring a driver image and vehicle driving data, wherein the vehicle driving data at least comprises current vehicle state data;
identifying the driver image through a driver state identification model to obtain current driving state data of a driver;
processing the current driving state data of the driver through a driving strategy prediction model to obtain the driving strategy data of the current state of the driver;
processing the current state data of the vehicle and the current state driving strategy data of the driver through a driving track prediction model to obtain future driving track information adaptive to the current state of the driver;
the method for recognizing the driver image through the driver state recognition model comprises the following steps of before obtaining the current driving state data of the driver:
acquiring historical images of a driver in a state of focusing on driving and a state of visual distraction to form a first data set;
training a pre-trained convolutional neural network model through the first data set by adopting a transfer learning algorithm to obtain a driver state identification model for identifying the driving state of a driver;
the method for processing the current driving state data of the driver through the driving strategy prediction model comprises the following steps of:
acquiring historical driving data of a driver driving a vehicle; the historical driving data at least comprises historical driving state data of the driver and historical driving track data of the vehicle corresponding to the historical driving state;
simulating the driving process of the driver by using a Markov decision process, wherein the five elements of the Markov decision process are: the state of the vehicle, the driving action of the driver, a vehicle state change equation, a reward function, and a discount factor; and the driving action of the driver refers to the probability values of the vehicle transferring from its current state to each possible next state;
determining the five elements of the Markov decision process according to the historical vehicle driving track data, and making the vehicle states correspond one to one to the driving actions of the driver, so as to form a driving strategy distribution model of the driver;
using a maximum entropy inverse reinforcement learning algorithm to make the historical driving states of the driver correspond one to one to the vehicle states and the driving actions of the driver in the driving strategy distribution model, so as to form the driving strategy prediction model;
the processing the current vehicle state data and the driving strategy data of the current state of the driver through the driving track prediction model to obtain the future driving track information adaptive to the current state of the driver comprises:
iterating the driving track of the future vehicle through an iteration function within a time T according to the driving strategy and the vehicle driving data, so as to predict the driving track of the vehicle, wherein the iteration function is expressed by the following formulas:
[the two iteration-function formulas are published as images in the granted text and are not reproduced here]
in which the first quantity is the driving state of the driver at time i, the second quantity is the running state of the vehicle at time i, and the third quantity is the vehicle control action taken by the driver at time i.
2. The method of claim 1, further comprising:
when it is detected within the time T that the driving state of the driver has changed, stopping the current iteration process, and predicting the driving track of the vehicle again according to the changed driving state of the driver.
3. The method of claim 1, further comprising:
estimating a safety factor of the vehicle according to the future driving track information of the vehicle in combination with the current driving road condition of the vehicle; and
giving a safety reminder to the driver according to the safety factor.
4. The method of claim 3, further comprising:
estimating the safety factor of the vehicle according to the future driving track information of the vehicle in combination with the current driving road condition of the vehicle; and
directly controlling the driving state of the vehicle according to the safety factor.
5. A computer device, comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to carry out the steps of the driving track prediction method according to any one of claims 1 to 4.
6. A computer-readable storage medium, storing a computer program which, when executed by a processor, causes the processor to carry out the steps of the driving track prediction method according to any one of claims 1 to 4.
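The claims above describe the method at the level of steps; the sketches below give one hedged, illustrative reading of how several of those steps could be implemented. Every library, function name, threshold and data layout in them is an assumption, not something disclosed in the patent. The first sketch corresponds to the transfer-learning step of claim 1: a convolutional network pretrained on a generic image dataset is fine-tuned on the first data set of driver images labelled as focused or visually distracted. PyTorch with torchvision 0.13 or later, ResNet-18 and an ImageFolder layout with focused/ and distracted/ sub-directories are all assumptions.

# Hedged sketch of the transfer-learning step of claim 1: fine-tune a CNN that was
# pretrained on a generic image dataset using the "first data set" of driver images
# labelled focused / visually distracted. Library, model and folder layout are assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

def build_driver_state_model(num_classes: int = 2) -> nn.Module:
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    for p in model.parameters():
        p.requires_grad = False                               # keep the pretrained feature extractor
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # new head for the two driver states
    return model

def fine_tune(model: nn.Module, data_dir: str, epochs: int = 5) -> nn.Module:
    tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
    dataset = datasets.ImageFolder(data_dir, transform=tfm)  # focused/ and distracted/ folders
    loader = DataLoader(dataset, batch_size=32, shuffle=True)
    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)           # cross-entropy on the new head only
            loss.backward()
            optimizer.step()
    return model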
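For the driving-strategy step of claim 1, the driver is modelled as a Markov decision process and a maximum entropy inverse reinforcement learning algorithm aligns historical driver states with vehicle states and driving actions. The tabular sketch below follows the standard maximum-entropy IRL recipe (soft value iteration, expected state-visitation frequencies, gradient ascent on reward weights); the discretised states, the transition tensor P, the feature matrix and all hyperparameters are assumptions, since the patent does not publish its concrete formulation.

# Toy tabular maximum-entropy inverse reinforcement learning sketch.
# P: (S, A, S) transition probabilities, features: (S, F), demos: list of state-index paths
# of roughly `horizon` length; all of these are illustrative assumptions.
import numpy as np

def maxent_irl(P, features, demos, gamma=0.9, lr=0.1, iters=100, horizon=50):
    S, A, _ = P.shape
    f_demo = np.mean([features[path].sum(axis=0) for path in demos], axis=0)
    p0 = np.bincount([path[0] for path in demos], minlength=S) / len(demos)
    w = np.zeros(features.shape[1])
    for _ in range(iters):
        r = features @ w                              # state rewards under current weights
        V = np.zeros(S)
        for _ in range(horizon):
            Q = r[:, None] + gamma * (P @ V)          # (S, A) soft Q-values
            V = np.logaddexp.reduce(Q, axis=1)        # soft value: log-sum-exp over actions
        pi = np.exp(Q - V[:, None])                   # maximum-entropy stochastic policy
        pi /= pi.sum(axis=1, keepdims=True)
        d = p0.copy()
        mu = d.copy()
        for _ in range(horizon - 1):
            d = np.einsum("s,sa,sat->t", d, pi, P)    # propagate state visitation frequencies
            mu += d
        grad = f_demo - features.T @ mu               # demonstrated minus expected feature counts
        w += lr * grad                                # gradient ascent on the log-likelihood
    return w, pi                                      # learned reward weights and driving strategy

The returned policy pi plays the role of the driving strategy distribution: for each vehicle state it gives the probability of each driving action.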
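The trajectory-prediction step of claim 1 iterates an iteration function over the time T. Because the iteration formulas are published only as images, the sketch below assumes a generic form in which the driving strategy maps the driver state d_i and vehicle state s_i to a control action a_i, and a vehicle state change equation advances s_i to s_{i+1}; both callables are placeholders for the claimed functions.

# Sketch of the trajectory-iteration step of claim 1, under the assumed form
#   a_i = policy(d_i, s_i)         (driving strategy chooses a control action)
#   s_{i+1} = transition(s_i, a_i) (vehicle state change equation)
from typing import Callable, List

def predict_trajectory(driver_state, vehicle_state,
                       policy: Callable, transition: Callable,
                       steps: int) -> List:
    """Iterate the vehicle state over `steps` increments covering the time T."""
    trajectory = [vehicle_state]
    for _ in range(steps):
        action = policy(driver_state, vehicle_state)       # a_i
        vehicle_state = transition(vehicle_state, action)  # s_{i+1}
        trajectory.append(vehicle_state)
    return trajectory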
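Claim 2 interrupts the iteration when the driver's state changes within the time T and re-predicts from the changed state. A minimal sketch, assuming a recognise_state function that wraps the driver state identification model, a camera object that captures driver images, and a read_vehicle_state callable that returns the vehicle's current state:

# Sketch of claim 2: abandon the current iteration if the recognised driver state
# changes within the time T and re-predict from the changed state.
def predict_with_monitoring(recognise_state, read_vehicle_state, camera,
                            policy, transition, steps):
    while True:
        driver_state = recognise_state(camera.capture())
        state = read_vehicle_state()              # start from the vehicle's current state
        trajectory = [state]
        interrupted = False
        for _ in range(steps):
            if recognise_state(camera.capture()) != driver_state:
                interrupted = True                # driver state changed within T
                break                             # stop iterating; re-predict from the new state
            action = policy(driver_state, state)
            state = transition(state, action)
            trajectory.append(state)
        if not interrupted:
            return trajectory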
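Claims 3 and 4 estimate a safety factor from the predicted trajectory and the current road condition and then either remind the driver or control the vehicle directly. The patent does not disclose how the safety factor is computed, so the clearance-based score and the 1.0 / 0.5 thresholds below are purely illustrative assumptions.

# Illustrative sketch of claims 3 and 4; scoring rule and thresholds are assumptions.
def estimate_safety_factor(trajectory, road_condition) -> float:
    # smallest predicted clearance along the track, normalised by a safe clearance
    clearances = [road_condition.clearance(point) for point in trajectory]
    return min(clearances) / road_condition.safe_clearance

def act_on_safety(factor: float, alarm, vehicle) -> None:
    if factor >= 1.0:
        return                                    # predicted track stays within safe limits
    if factor >= 0.5:
        alarm.remind("predicted driving track approaches an unsafe region")  # claim 3: remind
    else:
        vehicle.decelerate()                      # claim 4: directly control the vehicle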
CN201811653153.0A 2018-12-28 2018-12-28 Driving track prediction method, computer equipment and storage medium Active CN109711557B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811653153.0A CN109711557B (en) 2018-12-28 2018-12-28 Driving track prediction method, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811653153.0A CN109711557B (en) 2018-12-28 2018-12-28 Driving track prediction method, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN109711557A CN109711557A (en) 2019-05-03
CN109711557B true CN109711557B (en) 2022-10-14

Family

ID=66259763

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811653153.0A Active CN109711557B (en) 2018-12-28 2018-12-28 Driving track prediction method, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109711557B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110321811B (en) * 2019-06-17 2023-05-02 中国工程物理研究院电子工程研究所 Target detection method in unmanned aerial vehicle aerial video for deep reverse reinforcement learning
CN110293968B (en) * 2019-06-18 2021-09-28 百度在线网络技术(北京)有限公司 Control method, device and equipment for automatic driving vehicle and readable storage medium
US11618481B2 (en) * 2019-07-03 2023-04-04 Waymo Llc Agent trajectory prediction using anchor trajectories
CN110555476B (en) * 2019-08-29 2023-09-26 华南理工大学 Intelligent vehicle lane change track prediction method suitable for man-machine hybrid driving environment
CN111208838B (en) * 2020-04-20 2020-11-03 北京三快在线科技有限公司 Control method and device of unmanned equipment
CN114019947B (en) * 2020-07-15 2024-03-12 广州汽车集团股份有限公司 Method and system for controlling vehicle to travel at intersection and computer readable storage medium
CN112329657B (en) * 2020-11-10 2022-07-01 易显智能科技有限责任公司 Method and related device for sensing upper body movement of driver
CN112581026B (en) * 2020-12-29 2022-08-12 杭州趣链科技有限公司 Joint path planning method for logistics robot on alliance chain
CN112927117B (en) * 2021-03-22 2022-08-23 上海京知信息科技有限公司 Block chain-based vehicle management communication method, management system, device and medium
CN113345229B (en) * 2021-06-01 2022-04-19 平安科技(深圳)有限公司 Road early warning method based on federal learning and related equipment thereof
CN113570595B (en) * 2021-08-12 2023-06-20 上汽大众汽车有限公司 Vehicle track prediction method and optimization method of vehicle track prediction model
CN115320631B (en) * 2022-07-05 2024-04-09 西安航空学院 Method for identifying driving intention of front vehicle of intelligent driving automobile adjacent lane
CN117213501B (en) * 2023-11-09 2024-02-02 成都工业职业技术学院 Robot obstacle avoidance planning method based on distributed model prediction

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103661375A (en) * 2013-11-25 2014-03-26 同济大学 Lane departure alarming method and system with driving distraction state considered
WO2017167801A1 (en) * 2016-03-29 2017-10-05 Avl List Gmbh Driver assistance system for supporting a driver when driving a vehicle

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Driving distraction identification method based on driver manipulation and vehicle motion trajectory information; Wang Jia et al.; Automobile Technology; 2013-10-24 (No. 10); full text *

Also Published As

Publication number Publication date
CN109711557A (en) 2019-05-03

Similar Documents

Publication Publication Date Title
CN109711557B (en) Driving track prediction method, computer equipment and storage medium
CN109572550B (en) Driving track prediction method, system, computer equipment and storage medium
US11878720B2 (en) Method and system for risk modeling in autonomous vehicles
US10739773B2 (en) Generative adversarial inverse trajectory optimization for probabilistic vehicle forecasting
US10592785B2 (en) Integrated system for detection of driver condition
US10595037B2 (en) Dynamic scene prediction with multiple interacting agents
US20210309183A1 (en) Intelligent Detection and Alerting of Potential Intruders
US10479328B2 (en) System and methods for assessing the interior of an autonomous vehicle
CN110850861A (en) Attention-based hierarchical lane change depth reinforcement learning
US11189171B2 (en) Traffic prediction with reparameterized pushforward policy for autonomous vehicles
US20210086798A1 (en) Model-free reinforcement learning
US11242050B2 (en) Reinforcement learning with scene decomposition for navigating complex environments
US11467579B2 (en) Probabilistic neural network for predicting hidden context of traffic entities for autonomous vehicles
CN112085165A (en) Decision information generation method, device, equipment and storage medium
Xu et al. Aggressive driving behavior prediction considering driver’s intention based on multivariate-temporal feature data
US11465611B2 (en) Autonomous vehicle behavior synchronization
US20200365140A1 (en) Detection of anomalies in the interior of an autonomous vehicle
KR102196027B1 (en) LSTM-based steering behavior monitoring device and its method
CN111830962A (en) Interpretation data for reinforcement learning agent controller
CN112435466A (en) Method and system for predicting take-over time of CACC vehicle changing into traditional vehicle under mixed traffic flow environment
US20220261519A1 (en) Rare event simulation in autonomous vehicle motion planning
US20230347932A1 (en) Evaluation of components of autonomous vehicles based on driving recommendations
US20230304800A1 (en) Method of augmenting human perception of the surroundings
Menendez et al. Detecting and Predicting Smart Car Collisions in Hybrid Environments from Sensor Data
Leelavathy et al. Effective traffic model for intelligent traffic monitoring enabled deep RNN algorithm for autonomous vehicles surveillance systems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240122

Address after: 230000 floor 1, building 2, phase I, e-commerce Park, Jinggang Road, Shushan Economic Development Zone, Hefei City, Anhui Province

Patentee after: Dragon Totem Technology (Hefei) Co., Ltd.

Country or region after: China

Address before: 710077 No. 259, West Second Ring Road, Lianhu District, Xi'an, Shaanxi

Patentee before: Xi'an Aeronautical University

Country or region before: China