CN110488821A - Method and device for determining an unmanned vehicle motion strategy - Google Patents

Method and device for determining an unmanned vehicle motion strategy

Info

Publication number
CN110488821A
CN110488821A (application CN201910741637.9A)
Authority
CN
China
Prior art keywords
road conditions
feature vector
unmanned vehicle
attention
current time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910741637.9A
Other languages
Chinese (zh)
Other versions
CN110488821B (en)
Inventor
朱炎亮
任冬淳
钱德恒
付圣
丁曙光
王志超
周奕达
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sankuai Online Technology Co Ltd
Original Assignee
Beijing Sankuai Online Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sankuai Online Technology Co Ltd filed Critical Beijing Sankuai Online Technology Co Ltd
Priority to CN201910741637.9A priority Critical patent/CN110488821B/en
Publication of CN110488821A publication Critical patent/CN110488821A/en
Application granted granted Critical
Publication of CN110488821B publication Critical patent/CN110488821B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G05 — CONTROLLING; REGULATING
    • G05D — SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00 — Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D 1/02 — Control of position or course in two dimensions

Abstract

This specification discloses a method and device for determining an unmanned vehicle motion strategy. Since the unmanned vehicle continuously collects images while driving, the image collected at the current moment can be determined and input into the encoder of a pre-trained decision model to obtain the road-condition feature vector corresponding to the current moment. Then, the road-condition feature vectors corresponding to each historical moment obtained by the encoder of the decision model, the road-condition feature vector corresponding to the current moment, and the motion strategy of the unmanned vehicle at the current moment are input into the decoder of the decision model, to obtain the motion strategy of the unmanned vehicle at the next moment.

Description

Method and device for determining an unmanned vehicle motion strategy
Technical field
This application relates to the technical field of autonomous vehicles, and in particular to a method and device for determining an unmanned vehicle motion strategy.
Background technique
An autonomous vehicle is an intelligent vehicle that perceives the surrounding road environment through its on-board sensor system, automatically plans a driving route, and controls the vehicle to reach a preset target.
In the prior art, one path-planning method for unmanned vehicles uses an encoder-decoder structure based on a long short-term memory network (Long Short-Term Memory, LSTM): the actual coordinates of an obstacle at M historical moments are input, the predicted coordinates of the obstacle at the following N moments are output, and the unmanned vehicle is controlled to travel according to a preset obstacle-avoidance method.
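The M-in/N-out interface of this prior-art scheme can be illustrated with a toy sketch. Note that the trained LSTM encoder-decoder is replaced here by a constant-velocity extrapolation stand-in (an assumption, not the actual model); the point is only the interface shape — M past obstacle coordinates in, N predicted coordinates out:

```python
import numpy as np

# M past (x, y) obstacle coordinates in, N predicted coordinates out.
M, N = 8, 4
past = np.cumsum(np.full((M, 2), 0.5), axis=0)    # obstacle drifting diagonally
# A trained LSTM encoder-decoder would go here; as a stand-in, extrapolate
# at the mean observed per-step displacement (constant-velocity baseline).
v = np.diff(past, axis=0).mean(axis=0)
pred = past[-1] + np.arange(1, N + 1)[:, None] * v
print(pred.shape)                                 # (4, 2): next N positions
```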
The main advantage of this method is that the model is simple while still determining how the unmanned vehicle avoids obstacles. Its disadvantage is also obvious: because only the position information of each obstacle is used, obstacle avoidance and path selection perform poorly.
Moreover, it does not account for the influence of the road-condition information differing from moment to moment, for example the influence of changes in the number of lanes, or the influence of vehicle types. As a result, existing methods determine unmanned vehicle motion strategies inflexibly and inaccurately.
Summary of the invention
The embodiments of this specification provide a method and device for determining an unmanned vehicle motion strategy, to partially solve the above problems in the prior art.
The embodiments of this specification adopt the following technical solutions:
This specification provides a method for determining an unmanned vehicle motion strategy, wherein the unmanned vehicle continuously collects images while driving, the method comprising:
determining the image collected by the unmanned vehicle at the current moment;
inputting the image into the encoder of a pre-trained decision model, to obtain the road-condition feature vector corresponding to the current moment;
inputting the road-condition feature vector corresponding to each historical moment obtained by the encoder, the road-condition feature vector corresponding to the current moment, and the motion strategy of the unmanned vehicle at the current moment into the decoder of the decision model, to obtain the motion strategy of the unmanned vehicle at the next moment.
Optionally, the encoder includes a convolutional neural network (CNN) and a first long short-term memory network (LSTM);
inputting the image into the encoder of the pre-trained decision model to obtain the road-condition feature vector corresponding to the current moment specifically includes:
inputting the image at the current moment into the CNN, to obtain an image feature vector;
inputting the obtained image feature vector and the road-condition feature vector corresponding to the moment before the current moment into the first LSTM, to obtain the road-condition feature vector corresponding to the current moment.
Optionally, the decoder includes an attention layer and a second long short-term memory network (LSTM);
inputting the road-condition feature vector corresponding to each historical moment obtained by the encoder, the road-condition feature vector corresponding to the current moment, and the motion strategy of the unmanned vehicle at the current moment into the decoder of the decision model, to obtain the motion strategy of the unmanned vehicle at the next moment, specifically includes:
determining the attention matrix of the attention layer according to the motion strategy of the unmanned vehicle at the current moment and the position of the unmanned vehicle at the current moment;
inputting the road-condition feature vector corresponding to each historical moment and the road-condition feature vector corresponding to the current moment into the attention layer, to obtain an attention-weighted road-condition feature vector;
inputting the attention-weighted road-condition feature vector into the second LSTM, to obtain the motion strategy of the unmanned vehicle at the next moment.
Optionally, inputting the road-condition feature vector corresponding to each historical moment and the road-condition feature vector corresponding to the current moment into the attention layer, to obtain the attention-weighted road-condition feature vector, specifically includes:
determining a road-condition feature matrix according to the road-condition feature vector corresponding to each historical moment and the road-condition feature vector corresponding to the current moment;
obtaining an attention-weighted road-condition feature matrix according to the road-condition feature matrix and the attention matrix;
determining the attention-weighted road-condition feature vector from the attention-weighted road-condition feature matrix by the max pooling method.
Optionally, the decoder further includes a road constraint layer;
inputting the attention-weighted road-condition feature vector into the second LSTM, to obtain the motion strategy of the unmanned vehicle at the next moment, specifically includes:
determining at least one planned path according to the target position toward which the unmanned vehicle is traveling and the position of the unmanned vehicle at the current moment;
collecting the coordinates of specified points in each planned path, and determining a route feature matrix according to the collected coordinates;
determining an attention-weighted route feature matrix according to the route feature matrix and the attention matrix of the road constraint layer;
determining an attention-weighted route feature vector from the attention-weighted route feature matrix by the max pooling method;
inputting the attention-weighted road-condition feature vector and the attention-weighted route feature vector into the second LSTM, to obtain the motion strategy of the unmanned vehicle at the next moment.
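As a rough illustration of the road constraint layer described above, the following numpy sketch builds a route feature matrix from sampled waypoints of candidate planned paths, weights it with a road-constraint attention matrix, and max-pools it to a fixed-length vector. All dimensions and the random attention matrix are invented for illustration; the patent does not specify them:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two candidate planned paths, each sampled at 5 specified waypoints (x, y).
paths = [rng.normal(size=(5, 2)) for _ in range(2)]
route_feats = np.stack([p.ravel() for p in paths])  # (2, 10) route feature matrix
attn = rng.normal(size=(2, 8))        # road-constraint layer's attention matrix
weighted = attn.T @ route_feats       # (8, 10) attention-weighted route features
path_vec = weighted.max(axis=1)       # max pooling -> fixed-length (8,) vector
print(path_vec.shape)
```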
This specification provides a device for determining an unmanned vehicle motion strategy, wherein the device continuously collects images while the unmanned vehicle is driving, the device comprising:
a collection module, which determines the image collected by the unmanned vehicle at the current moment;
an encoding module, which inputs the image into the encoder of a pre-trained decision model, to obtain the road-condition feature vector corresponding to the current moment;
a strategy determination module, which inputs the road-condition feature vector corresponding to each historical moment obtained by the encoder, the road-condition feature vector corresponding to the current moment, and the motion strategy of the unmanned vehicle at the current moment into the decoder of the decision model, to obtain the motion strategy of the unmanned vehicle at the next moment.
Optionally, the encoder includes a convolutional neural network (CNN) and a first long short-term memory network (LSTM); the encoding module inputs the image at the current moment into the CNN to obtain an image feature vector, and inputs the obtained image feature vector and the road-condition feature vector corresponding to the moment before the current moment into the first LSTM, to obtain the road-condition feature vector corresponding to the current moment.
Optionally, the decoder includes an attention layer and a second long short-term memory network (LSTM); the strategy determination module determines the attention matrix of the attention layer according to the motion strategy of the unmanned vehicle at the current moment and the position of the unmanned vehicle at the current moment, inputs the road-condition feature vector corresponding to each historical moment and the road-condition feature vector corresponding to the current moment into the attention layer to obtain an attention-weighted road-condition feature vector, and inputs the attention-weighted road-condition feature vector into the second LSTM, to obtain the motion strategy of the unmanned vehicle at the next moment.
Optionally, the strategy determination module determines a road-condition feature matrix according to the road-condition feature vector corresponding to each historical moment and the road-condition feature vector corresponding to the current moment, obtains an attention-weighted road-condition feature matrix according to the road-condition feature matrix and the attention matrix, and determines the attention-weighted road-condition feature vector from the attention-weighted road-condition feature matrix by the max pooling method.
Optionally, the decoder further includes a road constraint layer; the strategy determination module determines at least one planned path according to the target position toward which the unmanned vehicle is traveling and the position of the unmanned vehicle at the current moment, collects the coordinates of specified points in each planned path and determines a route feature matrix according to the collected coordinates, determines an attention-weighted route feature matrix according to the route feature matrix and the attention matrix of the road constraint layer, determines an attention-weighted route feature vector from the attention-weighted route feature matrix by the max pooling method, and inputs the attention-weighted road-condition feature vector and the attention-weighted route feature vector into the second LSTM, to obtain the motion strategy of the unmanned vehicle at the next moment.
This specification provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the above method for determining an unmanned vehicle motion strategy.
This specification provides an unmanned vehicle comprising a memory, a processor, and a computer program stored on the memory and runnable on the processor, wherein the processor implements the above method for determining an unmanned vehicle motion strategy when executing the program.
Using at least one of the above technical solutions, the embodiments of this specification can achieve the following beneficial effects:
First, the unmanned vehicle continuously collects images while driving, so the image collected at the current moment can be determined and input into the encoder of a pre-trained decision model to obtain the road-condition feature vector corresponding to the current moment. Then, the road-condition feature vectors corresponding to each historical moment obtained by the encoder of the decision model, the road-condition feature vector corresponding to the current moment, and the motion strategy of the unmanned vehicle at the current moment are input into the decoder of the decision model, to obtain the motion strategy of the unmanned vehicle at the next moment. From the image collected at the current moment and the images collected at each historical moment, the road-condition features obtained through the encoder determine the input to the decoder, and the decoder then determines the motion strategy of the unmanned vehicle at the next moment based on the motion strategy of the vehicle at the current moment. Unlike the prior art, which uses only the position information of each obstacle to determine the motion strategy, the collected images can determine not only the position information of obstacles but also the road conditions on the unmanned vehicle's current driving route, making the determined motion strategy more flexible. Moreover, making full use of the images collected at historical moments makes it possible to determine the trend of road-condition changes. This avoids the defects of the prior art and makes the determined motion strategy more accurate.
Detailed description of the invention
The drawings described herein provide a further understanding of the application and constitute a part of it; the illustrative embodiments of the application and their description explain the application and do not unduly limit it. In the drawings:
Fig. 1 is a process for determining an unmanned vehicle motion strategy provided by this specification;
Fig. 2 is the structure of the encoder of the decision model provided by this specification;
Fig. 3 is the structure of the encoder and decoder of the decision model provided by this specification;
Fig. 4 is a schematic diagram, provided by this specification, of different moments having different importance;
Fig. 5 is a schematic diagram, provided by this specification, of road-condition feature vectors being input into the decoder;
Fig. 6 is a schematic diagram of optional paths provided by this specification;
Fig. 7 is a schematic structural diagram of a device for determining an unmanned vehicle motion strategy provided by an embodiment of this specification;
Fig. 8 is a schematic diagram of the unmanned vehicle corresponding to Fig. 1 provided by an embodiment of this specification.
Specific embodiment
To make the purposes, technical solutions, and advantages of this specification clearer, the technical solutions of this application are described clearly and completely below with reference to specific embodiments of this specification and the corresponding drawings. Obviously, the described embodiments are only a part of the embodiments of this application, not all of them. Based on the embodiments in this specification, all other embodiments obtained by a person of ordinary skill in the art without creative work shall fall within the protection scope of this application.
The technical solutions provided by the embodiments of this application are described in detail below with reference to the drawings.
Fig. 1 shows a process for determining an unmanned vehicle motion strategy provided by an embodiment of this specification, which may specifically include the following steps:
S102: determine the image collected by the unmanned vehicle at the current moment.
In this specification, the unmanned vehicle can continuously collect images through its own image sensor while driving, and in order to determine the motion strategy of the unmanned vehicle at the next moment, the image collected at the current moment can be determined. This process can specifically be executed by a device in the unmanned vehicle that determines the motion strategy; this specification does not limit this, and for convenience of description the following takes the unmanned vehicle as the executing subject.
Specifically, the unmanned vehicle can collect images in its direction of travel through the sensor. Moreover, since the other vehicles, obstacles, and road conditions on the road all change gradually over time, the sensor can collect images continuously. The collection interval can be set as needed and is not limited by this specification; for example, an image may be collected every 1/24 second, or every 1/60 second, etc. The time difference between the current moment and the previous moment is then exactly the image-collection interval.
S104: input the image into the encoder of a pre-trained decision model, to obtain the road-condition feature vector corresponding to the current moment.
In this specification, after collecting the image at the current moment, the unmanned vehicle can input it into the encoder of the pre-trained decision model, to obtain the road-condition feature vector corresponding to the current moment.
Specifically, in this specification, the structure of the decision model can be as shown in Fig. 2, where the left side is the encoder and the right side is the decoder. It can be seen that the encoder includes a convolutional neural network (Convolutional Neural Network, CNN) and the first LSTM. The unmanned vehicle can input the image at the current moment into the CNN to obtain the image feature vector output by the CNN, and then input the obtained image feature vector and the road-condition feature vector corresponding to the previous moment into the first LSTM, to obtain the road-condition feature vector of the current moment.
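A minimal numpy sketch of this encoder step follows. The CNN is replaced by a single random projection and the LSTM cell is written out by hand; all shapes and parameters are invented, so this only illustrates the data flow (image -> image feature vector -> first LSTM -> road-condition feature vector), not the trained model:

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step: gates computed from input x and previous hidden h."""
    H = h.shape[0]
    z = W @ x + U @ h + b                    # (4H,) gate pre-activations
    i = 1 / (1 + np.exp(-z[:H]))             # input gate
    f = 1 / (1 + np.exp(-z[H:2*H]))          # forget gate
    o = 1 / (1 + np.exp(-z[2*H:3*H]))        # output gate
    g = np.tanh(z[3*H:])                     # candidate cell state
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

def encode_frame(image, h, c, params):
    """Encoder step: CNN-style feature extraction, then the first LSTM folds
    the image features into the road-condition feature vector."""
    feat = np.tanh(params["W_cnn"] @ image.ravel())  # stand-in for the CNN
    return lstm_step(feat, h, c, params["W"], params["U"], params["b"])

rng = np.random.default_rng(0)
D, F, H = 8 * 8, 16, 32                      # toy image, feature, hidden sizes
params = {
    "W_cnn": rng.normal(0, 0.1, (F, D)),
    "W": rng.normal(0, 0.1, (4 * H, F)),
    "U": rng.normal(0, 0.1, (4 * H, H)),
    "b": np.zeros(4 * H),
}
h, c = np.zeros(H), np.zeros(H)
for t in range(5):                           # five consecutive camera frames
    h, c = encode_frame(rng.normal(size=(8, 8)), h, c, params)
print(h.shape)                               # (32,): road-condition vector
```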
That is, using the LSTM's characteristic of memorizing important information over long periods and forgetting useless information, the road-condition feature vector at each moment is determined not only from the image collected at that moment: the road-condition feature vectors determined at several historical moments also participate in determining it.
For example, as shown in Fig. 2, the value of t indicates different moments: t=0s indicates the current moment, and all moments less than 0s are historical moments; for instance, t=-0.2s indicates the historical moment 0.2s before the current moment. Each time the encoder collects an image, the same CNN is used to extract the image feature vector. The image feature vectors corresponding to each historical moment are then input into the LSTM in the order in which the images were collected, and the LSTM continuously outputs an encoding result at each moment; this encoding result is exactly the road-condition feature vector output by the LSTM at each moment.
Of course, since the data source of the entire decision model is the collected images, and since the images contain not only the obstacles on the road (e.g., other vehicles) but also road markings (e.g., lane lines, roads, and traffic signs) and other environmental information (e.g., road shoulders, guardrails, light poles, etc.), the image feature vector obtained by the CNN contains the features of these road markings and of the other environmental information. The road-condition feature vector obtained through the encoder's LSTM therefore contains not only the obstacle feature vectors of the prior art but also the feature vectors of other elements on the road. Moreover, since these vectors subsequently need to pass through the decoder of the decision model to determine the unmanned vehicle's motion strategy, what the road-condition feature vector contains are the feature vectors that help determine the motion strategy of the unmanned vehicle.
It should be noted that in this specification the decision model can be a sequence-to-sequence (Sequence to Sequence, Seq2Seq) model; Fig. 2 can then be regarded as the encoder and decoder of the Seq2Seq model.
S106: input the road-condition feature vector corresponding to each historical moment obtained by the encoder, the road-condition feature vector corresponding to the current moment, and the motion strategy of the unmanned vehicle at the current moment into the decoder of the decision model, to obtain the motion strategy of the unmanned vehicle at the next moment.
In this specification, after determining the road-condition feature vector of the current moment through the encoder of the decision model, the unmanned vehicle can input the obtained road-condition feature vectors corresponding to each historical moment, the road-condition feature vector of the current moment obtained in the previous step, and the motion strategy of the unmanned vehicle at the current moment into the decoder of the decision model, to obtain the motion strategy of the unmanned vehicle at the next moment output by the decoder. Since the decoder outputs the motion strategy for the next moment, the motion strategy at the current moment is exactly the motion strategy the decoder output at the moment before the current one.
Specifically, the decoder of the decision model includes an attention layer and the second LSTM, so the decision model can be as shown in Fig. 3. What is input into the second LSTM are the road-condition feature vectors of each moment output by the encoder, where each moment includes the current moment and each historical moment. Since the road-condition feature vectors of different moments differ in importance for determining the motion strategy, each road-condition feature vector needs to be attention-weighted by the attention layer before being input into the second LSTM.
Fig. 4 is a schematic diagram, provided by this specification, of different moments having different importance. In it there are 4 historical moments and the current moment; the filled rectangles indicate other vehicles on the road, the hollow rectangle indicates the unmanned vehicle, the solid lines indicate the road shoulders on both sides, the dashed line indicates the dashed yellow line in the road markings, and the arrows indicate the driving direction of each vehicle. In the upper figure, the other vehicles and the unmanned vehicle are all traveling in the same direction: the historical moments t=-1.5s and t=-1.0s are of relatively low importance, while the historical moment t=-0.5s and the current moment t=0s are comparatively important for determining that the unmanned vehicle needs to follow the vehicle ahead, and for determining the motion strategy of the unmanned vehicle at the next moment (e.g., whether to overtake or continue following). Similarly, in the lower figure there are both vehicles traveling in the same direction and oncoming vehicles, and the historical moments t=-1.5s and t=-1.0s are of relatively higher importance. The role of the attention layer is exactly to attention-weight the road-condition feature vectors of different moments, so as to distinguish the importance of the different road-condition feature vectors.
In this specification, first, the unmanned vehicle can determine the attention matrix of the decoder's attention layer according to the motion strategy of the unmanned vehicle at the current moment and the position of the unmanned vehicle at the current moment.
Then, the road-condition feature vectors corresponding to each historical moment and the road-condition feature vector corresponding to the current moment are input into the attention layer and, through the attention matrix, the attention-weighted road-condition feature vector is obtained.
Finally, the attention-weighted road-condition feature vector is input into the second LSTM, to obtain the motion strategy of the unmanned vehicle at the next moment.
Specifically, the number of road-condition feature vectors input into the attention layer is predetermined, and the historical moments that need to be input are continuously updated by cycle. For example, assume that at least 10 historical-moment road-condition feature vectors are needed; then in each update cycle, 11 road-condition feature vectors are input into the attention layer the first time (that is, the road-condition feature vectors of the 10 historical moments before the current cycle plus the road-condition feature vector of the 1 current moment of the cycle). Assuming the update cycle is 10 moments, the last input of each update cycle into the attention layer contains 20 road-condition feature vectors (that is, the vectors of the 10 historical moments before the current cycle, the vectors of 9 historical moments within the current cycle, and the vector of the 1 current moment of the cycle). Fig. 5 shows a schematic diagram of road-condition feature vectors being input, where the arrow indicates the time axis. In the current update cycle, the input at the first moment (i.e., t10) includes the road-condition feature vectors of the 10 historical moments before the current moment and the road-condition feature vector of the first moment; the input at the second moment (i.e., t11) includes the vectors of those 10 historical moments plus the vectors of the first and second moments; and so on, until at the last moment of the current update cycle (i.e., t19), the road-condition feature vectors of 20 moments are input. The first moment of the next update cycle (i.e., t20) then uses t10 to t19 as historical moments. Of course, apart from the road-condition feature vector of the current moment, the rest are all road-condition feature vectors of historical moments. The specific number of road-condition feature vectors input into the attention layer, and the cycle by which it is updated, can be set as needed; this specification does not limit them.
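Under the example numbers above (10 base historical moments, a 10-moment update cycle), the window size at each step of a cycle can be sketched as:

```python
def window_moments(step, base=10, period=10):
    """Number of moments whose road-condition feature vectors are fed to the
    attention layer at a given step of an update cycle: the `base` historical
    moments from before the cycle, plus every moment so far within the cycle
    (including the current one). `base` and `period` follow the example in
    the text and are configurable in practice."""
    return base + (step % period) + 1

sizes = [window_moments(s) for s in range(10)]
print(sizes)   # [11, 12, 13, 14, 15, 16, 17, 18, 19, 20]
```

At the start of the next cycle (step 10) the window resets to 11 moments, with the previous cycle's moments now counted among the historical ones.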
Further, since the number of road-condition feature vectors input into the attention layer is not exactly the same at different moments, while the length of the attention-weighted road-condition feature vector input into the second LSTM needs to be consistent across moments, the attention layer also needs to cross-multiply each road-condition feature vector input into the decoder with the attention matrix, so as to unify the length of the attention-weighted road-condition feature vector input into the second LSTM. That is, in addition to attention weighting, the attention matrix also serves to unify the length of the attention-weighted road-condition feature vector.
Specifically, the unmanned vehicle can first determine a road-condition feature matrix from the road-condition feature vectors corresponding to each historical moment and the road-condition feature vector corresponding to the current moment. Then, the road-condition feature matrix is cross-multiplied with the attention matrix, to obtain the attention-weighted road-condition feature matrix. Finally, the attention-weighted road-condition feature vector, which is exactly the vector input into the second LSTM, is determined from the attention-weighted road-condition feature matrix by the max pooling method.
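This weighting-plus-pooling step can be sketched in numpy. The attention matrix here is random, standing in for the output of the mapping function; the point is that multiplying by it and then max-pooling yields a fixed-length vector regardless of how many moments are in the window:

```python
import numpy as np

rng = np.random.default_rng(0)
H, K = 32, 32          # road-condition feature dimension; unified output length
for L in (11, 20):     # number of moments varies across the update cycle
    road_feats = rng.normal(size=(L, H))  # road-condition feature matrix
    attn = rng.normal(size=(L, K))        # attention matrix (from the mapping fn)
    weighted = attn.T @ road_feats        # (K, H) attention-weighted matrix
    pooled = weighted.max(axis=1)         # max pooling -> fixed-length vector
    print(pooled.shape)                   # (32,) in both cases, regardless of L
```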
The attention matrix can be generated by the unmanned vehicle through a mapping function, according to the motion strategy at the current moment and the position of the unmanned vehicle at the current moment.
Specifically, the unmanned vehicle can determine the change of its position at the current moment relative to its position at the previous moment, denoted dX_{t=0} and dY_{t=0}, where X and Y are the two coordinate values (for example, longitude and latitude), t=0 denotes the current moment, and d denotes that this is a change value. For example, assuming the unmanned vehicle collects an image every 0.5s, the interval between historical moments is 0.5s, so dX_{t=0} denotes the displacement of the unmanned vehicle at t=0s relative to t=-0.5s along the X coordinate axis, and similarly dY_{t=0} denotes the displacement of the unmanned vehicle at t=0s relative to t=-0.5s along the Y coordinate axis.
Through a mapping function g, the unmanned vehicle can determine the attention matrix of the current moment from X_t, Y_t, dX_t, and dY_t; the specific expression is g(X_t, Y_t, dX_t, dY_t). The function g maps the 4-dimensional data into a multidimensional space, whose dimensionality is determined by the requirements of the attention matrix. For example, if the attention matrix needs to convert the road-condition feature vectors into a matrix composed of attention-weighted road-condition feature vectors of length 32, then g needs to generate the corresponding attention matrix from X_t, Y_t, dX_t, and dY_t.
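A toy version of the mapping function g, implemented here as a small random two-layer projection (an assumption; the specification does not give g's internal form), shows how the 4-dimensional input (X_t, Y_t, dX_t, dY_t) can be lifted to an attention matrix of the required shape:

```python
import numpy as np

def g(x, y, dx, dy, W1, W2, out_shape):
    """Toy mapping function g: lifts the 4-d position/displacement summary
    (X, Y, dX, dY) into an attention matrix of the required shape."""
    v = np.array([x, y, dx, dy])
    hidden = np.tanh(W1 @ v)
    return (W2 @ hidden).reshape(out_shape)

rng = np.random.default_rng(0)
L, K = 11, 32                          # window size, unified vector length
W1 = rng.normal(0, 0.5, (16, 4))
W2 = rng.normal(0, 0.5, (L * K, 16))
A = g(116.40, 39.90, 0.8, -0.1, W1, W2, (L, K))  # e.g. lon/lat + displacement
print(A.shape)                                   # (11, 32)
```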
Further, after the unmanned vehicle obtains the attention-weighted road-condition feature matrix, it can determine, by the max pooling method, the attention-weighted road-condition feature vector that needs to be input into the second LSTM from that matrix. Since max pooling is an existing, mature technique for determining a maximized result from multiple data items (or features, or vectors), the specific process of determining the attention-weighted road-condition feature vector is not repeated in this specification.
In addition, like the first LSTM, the data input into the second LSTM each time includes, in addition to the newly generated attention-weighted road-condition feature vector, the motion strategy output by the second LSTM at the moment before the current one, and the second LSTM outputs the motion strategy of the unmanned vehicle for the moment after the current one.
That is, the decoder of the decision model determines the attention matrix for the current moment from the motion state of the unmanned vehicle at the current moment and, by the max pooling method, determines the attention-weighted road-condition feature vector from the road-condition feature vectors corresponding to each moment (including the historical moments and the current moment). This vector is then input into the second LSTM which, using the LSTM's characteristic of synthesizing the motion strategies output at each historical moment, determines the motion strategy of the unmanned vehicle for the moment after the current one.
It should be noted that, in this specification, the CNN, the first LSTM, the g function and the second LSTM are mathematical expressions of the parts that play different roles within the whole decision model, and can actually be regarded as a single integrated model. Therefore, when training the decision model, it should be trained as a whole.
Specifically, the structure of the decision model is as described above. The training samples are images continuously collected during several driving processes of a human-driven vehicle; the images collected during one driving process constitute one training sample, with the actual trajectory of the driver during that driving process serving as the supervision. That is, the decision model is trained by comparing the moving route of the vehicle under the driver's actual operation at each moment with the motion strategy output by the decision model at the corresponding moment. It can be seen that each round of training jointly trains the CNN, the first LSTM, the g function and the second LSTM.
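The supervision signal described above can be sketched as a simple per-moment error between the model's output strategies and the driver's actual actions. This is a sketch only, and the (speed, steering) representation of a motion strategy is an assumption for illustration; in training, such a loss would be backpropagated through the CNN, the first LSTM, g and the second LSTM together.

```python
def imitation_loss(predicted, driver):
    """Mean squared error between the motion strategies the decision model
    outputs at each moment and the driver's actual actions at those
    moments -- the supervision signal used to train the whole model."""
    total, n = 0.0, 0
    for p, d in zip(predicted, driver):
        total += sum((pi - di) ** 2 for pi, di in zip(p, d))
        n += len(p)
    return total / n

# two moments; each motion strategy is a hypothetical (speed, steering) pair
model_out = [[1.0, 0.0], [0.5, 0.5]]
driver_act = [[1.0, 0.0], [0.0, 1.0]]
loss = imitation_loss(model_out, driver_act)  # -> 0.125
```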
Based on the method for determining the motion strategy of an unmanned vehicle shown in Fig. 1, the unmanned vehicle continuously collects images while driving, so it can determine the image collected at the current moment and then input that image into the encoding end of the pre-trained decision model to obtain the road-condition feature vector corresponding to the current moment. Afterwards, the road-condition feature vectors corresponding to each historical moment obtained through the encoding end of the decision model, the road-condition feature vector corresponding to the current moment, and the motion strategy of the unmanned vehicle at the current moment are input into the decoding end of the decision model to obtain the motion strategy of the unmanned vehicle at the next moment. In other words, the input of the decoding end is determined from the road-condition features that the encoding end extracts from the image collected at the current moment and the images collected at each historical moment, and the decoding end then determines the motion strategy of the unmanned vehicle at the next moment based on the motion strategy of the vehicle at the current moment. Unlike the prior art, which uses only the position information of each obstacle to determine the motion strategy, the collected images can determine not only the position information of obstacles but also the road conditions on the current driving route of the unmanned vehicle, making the determined motion strategy more flexible. Moreover, by making full use of the images collected at historical moments, the trend of road-condition changes can be determined. This avoids the defects in the prior art and makes the determined motion strategy more accurate.
In addition, in this specification, the decoding end of the decision model may further include a road constraint layer. The road constraint layer is used to constrain the motion strategy output by the second LSTM according to the travel destination of the unmanned vehicle, so that the motion strategy generally leads toward the destination.
Specifically, first, the unmanned vehicle can determine at least one planned path according to the target position of travel and the position of the unmanned vehicle at the current moment. Since there are usually many paths from one point to another, the unmanned vehicle can perform route planning at every moment and determine several paths that reach the target position. It should be noted that the path planning described in this specification includes the selection of lanes. For example, when the unmanned vehicle travels on a road with six lanes in two directions, the planned paths may at least include a path in which the unmanned vehicle continues to travel in the current lane and paths in which the unmanned vehicle merges into either of the other two lanes, as shown in Fig. 6.
Fig. 6 is a schematic diagram of the selectable paths provided by this specification. As can be seen, from the current position of the unmanned vehicle to the target position there are only two roads; however, since one road has 3 lanes and the other has 2 lanes, 5 paths can be determined.
Afterwards, the unmanned vehicle can collect the coordinates of specified points in each planned path and determine a path feature matrix according to the collected coordinates, where the number of specified points and the spacing between them can be set as needed. For example, if the number of specified points is 10 and the spacing is 10 meters, then for one path the unmanned vehicle can collect the coordinates of 10 points spaced 10 meters apart along that path in the direction of travel. The unmanned vehicle thus obtains a path feature matrix in which each row corresponds to the coordinate values of one path. This effectively characterizes the paths that the unmanned vehicle may choose at the next moment, and thereby realizes the constraint on the motion strategy of the unmanned vehicle.
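The construction of the path feature matrix can be sketched as below. Representing each planned path as a callable from arc length ahead of the vehicle to an (x, y) coordinate is a simplification assumed here for illustration; the point counts and spacings follow the example in the text.

```python
def path_feature_matrix(paths, n_points=10, spacing=10.0):
    """Build the path feature matrix: for each planned path, take the
    coordinates of n_points specified points spaced `spacing` metres apart
    ahead of the vehicle and flatten them into one row."""
    matrix = []
    for path in paths:
        row = []
        for i in range(1, n_points + 1):
            x, y = path(i * spacing)
            row.extend([x, y])
        matrix.append(row)
    return matrix

# two toy paths: keep the current lane, or merge one lane (3.5 m) to the left
straight = lambda s: (s, 0.0)
merge_left = lambda s: (s, min(3.5, 0.125 * s))
M = path_feature_matrix([straight, merge_left], n_points=3, spacing=10.0)
```

Each row of `M` is the flattened coordinate list of one candidate path, ready for attention weighting by the road constraint layer.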
Then, the unmanned vehicle can determine the attention-weighted path feature matrix according to the path feature matrix and the attention matrix of the road constraint layer. This process is consistent with the process of determining the attention-weighted road-condition feature matrix described in step S104, and the attention matrix can be determined by the same method. Of course, since the requirements of the attention-weighted road-condition feature matrix differ from those of the attention-weighted path feature matrix, the two attention matrices obtained through training may not be identical.
Finally, the attention-weighted path feature vector is determined from the attention-weighted path feature matrix by the max-pooling method, in the same way as the attention-weighted road-condition feature vector is determined.
At this point, the unmanned vehicle can input the obtained attention-weighted road-condition feature vector and the attention-weighted path feature vector into the second LSTM to obtain the motion strategy, constrained by the paths, of the unmanned vehicle at the next moment.
Specifically, the unmanned vehicle can multiply the two attention-weighted vectors by preset coefficients respectively and add the products to obtain the vector finally input into the second LSTM. The two coefficients are normalized, and their specific values can be set as needed, which is not limited in this application. For example, if a stronger path constraint is needed, the coefficient of the attention-weighted path feature vector can be increased.
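The coefficient combination just described can be sketched as follows, with the two normalized coefficients written as alpha and 1 - alpha (a parameterization assumed here for illustration):

```python
def combine(road_vec, path_vec, alpha=0.5):
    """Combine the attention-weighted road-condition feature vector and the
    attention-weighted path feature vector with normalized preset
    coefficients alpha and 1 - alpha.  Lowering alpha (i.e. raising the
    path coefficient) strengthens the path constraint."""
    return [alpha * r + (1 - alpha) * p for r, p in zip(road_vec, path_vec)]

fused = combine([1.0, 2.0], [3.0, 4.0], alpha=0.25)  # -> [2.5, 3.5]
```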
In addition, in this specification, if the unmanned vehicle only starts collecting images after starting up, then at the moment of starting it has not yet collected images at any historical moment, so in step S106 it cannot obtain the road-condition feature vectors corresponding to each historical moment through the encoding end: since no images were collected at historical moments, there are no road-condition feature vectors corresponding to historical moments either. In this case, the unmanned vehicle can first move according to a preset motion strategy (for example, advance for 10 s at a speed of 0.1 km/h) and then determine the motion strategy by the method described in this specification. Alternatively, the unmanned vehicle can first remain stationary and, after waiting for a preset period of time, determine the motion strategy by the method described in this specification. That is to say, the motion strategy output by the decision model in this specification requires images collected at several historical moments, and when there are few historical moments, the accuracy of the motion strategy output by the decision model may decline. The unmanned vehicle can therefore wait for a period of time until the decision model has sufficiently "warmed up" before using the motion strategies output by the decision model to control the movement of the unmanned vehicle.
Based on the method for determining the motion strategy of an unmanned vehicle shown in Fig. 1, this specification embodiment also correspondingly provides a schematic structural diagram of an apparatus for determining the motion strategy of an unmanned vehicle, as shown in Fig. 7.
Fig. 7 is a schematic structural diagram of an apparatus for determining the motion strategy of an unmanned vehicle provided by this specification embodiment. The apparatus continuously collects images while the unmanned vehicle is driving, and includes:
an acquisition module 200, which determines the image collected by the unmanned vehicle at the current moment;
an encoding module 202, which inputs the image into the encoding end of a pre-trained decision model to obtain the road-condition feature vector corresponding to the current moment;
a strategy determining module 204, which inputs the road-condition feature vectors corresponding to each historical moment obtained through the encoding end, the road-condition feature vector corresponding to the current moment, and the motion strategy of the unmanned vehicle at the current moment into the decoding end of the decision model, to obtain the motion strategy of the unmanned vehicle at the next moment.
Optionally, the encoding end includes a convolutional neural network CNN and a first long short-term memory network LSTM. The encoding module 202 inputs the image at the current moment into the CNN to obtain an image feature vector, and inputs the obtained image feature vector and the road-condition feature vector corresponding to the moment before the current moment into the first LSTM, to obtain the road-condition feature vector corresponding to the current moment.
Optionally, the decoding end includes an attention layer and a second long short-term memory network LSTM. The strategy determining module 204 determines the attention matrix of the attention layer according to the motion strategy of the unmanned vehicle at the current moment and the position of the unmanned vehicle at the current moment, inputs the road-condition feature vectors corresponding to each historical moment and the road-condition feature vector corresponding to the current moment into the attention layer to obtain the attention-weighted road-condition feature vector, and inputs the attention-weighted road-condition feature vector into the second LSTM to obtain the motion strategy of the unmanned vehicle at the next moment.
Optionally, the strategy determining module 204 determines a road-condition feature matrix according to the road-condition feature vectors corresponding to each historical moment and the road-condition feature vector corresponding to the current moment, obtains the attention-weighted road-condition feature matrix according to the road-condition feature matrix and the attention matrix, and determines the attention-weighted road-condition feature vector from the attention-weighted road-condition feature matrix by the max-pooling method.
Optionally, the decoding end further includes a road constraint layer. The strategy determining module 204 determines at least one planned path according to the target position of travel of the unmanned vehicle and the position of the unmanned vehicle at the current moment, collects the coordinates of specified points in each planned path, determines a path feature matrix according to the collected coordinates, determines the attention-weighted path feature matrix according to the path feature matrix and the attention matrix of the road constraint layer, determines the attention-weighted path feature vector from the attention-weighted path feature matrix by the max-pooling method, and inputs the attention-weighted road-condition feature vector and the attention-weighted path feature vector into the second LSTM, to obtain the motion strategy of the unmanned vehicle at the next moment.
This specification embodiment also provides a computer-readable storage medium storing a computer program, and the computer program can be used to execute the method for determining the motion strategy of an unmanned vehicle provided in Fig. 1 above.
Based on the method for determining the motion strategy of an unmanned vehicle shown in Fig. 1, this specification embodiment also proposes the schematic structural diagram of the unmanned vehicle shown in Fig. 8. As shown in Fig. 8, at the hardware level, the unmanned vehicle includes a processor, an internal bus, a network interface, a memory and a non-volatile memory, and may of course also include hardware required for other services. The processor reads the corresponding computer program from the non-volatile memory into the memory and then runs it, so as to implement the method for determining the motion strategy of an unmanned vehicle described in Fig. 1 above.
Of course, besides software implementations, this specification does not exclude other implementations, such as logic devices or a combination of software and hardware; that is to say, the executing subject of the following processing flow is not limited to each logical unit, and may also be hardware or a logic device.
In the 1990s, an improvement of a technology could be clearly distinguished as an improvement in hardware (for example, an improvement to a circuit structure such as a diode, a transistor or a switch) or an improvement in software (an improvement to a method flow). However, with the development of technology, many improvements of method flows today can be regarded as direct improvements of hardware circuit structures. Designers almost always obtain the corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement of a method flow cannot be implemented with hardware entity modules. For example, a programmable logic device (PLD) (such as a field programmable gate array (FPGA)) is such an integrated circuit whose logic functions are determined by the user's programming of the device. Designers program by themselves to "integrate" a digital system onto a piece of PLD, without asking a chip manufacturer to design and fabricate a dedicated integrated circuit chip. Moreover, nowadays, instead of manufacturing integrated circuit chips manually, this programming is mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development, and the source code to be compiled must also be written in a specific programming language called a hardware description language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM and RHDL (Ruby Hardware Description Language); VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used. Those skilled in the art should also understand that it is only necessary to slightly program the method flow logically in the above hardware description languages and program it into an integrated circuit, and a hardware circuit implementing the logical method flow can be easily obtained.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (such as software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of the controller include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20 and Silicone Labs C8051F320. A memory controller may also be implemented as part of the control logic of a memory. It is also known to those skilled in the art that, besides implementing the controller purely with computer-readable program code, it is entirely possible to logically program the method steps so that the controller implements the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Such a controller can therefore be regarded as a hardware component, and the devices included in it for implementing various functions can also be regarded as structures within the hardware component; or the devices for implementing various functions can even be regarded as both software modules for implementing the method and structures within the hardware component.
The system, apparatus, module or unit described in the above embodiments may be specifically implemented by a computer chip or an entity, or by a product having a certain function. A typical implementation device is a computer. Specifically, the computer may be, for example, a personal computer, a laptop computer, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an electronic mail device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above apparatus is described by dividing its functions into various units. Of course, when implementing this specification, the functions of each unit may be implemented in one or more pieces of software and/or hardware.
Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, a system or a computer program product. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage and the like) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of the method, device (system) and computer program product according to the embodiments of the present invention. It should be understood that every flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce a device for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of guiding a computer or another programmable data processing device to work in a specific manner, such that the instructions stored in the computer-readable memory produce a manufactured article including an instruction device that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, such that a series of operation steps are executed on the computer or other programmable device to produce computer-implemented processing, and the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
In a typical configuration, a computing device includes one or more processors (CPU), input/output interfaces, network interfaces and memory.
The memory may include non-permanent memory in a computer-readable medium, random access memory (RAM) and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). The memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and information storage can be implemented by any method or technology. The information can be computer-readable instructions, data structures, modules of a program or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape and disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
It should also be noted that the terms "include", "comprise" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device including a series of elements includes not only those elements but also other elements not explicitly listed, or further includes elements inherent to such a process, method, article or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article or device including that element.
Those skilled in the art should understand that the embodiments of this specification may be provided as a method, a system or a computer program product. Therefore, this specification may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, this specification may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage and the like) containing computer-usable program code.
This specification may be described in the general context of computer-executable instructions executed by a computer, such as program modules. Generally, program modules include routines, programs, objects, components, data structures and the like that perform specific tasks or implement specific abstract data types. This specification may also be practiced in distributed computing environments, in which tasks are performed by remote processing devices connected through a communication network. In a distributed computing environment, program modules may be located in local and remote computer storage media including storage devices.
The embodiments in this specification are described in a progressive manner; for identical or similar parts between the embodiments, reference may be made to each other, and each embodiment focuses on its differences from the other embodiments. In particular, since the system embodiment is substantially similar to the method embodiment, its description is relatively simple, and for relevant parts reference may be made to the description of the method embodiment.
The above are merely embodiments of this specification and are not intended to limit this specification. For those skilled in the art, this specification may have various modifications and variations. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of this specification shall be included within the scope of the claims of this specification.

Claims (12)

1. A method for determining a motion strategy of an unmanned vehicle, characterized in that the unmanned vehicle continuously collects images while driving, and the method includes:
determining an image collected by the unmanned vehicle at a current moment;
inputting the image into an encoding end of a pre-trained decision model to obtain a road-condition feature vector corresponding to the current moment;
inputting road-condition feature vectors corresponding to each historical moment obtained through the encoding end, the road-condition feature vector corresponding to the current moment and a motion strategy of the unmanned vehicle at the current moment into a decoding end of the decision model, to obtain a motion strategy of the unmanned vehicle at a next moment.
2. The method according to claim 1, characterized in that the encoding end includes a convolutional neural network CNN and a first long short-term memory network LSTM;
inputting the image into the encoding end of the pre-trained decision model to obtain the road-condition feature vector corresponding to the current moment specifically includes:
inputting the image at the current moment into the CNN to obtain an image feature vector;
inputting the obtained image feature vector and the road-condition feature vector corresponding to the moment before the current moment into the first LSTM, to obtain the road-condition feature vector corresponding to the current moment.
3. The method according to claim 1, characterized in that the decoding end includes an attention layer and a second long short-term memory network LSTM;
inputting the road-condition feature vectors corresponding to each historical moment obtained through the encoding end, the road-condition feature vector corresponding to the current moment and the motion strategy of the unmanned vehicle at the current moment into the decoding end of the decision model, to obtain the motion strategy of the unmanned vehicle at the next moment, specifically includes:
determining an attention matrix of the attention layer according to the motion strategy of the unmanned vehicle at the current moment and a position of the unmanned vehicle at the current moment;
inputting the road-condition feature vectors corresponding to each historical moment and the road-condition feature vector corresponding to the current moment into the attention layer to obtain an attention-weighted road-condition feature vector;
inputting the attention-weighted road-condition feature vector into the second LSTM to obtain the motion strategy of the unmanned vehicle at the next moment.
4. The method according to claim 3, characterized in that inputting the road-condition feature vectors corresponding to each historical moment and the road-condition feature vector corresponding to the current moment into the attention layer to obtain the attention-weighted road-condition feature vector specifically includes:
determining a road-condition feature matrix according to the road-condition feature vectors corresponding to each historical moment and the road-condition feature vector corresponding to the current moment;
obtaining an attention-weighted road-condition feature matrix according to the road-condition feature matrix and the attention matrix;
determining the attention-weighted road-condition feature vector from the attention-weighted road-condition feature matrix by a max-pooling method.
5. The method according to claim 3, characterized in that the decoding end further includes a road constraint layer;
inputting the attention-weighted road-condition feature vector into the second LSTM to obtain the motion strategy of the unmanned vehicle at the next moment specifically includes:
determining at least one planned path according to a target position of travel of the unmanned vehicle and the position of the unmanned vehicle at the current moment;
collecting coordinates of specified points in each planned path, and determining a path feature matrix according to the collected coordinates;
determining an attention-weighted path feature matrix according to the path feature matrix and an attention matrix of the road constraint layer;
determining an attention-weighted path feature vector from the attention-weighted path feature matrix by the max-pooling method;
inputting the attention-weighted road-condition feature vector and the attention-weighted path feature vector into the second LSTM, to obtain the motion strategy of the unmanned vehicle at the next moment.
6. An apparatus for determining a motion strategy of an unmanned vehicle, characterized in that the apparatus continuously collects images while the unmanned vehicle is driving, and the apparatus includes:
an acquisition module, which determines an image collected by the unmanned vehicle at a current moment;
an encoding module, which inputs the image into an encoding end of a pre-trained decision model to obtain a road-condition feature vector corresponding to the current moment;
a strategy determining module, which inputs road-condition feature vectors corresponding to each historical moment obtained through the encoding end, the road-condition feature vector corresponding to the current moment and a motion strategy of the unmanned vehicle at the current moment into a decoding end of the decision model, to obtain a motion strategy of the unmanned vehicle at a next moment.
7. The apparatus according to claim 6, characterized in that the encoding end includes a convolutional neural network CNN and a first long short-term memory network LSTM; the encoding module inputs the image at the current moment into the CNN to obtain an image feature vector, and inputs the obtained image feature vector and the road-condition feature vector corresponding to the moment before the current moment into the first LSTM, to obtain the road-condition feature vector corresponding to the current moment.
8. The apparatus according to claim 6, characterized in that the decoding end includes an attention layer and a second long short-term memory network LSTM; the strategy determining module determines an attention matrix of the attention layer according to the motion strategy of the unmanned vehicle at the current moment and a position of the unmanned vehicle at the current moment, inputs the road-condition feature vectors corresponding to each historical moment and the road-condition feature vector corresponding to the current moment into the attention layer to obtain an attention-weighted road-condition feature vector, and inputs the attention-weighted road-condition feature vector into the second LSTM to obtain the motion strategy of the unmanned vehicle at the next moment.
9. The apparatus according to claim 8, characterized in that the strategy determining module determines a road-condition feature matrix according to the road-condition feature vectors corresponding to each historical moment and the road-condition feature vector corresponding to the current moment, obtains an attention-weighted road-condition feature matrix according to the road-condition feature matrix and the attention matrix, and determines the attention-weighted road-condition feature vector from the attention-weighted road-condition feature matrix by a max-pooling method.
10. The device according to claim 8, wherein the decoding end further comprises a road constraint layer; the policy determination module is configured to determine at least one planned path according to the target position toward which the unmanned vehicle is traveling and the position of the unmanned vehicle at the current moment, to sample the coordinates of specified points on each planned path and determine a path feature matrix from the sampled coordinates, to determine an attention-weighted path feature matrix according to the path feature matrix and the attention matrix of the road constraint layer, to determine an attention-weighted path feature vector from the attention-weighted path feature matrix by max pooling, and to input the attention-weighted road-condition feature vector and the attention-weighted path feature vector into the second LSTM to obtain the motion strategy of the unmanned vehicle at the next moment.
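The road-constraint step of claim 10 can be pictured the same way: build a matrix from coordinates sampled along candidate planned paths, weight it, and max-pool. Everything in this sketch (the number of paths, the sampling, the softmax scores used as attention weights) is an illustrative assumption, not the patented computation.

```python
import numpy as np

rng = np.random.default_rng(2)
P, K = 3, 10                                  # candidate planned paths, points per path

# Each path contributes K sampled (x, y) coordinates, flattened into one row
# of the path feature matrix of length 2K.
paths = rng.standard_normal((P, K, 2))        # sampled point coordinates per path
path_feats = paths.reshape(P, 2 * K)          # (P, 2K) path feature matrix

# Attention weights over the P candidate paths (softmax over toy scores; in
# the claim they come from the road constraint layer's attention matrix).
scores = rng.standard_normal(P)
attn = np.exp(scores) / np.exp(scores).sum()

weighted = path_feats * attn[:, None]         # attention-weighted path feature matrix
path_vec = weighted.max(axis=0)               # max pool over paths -> (2K,) path vector
```

Concatenating `path_vec` with the attention-weighted road-condition vector would then give the joint input to the second LSTM described in the claim.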
11. A computer-readable storage medium, wherein the storage medium stores a computer program which, when executed by a processor, implements the method of any one of claims 1-5.
12. An unmanned vehicle comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the method of any one of claims 1-5.
CN201910741637.9A 2019-08-12 2019-08-12 Method and device for determining unmanned vehicle motion strategy Active CN110488821B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910741637.9A CN110488821B (en) 2019-08-12 2019-08-12 Method and device for determining unmanned vehicle motion strategy


Publications (2)

Publication Number Publication Date
CN110488821A true CN110488821A (en) 2019-11-22
CN110488821B CN110488821B (en) 2020-12-29

Family

ID=68550611

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910741637.9A Active CN110488821B (en) 2019-08-12 2019-08-12 Method and device for determining unmanned vehicle motion strategy

Country Status (1)

Country Link
CN (1) CN110488821B (en)


Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107092254A (en) * 2017-04-27 2017-08-25 北京航空航天大学 Design method of a household floor-sweeping robot based on deep reinforcement learning
CN107292229A (en) * 2017-05-08 2017-10-24 北京三快在线科技有限公司 Image recognition method and device
CN107563332A (en) * 2017-09-05 2018-01-09 百度在线网络技术(北京)有限公司 Method and apparatus for determining driving behavior of an unmanned vehicle
CN108319909A (en) * 2018-01-29 2018-07-24 清华大学 Driving behavior analysis method and system
CN109409497A (en) * 2017-08-15 2019-03-01 高德信息技术有限公司 Road condition prediction method and device
US20190077398A1 (en) * 2017-09-14 2019-03-14 Toyota Motor Engineering & Manufacturing North America, Inc. System and method for vehicle lane change prediction using structural recurrent neural networks
US20190113917A1 (en) * 2017-10-16 2019-04-18 Toyota Research Institute, Inc. System and method for leveraging end-to-end driving models for improving driving task modules
CN109697458A (en) * 2018-11-27 2019-04-30 深圳前海达闼云端智能科技有限公司 Method and apparatus for controlling device movement, storage medium, and electronic device
CN109816027A (en) * 2019-01-29 2019-05-28 北京三快在线科技有限公司 Training method and device for an unmanned driving decision model, and unmanned device
CN109976153A (en) * 2019-03-01 2019-07-05 北京三快在线科技有限公司 Method, apparatus, and electronic device for controlling an unmanned device and for model training
CN110083163A (en) * 2019-05-20 2019-08-02 三亚学院 5G C-V2X vehicle-road-cloud cooperative perception method and system for autonomous vehicles


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
DEHENG QIAN et al.: "End-to-End Learning Driver Policy using Moments Deep Neural Network", 2018 IEEE International Conference on Robotics and Biomimetics (ROBIO) *
LI JUNJIE et al.: "Deep-Learning-Based Driving Behavior Recognition for Industrial Vehicles", Information and Communication Technology *
WANG DAN: "End-to-End Autonomous Driving Based on Branch-Network Auxiliary Tasks", Integrated Circuit Applications *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111046981A (en) * 2020-03-17 2020-04-21 北京三快在线科技有限公司 Training method and device for unmanned vehicle control model
CN111414852A (en) * 2020-03-19 2020-07-14 驭势科技(南京)有限公司 Image prediction and vehicle behavior planning method, device and system and storage medium
CN111152796A (en) * 2020-04-07 2020-05-15 北京三快在线科技有限公司 Vehicle motion state prediction method and device
CN111366150A (en) * 2020-04-20 2020-07-03 Oppo广东移动通信有限公司 Running direction detection method and device, electronic equipment and storage medium
CN111366150B (en) * 2020-04-20 2022-03-18 Oppo广东移动通信有限公司 Running direction detection method and device, electronic equipment and storage medium
CN111552294A (en) * 2020-05-14 2020-08-18 山东师范大学 Outdoor robot path-finding simulation system and method based on time dependence
CN111552294B (en) * 2020-05-14 2024-03-26 山东师范大学 Outdoor robot path finding simulation system and method based on time dependence
CN113359744A (en) * 2021-06-21 2021-09-07 暨南大学 Robot obstacle avoidance system based on safety reinforcement learning and visual sensor
CN113359744B (en) * 2021-06-21 2022-03-01 暨南大学 Robot obstacle avoidance system based on safety reinforcement learning and visual sensor
CN116168362A (en) * 2023-02-27 2023-05-26 小米汽车科技有限公司 Pre-training method and device for vehicle perception model, electronic equipment and vehicle

Also Published As

Publication number Publication date
CN110488821B (en) 2020-12-29

Similar Documents

Publication Publication Date Title
CN110488821A (en) A kind of method and device of determining unmanned vehicle Motion
CN111190427B (en) Method and device for planning track
CN111114543B (en) Trajectory prediction method and device
CN110989636B (en) Method and device for predicting track of obstacle
US20230100814A1 (en) Obstacle trajectory prediction method and apparatus
CN110262486A Unmanned device motion control method and device
CN112364997B (en) Method and device for predicting track of obstacle
CN111208838B (en) Control method and device of unmanned equipment
CN113805572A (en) Method and device for planning movement
CN111076739B (en) Path planning method and device
CN109094573A Method and system for determining optimal coefficients of a controller of an autonomous driving vehicle
CN111126362B (en) Method and device for predicting obstacle track
CN109754636A Parking space cooperative sensing and identification and parking assistance method and device
CN111508258A (en) Positioning method and device
CN110287850A Model training and object recognition method and device
CN113110526B (en) Model training method, unmanned equipment control method and device
CN111912423B (en) Method and device for predicting obstacle trajectory and training model
CN109670671A (en) Public transport network evaluation method and device
CN116001811A (en) Track planning method, device and equipment for automatic driving vehicle
CN113968243B (en) Obstacle track prediction method, device, equipment and storage medium
CN116476863A (en) Automatic driving transverse and longitudinal integrated decision-making method based on deep reinforcement learning
CN110530398A Electronic map accuracy detection method and device
CN113074748B (en) Path planning method and device for unmanned equipment
CN116295415A (en) Map-free maze navigation method and system based on pulse neural network reinforcement learning
CN113848913B (en) Control method and control device of unmanned equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant