CN107487358A - A kind of driver's rotating direction control method based on brain emotion learning loop model - Google Patents
- Publication number
- CN107487358A
- Authority
- CN
- China
- Prior art keywords
- signal
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B62—LAND VEHICLES FOR TRAVELLING OTHERWISE THAN ON RAILS
- B62D—MOTOR VEHICLES; TRAILERS
- B62D6/00—Arrangements for automatically controlling steering depending on driving conditions sensed and responded to, e.g. control circuits
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B62—LAND VEHICLES FOR TRAVELLING OTHERWISE THAN ON RAILS
- B62D—MOTOR VEHICLES; TRAILERS
- B62D15/00—Steering not otherwise provided for
- B62D15/02—Steering position indicators ; Steering position determination; Steering aids
- B62D15/025—Active steering aids, e.g. helping the driver by actively influencing the steering system after environment evaluation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/015—Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
Abstract
The present invention relates to a driver steering control method based on a brain emotional learning loop model, characterized by the following steps. First, a sensory signal SI and an emotional signal EC are collected using a two-point (far/near) preview driver steering model. Then a brain emotional learning loop model is established on a neurophysiological basis; the sensory signal SI and the emotional signal EC obtained from the driver steering model are input to the brain emotional learning loop model to obtain a steering-wheel angle, which is continuously corrected to achieve accurate tracking of the path trajectory. Finally, a linear two-degree-of-freedom vehicle dynamics model is established, and the corrected steering-wheel angle is input to the vehicle model for steering control. The driver steering model of the present invention can simulate the cognitive behavior of a real driver, so that the model is well consistent with human cognitive behavior; more accurate vehicle tracking is thereby achieved, making the automobile safer.
Description
Technical field
The invention belongs to the field of intelligent-vehicle driving simulation, and in particular relates to a driver steering control method based on a brain emotional learning loop model.
Background technology
The earliest driver models were proposed by McRuer in the 1970s. With the subsequent development and maturation of control theory, driver models advanced rapidly: the single-point optimal preview driver model, the LQR-based multi-point preview driver model, driver models based on model predictive control, and others were proposed in succession. Although these driver models can track a path trajectory, they are all built on solutions that are optimal only in a mathematical sense and cannot simulate the cognitive behavior of a real driver; their evaluation of vehicle safety performance is therefore still insufficiently objective.
At the end of the 20th century, the biologist Neese and colleagues found that emotion plays a positive, driving role in human cognitive behavior, overturning the traditional view that emotion hinders rational decision making. Since then, many scholars at home and abroad have begun in-depth study of the brain emotional learning loop. In 2000, Moren et al. proposed the neurophysiology-based brain emotional learning (BEL) model. By modeling the transfer of emotional information between the amygdala and the orbitofrontal cortex, the brain emotional learning loop is added to a general learning model, so that the learning model is well consistent with human cognitive behavior.
Content of the invention
1. Technical problem to be solved:
Addressing the shortcoming that existing driver models cannot simulate the cognitive behavior of a real driver, the present invention combines the prior art with a brain emotional learning loop model to propose a driver steering model and control method that can simulate that cognitive behavior.
2. Technical scheme:
A driver steering control method based on a brain emotional learning loop model comprises the following steps.
Step 1: collect the sensory signal SI and the emotional signal EC.
Using the two-point (far/near) preview driver steering model, the driver obtains the lateral displacement deviation e_y from the near preview point and uses it as the sensory signal SI; the direction deviation θ is obtained from the far preview point. The lateral displacement deviation e_y and the direction deviation θ are fused by fuzzy logic to produce the emotional signal EC. The process is shown in Fig. 1.
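The patent specifies fuzzy-logic fusion of e_y and θ but gives no membership functions or rule base. As a stand-in only, the sketch below blends the two preview errors with a saturating weighted sum; the weights `w_lat` and `w_dir` and the tanh saturation are assumptions, not part of the patent.

```python
import math

def fuse_emotion_signal(e_y: float, theta: float,
                        w_lat: float = 0.6, w_dir: float = 0.4) -> float:
    """Fuse the near-point lateral offset e_y and the far-point heading
    error theta into the emotion signal EC. A saturating weighted blend
    stands in for the unspecified fuzzy inference of step 1."""
    return math.tanh(w_lat * e_y + w_dir * theta)
```

Any genuine fuzzy inference system (e.g. Mamdani rules over e_y and θ) could be substituted here without changing the rest of the loop.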
Step 2: input the emotional signal EC and the sensory signal SI to the brain emotional learning loop model to obtain the output E of the brain emotional learning loop.
Establishing the brain emotional learning loop model involves the basic structure of the loop, the learning process of the amygdala, and the correction process of the orbitofrontal cortex, as shown in Fig. 2.
(1) Basic structure of the brain emotional learning loop
The brain emotional learning loop consists of four parts: the thalamus, the sensory cortex, the orbitofrontal cortex, and the amygdala. Environmental stimuli enter the sensory cortex through the thalamus for deep processing, producing the sensory signal SI. The sensory signal SI is passed to the amygdala, which performs memory learning under the cue of the emotional signal EC and outputs signal A; SI is also passed to the orbitofrontal cortex, which corrects the whole emotional learning loop under the supervision of EC and outputs signal O. The output of the whole brain emotional learning loop is finally:
E = A - O (1)
(2) Learning process of the amygdala
Each sensory input signal $SI_i$ fed to the amygdala corresponds to one amygdala node, and each amygdala node has a variable connection weight $G_{A_i}$; multiplying $SI_i$ by $G_{A_i}$ gives that node's output. The output of the whole amygdala is then:

$$A=\sum_{i=1}^{m}A_i=\sum_{i=1}^{m}SI_i\cdot G_{A_i}\qquad(2)$$

where the adjustment rate of the weight $G_{A_i}$ is:

$$\Delta G_{A_i}=\alpha\cdot SI_i\cdot\max\!\left(0,\;EC-\sum_{i=1}^{m}A_i\right)\qquad(3)$$

and $\alpha$ is the learning rate of the amygdala weights, which determines the learning speed of the amygdala.
(3) Correction process of the orbitofrontal cortex
Each sensory input signal $SI_i$ fed to the orbitofrontal cortex corresponds to one orbitofrontal cortex node, and each orbitofrontal cortex node has a variable connection weight $G_{O_i}$; multiplying $SI_i$ by $G_{O_i}$ gives that node's output. The output of the whole orbitofrontal cortex is then:

$$O=\sum_{i=1}^{m}O_i=\sum_{i=1}^{m}SI_i\cdot G_{O_i}\qquad(4)$$

where the adjustment rate of the weight $G_{O_i}$ is:

$$\Delta G_{O_i}=\beta\cdot SI_i\cdot\left(\sum_{i=1}^{m}A_i-\sum_{i=1}^{m}O_i-EC\right)\qquad(5)$$

and $\beta$ is the learning rate of the orbitofrontal cortex weights.
Substituting equations (2) and (4) into equation (1), the output of the brain emotional learning loop is:

$$E=\sum_{i=1}^{m}SI_i\cdot G_{A_i}-\sum_{i=1}^{m}SI_i\cdot G_{O_i}\qquad(6)$$

with the adjustment rates of the weights $G_{A_i}$ and $G_{O_i}$ given by equations (3) and (5).
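The update rules of step 2 translate directly into code. The class below is a minimal Python rendering of the BEL loop as described above: amygdala output per equation (2), orbitofrontal output per equation (4), controller output E = A - O per equation (1), and the weight updates of equations (3) and (5). The class name and the default learning rates are illustrative, not from the patent.

```python
class BELController:
    """Minimal brain emotional learning (BEL) loop of equations (1)-(6)."""

    def __init__(self, n_inputs: int, alpha: float = 0.1, beta: float = 0.1):
        self.GA = [0.0] * n_inputs  # amygdala weights, GA(0) = 0
        self.GO = [0.0] * n_inputs  # orbitofrontal weights, GO(0) = 0
        self.alpha = alpha          # amygdala learning rate (alpha)
        self.beta = beta            # orbitofrontal learning rate (beta)

    def step(self, SI, EC):
        """One pass: compute E = A - O, then adjust the weights."""
        A = sum(si * ga for si, ga in zip(SI, self.GA))  # eq. (2)
        O = sum(si * go for si, go in zip(SI, self.GO))  # eq. (4)
        for i, si in enumerate(SI):
            # eq. (3): amygdala weights only grow (max with 0)
            self.GA[i] += self.alpha * si * max(0.0, EC - A)
            # eq. (5): orbitofrontal weights track the residual A - O - EC
            self.GO[i] += self.beta * si * (A - O - EC)
        return A - O  # eq. (1)
```

With zero initial weights the first output is 0; the amygdala then learns toward EC while the orbitofrontal cortex corrects any excess.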
Step 3: take the output E of the brain emotional learning loop obtained in step 2 as the steering-wheel angle $\delta_{sw}$ and input it to the vehicle model to obtain the vehicle state; the process is shown in Fig. 3.
The vehicle model is a linear two-degree-of-freedom vehicle dynamics model, shown in Fig. 3, whose state equation is:

$$\begin{bmatrix}\dot{v}_y\\\ddot{\psi}\\\dot{y}\\\dot{\psi}\end{bmatrix}=
\begin{bmatrix}
-\dfrac{c_f+c_r}{mv_x} & \dfrac{c_rl_r-c_fl_f}{mv_x}-v_x & 0 & 0\\
\dfrac{c_rl_r-c_fl_f}{I_zv_x} & -\dfrac{c_rl_r^2+c_fl_f^2}{I_zv_x} & 0 & 0\\
1 & 0 & 0 & v_x\\
0 & 1 & 0 & 0
\end{bmatrix}
\begin{bmatrix}v_y\\\dot{\psi}\\y\\\psi\end{bmatrix}+
\begin{bmatrix}\dfrac{c_f}{m}\\[4pt]\dfrac{c_fl_f}{I_z}\\0\\0\end{bmatrix}\dfrac{\delta_{sw}}{n}\qquad(7)$$

where m is the total vehicle mass; $c_f$ and $c_r$ are the equivalent cornering stiffnesses of the front and rear wheels; $l_f$ and $l_r$ are the distances from the vehicle's center of mass to the front and rear axles; $I_z$ is the vehicle's yaw moment of inertia; $v_x$ and $v_y$ are the longitudinal and lateral velocities of the vehicle; $\psi$ is the vehicle yaw angle; y is the lateral displacement of the vehicle in earth coordinates; $\delta_{sw}$ is the steering-wheel angle; and n is the angular transmission ratio from the steering wheel to the front wheels.
Since the vehicle's front-wheel angle equals the steering-wheel angle divided by the angular transmission ratio, i.e. $\delta_f=\delta_{sw}/n$, the present invention controls the steering-wheel angle $\delta_{sw}$ through the brain emotional learning loop and thereby controls the front-wheel angle, achieving the purpose of steering control.
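Equation (7) can be stepped forward numerically to obtain the vehicle state from a steering-wheel angle. The sketch below applies one forward-Euler step; all numeric vehicle parameters (mass, stiffnesses, geometry, speed, steering ratio) are invented placeholders, not values from the patent.

```python
def bicycle_2dof_step(state, delta_sw, dt,
                      m=1500.0, Iz=2500.0, cf=8e4, cr=8e4,
                      lf=1.2, lr=1.4, vx=20.0, n=16.0):
    """One forward-Euler step of the linear 2-DOF model of equation (7).
    state = [vy, psi_dot, y, psi]; delta_sw is the steering-wheel angle,
    divided by the steering ratio n to get the front-wheel angle."""
    vy, r, y, psi = state
    delta_f = delta_sw / n  # front-wheel angle
    vy_dot = (-(cf + cr) / (m * vx)) * vy \
        + ((cr * lr - cf * lf) / (m * vx) - vx) * r + (cf / m) * delta_f
    r_dot = ((cr * lr - cf * lf) / (Iz * vx)) * vy \
        - ((cr * lr**2 + cf * lf**2) / (Iz * vx)) * r + (cf * lf / Iz) * delta_f
    y_dot = vy + vx * psi   # row 3 of the state matrix: [1, 0, 0, vx]
    psi_dot = r             # row 4: [0, 1, 0, 0]
    return [vy + dt * vy_dot, r + dt * r_dot,
            y + dt * y_dot, psi + dt * psi_dot]
```

A fixed-step Euler integrator is the simplest choice here; any ODE solver applied to the same state matrix would serve equally well.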
Step 4: input the vehicle state obtained in step 3 back into the two-point (far/near) preview driver steering model of step 1, and repeat steps 1 to 3 to obtain accurate tracking of the vehicle's trajectory on the road. Throughout this process, the front-wheel angle, and hence the steering of the vehicle, is controlled through the steering-wheel angle.
Further, in step 1 the lateral displacement deviation e_y obtained from the near preview point is multiplied by a preset proportional adjustment factor k_1 of the sensory input signal SI to produce the sensory signal SI, which is input to the brain emotional learning loop model.
Further, in step 2 the output of the brain emotional learning loop, i.e. the steering-wheel angle δ_sw, is multiplied by a preset proportional adjustment factor k_2 of the emotional input signal EC and then superimposed on the emotional signal EC produced in step 1; the result is input to the brain emotional learning loop model as the emotional signal EC.
In summary, the present invention first collects the lateral displacement deviation e_y obtained from the near preview point as the sensory signal SI and obtains the direction deviation θ from the far preview point; e_y and θ are fused by fuzzy logic to produce the emotional signal EC. SI and EC are input to the brain emotional learning loop model to obtain the loop output E, which is used as the steering-wheel angle δ_sw of the two-degree-of-freedom vehicle dynamics model; by controlling δ_sw, the front-wheel angle is controlled, achieving the purpose of steering control of the vehicle.
3. Beneficial effects:
By adding the brain emotional learning loop to the two-point (far/near) preview steering model, the present invention establishes a driver steering model that is well consistent with human cognitive behavior, thereby achieving more accurate vehicle tracking and improving the safety performance of the automobile.
Brief description of the drawings
Fig. 1 is a schematic diagram of the driver's two-point (far/near) preview steering model;
Fig. 2 is a block diagram of the basic structure of the brain emotional learning (BEL) loop;
Fig. 3 is the linear two-degree-of-freedom vehicle dynamics model;
Fig. 4 is a structural block diagram of the driver steering control method based on a brain emotional learning loop model.
Embodiment
The purpose of the present invention is to simulate the driver during intelligent-vehicle travel so as to ensure safe driving and accurately track the vehicle's driving trajectory. The structural framework of the method used to achieve this purpose is shown in Fig. 4. The near and far preview points output SI and EC as in step 1 above; the SI and EC signals are input to the BEL model, which produces the steering-wheel angle δ_sw; δ_sw is input to the vehicle model, which outputs the vehicle's state parameters; cycling through these steps achieves accurate tracking of the vehicle's driving trajectory.
The embodiment is a driver steering control method based on a brain emotional learning loop model, whose specific control process comprises:
S1: parameter initialization. Set the proportional adjustment factors k_1 and k_2 of the sensory input signal SI and the emotional input signal EC respectively; set the initial node weights G_A(0) = G_O(0) = 0; set the weight learning rates α and β.
S2: compute the sensory signal SI and the emotional signal EC. Using the two-point (far/near) preview driver steering model, the driver obtains the lateral displacement deviation e_y from the near preview point as the sensory signal SI and the direction deviation θ from the far preview point; e_y and θ are fused by fuzzy logic to produce the emotional signal EC, as shown in Fig. 1.
S4: compute the weight adjustment rates ΔG_Ai and ΔG_Oi, correct the weights G_A and G_O, and compute the output E of the brain emotional learning loop. With SI and EC as input signals to the brain emotional learning loop model, apply equations (1), (2), (3), (4), and (6) in turn to calculate the output E of the loop.
S5: use the output E of the brain emotional learning loop as the steering-wheel angle δ_sw of the linear two-degree-of-freedom vehicle dynamics model of Fig. 3; input it to the vehicle model to obtain the driving state of the vehicle, thereby achieving steering control of the vehicle.
S6: return to step S2 and continuously correct the output δ_sw of the brain emotional learning loop, finally achieving accurate tracking of the path trajectory.
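The steps above can be sketched as one closed loop. This is a toy simulation under many assumptions: the fuzzy fusion of S2 is replaced by a tanh blend, all numeric parameters are invented, and the steering command is clamped for numerical safety. It illustrates the loop structure (preview errors → BEL → steering → lateral dynamics) only, not the patent's tuned behavior.

```python
import math

def simulate_lane_keeping(steps=200, dt=0.02, vx=20.0, y_ref=1.0,
                          alpha=0.05, beta=0.04, n=16.0):
    """Toy closed loop of steps S1-S6 for a single scalar sensory input.
    Returns the final lateral position y after tracking y_ref."""
    m, Iz, cf, cr, lf, lr = 1500.0, 2500.0, 8e4, 8e4, 1.2, 1.4  # placeholders
    GA = GO = 0.0                       # S1: zero initial weights
    vy = r = y = psi = 0.0              # vehicle state [vy, psi_dot, y, psi]
    for _ in range(steps):
        e_y = y_ref - y                 # near-point lateral offset
        theta = -psi                    # far-point heading error (assumed)
        SI = e_y                        # S2: sensory signal (k1 = 1)
        EC = math.tanh(0.6 * e_y + 0.4 * theta)  # stand-in for fuzzy fusion
        A, O = SI * GA, SI * GO         # S4: eqs. (2), (4)
        GA += alpha * SI * max(0.0, EC - A)      # eq. (3)
        GO += beta * SI * (A - O - EC)           # eq. (5)
        delta_sw = A - O                # S5: E used as steering-wheel angle
        df = max(-0.5, min(delta_sw / n, 0.5))   # clamp for numerical safety
        vy_dot = (-(cf + cr) / (m * vx)) * vy \
            + ((cr * lr - cf * lf) / (m * vx) - vx) * r + (cf / m) * df
        r_dot = ((cr * lr - cf * lf) / (Iz * vx)) * vy \
            - ((cr * lr**2 + cf * lf**2) / (Iz * vx)) * r + (cf * lf / Iz) * df
        vy += dt * vy_dot
        r += dt * r_dot
        y += dt * (vy + vx * psi)       # S6: loop back to S2
        psi += dt * r
    return y
```

Because the weights adapt online, the transient depends strongly on the invented parameters; the sketch only demonstrates that the loop runs and stays bounded.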
Although the present invention has been disclosed above with preferred embodiments, they are not intended to limit the invention. Any person skilled in the art may make various changes or modifications without departing from the spirit and scope of the invention; the scope of protection of the present invention shall therefore be defined by the claims of this application.
Claims (3)
- 1. A driver steering control method based on a brain emotional learning loop model, characterized by comprising the following steps:

Step 1: collect the sensory signal SI and the emotional signal EC. Using the two-point (far/near) preview driver steering model, the driver obtains the lateral displacement deviation $e_y$ from the near preview point as the sensory signal SI; the direction deviation $\theta$ is obtained from the far preview point; $e_y$ and $\theta$ are fused by fuzzy logic to produce the emotional signal EC.

Step 2: input the emotional signal EC and the sensory signal SI to the brain emotional learning loop model to obtain the output of the brain emotional learning loop.

(1) Basic structure of the brain emotional learning loop. The loop consists of four parts: the thalamus, the sensory cortex, the orbitofrontal cortex, and the amygdala. Environmental stimuli enter the sensory cortex through the thalamus for deep processing, producing the sensory signal SI. SI is passed to the amygdala, which performs memory learning under the cue of EC and outputs signal A; SI is also passed to the orbitofrontal cortex, which corrects the whole emotional learning loop under the supervision of EC and outputs signal O. The output of the whole brain emotional learning loop is finally:

E = A - O (1)

(2) Learning process of the amygdala. Each sensory input signal $SI_i$ fed to the amygdala corresponds to one amygdala node with a variable connection weight $G_{A_i}$; multiplying $SI_i$ by $G_{A_i}$ gives that node's output. The output of the whole amygdala is then:

$$A=\sum_{i=1}^{m}A_i=\sum_{i=1}^{m}SI_i\cdot G_{A_i}\qquad(2)$$

where the adjustment rate of the weight $G_{A_i}$ is:

$$\Delta G_{A_i}=\alpha\cdot SI_i\cdot\max\!\left(0,\;EC-\sum_{i=1}^{m}A_i\right)\qquad(3)$$

and $\alpha$ is the learning rate of the amygdala weights, which determines the learning speed of the amygdala.

(3) Correction process of the orbitofrontal cortex. Each sensory input signal $SI_i$ fed to the orbitofrontal cortex corresponds to one orbitofrontal cortex node with a variable connection weight $G_{O_i}$; multiplying $SI_i$ by $G_{O_i}$ gives that node's output. The output of the whole orbitofrontal cortex is then:

$$O=\sum_{i=1}^{m}O_i=\sum_{i=1}^{m}SI_i\cdot G_{O_i}\qquad(4)$$

where the adjustment rate of the weight $G_{O_i}$ is:

$$\Delta G_{O_i}=\beta\cdot SI_i\cdot\left(\sum_{i=1}^{m}A_i-\sum_{i=1}^{m}O_i-EC\right)\qquad(5)$$

and $\beta$ is the learning rate of the orbitofrontal cortex weights. Substituting equations (2) and (4) into equation (1), the output of the brain emotional learning loop is:

$$E=\sum_{i=1}^{m}SI_i\cdot G_{A_i}-\sum_{i=1}^{m}SI_i\cdot G_{O_i}\qquad(6)$$

with the adjustment rates of $G_{A_i}$ and $G_{O_i}$ given by equations (3) and (5).

Step 3: take the output E of the brain emotional learning loop obtained in step 2 as the steering-wheel angle $\delta_{sw}$ and input it to the vehicle model to obtain the vehicle state. The vehicle model is a linear two-degree-of-freedom vehicle dynamics model whose state equation is:

$$\begin{bmatrix}\dot{v}_y\\\ddot{\psi}\\\dot{y}\\\dot{\psi}\end{bmatrix}=
\begin{bmatrix}
-\dfrac{c_f+c_r}{mv_x} & \dfrac{c_rl_r-c_fl_f}{mv_x}-v_x & 0 & 0\\
\dfrac{c_rl_r-c_fl_f}{I_zv_x} & -\dfrac{c_rl_r^2+c_fl_f^2}{I_zv_x} & 0 & 0\\
1 & 0 & 0 & v_x\\
0 & 1 & 0 & 0
\end{bmatrix}
\begin{bmatrix}v_y\\\dot{\psi}\\y\\\psi\end{bmatrix}+
\begin{bmatrix}\dfrac{c_f}{m}\\[4pt]\dfrac{c_fl_f}{I_z}\\0\\0\end{bmatrix}\dfrac{\delta_{sw}}{n}\qquad(7)$$

where m is the total vehicle mass; $c_f$ and $c_r$ are the equivalent cornering stiffnesses of the front and rear wheels; $l_f$ and $l_r$ are the distances from the vehicle's center of mass to the front and rear axles; $I_z$ is the vehicle's yaw moment of inertia; $v_x$ and $v_y$ are the longitudinal and lateral velocities of the vehicle; $\psi$ is the vehicle yaw angle; y is the lateral displacement of the vehicle in earth coordinates; $\delta_{sw}$ is the steering-wheel angle; and n is the angular transmission ratio from the steering wheel to the front wheels. Steering control of the vehicle is thus achieved from the lateral displacement deviation $e_y$ and the direction deviation $\theta$.

Step 4: input the vehicle state obtained in step 3 back into the two-point (far/near) preview driver steering model of step 1, and repeat steps 1 to 3 to obtain accurate tracking of the vehicle's trajectory on the road.
- 2. The driver steering control method based on a brain emotional learning loop model according to claim 1, characterized in that in step 1 the lateral displacement deviation $e_y$ obtained from the near preview point is multiplied by a preset proportional adjustment factor $k_1$ of the sensory input signal SI to produce the sensory signal SI, which is input to the brain emotional learning loop model.
- 3. The driver steering control method based on a brain emotional learning loop model according to claim 1, characterized in that in step 2 the output E of the brain emotional learning loop, i.e. the steering-wheel angle $\delta_{sw}$, is multiplied by a preset proportional adjustment factor $k_2$ of the emotional input signal EC and then superimposed on the emotional signal EC produced in step 1; the result is input to the brain emotional learning loop model as the emotional signal EC.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710650187.3A CN107487358A (en) | 2017-08-02 | 2017-08-02 | A kind of driver's rotating direction control method based on brain emotion learning loop model |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107487358A true CN107487358A (en) | 2017-12-19 |
Family
ID=60644943
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710650187.3A Pending CN107487358A (en) | 2017-08-02 | 2017-08-02 | A kind of driver's rotating direction control method based on brain emotion learning loop model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107487358A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1609745A (en) * | 2003-10-23 | 2005-04-27 | 阿尔卑斯电气株式会社 | Force-feedback input device |
CN102656613A (en) * | 2009-12-18 | 2012-09-05 | 本田技研工业株式会社 | A predictive human-machine interface using eye gaze technology, blind spot indicators and driver experience |
WO2016035268A1 (en) * | 2014-09-03 | 2016-03-10 | 株式会社デンソー | Travel control system for vehicle |
Non-Patent Citations (1)
Title |
---|
Cheng Hui (程慧): "Research on a two-point preview driver steering model" (两点预瞄驾驶员转向模型研究), China Master's Theses Full-text Database, Engineering Science and Technology II (《中国优秀硕士学位论文全文数据库工程科技II辑》) *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107340772B (en) | Unmanned-oriented anthropomorphic reference trajectory planning method | |
CN110322017A (en) | Automatic Pilot intelligent vehicle Trajectory Tracking Control strategy based on deeply study | |
CN107380161B (en) | A kind of active steering control device for aiding in driver to realize desired ride track | |
CN105329238B (en) | A kind of autonomous driving vehicle lane-change control method based on monocular vision | |
CN108415245B (en) | The fault tolerant control method of autonomous fleet operations under the conditions of a kind of heterogeneous car networking | |
CN107512305A (en) | Wire-controlled steering system and its stability control method | |
CN108919837B (en) | Second-order sliding mode control method of automatic driving vehicle based on visual dynamics | |
CN110329347B (en) | Steering control system based on driver characteristics and control method thereof | |
CN109726804A (en) | A kind of intelligent vehicle driving behavior based on driving prediction field and BP neural network personalizes decision-making technique | |
CN112232490A (en) | Deep simulation reinforcement learning driving strategy training method based on vision | |
CN111717207A (en) | Cooperative steering control method considering human-vehicle conflict | |
Butz et al. | Optimized sensory-motor couplings plus strategy extensions for the torcs car racing challenge | |
CN108859648A (en) | A kind of suspension damper damping control neural network based switching weighting coefficient determines method | |
CN110920616A (en) | Intelligent vehicle lane changing track and lane changing track following control method | |
CN110497915A (en) | A kind of vehicle driving state estimation method based on Weighted Fusion algorithm | |
CN110320916A (en) | Consider the autonomous driving vehicle method for planning track and system of occupant's impression | |
Siddiqi et al. | Ergonomic path planning for autonomous vehicles-an investigation on the effect of transition curves on motion sickness | |
CN106828591A (en) | A kind of electric boosting steering system multi-mode method for handover control | |
CN107487358A (en) | A kind of driver's rotating direction control method based on brain emotion learning loop model | |
CN113104050B (en) | Unmanned end-to-end decision method based on deep reinforcement learning | |
CN109712424A (en) | A kind of automobile navigation method based on Internet of Things | |
CN113022702A (en) | Intelligent networking automobile self-adaptive obstacle avoidance system based on steer-by-wire and game result | |
CN106601076A (en) | Automobile self-help training device based on strapdown inertial navigation and area-array cameras and method | |
Siddiqi et al. | Motion sickness mitigating algorithms and control strategy for autonomous vehicles | |
CN106054877A (en) | Autonomous driving vehicle lane-line self-adaptive keeping method based on anti-saturation strategy |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | | Application publication date: 20171219 |