CN114822036A - Vehicle intelligent regulation and control method for preventing rear-end collision under multiple road conditions


Info

Publication number
CN114822036A
CN114822036A (application CN202210531874.4A; granted publication CN114822036B)
Authority
CN
China
Prior art keywords
vehicle
layer
coordinate
rear-end collision
camera
Prior art date
Legal status: Granted
Application number
CN202210531874.4A
Other languages
Chinese (zh)
Other versions
CN114822036B (en)
Inventor
文强
江晓
王聿隽
Current Assignee
Shandong All Things Machinery Technology Co ltd
Original Assignee
Shandong Henghao Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shandong Henghao Information Technology Co ltd filed Critical Shandong Henghao Information Technology Co ltd
Priority to CN202210531874.4A
Publication of CN114822036A
Application granted
Publication of CN114822036B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G 1/00: Traffic control systems for road vehicles
    • G08G 1/01: Detecting movement of traffic to be counted or controlled
    • G08G 1/0104: Measuring and analyzing of parameters relative to traffic conditions
    • G08G 1/0125: Traffic data processing
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention provides an intelligent vehicle regulation and control method for preventing rear-end collisions under multiple road conditions. Using the camera of a vehicle data recorder, the method monitors the speed of the vehicle ahead and the relative distance to it in real time on the basis of optical principles, obtaining the target vehicle's coordinates and relative speed; it then calculates the vehicle's friction coefficient under the current environmental parameters. A coordinate-type neural network model is established that takes as inputs the front vehicle's coordinates, its relative speed, and the rolling friction coefficient, and outputs the probability of a rear-end collision, an anti-rear-end speed adjustment value, an anti-rear-end coordinate adjustment value for the own vehicle, and a braking capacity value. Thresholds are set on each of these outputs for early warning and intelligent regulation of the vehicle. By intelligently regulating the vehicle speed, the invention can forestall rear-end collisions under various road conditions in advance, substantially reducing their occurrence rate.

Description

Vehicle intelligent regulation and control method for preventing rear-end collision under multiple road conditions
Technical Field
The invention belongs to the technical field of intelligent driving, and particularly relates to an intelligent regulation and control method for preventing rear-end collision under multiple road conditions.
Background
As living standards improve, driving has become the travel mode most people choose, but traffic accidents occur frequently, and rear-end collisions account for a high proportion of them.
To deal with rear-end collisions, vehicle data recorders, road condition monitoring, and similar equipment have become necessary for traffic control. In the prior art, most anti-rear-end schemes are calibrated by reference to road boundaries or other large vehicles, which severely restricts the road types on which they work; an intelligent regulation and control method that prevents rear-end collisions under multiple road conditions is therefore needed.
Disclosure of Invention
The invention provides an intelligent regulation and control method for preventing rear-end collisions under multiple road conditions, which aims to forestall rear-end collisions under multiple road conditions by intelligently regulating the vehicle speed, greatly reducing their occurrence rate.
The invention relates to an intelligent vehicle regulation and control method for preventing rear-end collision under multiple road conditions, comprising the following steps:
S1, using the camera of the vehicle data recorder, monitor the speed of the vehicle ahead and the relative distance to it in real time on the basis of optical principles, obtaining the target vehicle's coordinates and relative speed;
S2, calculate the friction coefficient of the own vehicle under the current environmental parameters;
S3, establish a coordinate-type neural network model: from the lateral coordinate i and longitudinal coordinate j of the front vehicle in the two-dimensional plane, its real-time speed V relative to the own vehicle, and the vehicle's rolling friction coefficient μ, output the probability of a rear-end collision;
S4, set thresholds on the outputs of the coordinate-type neural network for early warning and intelligent regulation of the vehicle.
Further, the step S1 includes:
S11, set up the camera's three-dimensional coordinate system and the two-dimensional image plane coordinate system formed after shooting:
The camera position is taken as the coordinate origin. The x and z axes of the three-dimensional system both lie in the plane of the road on which the vehicle travels; the x axis is perpendicular to the vehicle's direction of travel, the y axis is perpendicular to the road surface, and the z axis is parallel to the direction of travel. The camera's optical axis lies in the coordinate plane spanned by the y and z axes; its angle to the road plane is θ, and ε is a further camera-orientation parameter. Both θ and ε are adjustable and can be set according to the actual condition of the vehicle.
Let h be the known height above the ground at which the camera is mounted, and let O(x*, h*, z*) denote the position coordinates of an arbitrary point on the road surface that the camera can capture.
Set up the two-dimensional image coordinate system formed after shooting, taking the camera's optical center G as origin, with lateral coordinate axis i* and longitudinal coordinate axis j*. The i* axis is parallel to the x axis; the j* axis, the i* axis, and the optical axis are mutually perpendicular. O'(i, j) denotes the coordinates of a point in the two-dimensional image plane formed after shooting.
The mapping between O(x*, h*, z*) and O'(i, j) is then given by a projection formula that the original publication reproduces only as an image, in which d denotes the focal length of the camera. The inverse relation, expressing O(x*, h*, z*) in terms of O'(i, j), is likewise given as an image.
S12, estimate the distance to the vehicle ahead and its speed relative to the own vehicle in real time from the images shot by the camera:
Treating the front vehicle as a point, its position on the road surface can be represented by O(x*, h*, z*). The center point at the bottom of the vehicle's silhouette in the image is calibrated and marked as point O' to represent the front vehicle's position, so the two-dimensional image plane coordinates of this point after shooting can likewise be written O'(i, j).
Denote the distance between the own vehicle and the front vehicle by d. From the camera's imaging relationship one obtains a formula (reproduced in the original as an image) in which α denotes the acute angle between the line GO and the road surface, i.e. the z axis.
The distance d to the front vehicle and the corresponding speed V of the front vehicle relative to the own vehicle can also be measured in real time by laser ranging; from these, α is obtained (formula given as an image).
S13, from the calculated α, compute the lateral and longitudinal distances between the own vehicle and the front vehicle, i.e. x* and z* (formula given as an image).
From the above, the expression for the two-dimensional image plane coordinates O'(i, j) of the front vehicle as shot by the vehicle data recorder is obtained (given as an image).
The lateral coordinate i and longitudinal coordinate j of the front vehicle in the two-dimensional plane and the real-time speed V of the front vehicle relative to the own vehicle are used as one group of parameters for building the neural network model in step S3.
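The projection formulas of this step survive only as images in this text, so the following is a hedged stand-in: a standard flat-road inverse-perspective sketch that recovers x* and z* from an image point (i, j), assuming a camera at known height h, tilted down by θ, with focal length f (the patent's d). The function names and the exact angle convention are illustrative assumptions, not the patented formulas.

```python
import math

def ground_point(i, j, h, theta, f):
    """Recover road-plane coordinates (x*, z*) of image point (i, j).

    Flat-road inverse-perspective sketch (an assumption; the patent's own
    projection formulas are reproduced only as images):
      h     - known camera mounting height above the road
      theta - downward tilt of the optical axis w.r.t. the road plane
      f     - focal length in pixel units (the patent's d)
      i, j  - horizontal / vertical image offsets from the optical center G
    """
    alpha = theta + math.atan2(j, f)   # angle of the ray GO below horizontal
    if alpha <= 0:
        raise ValueError("ray does not intersect the road ahead")
    z = h / math.tan(alpha)            # longitudinal distance z*
    ray = math.hypot(h, z)             # length of the line GO
    x = i * ray / math.hypot(f, j)     # lateral offset x*, scaled along the ray
    return x, z

def relative_speed(z_prev, z_now, dt):
    """Closing speed of the lead vehicle from two successive range estimates
    (negative means the gap is shrinking)."""
    return (z_now - z_prev) / dt
```

For a camera 1.5 m high whose optical axis meets the road 30 m ahead (theta = atan(1.5/30)), the principal point (0, 0) maps back to z* = 30 m, as expected.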
Further, the step S2 includes:
The friction factors generated between the vehicle and the ground in the current environment are denoted γ_car1, γ_car2, γ_car3, γ_car4 for the four tires, with n ∈ {1, 2, 3, 4} indexing the tires; the friction factor of any tire n is γ_carn, and its normal force against the ground is F_n (equivalently, with e ∈ {1, 2, 3, 4}, F_e). γ_road denotes the friction factor of the road, and μ the total rolling friction coefficient between the four tires and the road.
The expression for the total rolling friction coefficient of the vehicle and the road (reproduced in the original as an image) involves γ_carn, γ_road, and F_n, which are collected in real time by wireless sensors and transmitted to the vehicle's computer for calculation; σ_F denotes the variance of the normal forces F_n, and σ_car the variance of the friction factors of the four tires.
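The published expression for μ is reproduced only as an image, so the sketch below is an assumed load-weighted stand-in rather than the patented formula: it combines each tire's friction factor γ_carn with the road factor γ_road, weighted by that tire's share of the total normal force F_n, and also reports the variances σ_F and σ_car that the patent's formula mentions.

```python
from statistics import pvariance

def total_rolling_friction(gamma_car, gamma_road, F):
    """Hypothetical load-weighted aggregate of per-tire friction.

    gamma_car : friction factors of the four tires (wireless-sensor data)
    gamma_road: friction factor of the road surface
    F         : normal force of each tire against the ground
    Returns (mu, sigma_F, sigma_car): the aggregate coefficient plus the
    variances of the normal forces and of the tire friction factors.
    """
    total_F = sum(F)
    mu = sum(g * gamma_road * Fn / total_F for g, Fn in zip(gamma_car, F))
    return mu, pvariance(F), pvariance(gamma_car)
```

With four identical tires the weighting is immaterial: `total_rolling_friction([0.7]*4, 1.0, [3000.0]*4)` yields μ = 0.7 with both variances zero.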
Further, the step S3 includes:
S31, from the lateral coordinate i and longitudinal coordinate j of the front vehicle in the two-dimensional plane, its real-time speed V relative to the own vehicle, and the vehicle's rolling friction coefficient μ, form the variable X = [i, j, V, μ].
Data normalization preprocessing is applied to X = [i, j, V, μ] by a formula (reproduced in the original only as an image) in which t is a parameter and t → ∞.
The normalized data X' so obtained is used as the input variable to the coordinate-type neural network established by the invention.
S32, the coordinate-type neural network model created by the invention has 5 layers: layer 1 is the data input layer C, whose input variables are X' = [i', j', V', μ']; layer 2 is a rule selection layer, which selects the processing rules for the input data; layer 3 is the first hidden layer; layer 4 is a data fusion layer; layer 5 is the output layer, whose output Y_1 is the probability that a rear-end collision event occurs.
S321, layer 1 has 4 neurons, with c ∈ {1, 2, 3, 4} indexing them; any one neuron is denoted c.
The input of the input layer is X' = [i', j', V', μ'], and its output equals its input.
S322, layer 2 has M neurons, with m ∈ {1, 2, 3, ..., M}; any one neuron is denoted m.
Its generating rule function (given in the original as an image) has the following parameters: u ∈ {1, 2, 3, 4} indexes the dimension of the input quantity; v ∈ {1, 2, 3, ..., C_u} indexes the precision of the input quantity, with C_u denoting the precision of group u; g_uv is the center of the rule function and θ_uv its width; a_1 and a_2 are constants with a_1 < a_2.
The output of layer 2 (also given as an image) involves w_cm and b_cm, the weights and offsets from layer 1 to layer 2.
S323, layer 3 is a hidden layer with L neurons; any one neuron is denoted l.
The output of any neuron in layer 3 (given as an image) involves w_ml and b_ml, the connection weights and offsets between the m-th neuron of layer 2 and the l-th neuron of layer 3, an excitation function, and a set of further parameters; the resulting expression for each neuron's output is likewise reproduced only as an image.
S324, layer 4 is a data fusion layer with Q neurons, q ∈ {1, 2, 3, ..., Q}.
The data entering the fusion layer is normalized (by a standard, prior-art procedure not elaborated here); the per-neuron means and variances of the normalized data, denoted ξ_q, are then computed by standard methods.
The fused output of layer 4 (given as an image) involves an excitation function, the connection weights and offsets w_lq and b_lq between the l-th neuron of layer 3 and the q-th neuron of layer 4, and a constant k.
S325, layer 5 has 4 neurons, with r ∈ {1, 2, 3, 4}; any one neuron is denoted r.
Here the output Y_1 is the probability of a rear-end collision, Y_2 is the anti-rear-end speed adjustment value, Y_3 the anti-rear-end coordinate adjustment value for the own vehicle, and Y_4 the braking capacity value. They are computed as:
Y_r = f_1(Q_q) × w_qr + b_qr
where f_1 is an excitation function (given in the original as an image), w_qr and b_qr are the connection weights and offsets between the q-th neuron of layer 4 and the r-th neuron of layer 5, and t is a parameter; the resulting closed-form expression is reproduced in the original as an image.
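Because every layer's formula is reproduced only as an image, the sketch below is a simplified stand-in for the five-layer coordinate-type network: min-max scaling replaces the image-only limit normalization, a Gaussian membership stands in for the layer-2 rule function (which the text describes by a center g_uv and width θ_uv), tanh stands in for the hidden and fusion excitations, and sigmoid outputs give Y_1 through Y_4. All sizes, weights, and function choices are illustrative assumptions; a real system would train the weights.

```python
import math
import random

random.seed(0)  # placeholder weights; deterministic for the example

def normalize_inputs(X, bounds):
    # Min-max stand-in for the patent's image-only normalization formula.
    return [(x - lo) / (hi - lo) for x, (lo, hi) in zip(X, bounds)]

def gaussian_rule(x, center, width):
    # Layer-2 rule-function stand-in: membership with center g_uv, width theta_uv.
    return math.exp(-((x - center) ** 2) / (2.0 * width ** 2))

def dense(inp, W, b, act):
    # One fully connected layer with activation `act`.
    return [act(sum(w * x for w, x in zip(row, inp)) + bi)
            for row, bi in zip(W, b)]

def make_params(n_hidden=6, n_fusion=5):
    rnd = lambda r, c: [[random.uniform(-1, 1) for _ in range(c)] for _ in range(r)]
    return {
        "rules": [(0.25, 0.3), (0.75, 0.3)],    # (center, width) per rule
        "W3": rnd(n_hidden, 8), "b3": [0.0] * n_hidden,   # 4 inputs x 2 rules = 8
        "W4": rnd(n_fusion, n_hidden), "b4": [0.0] * n_fusion,
        "W5": rnd(4, n_fusion), "b5": [0.0] * 4,
    }

def forward(X, bounds, p):
    """Layers: input -> rule memberships -> hidden (tanh) ->
    fusion (tanh) -> 4 sigmoid outputs [Y1, Y2, Y3, Y4]."""
    xn = normalize_inputs(X, bounds)                                     # layer 1
    rules = [gaussian_rule(x, c, w) for x in xn for c, w in p["rules"]]  # layer 2
    h = dense(rules, p["W3"], p["b3"], math.tanh)                        # layer 3
    q = dense(h, p["W4"], p["b4"], math.tanh)                            # layer 4
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    return dense(q, p["W5"], p["b5"], sig)                               # layer 5
```

For example, `forward([120.0, 40.0, 5.0, 0.6], [(0, 1920), (0, 1080), (-50, 50), (0, 1)], make_params())` returns four values in (0, 1), interpretable as Y_1 through Y_4.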
further, the step S4 includes:
and S4, respectively setting threshold values according to the output of the coordinate type neural network to perform early warning and intelligently regulate and control the vehicle.
Through a coordinate type neural network, Y is obtained 1 As a probability value of the rear-end collision.
Setting of rear-end collision prevention early warning threshold tau 1 When Y is 1 ≥τ 1 And in time, early warning is carried out, and a driver is warned to carry out vehicle intelligent adjustment on the vehicle.
The intelligent regulation and control are carried out to the vehicle through the safe adjustment scheme of preventing knocking into the back that the vehicle set up in advance, and the concrete mode is:
the speed of the vehicle is adjusted through the set standard speed value, the direction of the vehicle is adjusted through the set rear-end collision prevention direction adjustment value, and the braking capacity of the vehicle is adjusted through the set braking capacity safety value.
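The thresholding logic of this step can be sketched as below; τ_1, the standard speed value, and the braking-capacity safety value are assumed calibration constants (the patent publishes no numbers), and the action names are illustrative.

```python
def regulate(Y, tau1=0.6, v_standard=16.7, brake_safe=0.8):
    """Early-warning and regulation sketch for step S4.

    Y = [Y1, Y2, Y3, Y4] from the coordinate-type network: collision
    probability, speed adjustment, coordinate adjustment, braking capacity.
    Warns only when Y1 >= tau1, then applies the preset safety scheme.
    """
    Y1, Y2, Y3, Y4 = Y
    if Y1 < tau1:
        return {"warn": False, "actions": []}
    actions = [
        ("adjust_speed", v_standard),          # toward the standard speed value
        ("adjust_heading", Y3),                # anti-rear-end coordinate adjustment
        ("set_braking", max(Y4, brake_safe)),  # at least the safety value
    ]
    return {"warn": True, "actions": actions}
```

With a high collision probability, e.g. `regulate([0.9, 0.2, 0.1, 0.5])`, the driver is warned and all three adjustments are emitted; below τ_1 nothing happens.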
The invention has at least the following beneficial effects:
1. Compared with existing formulas, the formula expressing the relationship between the coordinates of a point on the road surface and the coordinates of its imaged point in the two-dimensional image plane is more accurate and fine-grained, so the subsequent distance and speed calculations are more accurate and rear-end collisions are better prevented.
2. The invention uses wireless sensors to collect and transmit the friction factors and pressure values, avoiding the data distortion caused by tire wear and changeable road surface conditions; data are collected and fed back in real time, effectively preventing rear-end collisions while driving under multiple road conditions and making the regulation of vehicle speed more intelligent and accurate.
3. The excitation function used in the layer-4 data fusion (reproduced in the original as an image) computes with the mean and variance of the data combined with constant parameters, reducing the computational complexity of the neural network, accelerating its convergence, and effectively preventing vanishing or exploding gradients.
4. Compared with the prior art, the invention regulates the vehicle comprehensively and intelligently, avoids rear-end collisions in an all-round way, and is more efficient in use.
Drawings
FIG. 1 is a flow chart of the intelligent vehicle regulation for preventing rear-end collision under multiple road conditions according to the present invention;
FIG. 2 is a diagram of the coordinate-type neural network according to the present invention.
Detailed Description
For a clearer description of the invention, reference is now made to the accompanying drawings, which, together with the detailed description, serve to explain the principles of the invention.
Referring to FIG. 1, the invention provides an intelligent vehicle regulation and control method for preventing rear-end collision under multiple road conditions, comprising the following steps.
S1, using the camera device of the vehicle data recorder, monitor the speed of the vehicle ahead and the relative distance to it in real time on the basis of optical principles, obtaining the target vehicle's coordinates and relative speed.
S11, set up the camera's three-dimensional coordinate system and the two-dimensional image plane coordinate system formed after shooting:
The camera position is taken as the coordinate origin. The x and z axes of the three-dimensional system both lie in the plane of the road on which the vehicle travels; the x axis is perpendicular to the vehicle's direction of travel, the y axis is perpendicular to the road surface, and the z axis is parallel to the direction of travel. The camera's optical axis lies in the coordinate plane spanned by the y and z axes; its angle to the road plane is θ, and ε is a further camera-orientation parameter. Both θ and ε are adjustable and can be set according to the actual condition of the vehicle.
Let h be the known height above the ground at which the camera is mounted, and let O(x*, h*, z*) denote the position coordinates of an arbitrary point on the road surface that the camera can capture.
Set up the two-dimensional image coordinate system formed after shooting, taking the camera's optical center G as origin, with lateral coordinate axis i* and longitudinal coordinate axis j*. The i* axis is parallel to the x axis; the j* axis, the i* axis, and the optical axis are mutually perpendicular. O'(i, j) denotes the coordinates of a point in the two-dimensional image plane formed after shooting.
The mapping between O(x*, h*, z*) and O'(i, j) is then given by a projection formula that the original publication reproduces only as an image, in which d denotes the focal length of the camera. The inverse relation, expressing O(x*, h*, z*) in terms of O'(i, j), is likewise given as an image.
Compared with existing formulas, this expression of the relationship between the coordinates of a point on the road surface and the coordinates of its imaged point in the two-dimensional image plane is more accurate and fine-grained, so the subsequent distance and speed calculations are more accurate and rear-end collisions are better prevented.
S12, estimate the distance to the vehicle ahead and its speed relative to the own vehicle in real time from the images shot by the camera.
Treating the front vehicle as a point, its position on the road surface can be represented by O(x*, h*, z*). The center point at the bottom of the vehicle's silhouette in the image is calibrated and marked as point O' to represent the front vehicle's position, so the two-dimensional image plane coordinates of this point after shooting can likewise be written O'(i, j).
Denote the distance between the own vehicle and the front vehicle by d. From the camera's imaging relationship one obtains a formula (reproduced in the original as an image) in which α denotes the acute angle between the line GO and the road surface, i.e. the z axis.
The distance d to the front vehicle and the corresponding speed V of the front vehicle relative to the own vehicle can also be measured in real time by laser ranging, which is prior art and is not described further here; from these, α is obtained (formula given as an image).
S13, from the calculated α, compute the lateral and longitudinal distances between the own vehicle and the front vehicle, i.e. x* and z* (formula given as an image).
From the above, the expression for the two-dimensional image plane coordinates O'(i, j) of the front vehicle as shot by the vehicle data recorder is obtained (given as an image).
The lateral coordinate i and longitudinal coordinate j of the front vehicle in the two-dimensional plane and the real-time speed V of the front vehicle relative to the own vehicle are used as one group of parameters for building the neural network model in step S3.
S2 calculates the friction coefficient of the own vehicle under the environmental parameters.
The friction factors generated between the vehicle and the ground in the current environment are denoted γ_car1, γ_car2, γ_car3, γ_car4 for the four tires, with n ∈ {1, 2, 3, 4} indexing the tires; the friction factor of any tire n is γ_carn, and its normal force against the ground is F_n (equivalently, with e ∈ {1, 2, 3, 4}, F_e). γ_road denotes the friction factor of the road, and μ the total rolling friction coefficient between the four tires and the road.
The expression for the total rolling friction coefficient of the vehicle and the road (reproduced in the original as an image) involves γ_carn, γ_road, and F_n, which are collected in real time by wireless sensors and transmitted to the vehicle's computer for calculation; σ_F denotes the variance of the normal forces F_n, and σ_car the variance of the friction factors of the four tires.
The invention uses wireless sensors to collect and transmit the friction factors and pressure values, avoiding the data distortion caused by tire wear and changeable road surface conditions; data are collected and fed back in real time, effectively preventing rear-end collisions while driving under multiple road conditions and making the regulation of vehicle speed more intelligent and accurate.
S3 coordinate type neural network model establishment: and outputting the probability value of the rear-end collision event of the vehicle by using the transverse coordinate value i and the longitudinal coordinate value j of the front vehicle in the two-dimensional plane, the real-time speed V of the front vehicle relative to the self vehicle and the rolling friction coefficient mu of the vehicle.
Referring to FIG. 2: S31, from steps S1 and S2 obtain the lateral coordinate i and longitudinal coordinate j of the front vehicle in the two-dimensional plane, its real-time speed V relative to the own vehicle, and the vehicle's rolling friction coefficient μ, forming the variable X = [i, j, V, μ].
Data normalization preprocessing is applied to X = [i, j, V, μ] by a formula (reproduced in the original only as an image) in which t is a parameter and t → ∞.
Compared with the prior art, the data standardization adopted by the invention abandons the drawbacks of relying on the data's mean and variance; the data are standardized by a limit-based processing scheme, making the calculation simpler and more convenient.
The normalized data X' so obtained is used as the input variable to the coordinate-type neural network established by the invention.
S32, the coordinate-type neural network model created by the invention has 5 layers: layer 1 is the data input layer C, whose input variables are X' = [i', j', V', μ']; layer 2 is a rule selection layer, which selects the processing rules for the input data; layer 3 is the first hidden layer; layer 4 is a data fusion layer; layer 5 is the output layer, whose output Y_1 is the probability that a rear-end collision event occurs.
S321, layer 1 has 4 neurons, with c ∈ {1, 2, 3, 4} indexing them; any one neuron is denoted c.
The input of the input layer is X' = [i', j', V', μ'], and its output equals its input.
S322, layer 2 has M neurons, with m ∈ {1, 2, 3, ..., M}; any one neuron is denoted m.
Its generating rule function (given in the original as an image) has the following parameters: u ∈ {1, 2, 3, 4} indexes the dimension of the input quantity; v ∈ {1, 2, 3, ..., C_u} indexes the precision of the input quantity, with C_u denoting the precision of group u; g_uv is the center of the rule function and θ_uv its width; a_1 and a_2 are constants with a_1 < a_2.
The output of layer 2 (also given as an image) involves w_cm and b_cm, the weights and offsets from layer 1 to layer 2.
The rule function adopted in constructing the coordinate-type neural network processes the input data accurately and efficiently and improves the network's convergence rate.
S323, layer 3 is a hidden layer with L neurons; any one neuron is denoted l.
The output of any neuron in layer 3 (given in the original as an image) involves w_ml and b_ml, the connection weights and offsets between the m-th neuron of layer 2 and the l-th neuron of layer 3, an excitation function, and a set of further parameters; the resulting expression for each neuron's output is likewise reproduced only as an image.
The excitation function used in layer 3 makes the calculation simpler and more convenient and more effectively prevents over-convergence of the neural network.
S324, layer 4 is a data fusion layer with Q neurons, q ∈ {1, 2, 3, ..., Q}.
The data entering the fusion layer is normalized (by a standard, prior-art procedure not elaborated here); the per-neuron means and variances of the normalized data, denoted ξ_q, are then computed by standard methods.
The fused output of layer 4 (given in the original as an image) involves an excitation function, the connection weights and offsets w_lq and b_lq between the l-th neuron of layer 3 and the q-th neuron of layer 4, and a constant k.
The layer-4 data fusion formula computes with the mean and variance of the data combined with constant parameters, reducing the computational complexity of the neural network, accelerating its convergence, and effectively preventing vanishing or exploding gradients.
S325, layer 5 has 4 neurons, with r ∈ {1, 2, 3, 4}; any one neuron is denoted r.
Here the output Y_1 is the probability of a rear-end collision, Y_2 is the anti-rear-end speed adjustment value, Y_3 the anti-rear-end coordinate adjustment value for the own vehicle, and Y_4 the braking capacity value. They are computed as:
Y_r = f_1(Q_q) × w_qr + b_qr
where f_1 is an excitation function (given in the original as an image), w_qr and b_qr are the connection weights and offsets between the q-th neuron of layer 4 and the r-th neuron of layer 5, and t is a parameter; the resulting closed-form expression is reproduced in the original as an image.
S4, thresholds are set on the outputs of the coordinate-type neural network to issue early warnings and to intelligently regulate the vehicle.
Through the coordinate-type neural network established by the invention, Y_1 is obtained as the probability of a rear-end collision.
An anti-rear-end-collision early-warning threshold τ_1 is set; when Y_1 ≥ τ_1, an early warning is issued and the driver is alerted to apply intelligent adjustment to the vehicle.
The vehicle is intelligently regulated through a preset anti-rear-end-collision safety adjustment scheme, specifically: the vehicle speed is adjusted to the set standard speed value, the vehicle direction is adjusted by the set anti-rear-end-collision direction adjustment value, and the braking capability is adjusted to the set braking-capability safety value.
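The threshold logic of step S4 can be sketched as follows; the threshold value, the function name, and the dictionary keys are illustrative (the patent does not fix τ_1 or the concrete adjustment values):

```python
TAU_1 = 0.8  # illustrative early-warning threshold; the patent leaves tau_1 unspecified

def regulate(Y, tau1=TAU_1):
    """Step-S4 threshold logic (sketch; keys are illustrative).
    Y = [collision probability, speed adjustment value,
         coordinate adjustment value, braking capability value]."""
    if Y[0] < tau1:
        return {"warn": False}          # below threshold: no action needed
    return {
        "warn": True,                   # alert the driver first
        "target_speed": Y[1],           # adjust toward the standard speed value
        "target_coordinate": Y[2],      # anti-rear-end-collision direction/coordinate
        "target_braking": Y[3],         # braking-capability safety value
    }

actions = regulate([0.92, 45.0, 1.5, 0.7])
```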
Compared with the prior art, the invention regulates the vehicle intelligently and comprehensively, avoids rear-end collisions in an all-round manner, and achieves higher operating efficiency.
In conclusion, the intelligent regulation and control method for the vehicle capable of preventing rear-end collision under multiple road conditions is realized.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (5)

1. A vehicle intelligent regulation and control method for preventing rear-end collision under multiple road conditions, characterized by comprising the following steps:
S1, monitoring in real time, based on optical principles, the speed of and relative distance to the preceding vehicle using a camera driven by the vehicle's driving recorder, to obtain the coordinates and relative speed of the target vehicle;
S2, calculating the rolling friction coefficient of the vehicle under the current environmental parameters;
S3, establishing the coordinate-type neural network model: the inputs are the preceding-vehicle coordinates, the preceding-vehicle relative speed and the rolling friction coefficient; the outputs are the probability of a vehicle rear-end collision, the anti-rear-end-collision speed adjustment value, the anti-rear-end-collision coordinate adjustment value of the own vehicle, and the braking capability value;
and S4, respectively setting threshold values according to the output of the coordinate type neural network to perform early warning and intelligently regulate and control the vehicle.
2. The intelligent regulation and control method for the vehicle with the multi-road condition rear-end collision prevention function as claimed in claim 1, wherein the step S1 comprises:
S11, setting the camera three-dimensional coordinate system and the two-dimensional image-plane coordinate system formed after shooting:
the camera position is taken as the coordinate origin; the x axis and z axis of the three-dimensional coordinate system lie in the plane of the road on which the vehicle travels, the x axis being perpendicular to the vehicle's direction of travel, the y axis perpendicular to the road surface, and the z axis parallel to the direction of travel; the camera lies on an axis of this coordinate system; the optical axis of the camera lies in the coordinate plane formed by the y and z axes, and the angle between the optical axis and the road plane (given only as an image in the source) contains an adjustable variable ε that can be tuned to the actual condition of the vehicle; the distance from the camera to the road plane along the optical-axis direction is E;
let h* be the known height at which the camera is mounted above the ground, and let O(x*, h*, z*) denote the position coordinates of any point on the road surface that the camera can photograph;
in the two-dimensional image coordinate system formed after shooting, the optical center G of the camera is taken as the origin, with transverse coordinate axis i* and longitudinal coordinate axis j*; the i* axis is parallel to the x axis, and the j* axis is perpendicular to both the i* axis and the optical axis; the coordinates of a point in the two-dimensional image plane formed after shooting are represented by O'(i, j);
O(x*, h*, z*) can then be mapped to O'(i, j) by a projection formula (rendered only as an image in the source), in which d represents the focal length of the camera; conversely, O(x*, h*, z*) can be expressed in terms of O'(i, j) by the inverse formula (likewise rendered as an image);
S12, estimating in real time, from the images captured by the camera, the distance between the own vehicle and the preceding vehicle and the speed of the preceding vehicle relative to the own vehicle:
the preceding vehicle is treated as a point whose position on the road surface is represented by O(x*, h*, z*); the bottom-center point of the vehicle shadow in the image is calibrated and marked as the position point of the preceding vehicle, and the two-dimensional image-plane coordinates of this point after shooting are represented by O'(i, j);
the distance between the own vehicle and the preceding vehicle is recorded as d, and a formula for d follows from the imaging relationship of the camera (rendered as an image in the source), where α represents the acute angle between the line GO and the road surface, i.e. the z axis;
the distance d to the preceding vehicle and the corresponding speed V of the preceding vehicle relative to the own vehicle can be measured in real time by laser ranging, from which α is obtained;
S13, calculating from α the transverse and longitudinal distances between the own vehicle and the preceding vehicle, namely x* and z*, by the corresponding formula (rendered as an image in the source); from this the expression of the two-dimensional image coordinates O'(i, j) of the preceding vehicle as captured by the driving recorder is obtained;
the transverse coordinate i and longitudinal coordinate j of the preceding vehicle in the two-dimensional plane, together with the real-time speed V of the preceding vehicle relative to the own vehicle, are used as a group of parameters for establishing the neural network model in step S3.
3. The intelligent regulation and control method for the vehicle with the multi-road condition rear-end collision prevention function as claimed in claim 2, wherein the step S2 comprises:
The friction factors generated between the vehicle and the ground in the current environment are represented, for the four tires, by γ_car1, γ_car2, γ_car3 and γ_car4; let n ∈ {1, 2, 3, 4} index the tires, with any tire denoted by n and its friction factor by γ_carn; the normal pressure between any tire and the ground is represented by F_n; let e ∈ {1, 2, 3, 4}, with the corresponding pressure represented by F_e; γ_road represents the friction factor of the road, and μ represents the total rolling friction coefficient between the four tires of the vehicle and the road;
the expression for the total rolling friction coefficient of the vehicle and the road (rendered as an image in the source) combines these quantities, wherein γ_carn, γ_road and F_n are collected in real time by wireless sensors and transmitted to the vehicle's computer for calculation, σ_F represents the variance of the tire-ground normal pressures F_n, and σ_car represents the variance of the friction factors of the four tires of the vehicle.
4. The intelligent regulation and control method for the vehicle with the multi-road condition rear-end collision prevention function of claim 3, wherein the step S3 comprises the following steps:
S31, the variable X = [i, j, V, μ] is formed from the transverse coordinate i and longitudinal coordinate j of the preceding vehicle in the two-dimensional plane, the real-time speed V of the preceding vehicle relative to the own vehicle, and the rolling friction coefficient μ of the vehicle;
data normalization preprocessing is performed on X = [i, j, V, μ] by a formula (rendered as an image in the source) in which t is a parameter and t → ∞;
the normalized data X' = [i', j', V', μ'] are obtained and input as the input variable to the coordinate-type neural network established by the invention;
S32, the coordinate-type neural network model created by the invention has 5 layers: layer 1 is the data input layer C, with input variables [i', j', V', μ']; layer 2 is the rule selection layer, which selects the processing rules for the input data; layer 3 is the first hidden layer; layer 4 is the data fusion layer; layer 5 is the output layer, where Y_1 outputs the probability of a rear-end collision event;
S321, layer 1 has 4 neurons, i.e. C = 4 with c ∈ {1, 2, 3, 4}, so any one neuron can be denoted by c;
the input of the input layer is X' = [i', j', V', μ'], and its output equals its input;
S322, layer 2 has M neurons; let m ∈ {1, 2, 3, ..., M}, with any one neuron denoted by m;
the generating rule function (rendered as an image in the source) is parameterized as follows: u ∈ {1, 2, 3, 4} denotes the dimension of the input quantity; v ∈ {1, 2, 3, ..., C_u} denotes the precision of the input quantity, where C_u denotes the precision of group u; g_uv represents the center of the rule function and θ_uv its width; a_1 and a_2 are constants with a_1 < a_2;
the output of layer 2 follows from this rule function, where w_cm and b_cm are the weights and offsets from layer 1 to layer 2;
S323, layer 3 is a hidden layer with L neurons, any one of which can be denoted by l;
the output of any neuron in layer 3 (rendered as an image in the source) is computed from w_ml and b_ml, the connection weight and offset between the m-th neuron of layer 2 and the l-th neuron of layer 3, together with an excitation function and a set of parameters of that function;
the output of any neuron in layer 3 can accordingly be represented by the corresponding formula;
S324, layer 4 is the data fusion layer, with Q neurons; let q ∈ {1, 2, 3, ..., Q}, with any one neuron denoted by q;
the data input to the data fusion layer are normalized (a standard procedure, not elaborated here), and the mean and variance of the normalized data are computed and denoted ξ_q and σ_q² respectively, their calculation likewise being standard;
the fused output of layer 4 is recorded as Q_q, obtained by applying an excitation function f̂ to the normalized weighted input (the exact formula appears only as an image in the source), where w_lq and b_lq are the connection weight and offset between the l-th neuron of layer 3 and the q-th neuron of layer 4, and k is a constant;
from the above, the output of layer 4 is Q_q;
S325, layer 5 has 4 neurons; let r ∈ {1, 2, 3, 4}, with any one neuron denoted by r;
Y_1 outputs the probability of a rear-end collision, Y_2 the anti-rear-end-collision speed adjustment value, Y_3 the anti-rear-end-collision coordinate adjustment value of the own vehicle, and Y_4 the braking capability value; the specific calculation is as follows:
Y_r = f_1(Q_q) × w_qr + b_qr
where w_qr and b_qr are respectively the connection weight and offset between the q-th neuron of layer 4 and the r-th neuron of layer 5, and t is a parameter of the excitation function f_1;
from this expression the four outputs are obtained.
5. the intelligent regulation and control method for the vehicle with the multi-road condition rear-end collision prevention function as claimed in claim 4, wherein the step S4 comprises the following steps:
S4, thresholds are set on the outputs of the coordinate-type neural network to issue early warnings and to intelligently regulate the vehicle;
through the coordinate-type neural network, Y_1 is obtained as the probability of a rear-end collision;
an anti-rear-end-collision early-warning threshold τ_1 is set; when Y_1 ≥ τ_1, an early warning is issued and the driver is alerted to apply intelligent adjustment to the vehicle;
the vehicle is intelligently regulated through a preset anti-rear-end-collision safety adjustment scheme, specifically:
the vehicle speed is adjusted to the set standard speed value, the vehicle direction is adjusted by the set anti-rear-end-collision direction adjustment value, and the braking capability is adjusted to the set braking-capability safety value.
CN202210531874.4A 2022-05-16 2022-05-16 Intelligent vehicle regulation and control method for preventing rear-end collision under multiple conditions Active CN114822036B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210531874.4A CN114822036B (en) 2022-05-16 2022-05-16 Intelligent vehicle regulation and control method for preventing rear-end collision under multiple conditions


Publications (2)

Publication Number Publication Date
CN114822036A true CN114822036A (en) 2022-07-29
CN114822036B CN114822036B (en) 2024-06-14

Family

ID=82515516

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210531874.4A Active CN114822036B (en) 2022-05-16 2022-05-16 Intelligent vehicle regulation and control method for preventing rear-end collision under multiple conditions

Country Status (1)

Country Link
CN (1) CN114822036B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105235681A (en) * 2015-11-11 2016-01-13 吉林大学 Vehicle rear-end collision preventing system and method based on road surface conditions
CN105938660A (en) * 2016-06-07 2016-09-14 长安大学 Automobile rear-end collision prevention early warning method and system
CN110091868A (en) * 2019-05-20 2019-08-06 合肥工业大学 A kind of longitudinal collision avoidance method and its system, intelligent automobile of man-machine coordination control
CN110682907A (en) * 2019-10-17 2020-01-14 四川大学 Automobile rear-end collision prevention control system and method
CN111994068A (en) * 2020-10-29 2020-11-27 北京航空航天大学 Intelligent driving automobile control system based on intelligent tire touch perception
CN112037159A (en) * 2020-07-29 2020-12-04 长安大学 Cross-camera road space fusion and vehicle target detection tracking method and system
CN112248986A (en) * 2020-10-23 2021-01-22 厦门理工学院 Automatic braking method, device, equipment and storage medium for vehicle
WO2021111944A1 (en) * 2019-12-02 2021-06-10 Toyo Tire株式会社 Vehicle safety support system and vehicle safety support method
CN114148322A (en) * 2022-01-04 2022-03-08 吉林大学 Pavement adhesion self-adaptive commercial vehicle air pressure automatic emergency braking control method
CN114194196A (en) * 2020-08-26 2022-03-18 现代摩比斯株式会社 Method and apparatus for controlling terrain mode using deep learning-based road condition determination model


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
YUCHUAN FU: "Infrastructure-cooperative algorithm for effective intersection collision avoidance", TRANSPORTATION RESEARCH PART C *
ZHANG Shengping; MA Xiangjuan: "Design of a rear-end collision early-warning system for sharp highway curves in severe weather", Highway Traffic Science and Technology (Applied Technology Edition), no. 09 *
LI Qian: "Research on forward collision warning based on motion-state estimation of intelligent vehicles", China Masters' Theses Full-text Database *
YANG Weixin: "Research on a neural-network-based four-wheel steering control system for automobiles", China Masters' Theses Full-text Database *

Also Published As

Publication number Publication date
CN114822036B (en) 2024-06-14

Similar Documents

Publication Publication Date Title
CN110745140B (en) Vehicle lane change early warning method based on continuous image constraint pose estimation
DE102005009814B4 (en) Vehicle condition detection system and method
DE112012006465B4 (en) Driving characteristic estimating device and driving assistance system
CN113361121B (en) Road adhesion coefficient estimation method based on time-space synchronization and information fusion
CN107972662A (en) To anti-collision warning method before a kind of vehicle based on deep learning
CN110588623B (en) Large automobile safe driving method and system based on neural network
CN109074069A (en) Autonomous vehicle with improved vision-based detection ability
CN108197610A (en) A kind of track foreign matter detection system based on deep learning
CN107499262A (en) ACC/AEB systems and vehicle based on machine learning
DE102013200132A1 (en) Lane keeping system with active rear wheel steering
CN111860493A (en) Target detection method and device based on point cloud data
CN104662592A (en) Method for operating a driver assistance system of a vehicle
CN109299656B (en) Scene depth determination method for vehicle-mounted vision system
CN105632180B (en) A kind of bridge tunnel entrance model recognition system and method based on ARM
DE102011103795A1 (en) Method and system for collision assessment for vehicles
DE102017214053A1 (en) A method for determining a coefficient of friction for a contact between a tire of a vehicle and a road and method for controlling a vehicle function of a vehicle
CN110843781A (en) Vehicle curve automatic control method based on driver behavior
DE102018103473A1 (en) EFFECTIVE ROLL RADIUS
DE102020113421A1 (en) DEVICE AND METHOD FOR AUTONOMOUS DRIVING
CN111444891A (en) Unmanned rolling machine operation scene perception system and method based on airborne vision
CN109839175A (en) A kind of bridge mobile load Statistical error system
DE102023104789A1 (en) TRACKING OF MULTIPLE OBJECTS
CN114964445A (en) Multi-module dynamic weighing method based on vehicle identification
CN116691626B (en) Vehicle braking system and method based on artificial intelligence
CN110888441B (en) Gyroscope-based wheelchair control system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240513

Address after: Room 214, Yijing Building, No.1 Hengshan Road, Yantai Economic and Technological Development Zone, Shandong Province, 264000

Applicant after: Shandong all things Machinery Technology Co.,Ltd.

Country or region after: China

Address before: No. 3203, block C, Range Rover mansion, No. 588, Gangcheng East Street, Laishan District, Yantai City, Shandong Province, 264003

Applicant before: SHANDONG HENGHAO INFORMATION TECHNOLOGY Co.,Ltd.

Country or region before: China

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant