Disclosure of Invention
The invention provides an intelligent regulation and control method for preventing rear-end collisions under multiple road conditions. It aims to prevent rear-end collisions in advance under multiple road conditions by intelligently regulating the speed of the vehicle, thereby greatly reducing the occurrence rate of rear-end collisions.
The invention relates to a vehicle intelligent regulation and control method for preventing rear-end collision under multiple road conditions, which comprises the following steps:
S1, monitoring the speed of the front vehicle and the relative distance in real time by using the camera of the automobile data recorder based on an optical principle, to obtain the coordinates and relative speed of the target vehicle;
S2, calculating the friction coefficient of the vehicle under the current environmental parameters;
S3, establishing a coordinate-type neural network model that takes as inputs the transverse coordinate value i and the longitudinal coordinate value j of the front vehicle in the two-dimensional plane, the real-time speed V of the front vehicle relative to the self vehicle, and the rolling friction coefficient μ of the vehicle, and outputs the probability value of a rear-end collision of the vehicle;
S4, respectively setting threshold values according to the outputs of the coordinate-type neural network to perform early warning and intelligently regulate the vehicle.
Further, the step S1 includes:
S11, setting the three-dimensional camera coordinate system and the two-dimensional image plane coordinate system formed after shooting by the camera:
The position of the camera is taken as the coordinate origin. The x axis and z axis of the three-dimensional coordinate system both lie in the plane of the road on which the vehicle runs; the x axis is perpendicular to the advancing direction of the vehicle, the y axis is perpendicular to the road surface on which the vehicle runs, and the z axis is parallel to the advancing direction of the vehicle. The optical axis of the camera lies in the coordinate plane formed by the y axis and the z axis; the included angle between the optical axis and the road plane is θ, and the distance from the camera along the optical axis to the road plane is ε. θ and ε are adjustable variables and can be adjusted according to the actual condition of the vehicle.
Let h be the known height above the ground at which the camera is mounted, and let O(x*, h*, z*) denote the position coordinates of an arbitrary point on the road surface that can be captured by the camera.
Set the two-dimensional image coordinate system formed after shooting by the camera: taking the optical center G of the camera as the coordinate origin, set a transverse coordinate axis i* and a longitudinal coordinate axis j*, where the i* axis is parallel to the x axis, and the j* axis, the i* axis, and the optical axis are mutually perpendicular. The coordinates of a point in the two-dimensional image plane formed after shooting are denoted O'(i, j).
O(x*, h*, z*) can then be mapped to O'(i, j) by the following formula:
In the above formula, d represents the focal length of the camera.
O(x*, h*, z*) can also be expressed in terms of O'(i, j), by the following specific formula:
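Since the projection formulas themselves are not reproduced above, the mapping between a road-surface point O(x*, h*, z*) and the image-plane point O'(i, j) can be illustrated with a standard pinhole model. The sketch below is a hedged stand-in, not the patent's exact formula; it assumes a camera at the origin, mounted at height h, pitched down toward the road by θ, with focal length d (all numeric defaults illustrative):

```python
import math

def road_point_to_image(x_star, z_star, h=1.2, theta=math.radians(5.0), d=0.006):
    """Map a road-surface point (camera at origin, road h below it) to
    image-plane coordinates O'(i, j) with an assumed pinhole model."""
    y_star = -h  # the road surface lies h below the camera origin
    # Rotate the point into the camera frame (pitch by theta about the x axis).
    y_cam = y_star * math.cos(theta) + z_star * math.sin(theta)
    z_cam = -y_star * math.sin(theta) + z_star * math.cos(theta)
    # Perspective projection with focal length d.
    i = d * x_star / z_cam
    j = d * y_cam / z_cam
    return i, j
```

A point directly ahead of the vehicle (x* = 0) projects onto the image's vertical axis (i = 0), which matches the axis conventions set out above.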
S12, estimating in real time, from the images shot by the camera, the distance between the self vehicle and the front vehicle and the speed of the front vehicle relative to the self vehicle:
When the front vehicle is regarded as a point, its position on the road surface can be represented by O(x*, h*, z*); the bottom center point of the vehicle shadow in the image is calibrated and marked as point O' to represent the position of the front vehicle, so the two-dimensional image plane coordinates formed by this point after shooting can also be represented by O'(i, j).
The distance between the self vehicle and the front vehicle is recorded as d; from the imaging relationship of the camera:
α represents the acute angle between the straight line GO and the road surface, i.e., the z axis.
The distance d between the self vehicle and the front vehicle, and the corresponding speed V of the front vehicle relative to the self vehicle, can also be measured in real time through laser ranging.
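The patent does not spell out how the relative speed V is derived from the ranging samples; a minimal sketch, assuming a simple backward difference over successive distance readings (function name and sign convention are illustrative):

```python
def relative_speed(distances, dt):
    """Estimate the speed V of the front vehicle relative to the self vehicle
    from successive distance samples (camera or laser ranging), taken dt
    seconds apart. Positive V means the gap is closing."""
    if len(distances) < 2:
        raise ValueError("need at least two distance samples")
    return (distances[-2] - distances[-1]) / dt
```

For example, if the gap shrinks from 30 m to 29 m over 0.1 s, the relative closing speed is 10 m/s.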
S13, calculating from the obtained α the transverse distance and the longitudinal distance between the self vehicle and the front vehicle, namely x* and z*, by the following formulas:
From the above, the expression of the two-dimensional plane image coordinates O'(i, j) of the front vehicle as shot by the automobile data recorder can be obtained:
The transverse coordinate value i and the longitudinal coordinate value j of the front vehicle in the two-dimensional plane and the real-time speed V of the front vehicle relative to the self vehicle are used as a group of parameters for building the neural network model in step S3.
Further, the step S2 includes:
The friction factors generated between the vehicle and the ground in the environment are denoted γ_car1, γ_car2, γ_car3, γ_car4, the friction factors of the four tires of the vehicle, where n = {1, 2, 3, 4} indexes the tires and any tire is denoted by n. The friction factor of any tire is denoted γ_carn, and the normal pressure between any tire and the ground is denoted F_n. Let e = {1, 2, 3, 4}, with the corresponding pressure denoted F_e. γ_road denotes the friction factor of the road, and μ denotes the total rolling friction coefficient between the four tires of the vehicle and the road.
The expression for the total rolling friction coefficient of the vehicle and the road is then:
wherein γ_carn, γ_road, and F_n are collected in real time by wireless sensors and transmitted to the vehicle's computer for calculation; σ_F denotes the variance of the tire–ground normal pressure values F_n, and σ_car denotes the variance of the friction factors of the four tires of the vehicle.
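The expression for μ is not reproduced above, so the sketch below only illustrates one plausible way the named quantities (γ_carn, γ_road, F_n, σ_F, σ_car) could combine: a load-weighted average of the tire factors scaled by the road factor. This is an assumption for illustration, not the patent's formula:

```python
from statistics import pvariance

def rolling_friction_coefficient(gamma_car, gamma_road, F):
    """Combine per-tire friction factors gamma_car = [γ_car1..γ_car4], the
    road factor γ_road, and tire normal forces F = [F_1..F_4] into a total
    rolling friction coefficient μ (illustrative stand-in formula)."""
    sigma_F = pvariance(F)            # variance of the normal-pressure values
    sigma_car = pvariance(gamma_car)  # variance of the four tire factors
    total_F = sum(F)
    weighted = sum(g * f for g, f in zip(gamma_car, F)) / total_F
    mu = weighted * gamma_road
    return mu, sigma_F, sigma_car
```

With four identical tires and equal loads, μ reduces to the common tire factor times the road factor, and both variances are zero.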
Further, the step S3 includes:
S31, obtaining the variable X = [i, j, V, μ] from the transverse coordinate value i and the longitudinal coordinate value j of the front vehicle in the two-dimensional plane, the real-time speed V of the front vehicle relative to the self vehicle, and the rolling friction coefficient μ of the vehicle.
Data normalization preprocessing is performed on X = [i, j, V, μ]:
where t is a parameter and t → ∞.
The normalized data X' is obtained as an input variable and input to the coordinate-type neural network established in the present invention.
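The normalization formula itself is omitted above; only the limit parameter t → ∞ and the later remark that no mean or variance is used are given. One reading consistent with that is normalization by the t-norm, which tends to the maximum absolute value as t → ∞. The sketch below is that assumed reading, not a confirmed reconstruction:

```python
def normalize_limit(X, t=64):
    """Normalize X = [i, j, V, mu] by its t-norm; for large t the norm
    approaches max(|x|), so no mean or variance is needed (assumed reading
    of the patent's limit-based preprocessing)."""
    norm = sum(abs(x) ** t for x in X) ** (1.0 / t)  # ~= max(|x|) for large t
    return [x / norm for x in X]
```

Under this reading the largest-magnitude component maps to approximately ±1 and all others fall inside the unit interval.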
S32, the coordinate-type neural network model structure created by the invention has 5 layers: layer 1 is the data input layer C, with input variables X' = [i', j', V', μ']; layer 2 is a rule selection layer, in which processing rules for the input data are selected; layer 3 is a first hidden layer; layer 4 is a data fusion layer; layer 5 is the output layer, whose output Y_1 is the probability value of the occurrence of a rear-end collision event.
S321, layer 1: it has 4 neurons, i.e., C = 4 with c = {1, 2, 3, 4}, and any one neuron is denoted by c.
The input to the input layer is X' = [i', j', V', μ'], and the output is equal to the input.
S322, layer 2 has M neurons, with m = {1, 2, 3, ..., M}; any one neuron is denoted by m:
The generating rule function is as follows:
where u = {1, 2, 3, 4} and u denotes a dimension of the input quantity; v = {1, 2, 3, ..., C_u} and v denotes the precision index of the input quantity, with C_u denoting the precision of dimension u; g_uv represents the center of the rule function and θ_uv the width of the rule function; a_1 and a_2 are constants with a_1 < a_2.
Here w_cm and b_cm are the weights and offsets from layer 1 to layer 2.
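The rule function of layer 2 is described only by its center g_uv and width θ_uv. A Gaussian membership function is a common choice with exactly those two parameters, so the sketch below assumes one; the patent's actual rule function may differ:

```python
import math

def rule_activation(x_u, g_uv, theta_uv):
    """Layer-2 rule function for input dimension u and rule v: a Gaussian
    membership centered at g_uv with width theta_uv (assumed form)."""
    return math.exp(-((x_u - g_uv) ** 2) / (2.0 * theta_uv ** 2))
```

The activation is 1 at the center g_uv and decays toward 0 as the input moves away, which is the usual behavior of a rule-selection layer.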
S323, layer 3 is a hidden layer with L neurons; any one neuron is denoted by l.
The output of any neuron in layer 3 is:

where w_ml and b_ml are respectively the connection weights and offsets of the m-th neuron of layer 2 and the l-th neuron of layer 3; the remaining symbols are the excitation function and a set of parameters.

The output of any neuron in layer 3 can therefore be represented as:
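The layer-3 computation described above (weights w_ml, offsets b_ml, then an excitation function) can be sketched as follows; tanh is assumed for the unspecified excitation function, and all names are illustrative:

```python
import math

def hidden_layer(outputs_layer2, W, b):
    """Layer-3 hidden layer: neuron l combines the layer-2 outputs with
    connection weights W[m][l] and offset b[l], then applies an assumed
    tanh excitation function."""
    L = len(b)
    result = []
    for l in range(L):
        s = sum(W[m][l] * o for m, o in enumerate(outputs_layer2)) + b[l]
        result.append(math.tanh(s))
    return result
```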
S324, layer 4 is a data fusion layer having Q neurons, with q = {1, 2, 3, ..., Q}; any one neuron is denoted by q.
The data input to the data fusion layer are normalized; the processing is prior art and is not described further herein. The mean and variance of the normalized data are respectively found and marked as ξ_q and the corresponding variance term; the way they are calculated is prior art and will not be elaborated upon herein.
The output of layer 4 after fusion is recorded as:

where f is the excitation function, w_lq and b_lq are respectively the connection weights and offsets of the l-th neuron of layer 3 and the q-th neuron of layer 4, and k is a constant.
From the above, the output of layer 4 is:
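The fusion step above is described as using the mean and variance of the data together with a constant k, weights w_lq, and offsets b_lq. A hedged sketch of that shape (the exact fusion formula is not given in the text, so the standardize-then-map form is an assumption):

```python
from statistics import fmean, pvariance
import math

def fusion_layer(hidden_outputs, w, b, k=1.0):
    """Layer-4 data fusion: standardize the layer-3 outputs with their mean
    and variance, then apply weights w[l][q], offsets b[q], and the
    constant k (assumed form of the fusion excitation)."""
    mu = fmean(hidden_outputs)
    var = pvariance(hidden_outputs)
    eps = 1e-8  # guard against zero variance
    normed = [(h - mu) / math.sqrt(var + eps) for h in hidden_outputs]
    Q = len(b)
    return [k * (sum(w[l][q] * n for l, n in enumerate(normed)) + b[q])
            for q in range(Q)]
```

Standardizing before the linear map keeps the fused activations on a common scale, which is what the text credits for faster convergence and stable gradients.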
s325 layer 5 has 4 neurons, where r ═ {1, 2, 3.., 4}, and any one neuron is denoted by r.
Wherein Y is 1 The output is probability value of rear-end collision, Y 2 The output is the rear-end collision prevention speed adjustment value Y 3 Adjusting coordinate value Y for preventing self vehicle from rear-end collision 4 The braking capability value is obtained. The specific calculation method is as follows:
Y r =f 1 (Q q )×w qr +b qr
wherein w qr And b qr Respectively is the connection weight and the offset of the qth neuron of the layer 4 and the r neuron of the layer 5, and t is a parameter;
from the above expression, one can obtain:
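The layer-5 relation Y_r = f_1(Q_q) × w_qr + b_qr can be sketched as below. Since f_1 is not specified and Y_1 must behave like a probability, a sigmoid is assumed for f_1, and the per-q terms are summed over the fusion neurons; both choices are assumptions:

```python
import math

def output_layer(Q_out, W, B):
    """Layer-5 outputs Y_r = sum over q of f1(Q_q) * w_qr + b_r, with a
    sigmoid assumed for f1 so that Y_1 can act as a probability."""
    sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))
    R = len(B)
    return [sum(sigmoid(q) * W[qi][r] for qi, q in enumerate(Q_out)) + B[r]
            for r in range(R)]
```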
further, the step S4 includes:
Threshold values are respectively set according to the outputs of the coordinate-type neural network to perform early warning and intelligently regulate the vehicle.
Through the coordinate-type neural network, Y_1 is obtained as the probability value of a rear-end collision.
A rear-end-collision-prevention early-warning threshold τ_1 is set; when Y_1 ≥ τ_1, an early warning is issued and the driver is alerted so that intelligent adjustment of the vehicle is carried out.
The vehicle is intelligently regulated through the rear-end-collision-prevention safety adjustment scheme set in the vehicle in advance, specifically:
The speed of the vehicle is adjusted through the set standard speed value, the direction of the vehicle is adjusted through the set rear-end-collision-prevention direction adjustment value, and the braking capability of the vehicle is adjusted through the set braking-capability safety value.
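The step-S4 decision logic above reduces to a threshold check on Y_1 followed by applying the pre-set adjustments. A minimal sketch (the threshold value 0.7 and the dictionary shape are illustrative, not taken from the patent):

```python
def regulate(Y, tau1=0.7):
    """Step-S4 sketch: compare the collision probability Y_1 with the
    warning threshold tau_1 and, if exceeded, return the pre-set
    adjustments (speed Y_2, direction/coordinate Y_3, braking Y_4)."""
    Y1, Y2, Y3, Y4 = Y
    if Y1 >= tau1:
        return {"warn": True, "speed_adjust": Y2,
                "direction_adjust": Y3, "braking": Y4}
    return {"warn": False}
```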
The invention has at least the following beneficial effects:
1. compared with the existing formula, the formula for expressing the relationship between the coordinates of the same point on the road surface and the coordinates of the imaged two-dimensional image plane is more accurate and fine, so that the subsequent detection and calculation results of the vehicle distance and the vehicle speed are more accurate, and the occurrence of rear-end collision events is better prevented.
2. The invention adopts wireless sensors to collect and transmit the friction factors and pressure values, avoiding data distortion caused by tire wear and changeable road surface conditions; the data are collected and fed back in real time, which effectively prevents rear-end collisions of the vehicle when running under multiple road conditions and makes the regulation of the vehicle speed more intelligent and accurate.
3. The excitation function used in the layer-4 data fusion of the invention utilizes the mean and variance of the data, combined with constant parameters, to calculate the data, thereby reducing the complexity of the neural network calculation, accelerating the convergence of the network, and effectively preventing gradient vanishing or explosion.
4. Compared with the prior art, the invention intelligently regulates the vehicle in an all-round way, avoids rear-end collisions comprehensively, and is more efficient in use.
Detailed Description
For a more clear description of the invention, reference is now made to the accompanying drawings, which together with the detailed description, serve to explain the principles of the invention.
Referring to fig. 1, the invention provides a vehicle intelligent regulation and control method for preventing rear-end collisions under multiple road conditions, comprising the following steps:
s1, monitoring the speed and the relative distance of the front vehicle in real time based on the optical principle by utilizing the camera device of the automobile data recorder, and obtaining the coordinates and the relative speed of the target vehicle.
S11, setting the three-dimensional camera coordinate system and the two-dimensional image plane coordinate system formed after shooting by the camera:
The position of the camera is taken as the coordinate origin. The x axis and z axis of the three-dimensional coordinate system both lie in the plane of the road on which the vehicle runs; the x axis is perpendicular to the advancing direction of the vehicle, the y axis is perpendicular to the road surface on which the vehicle runs, and the z axis is parallel to the advancing direction of the vehicle. The optical axis of the camera lies in the coordinate plane formed by the y axis and the z axis; the included angle between the optical axis and the road plane is θ, and the distance from the camera along the optical axis to the road plane is ε. θ and ε are adjustable variables and can be adjusted according to the actual condition of the vehicle.
Let h be the known height above the ground at which the camera is mounted, and let O(x*, h*, z*) denote the position coordinates of an arbitrary point on the road surface that can be captured by the camera.
Set the two-dimensional image coordinate system formed after shooting by the camera: taking the optical center G of the camera as the coordinate origin, set a transverse coordinate axis i* and a longitudinal coordinate axis j*, where the i* axis is parallel to the x axis, and the j* axis, the i* axis, and the optical axis are mutually perpendicular. The coordinates of a point in the two-dimensional image plane formed after shooting are denoted O'(i, j).
O(x*, h*, z*) can then be mapped to O'(i, j) by the following formula:
In the above formula, d represents the focal length of the camera.
O(x*, h*, z*) can also be expressed in terms of O'(i, j), by the following specific formula:
compared with the existing formula, the formula for expressing the relationship between the coordinates of the same point on the road surface and the coordinates of the imaged two-dimensional image plane is more accurate and fine, so that the subsequent detection and calculation results of the vehicle distance and the vehicle speed are more accurate, and the occurrence of rear-end collision events is better prevented.
S12, estimating in real time, from the images shot by the camera, the distance between the self vehicle and the front vehicle and the speed of the front vehicle relative to the self vehicle.
When the front vehicle is regarded as a point, its position on the road surface can be represented by O(x*, h*, z*); the bottom center point of the vehicle shadow in the image is calibrated and marked as point O' to represent the position of the front vehicle, so the two-dimensional image plane coordinates formed by this point after shooting can also be represented by O'(i, j).
The distance between the self vehicle and the front vehicle is recorded as d; from the imaging relationship of the camera:
α represents the acute angle between the straight line GO and the road surface, i.e., the z axis.
The distance d between the self vehicle and the front vehicle, and the corresponding speed V of the front vehicle relative to the self vehicle, can be measured in real time through laser ranging; laser ranging is prior art and is not described in more detail herein.
S13, calculating from the obtained α the transverse distance and the longitudinal distance between the self vehicle and the front vehicle, namely x* and z*, by the following formulas:
From the above, the expression of the two-dimensional plane image coordinates O'(i, j) of the front vehicle as shot by the automobile data recorder can be obtained:
The transverse coordinate value i and the longitudinal coordinate value j of the front vehicle in the two-dimensional plane and the real-time speed V of the front vehicle relative to the self vehicle are used as a group of parameters for building the neural network model in step S3.
S2 calculates the friction coefficient of the own vehicle under the environmental parameters.
The friction factors generated between the vehicle and the ground in the environment are denoted γ_car1, γ_car2, γ_car3, γ_car4, the friction factors of the four tires of the vehicle, where n = {1, 2, 3, 4} indexes the tires and any tire is denoted by n. The friction factor of any tire is denoted γ_carn, and the normal pressure between any tire and the ground is denoted F_n. Let e = {1, 2, 3, 4}, with the corresponding pressure denoted F_e. γ_road denotes the friction factor of the road, and μ denotes the total rolling friction coefficient between the four tires of the vehicle and the road.
The expression for the total rolling friction coefficient of the vehicle and the road is then:
wherein γ_carn, γ_road, and F_n are collected in real time by wireless sensors and transmitted to the vehicle's computer for calculation; σ_F denotes the variance of the tire–ground normal pressure values F_n, and σ_car denotes the variance of the friction factors of the four tires of the vehicle.
The invention adopts the wireless sensor to collect and transmit the friction factor and the pressure value, avoids the data representation caused by the abrasion of tires and the changeability of the road surface condition, collects and feeds back the data in real time, effectively realizes the prevention of rear-end collision of the vehicle in the running of multiple road conditions, and leads the regulation and control of the vehicle speed to be more intelligent and accurate.
S3, coordinate-type neural network model establishment: the transverse coordinate value i and the longitudinal coordinate value j of the front vehicle in the two-dimensional plane, the real-time speed V of the front vehicle relative to the self vehicle, and the rolling friction coefficient μ of the vehicle are used to output the probability value of a rear-end collision event of the vehicle.
Referring to fig. 2, in S31 the transverse coordinate value i and the longitudinal coordinate value j of the front vehicle in the two-dimensional plane, the real-time speed V of the front vehicle relative to the self vehicle, and the rolling friction coefficient μ of the vehicle are obtained from steps S1 and S2, giving the variable X = [i, j, V, μ].
Data normalization preprocessing is performed on X = [i, j, V, μ]:
where t is a parameter and t → ∞.
Compared with the prior art, the data standardization processing mode adopted by the invention abandons the defects caused by using the mean and variance of the data; the data are standardized by a limit-based processing mode, making the calculation simpler and more convenient.
The normalized data X' is obtained as an input variable and input to the coordinate-type neural network established in the present invention.
S32, the coordinate-type neural network model structure created by the invention has 5 layers: layer 1 is the data input layer C, with input variables X' = [i', j', V', μ']; layer 2 is a rule selection layer, in which processing rules for the input data are selected; layer 3 is a first hidden layer; layer 4 is a data fusion layer; layer 5 is the output layer, whose output Y_1 is the probability value of the occurrence of a rear-end collision event.
S321, layer 1: it has 4 neurons, i.e., C = 4 with c = {1, 2, 3, 4}, and any one neuron is denoted by c.
The input to the input layer is X' = [i', j', V', μ'], and the output is equal to the input.
S322, layer 2 has M neurons, with m = {1, 2, 3, ..., M}; any one neuron is denoted by m:
The generating rule function is as follows:
where u = {1, 2, 3, 4} and u denotes a dimension of the input quantity; v = {1, 2, 3, ..., C_u} and v denotes the precision index of the input quantity, with C_u denoting the precision of dimension u; g_uv represents the center of the rule function and θ_uv the width of the rule function; a_1 and a_2 are constants with a_1 < a_2.
Here w_cm and b_cm are the weights and offsets from layer 1 to layer 2.
The rule function adopted in the construction of the coordinate-type neural network can efficiently and accurately process the input data and enhances the convergence rate of the neural network.
S323, layer 3 is a hidden layer with L neurons; any one neuron is denoted by l.
The output of any neuron in layer 3 is:

where w_ml and b_ml are respectively the connection weights and offsets of the m-th neuron of layer 2 and the l-th neuron of layer 3; the remaining symbols are the excitation function and a set of parameters.

The output of any neuron in layer 3 can therefore be represented as:
The excitation function used in layer 3 of the invention makes the calculation process simpler and more convenient, and more effectively prevents the over-convergence problem of the neural network.
S324, layer 4 is a data fusion layer having Q neurons, with q = {1, 2, 3, ..., Q}; any one neuron is denoted by q.
The data input to the data fusion layer are normalized; the processing is prior art and is not described further herein. The mean and variance of the normalized data are respectively found and marked as ξ_q and the corresponding variance term; the way they are calculated is prior art and will not be elaborated upon herein.
The output of layer 4 after fusion is recorded as:

where f is the excitation function, w_lq and b_lq are respectively the connection weights and offsets of the l-th neuron of layer 3 and the q-th neuron of layer 4, and k is a constant.
From the above, the output of layer 4 is:
data fusion in layer 4 according to the invention
The formula utilizes the mean value and the variance of the data and combines constant parameters to calculate the data, thereby reducing the complexity of neural network calculation, accelerating the convergence of the network and effectively preventing the disappearance or explosion of the gradient.
S325, layer 5 has 4 neurons, with r = {1, 2, 3, 4}; any one neuron is denoted by r.
The output Y_1 is the probability value of a rear-end collision, Y_2 is the rear-end-collision-prevention speed adjustment value, Y_3 is the coordinate adjustment value for preventing the self vehicle from a rear-end collision, and Y_4 is the braking capability value. The specific calculation is as follows:
Y_r = f_1(Q_q) × w_qr + b_qr
where w_qr and b_qr are respectively the connection weight and offset of the q-th neuron of layer 4 and the r-th neuron of layer 5, and t is a parameter.
From the above expression:
and S4, respectively setting threshold values according to the output of the coordinate type neural network to perform early warning and intelligently regulate and control the vehicle.
Through the coordinate-type neural network established by the invention, Y_1 is obtained as the probability value of a rear-end collision.
A rear-end-collision-prevention early-warning threshold τ_1 is set; when Y_1 ≥ τ_1, an early warning is issued and the driver is alerted so that intelligent adjustment of the vehicle is carried out.
The vehicle is intelligently regulated through the rear-end-collision-prevention safety adjustment scheme set in the vehicle in advance, specifically: the speed of the vehicle is adjusted through the set standard speed value, the direction of the vehicle is adjusted through the set rear-end-collision-prevention direction adjustment value, and the braking capability of the vehicle is adjusted through the set braking-capability safety value.
Compared with the prior art, the invention intelligently regulates the vehicle in an all-round way, avoids rear-end collisions comprehensively, and is more efficient in use.
In conclusion, the intelligent regulation and control method for the vehicle capable of preventing rear-end collision under multiple road conditions is realized.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.