Display method for predicting driving track
Technical Field
The invention relates to the technical field of automobile electronics, in particular to a display method for predicting a driving track.
Background
After driving for a long time, a driver's physiological and psychological functions become disordered and driving skill objectively declines. A driver with poor or insufficient sleep is prone to fatigue when driving for long periods. Driving fatigue impairs the driver's attention, sensation, perception, thinking, judgment, awareness, decision-making, and motor control: judgment declines, reactions and operations slow down, operating errors increase, and traffic accidents become more likely. For example, patent CN201110449673.1 discloses a method for dynamically registering a graphic on a driving scene of a vehicle using a substantially transparent windscreen head-up display, comprising: monitoring road conditions; identifying a potential road hazard based on the road conditions; determining a graphic identifying the potential road hazard; dynamically registering a location of the graphic on the substantially transparent windscreen head-up display corresponding to the driving scene of the vehicle; and displaying the graphic on the substantially transparent windscreen head-up display. However, that method only signals the condition of the road; because a driver's reactions slow after long periods of driving, by the time a potential road hazard is noticed the driver may lack the attention and reaction speed to deal with it in time.
Disclosure of Invention
The invention aims to provide a display method for a predicted driving track that can predict the driving track of the vehicle ahead.
A display method for predicting a driving track comprises the following steps:
S1: the image collecting terminal collects a road condition image in front of the vehicle and sends it to the central processing unit;
S2: the central processing unit analyzes the road condition image, acquires and analyzes the traveling information of the vehicle ahead, and sends the corresponding instructions to the head-mounted display;
S3: the head-mounted display displays the predicted driving track image of the vehicle ahead.
By predicting the driving track of the vehicle ahead and showing it on the head-mounted display, the vehicle owner can see very intuitively the track the vehicle ahead is likely to take, analyze and grasp the situation in time, and take appropriate action to avoid an accident. In addition, because the head-mounted display is worn on the owner's head, the owner can follow changes in the travel of the vehicle ahead without looking down. The head-mounted display also avoids the distortion caused by a damaged or curved windshield when a projector projects onto the windshield, giving the owner a more intuitive view.
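The three steps S1 to S3 can be sketched as a simple capture-analyze-display pipeline. All names, fields, and instruction strings below are illustrative assumptions for this sketch, not part of the claimed method:

```python
from dataclasses import dataclass
from typing import List


@dataclass
class RoadImage:
    """Hypothetical road-condition frame produced by the image collecting terminal (S1)."""
    brake_light_on: bool
    wheel_deflection: str  # "left", "right", or "none"


def analyze_front_vehicle(frame: RoadImage) -> List[str]:
    """S2: the central processing unit derives display instructions from the frame."""
    instructions = ["draw_lane_marks", "draw_front_vehicle_mark", "draw_host_mark"]
    if frame.brake_light_on:
        instructions.append("add_emergency_stop_sign")
    if frame.wheel_deflection != "none":
        instructions.append("add_steering_arrow_" + frame.wheel_deflection)
    return instructions


def render_on_hmd(instructions: List[str]) -> str:
    """S3: the head-mounted display draws the predicted driving track image."""
    return " | ".join(instructions)
```

In a real system S2 would involve image recognition on the frame; here the frame's attributes stand in for that analysis so the flow of instructions from processor to display stays visible.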
In step S2, the traveling information of the vehicle ahead includes the traveling speeds of the vehicles ahead in the present, left, and right lanes, the wheel deflection direction, and the on/off states of the brake lamp, the reversing lamp, and the rear turn lamp. To better predict the driving track of the vehicle ahead and avoid a collision, the central processing unit predicts whether the vehicle ahead intends to stop or change lanes by jointly analyzing these factors, reducing the risk of colliding with it.
The vehicle ahead in the present lane is predicted in order to avoid a rear-end collision caused by its sudden braking: the image collecting terminal observes whether its brake lamp is lit, the central processing unit judges whether it is braking urgently, and the owner is intelligently reminded to watch it. The vehicles ahead in the left and right lanes are predicted in order to guard against a sudden lane change: by observing a vehicle's wheel deflection direction and rear turn lamp, the central processing unit can detect its steering direction and judge whether it is changing into the host vehicle's lane; if so, the owner is reminded in time to watch for the lane change.
The driving track image comprises a lane mark for the lane ahead of the vehicle, a vehicle mark for the vehicle ahead in that lane, lane marks for the lanes on the left and right sides, vehicle marks for the vehicles ahead in those lanes, and a host vehicle mark.
To express the positional relation between the host vehicle and the vehicles ahead more intuitively, the method adds the front vehicle marks and the lane marks to the driving track image. A front vehicle mark may be represented by a dot, a lane mark by a line, and the host vehicle mark by a rectangle.
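The marking scheme just described (dots for front vehicles, lines for lanes, a rectangle for the host vehicle) could be modeled as a list of drawing primitives handed to the head-mounted display. The shapes and field names here are an illustrative assumption:

```python
def build_track_image(front_vehicles, lane_segments):
    """Assemble the drawing primitives of the driving track image:
    lines for lane marks, dots for front vehicle marks, and a
    rectangle for the host vehicle mark, as suggested above."""
    primitives = [{"shape": "line", "points": seg} for seg in lane_segments]
    primitives += [{"shape": "dot", "center": pos} for pos in front_vehicles]
    primitives.append({"shape": "rect", "center": (0.0, 0.0)})  # host vehicle at origin
    return primitives
```

Placing the host vehicle at the origin lets front-vehicle positions be expressed as simple relative offsets, which matches how a forward-facing camera would report them.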
In step S2, the central processing unit determines whether the brake lamp of the vehicle ahead in the present lane is lit; if so, in step S3 the head-mounted display adds an emergency stop sign to the driving track image; if not, execution does not continue.
To better analyze the travel of the vehicle ahead, the central processing unit predicts whether it will brake suddenly according to whether its brake lamp is lit; if so, an emergency stop sign is shown to the owner to remind the owner to watch the vehicle ahead and avoid colliding with it.
Preferably, in step S2, the method further comprises the steps of:
S21: the central processing unit analyzes whether a vehicle ahead in the left or right lane has lit its brake lamp; if so, step S22 is executed; if not, execution does not continue;
S22: the central processing unit analyzes whether the wheels of that vehicle deflect and whether its rear turn lamp is lit; if the wheels deflect and/or the rear turn lamp is lit, step S23 is executed; if the wheels do not deflect and the rear turn lamp is off, execution does not continue;
S23: the central processing unit analyzes whether that vehicle is turning into the lane directly ahead; if so, in step S3 the head-mounted display adds a steering arrow and an emergency stop sign to the driving track image and displays it; if not, execution does not continue.
When that vehicle turns into the lane directly ahead, the central processing unit issues an instruction in time for the head-mounted display to add a steering arrow and an emergency stop sign to the driving track image, so that the host vehicle avoids colliding with it.
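The branch S21 to S23 can be sketched as a short decision chain. The function and overlay names are illustrative assumptions; each early return corresponds to a "execution does not continue" outcome in the steps above:

```python
def predict_lane_change(brake_light_on, wheel_deflected, turn_light_on,
                        turning_into_host_lane):
    """Steps S21-S23 for a vehicle ahead in the left or right lane:
    return the overlays to add to the driving track image, or an
    empty list when execution does not continue."""
    if not brake_light_on:                        # S21: brake lamp must be lit
        return []
    if not (wheel_deflected or turn_light_on):    # S22: deflection and/or turn lamp
        return []
    if not turning_into_host_lane:                # S23: turning into the lane ahead
        return []
    return ["steering_arrow", "emergency_stop_sign"]
```

Note that S22 is satisfied by either signal alone, so a vehicle whose turn lamp is broken but whose wheels visibly deflect still triggers the warning.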
In step S1, the image collecting terminal also collects road condition images of the left and right lanes; in step S2, the central processing unit analyzes those images, acquires and analyzes the traveling information of the parallel vehicles, and sends the corresponding instructions to the head-mounted display; in step S3, the head-mounted display displays the predicted driving track images of the parallel vehicles.
The image collecting terminal also collects road condition images of the left and right lanes because a parallel vehicle in either lane may overtake and turn into the lane directly ahead of the host vehicle; collecting only the image of the lane ahead would make the tracks of parallel vehicles impossible to predict.
In the driving track image, the color of the host vehicle mark differs from that of the front vehicle marks.
Using different colors for the host vehicle mark and the front vehicle marks makes it easier for the owner to distinguish the host vehicle from the vehicles ahead in the driving track image, avoiding confusion.
The head-mounted display can emit different types of alarms, including audible alarms and vibration alarms.
In step S2, the central processing unit analyzes whether the vehicle ahead brakes suddenly or turns; if so, it judges whether that vehicle is directly ahead of the host vehicle, to its left, or to its right, and issues a vibration alarm or an audible alarm in the corresponding direction.
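The directional alarm rule above can be sketched as follows; the direction labels and return strings are illustrative assumptions:

```python
def issue_alarm(sudden_stop_or_turn, direction, audible=False):
    """If the vehicle ahead brakes suddenly or turns, emit a vibration
    (or audible) alarm on the side of the head-mounted display that
    matches the vehicle's direction: 'left', 'ahead', or 'right'."""
    if not sudden_stop_or_turn:
        return None  # nothing to warn about
    if direction not in ("left", "ahead", "right"):
        raise ValueError("unknown direction: " + direction)
    kind = "audible" if audible else "vibration"
    return kind + "_alarm_" + direction
```

Matching the alarm side to the hazard direction lets the owner orient toward the threat before the track image is even consciously read.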
In step S2, the central processing unit analyzes the model of each vehicle ahead; a van or taxi is marked in red, and other vehicles ahead are marked in yellow, green, or blue.
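A minimal sketch of this color coding follows. The text does not say how yellow, green, and blue are assigned among other models, so the deterministic assignment rule below is an illustrative assumption:

```python
def mark_color(vehicle_model):
    """Color-code a front vehicle mark by model: vans and taxis in red,
    other models given a stable color from yellow/green/blue (the
    assignment rule here is an assumption, not from the source)."""
    if vehicle_model in ("van", "taxi"):
        return "red"
    palette = ("yellow", "green", "blue")
    # Derive a deterministic index from the model name so the same
    # model always gets the same color across frames.
    return palette[sum(map(ord, vehicle_model)) % len(palette)]
```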
The display method further comprises keeping the head-mounted display in a standby state under normal conditions; when the central processing unit analyzes in step S2 whether the vehicle ahead brakes suddenly or turns and the answer is yes, the head-mounted display is automatically turned on in step S3 and displays the predicted driving track image of the vehicle ahead.
The method also includes the central processing unit being in signal connection with a cloud processor through the wireless communication device. Because the central processing unit cannot store all vehicle model information, it can upload the road condition image to the cloud processor, which helps analyze the models of the vehicles ahead and the parallel vehicles and learn the shapes of their brake lamps, reversing lamps, and rear turn lamps, making it easier for the central processing unit to analyze the on/off states of those lamps.
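The local-first, cloud-fallback recognition described above can be sketched as a lookup with caching. `cloud_lookup` stands in for the remote call over the wireless communication device; all names are illustrative assumptions:

```python
def identify_model(image_key, local_models, cloud_lookup):
    """The central processing unit cannot store every vehicle model, so
    it falls back to the cloud processor when local recognition fails."""
    model = local_models.get(image_key)
    if model is None:
        model = cloud_lookup(image_key)   # upload the image, get the model back
        local_models[image_key] = model   # cache so the next lookup stays local
    return model
```

Caching the cloud result locally keeps the wireless link from becoming a bottleneck when the same vehicle appears in consecutive frames.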
Drawings
FIG. 1 is a schematic flow diagram of the process.
FIG. 2 is a flowchart illustrating the branching step of step S2 according to the present invention.
Fig. 3 is a schematic diagram of signal connection according to the present invention.
Fig. 4 is a schematic position diagram of a projection apparatus and a head-mounted display according to the present invention.
Fig. 5 is a driving trace image in a normal case.
Fig. 6 is a driving track image when the vehicle ahead is preparing to turn.
Description of reference numerals: 1. central processing unit; 2. image collecting terminal; 3. projection device; 4. audible alarm device; 5. in-vehicle face recognition terminal; 6. wireless communication device; 7. head-mounted display; 8. lane mark; 9. front vehicle mark; 10. host vehicle mark; 11. steering arrow.
Detailed Description
The invention will be further illustrated with reference to the following specific examples. It should be understood that these examples are for illustrative purposes only and are not intended to limit the scope of the present invention. Further, it should be understood that various changes or modifications of the present invention can be made by those skilled in the art after reading the teaching of the present invention, and these equivalents also fall within the scope of the claims appended to the present application.
A display method for predicting a driving track comprises the following steps:
S1: the image collecting terminal 2 collects a road condition image in front of the vehicle and sends it to the central processing unit 1;
S2: the central processing unit 1 analyzes the road condition image, acquires and analyzes the traveling information of the vehicle ahead, and sends the corresponding instructions to the head-mounted display 7;
S3: the head-mounted display 7 displays the predicted driving track image of the vehicle ahead.
A display system for predicting a driving track comprises an image collecting terminal 2 for collecting road condition images in front of the vehicle, a windshield, a head-mounted display 7 for displaying the driving track image of the vehicle ahead, and a central processing unit 1, the central processing unit 1 being in signal connection with the image collecting terminal 2 and the head-mounted display 7 respectively.
In step S2, the traveling information of the vehicle ahead includes the traveling speeds of the vehicles ahead in the present, left, and right lanes, the wheel deflection direction, and the on/off states of the brake lamp, the reversing lamp, and the rear turn lamp. To better predict the driving track of the vehicle ahead and avoid a collision, the central processing unit 1 predicts whether the vehicle ahead intends to stop or change lanes by jointly analyzing these factors, reducing the risk of colliding with it.
The driving track image comprises a lane mark 8 for the lane ahead, a front vehicle mark 9 for the vehicle ahead in that lane, lane marks 8 for the lanes on the left and right sides, front vehicle marks 9 for the vehicles ahead in those lanes, and a host vehicle mark 10.
In step S2, the central processing unit 1 determines whether the brake lamp of the vehicle ahead in the present lane is lit; if so, in step S3 the head-mounted display 7 adds an emergency stop sign to the driving track image; if not, execution does not continue.
Preferably, in step S2, the method further comprises the steps of:
S21: the central processing unit 1 analyzes whether a vehicle ahead in the left or right lane has lit its brake lamp; if so, step S22 is executed; if not, execution does not continue;
S22: the central processing unit 1 analyzes whether the wheels of that vehicle deflect and whether its rear turn lamp is lit; if the wheels deflect and/or the rear turn lamp is lit, step S23 is executed; if the wheels do not deflect and the rear turn lamp is off, execution does not continue;
S23: the central processing unit 1 analyzes whether that vehicle is turning into the lane directly ahead; if so, in step S3 the head-mounted display 7 adds a steering arrow 11 and an emergency stop sign to the driving track image and displays it; if not, execution does not continue.
In step S1, the image collecting terminal 2 also collects road condition images of the left and right lanes; in step S2, the central processing unit 1 analyzes those images, acquires and analyzes the traveling information of the parallel vehicles, and sends the corresponding instructions to the head-mounted display 7; in step S3, the head-mounted display 7 displays the predicted driving track images of the parallel vehicles.
In the driving track image, the color of the host vehicle mark 10 differs from that of the front vehicle marks 9.
Using different colors for the host vehicle mark 10 and the front vehicle marks 9 makes it easier for the owner to distinguish the host vehicle from the vehicles ahead in the driving track image, avoiding confusion.
The head-mounted display 7 can emit different types of alarms, including audible alarms and vibration alarms.
In step S2, the central processing unit 1 analyzes the model of each vehicle ahead; a van or taxi is marked in red, and other vehicles ahead are marked in yellow, green, or blue.
In step S2, the central processing unit 1 analyzes whether the vehicle ahead brakes suddenly or turns; if so, it judges whether that vehicle is directly ahead of the host vehicle, to its left, or to its right, and issues a vibration alarm or an audible alarm in the corresponding direction.
The method further comprises the central processing unit 1 being in signal connection with a cloud processor through the wireless communication device 6. Because the central processing unit 1 cannot store all vehicle model information, it can upload the road condition image to the cloud processor, which helps analyze the models of the vehicles ahead and the parallel vehicles and learn the shapes of their brake lamps, reversing lamps, and rear turn lamps, making it easier for the central processing unit 1 to analyze the on/off states of those lamps.
It should be understood that the above-described embodiments of the present invention are merely examples for clearly illustrating the invention and are not intended to limit its embodiments. Other variations and modifications will be apparent to those skilled in the art in light of the above description; it is neither necessary nor possible to exhaust all embodiments here. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the claims of the present application.