CN115447600B - Vehicle anti-congestion method based on deep learning, controller and storage medium - Google Patents


Info

Publication number
CN115447600B
Authority
CN
China
Prior art keywords
vehicle
driver
information
deep learning
distance
Prior art date
Legal status
Active
Application number
CN202211388110.0A
Other languages
Chinese (zh)
Other versions
CN115447600A (en)
Inventor
雷金亮
Current Assignee
Weilai Automobile Technology Anhui Co Ltd
Original Assignee
Weilai Automobile Technology Anhui Co Ltd
Priority date
Filing date
Publication date
Application filed by Weilai Automobile Technology Anhui Co Ltd filed Critical Weilai Automobile Technology Anhui Co Ltd
Priority to CN202211388110.0A priority Critical patent/CN115447600B/en
Publication of CN115447600A publication Critical patent/CN115447600A/en
Application granted granted Critical
Publication of CN115447600B publication Critical patent/CN115447600B/en

Classifications

    • B60W50/14 Means for informing the driver, warning the driver or prompting a driver intervention (interaction between the driver and the control system)
    • B60Q1/28 Arrangement of optical signalling or lighting devices intended to give signals to other traffic, for indicating front of vehicle
    • B60Q5/006 Arrangement or adaptation of acoustic signal devices automatically actuated, indicating risk of collision between vehicles or with pedestrians
    • B60W40/04 Estimation of driving parameters related to ambient conditions: traffic conditions
    • B60W40/08 Estimation of driving parameters related to drivers or passengers
    • B60W40/105 Estimation of driving parameters related to vehicle motion: speed
    • B60W50/0098 Details of control systems ensuring comfort, safety or stability not otherwise provided for
    • B60W50/16 Tactile feedback to the driver, e.g. vibration or force feedback to the driver on the steering wheel or the accelerator pedal
    • B60W2050/143 Alarm means
    • B60W2520/10 Longitudinal speed (input parameters relating to overall vehicle dynamics)
    • B60W2540/229 Attention level, e.g. attentive to driving, reading or sleeping (input parameters relating to occupants)
    • B60W2554/802 Longitudinal distance (spatial relation or speed relative to objects)
    • B60W2555/60 Traffic rules, e.g. speed limits or right of way (input parameters relating to exterior conditions)
    • Y02T10/40 Engine management systems (climate change mitigation technologies related to transportation)

Abstract

The present invention relates to the field of intelligent driving, and more particularly, to a vehicle anti-congestion method based on deep learning, an on-board controller, and a computer storage medium. The method comprises the following steps: A. receiving a vehicle front image acquired at a first frequency from a vehicle-mounted stereo camera; B. inputting the vehicle front image into a deep learning algorithm model, and identifying, using the deep learning algorithm model, the distance between the host vehicle and the preceding vehicle, traffic sign information, and traffic light information; C. judging whether the driver of the preceding vehicle is in a distracted state based on the vehicle distance information, the traffic light information, and the host vehicle speed; and D. if the driver of the preceding vehicle is in a distracted state, determining a prompting mode for the preceding vehicle based on the traffic sign information.

Description

Vehicle anti-congestion method based on deep learning, controller and storage medium
Technical Field
The present invention relates to the field of intelligent driving, and more particularly, to a vehicle anti-congestion method based on deep learning, an on-board controller, and a computer storage medium.
Background
In recent years, with the rapid growth of passenger car ownership in China, traffic congestion occurs frequently during rush hours and on major urban road sections. Part of this congestion is caused by drivers using mobile phones, chatting, daydreaming, smoking, losing concentration, or even getting out of the vehicle while queuing at intersections or on congested road sections.
Currently, some prior art alerts against congestion caused by the driver of the host vehicle in highway scenarios by means of radar ranging devices. However, radar ranging devices such as millimeter-wave radar and lidar suffer from high cost, low refresh rates, and poor robustness to bad weather. Furthermore, the congestion identification and alert schemes employed in the prior art address only congestion caused by distraction of the host vehicle's driver; they do not identify or alert against distraction of the driver of the preceding vehicle.
Disclosure of Invention
To solve or at least alleviate one or more of the above problems, the following solutions are provided. Embodiments of the invention provide a vehicle anti-congestion method based on deep learning, an on-board controller, and a computer storage medium, providing users with an intelligent, user-friendly anti-congestion prompting experience in queuing scenarios on all road types (e.g., highways and ordinary roads) at relatively low software and hardware cost.
According to a first aspect of the present invention, there is provided a vehicle anti-congestion method based on deep learning, comprising: A. receiving a vehicle front image acquired at a first frequency from a vehicle-mounted stereo camera; B. inputting the vehicle front image into a deep learning algorithm model, and identifying, using the deep learning algorithm model, the distance between the host vehicle and the preceding vehicle, traffic sign information, and traffic light information; C. judging whether the driver of the preceding vehicle is in a distracted state based on the vehicle distance information, the traffic light information, and the host vehicle speed; and D. if the driver of the preceding vehicle is in a distracted state, determining a prompting mode for the preceding vehicle based on the traffic sign information.
Alternatively or additionally to the above, in a method according to an embodiment of the invention, step B includes the following operations based on the vehicle front image: B1. generating, using the deep learning algorithm model, first judgment information indicating whether a preceding vehicle exists in the lane of the host vehicle; and B2. if the first judgment information indicates that a preceding vehicle exists, determining the distance between the host vehicle and the preceding vehicle further based on parameters of the vehicle-mounted stereo camera.
Alternatively or additionally to the above, in a method according to an embodiment of the invention, step B includes the following operations based on the vehicle front image: B3. generating, using the deep learning algorithm model, second judgment information indicating whether a red or yellow light exists in front of the vehicle; and B4. if the second judgment information indicates that no red or yellow light exists, further generating, using the deep learning algorithm model, third judgment information indicating whether a whistle prohibition sign exists in front of the vehicle.
Alternatively or additionally to the above, in a method according to an embodiment of the invention, step C comprises: C1. acquiring the real-time host vehicle speed from a vehicle speed sensor; and C2. if the host vehicle speed is zero at a first moment, the vehicle distance information indicates that the distance between the preceding vehicle and the host vehicle is within a first range, and the traffic light information and/or navigation information acquired from a high-precision map indicates that no red or yellow light exists in front of the vehicle, judging after a preset time whether the driver of the preceding vehicle is in a distracted state.
Alternatively or additionally to the above, in the method according to an embodiment of the present invention, judging in step C2 whether the driver of the preceding vehicle is in a distracted state after the preset time includes: if the host vehicle speed is still zero at a second moment and the change in vehicle distance between the second moment and the first moment is less than or equal to a first distance, judging that the driver of the preceding vehicle is in a distracted state.
Alternatively or additionally to the above, in the method according to an embodiment of the present invention, judging in step C2 whether the driver of the preceding vehicle is in a distracted state after the preset time includes: if the host vehicle speed is less than or equal to a first speed at the second moment and the distance between the host vehicle and the preceding vehicle is less than or equal to a second distance, judging that the driver of the preceding vehicle is in a distracted state.
Alternatively or additionally to the above, a method according to an embodiment of the invention further comprises: E. if the host vehicle speed remains zero from the first moment to the second moment, and the change in vehicle distance between the second moment and the first moment is greater than or equal to a third distance, further judging whether the driver of the host vehicle is in a distracted state.
Alternatively or additionally to the above, in the method according to an embodiment of the present invention, judging whether the driver of the host vehicle is in a distracted state includes: inputting an in-vehicle image acquired by an in-vehicle camera into the deep learning algorithm model, using the deep learning algorithm model to identify whether the driver's line of sight is on the road ahead, and, if it is not, judging that the driver of the host vehicle is in a distracted state; and/or the method further comprises: if the driver of the host vehicle is in a distracted state, generating a third prompt signal to trigger an in-vehicle prompt device to remind the driver of the host vehicle.
Alternatively or additionally to the above, in a method according to an embodiment of the invention, step D comprises: D1. if the driver of the preceding vehicle is in a distracted state, and the traffic sign information and/or navigation information obtained from the high-precision map indicates that no whistle prohibition sign exists in front of the vehicle, generating a first prompt signal to trigger the host vehicle's horn to remind the preceding vehicle; and D2. if the driver of the preceding vehicle is in a distracted state, and the traffic sign information and/or navigation information obtained from the high-precision map indicates that a whistle prohibition sign exists in front of the vehicle, generating a second prompt signal to trigger the headlights to remind the preceding vehicle.
Alternatively or additionally to the above, in a method according to an embodiment of the invention, step D further comprises: D3. if the driver of the host vehicle is in a distracted state, generating a third prompt signal to trigger the in-vehicle prompt device to remind the driver of the host vehicle.
Alternatively or additionally to the above, in a method according to an embodiment of the invention, the in-vehicle prompt device comprises one or more of the following: an in-vehicle speaker, a steering wheel vibration device, and a driver's seat vibration device.
According to a second aspect of the present invention, there is provided an in-vehicle controller comprising: a memory; a processor; and a computer program stored on the memory and executable on the processor, the execution of the computer program causing any one of the methods according to the first aspect of the invention to be performed.
According to a third aspect of the present invention there is provided a computer storage medium comprising instructions which, when executed, perform any of the methods according to the first aspect of the present invention.
First, the deep-learning-based vehicle anti-congestion scheme according to one or more embodiments of the invention combines a vehicle-mounted stereo camera with a deep learning algorithm and uses visual sensing and deep learning technology to recognize the distracted state of the driver of the preceding vehicle. Compared with prior art schemes employing radar and lidar, this saves hardware cost, fully exploits the advantages of visual sensing technology, such as wide environmental coverage, rich perception data, and high resolution, and improves the recognition accuracy of the distracted state. In addition, the deep-learning-based vehicle anti-congestion scheme according to one or more embodiments of the invention can determine the prompting mode for the preceding vehicle based on recognition of traffic sign information in the environment in front of the vehicle, making the anti-congestion prompting scheme more user-friendly and improving the user experience.
Drawings
The foregoing and/or other aspects and advantages of the present invention will become more apparent and more readily appreciated from the following description of the various aspects taken in conjunction with the accompanying drawings in which like or similar elements are designated with the same reference numerals. In the drawings:
FIG. 1 is a schematic flow diagram of a deep learning based vehicle anti-congestion method 10 in accordance with one or more embodiments of the invention;
FIG. 2 is a schematic flow diagram of a deep learning based vehicle anti-congestion method 20 in accordance with one or more embodiments of the invention; and
FIG. 3 is a schematic block diagram of an onboard controller in accordance with one or more embodiments of the invention.
Detailed Description
The following description of the specific embodiments is merely exemplary in nature and is in no way intended to limit the disclosed technology or the application and uses of the disclosed technology. Furthermore, there is no intention to be bound by any expressed or implied theory presented in the preceding technical field, background, or the following detailed description.
In the following detailed description of embodiments, numerous specific details are set forth in order to provide a more thorough understanding of the disclosed technology. It will be apparent, however, to one skilled in the art that the disclosed techniques may be practiced without these specific details. In other instances, well-known features have not been described in detail so as not to unnecessarily complicate the description.
Terms such as "comprising" and "including" mean that, in addition to the elements and steps directly and explicitly recited in the description, the inventive solution does not exclude the presence of other elements and steps not directly or explicitly recited. Terms such as "first" and "second" do not denote an order of units in time, space, size, etc., but are merely used to distinguish one unit from another.
Hereinafter, various exemplary embodiments according to the present invention will be described in detail with reference to the accompanying drawings.
FIG. 1 is a schematic flow diagram of a deep learning based vehicle anti-congestion method 10 in accordance with one or more embodiments of the invention.
As shown in fig. 1, in step S110, a vehicle front image acquired at a first frequency is received from an in-vehicle stereo camera. The in-vehicle stereo camera may be, for example, a front-view binocular camera, a multi-view camera, or a surround-view camera mounted and fixed on the exterior of the vehicle. It should be understood that in the embodiments of the present invention, cameras, lenses, video cameras, etc. denote devices that can acquire images or video within a coverage area; these terms are similar in meaning and interchangeable, and the present invention is not limited in this regard. During image acquisition, images in front of the vehicle may be acquired at a fixed frequency (e.g., 25 frames per second).
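By way of illustration only, the fixed-frequency acquisition loop might be sketched as follows in Python with OpenCV; the device index, the 25 fps figure, and the generator structure are assumptions for illustration, not details fixed by the embodiment:

    import time
    import cv2

    def capture_front_images(device_index=0, fps=25):
        # Yield front-view frames at roughly `fps` frames per second.
        cap = cv2.VideoCapture(device_index)  # assumed camera device
        interval = 1.0 / fps
        try:
            while True:
                start = time.monotonic()
                ok, frame = cap.read()
                if ok:
                    yield frame  # hand the frame to the recognition pipeline
                # sleep off the remainder of the frame period
                time.sleep(max(0.0, interval - (time.monotonic() - start)))
        finally:
            cap.release()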
In step S120, the vehicle front image is input into a deep learning algorithm model, and the distance between the host vehicle and the preceding vehicle, traffic sign information, and traffic light information are identified using the model. Illustratively, after the vehicle front image acquired by the vehicle-mounted stereo camera is obtained, road condition information in front of the vehicle is extracted and processed using deep learning and image processing technology. For example, the deep learning algorithm model may be configured as a multi-channel convolutional neural network (CNN) (e.g., a two-channel CNN) that identifies the presence, location, and size of each object of interest in the vehicle front image, thereby obtaining the vehicle distance information, traffic sign information, and traffic light information. The model may be trained in advance on sets of training data comprising sample images and corresponding annotation information indicating whether objects of interest such as vehicles, lane lines, traffic lights (e.g., red, yellow, and green), and traffic signs (e.g., whistle prohibition signs) are present in the sample images.
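As a rough sketch of this recognition step, an off-the-shelf detector can stand in for the patent's (unspecified) multi-channel CNN; the pretrained COCO model, score threshold, and function name below are assumptions for illustration:

    import torch
    import torchvision

    # Stand-in detector; the embodiment's own model would be trained on
    # annotated samples of vehicles, lane lines, traffic lights and signs.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    def detect_objects(image_tensor, score_threshold=0.5):
        # image_tensor: float tensor of shape (3, H, W), values in [0, 1]
        with torch.no_grad():
            output = model([image_tensor])[0]
        keep = output["scores"] >= score_threshold
        return {k: v[keep] for k, v in output.items()}  # boxes, labels, scores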
Optionally, step S120 includes: generating, using the deep learning algorithm model, first judgment information indicating whether a preceding vehicle exists in the lane of the host vehicle. In one example, identification and localization of a preceding vehicle may be achieved by first extracting vehicle features in the vehicle front image using a deep convolutional neural network. If a preceding vehicle is identified, lane information is acquired to further judge whether the identified vehicle is in the lane of the host vehicle. The lane information can be provided by navigation information acquired from a high-precision map; the lane in which the vehicle is currently located can also be identified from the camera image by an image recognition algorithm; and the lane information provided by the two sources can be fused to obtain accurate lane line positions. Specifically, the high-precision map provides the system with accurate position information of markers in the driving environment; for example, the semantics and images of lane lines, road signs, traffic lights, traffic signs, and the like can be acquired directly from the high-precision map. The in-vehicle image processing module may extract markers (e.g., lane lines, traffic signs, traffic lights) from the vehicle front image based on image processing functions including image preprocessing, image segmentation, region-of-interest identification, feature point extraction, and feature classification. In one example, lane line information provided by the high-precision map may be matched and fused with lane line information provided by the in-vehicle image processing module, thereby obtaining accurate lane line positions. Further, by comparing the detected position of the preceding vehicle with the lane line positions, it is judged whether the preceding vehicle and the host vehicle are located in the same lane.
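A minimal sketch of that same-lane check, assuming the fused lane lines are available as polynomials in image coordinates (the polynomial representation and names are illustrative, not taken from the embodiment):

    def in_ego_lane(vehicle_box, left_lane_poly, right_lane_poly):
        # vehicle_box: (x1, y1, x2, y2) in pixels; each lane poly maps row y -> column x.
        x1, y1, x2, y2 = vehicle_box
        cx, cy = (x1 + x2) / 2.0, y2  # bottom-center of the detected vehicle
        x_left = sum(c * cy**i for i, c in enumerate(left_lane_poly))
        x_right = sum(c * cy**i for i, c in enumerate(right_lane_poly))
        return x_left < cx < x_right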
Optionally, step S120 includes: if the first judgment information indicates that a preceding vehicle exists in the lane of the host vehicle, further determining the distance between the host vehicle and the preceding vehicle based on parameters of the vehicle-mounted stereo camera. Illustratively, the camera parameters include the intrinsic and extrinsic parameters determined during calibration of the vehicle-mounted stereo camera, including focal length, camera height, distortion parameters, and the like. It should be noted that the distance may be obtained by any binocular/multi-camera ranging method known in the art, including estimating the distance from two independent images of the same scene acquired from two adjacent viewpoints (binocular cameras); the present invention is not limited in this regard.
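For reference, the textbook pinhole-stereo relation behind such binocular ranging is Z = f * B / d, with focal length f in pixels, baseline B between the two cameras, and disparity d of the matched point; this is the generic formula, not the embodiment's specific ranging algorithm:

    def stereo_distance(focal_px, baseline_m, disparity_px):
        # Depth of a matched point from its disparity between the two views.
        if disparity_px <= 0:
            raise ValueError("disparity must be positive for a finite distance")
        return focal_px * baseline_m / disparity_px

    # e.g. focal_px=1200, baseline_m=0.12, disparity_px=48 gives 3.0 m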
Optionally, step S120 includes: generating, using the deep learning algorithm model, second judgment information indicating whether a red or yellow light exists in front of the vehicle. In one example, a deep convolutional neural network may be used to extract traffic light features in the vehicle front image so as to identify the state of the traffic light ahead (e.g., red, yellow, green).
Optionally, step S120 includes: if the second judgment information indicates that no red or yellow light exists, further generating, using the deep learning algorithm model, third judgment information indicating whether a whistle prohibition sign exists in front of the vehicle. It will be appreciated that if no red or yellow light is found in the vehicle front image (e.g., the traffic light is green or there is no traffic light), this indicates that the preceding vehicle in the lane of the host vehicle is free to proceed. In that case, the deep convolutional neural network can further extract traffic sign features in the vehicle front image so as to identify traffic signs (e.g., whistle prohibition signs).
In step S130, it is determined whether the driver of the preceding vehicle is in a distracted state based on the vehicle distance information, the traffic light information, and the host vehicle speed. For example, based on the real-time vehicle speed acquired from a vehicle speed sensor and the traffic light information, it may first be determined whether the host vehicle is in a passive stagnation state, that is, whether the host vehicle speed is zero because the way ahead is blocked rather than because of a traffic light. If so, it is further determined based on the vehicle distance information whether the preceding vehicle poses a congestion threat to the host vehicle. If it does, the system continues to judge, after a preset time, whether the host vehicle is in a congested state and whether that congested state is caused by distraction of the driver of the preceding vehicle.
Optionally, step S130 includes: acquiring the real-time host vehicle speed from a vehicle speed sensor; and, if the host vehicle speed is zero at a first moment, the vehicle distance information indicates that the distance between the preceding vehicle and the host vehicle is within a first range, and the traffic light information and/or navigation information obtained from the high-precision map indicates that no red or yellow light exists in front of the vehicle, judging after a preset time whether the driver of the preceding vehicle is in a distracted state. In one example, if at time T1 the host vehicle speed is 0 and the traffic light information and/or navigation information obtained from the high-precision map indicates that no red or yellow light exists in front of the vehicle, the host vehicle is determined to be in a passive stagnation state. At this time, if the distance L1 between the host vehicle and the preceding vehicle is less than or equal to 3 meters, it is determined that the preceding vehicle poses a congestion threat to the host vehicle.
Optionally, step S130 includes: if the host vehicle speed is still zero at a second moment and the change in vehicle distance between the second moment and the first moment is less than or equal to a first distance, judging that the driver of the preceding vehicle is in a distracted state. In one example, if at time T2 (e.g., 2 seconds after T1) the host vehicle speed is still 0, and the vehicle distance variation L2 - L1 between T2 and T1 is less than or equal to 2.5 meters, it is determined that the host vehicle is already in a congested state and that the congested state is caused by distraction of the driver of the preceding vehicle.
Alternatively, step S130 includes: if the host vehicle speed is less than or equal to a first speed at the second moment and the distance between the host vehicle and the preceding vehicle is less than or equal to a second distance, judging that the driver of the preceding vehicle is in a distracted state. In one example, if at T2 the host vehicle speed is less than or equal to 1.2 m/s and the distance L2 between the host vehicle and the preceding vehicle at T2 is less than or equal to 1.5 meters, it is determined that the host vehicle is already in a congested state and that the congested state is caused by distraction of the driver of the preceding vehicle.
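Putting the two variants together, the step-C decision can be sketched with the example thresholds above (3 m threat gap at T1, 2.5 m maximum creep, 1.2 m/s, 1.5 m); the dataclass and function names are illustrative:

    from dataclasses import dataclass

    @dataclass
    class Snapshot:
        speed_mps: float              # host vehicle speed
        gap_m: float                  # distance to the preceding vehicle
        light_is_red_or_yellow: bool  # from traffic light info and/or the map

    def lead_driver_distracted(t1: Snapshot, t2: Snapshot) -> bool:
        # Passive stagnation plus congestion threat at T1 (L1 <= 3 m).
        if t1.speed_mps != 0 or t1.light_is_red_or_yellow or t1.gap_m > 3.0:
            return False
        # Variant 1: still stopped at T2 and the gap grew by at most 2.5 m.
        if t2.speed_mps == 0 and (t2.gap_m - t1.gap_m) <= 2.5:
            return True
        # Variant 2: crawling (<= 1.2 m/s) while the gap stays <= 1.5 m.
        return t2.speed_mps <= 1.2 and t2.gap_m <= 1.5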
In step S140, if the driver of the preceding vehicle is in a distracted state, a prompting mode for the preceding vehicle is determined based on the traffic sign information. Exemplary prompting modes for the preceding vehicle include a horn prompt and a light prompt.
Optionally, if it is determined that the driver of the preceding vehicle is in a distracted state and the traffic sign information and/or navigation information acquired from the high-precision map indicates that no whistle prohibition sign exists in front of the vehicle, a first prompt signal is generated to trigger the host vehicle's horn to remind the preceding vehicle; and if the driver of the preceding vehicle is in a distracted state and the traffic sign information and/or navigation information acquired from the high-precision map indicates that a whistle prohibition sign exists in front of the vehicle, a second prompt signal is generated to trigger the headlights to remind the preceding vehicle.
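The resulting choice of prompting mode reduces to a single branch on the whistle prohibition sign; a minimal sketch with illustrative signal names:

    def choose_prompt(no_horn_sign_ahead: bool) -> str:
        # First prompt signal (horn) where whistling is allowed,
        # second prompt signal (headlight flash) where it is prohibited.
        return "FLASH_HEADLIGHTS" if no_horn_sign_ahead else "SOUND_HORN"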
Optionally, the method 10 may further include step S150: judging whether the host vehicle is in a non-accident stagnation state based on the vehicle distance information and the host vehicle speed, and, if so, further judging whether the non-accident stagnation state is caused by distraction of the driver of the host vehicle.
Optionally, in step S150, if the host vehicle speed is continuously zero during the period from the first moment to the second moment, and the change in vehicle distance between the second moment and the first moment is greater than or equal to a third distance, it is further determined whether the driver of the host vehicle is in a distracted state. In one example, if the host vehicle speed remains 0 from T1 to T2, and the vehicle distance variation L2 - L1 between T2 and T1 (e.g., T2 - T1 = 2 s) is greater than or equal to 5 meters, the host vehicle is determined to be in a non-accident stagnation state.
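A minimal sketch of this non-accident stagnation test, using the example values (speed continuously zero, gap growth of at least 5 m over T2 - T1 = 2 s); the names are illustrative:

    def non_accident_stagnation(speed_samples_mps, gap_t1_m, gap_t2_m):
        # True if the host car never moved between T1 and T2 while the
        # vehicle ahead pulled away by 5 m or more.
        stayed_still = all(v == 0 for v in speed_samples_mps)
        return stayed_still and (gap_t2_m - gap_t1_m) >= 5.0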
Optionally, judging whether the driver of the host vehicle is in a distracted state includes: inputting an in-vehicle image acquired by an in-vehicle camera into a deep learning algorithm model and using the model to identify whether the driver's line of sight is on the road ahead; if it is not, the driver of the host vehicle is judged to be in a distracted state. For example, in-vehicle images acquired by a driver monitoring system (DMS) camera or an in-cabin occupant monitoring system (OMS) camera may be input into the deep learning algorithm model, and driver gaze information may be extracted and processed using deep learning and image processing technology. In one example, if the driver's line of sight is never on the road ahead during a period of time (e.g., within 3 s), the driver of the host vehicle is judged to be in a distracted state.
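The sliding-window gaze test might be sketched as follows, assuming the per-frame on-road/off-road result comes from the DMS/OMS image model; the deque-based window and the 25 fps figure are assumptions for illustration:

    from collections import deque

    class GazeMonitor:
        def __init__(self, fps=25, window_s=3.0):
            self.samples = deque(maxlen=int(fps * window_s))

        def update(self, gaze_on_road: bool) -> bool:
            # Feed one per-frame result; report distraction once a full
            # window contains no on-road gaze at all.
            self.samples.append(gaze_on_road)
            window_full = len(self.samples) == self.samples.maxlen
            return window_full and not any(self.samples)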
Optionally, if it is determined that the driver of the host vehicle is in a distracted state, a third prompt signal is generated to trigger the in-vehicle prompt device to remind the driver. The in-vehicle prompt device may be, for example, one or more of an in-vehicle speaker, a steering wheel vibration device, and a driver's seat vibration device.
Optionally, if it is determined that the driver of the host vehicle is not in a distracted state (e.g., the driver's line of sight is on the road ahead), the voice prompt, steering wheel vibration, and driver's seat vibration are weakened or not issued at all, making the user experience more considerate.
It should be noted that the numerical ranges of the preset values described herein (for example, the first distance, the second distance, the first speed, the preset time, etc.) may be set according to actual requirements and are not limited to the values shown in this embodiment.
According to some embodiments, the method 10 combines a vehicle-mounted stereo camera (for example, a binocular or multi-view camera) with a deep learning algorithm and uses visual sensing and deep learning technology to recognize the distracted state of the driver of the preceding vehicle. Compared with prior art schemes employing radar and lidar, this saves hardware cost, fully exploits the advantages of visual sensing technology, such as wide environmental coverage, rich perception data, and high resolution, and improves the recognition accuracy of the distracted state. In addition, the method 10 can determine the prompting mode for the preceding vehicle based on recognition of traffic sign information (such as whistle prohibition signs) in the environment ahead, making the anti-congestion prompting scheme more user-friendly and improving the user experience.
With continued reference now to FIG. 2, FIG. 2 is a schematic flow diagram of a deep learning based vehicle anti-congestion method 20 in accordance with one or more embodiments of the invention. It should be noted that the steps of the method 20 may be performed in the specific manner described above for the method 10, and the relevant content is incorporated herein by reference; for brevity, it is not repeated here.
In step S210, the host vehicle speed is continuously monitored; if the speed is zero at a first moment, the flow continues to step S220.
In step S220, it is recognized whether there is a preceding vehicle in the lane of the host vehicle. If a preceding vehicle is identified, the flow continues to step S230; otherwise, the flow ends.
In step S230, it is recognized whether there is a red or yellow light in front of the vehicle. If not, the flow continues to step S240; otherwise, the flow ends.
In step S240, it is determined whether the distance between the host vehicle and the preceding vehicle at the first moment is within a first range (e.g., whether the distance L1 is less than or equal to 5 meters). If so, the flow continues to step S250; otherwise, the flow ends.
In step S250, it is determined whether the host vehicle speed at a second moment is less than or equal to a first speed (e.g., 1.2 m/s). If so, the flow continues to step S260; otherwise, the flow ends.
In step S260, it is determined whether the change in vehicle distance between the second moment and the first moment is less than or equal to a first distance, and whether the distance between the host vehicle and the preceding vehicle at the second moment is less than or equal to a second distance. If either condition holds, the flow continues to step S270; otherwise, the flow ends.
In step S270, it is recognized whether a whistle prohibition sign is present in front of the vehicle. If so, the flow continues to step S280; if not, to step S290.
In step S280, a second prompt signal is generated to trigger the headlights to remind the preceding vehicle. In step S290, a first prompt signal is generated to trigger the host vehicle's horn to remind the preceding vehicle.
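For orientation, the whole S210-S290 flow can be condensed into a single pass; the thresholds are the examples given above (5 m first range, 1.2 m/s first speed, 2.5 m first distance, 1.5 m second distance) and all names are illustrative:

    def anti_congestion_tick(v1, v2, gap1, gap2, lead_in_ego_lane,
                             red_or_yellow_ahead, no_horn_sign_ahead):
        # Returns "SOUND_HORN", "FLASH_HEADLIGHTS", or None for one pass.
        if v1 != 0:                      # S210: host stopped at the first moment?
            return None
        if not lead_in_ego_lane:         # S220: preceding vehicle in ego lane?
            return None
        if red_or_yellow_ahead:          # S230: no red/yellow light ahead?
            return None
        if gap1 > 5.0:                   # S240: within the first range?
            return None
        if v2 > 1.2:                     # S250: at most the first speed at T2?
            return None
        # S260: gap change <= first distance OR gap <= second distance.
        if not ((gap2 - gap1) <= 2.5 or gap2 <= 1.5):
            return None
        # S270-S290: horn unless a whistle prohibition sign is ahead.
        return "FLASH_HEADLIGHTS" if no_horn_sign_ahead else "SOUND_HORN"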
The invention may also be implemented as an on-board controller. Fig. 3 shows a schematic block diagram of an on-board controller 30 in accordance with one or more embodiments of the invention. As shown in fig. 3, the on-board controller 30 includes a memory 310, a processor 320, and a computer program 330 stored on the memory 310 and executable on the processor 320; execution of the computer program 330 causes the method 10 shown in fig. 1 or the method 20 shown in fig. 2 to be performed. For clarity of illustration of the principles of the invention, other elements that work in conjunction with the on-board controller 30 are also shown in fig. 3.
As shown in fig. 3, the on-board controller 30 may receive the vehicle front image from the in-vehicle stereo camera 40 and identify the vehicle distance information, traffic sign information, and traffic light information of the host vehicle and the preceding vehicle using the deep learning algorithm model stored in the memory 310. Optionally, the on-board controller 30 may acquire navigation information from the high-precision map 50 for identification of lane lines, traffic lights, and traffic sign information. The detailed determination of the distracted state of the preceding vehicle's driver and of the host vehicle's driver has been described above and is not repeated here.
The on-board controller 30 may also be communicatively coupled to the in-vehicle prompt device 60, the host vehicle horn 70, and the headlamp 80. When the on-board controller 30 determines that the driver of the preceding vehicle is in a distracted state, it may generate a first prompt signal to trigger the host vehicle horn 70 to remind the preceding vehicle, or a second prompt signal to trigger the headlamp 80 to remind the preceding vehicle. When the on-board controller 30 determines that the driver of the host vehicle is in a distracted state, it may generate a third prompt signal to trigger the in-vehicle prompt device 60 to remind the driver. Illustratively, the in-vehicle prompt device 60 includes, but is not limited to, an in-vehicle speaker, a steering wheel vibration device, and a driver's seat vibration device.
In addition, as described above, the present invention can also be implemented as a computer storage medium in which a program for causing a computer to execute the method 10 shown in fig. 1 or the method 20 shown in fig. 2 is stored. Here, as the computer storage medium, various types of computer storage media such as disks (e.g., magnetic disks, optical disks, etc.), cards (e.g., memory cards, optical cards, etc.), semiconductor memories (e.g., ROM, nonvolatile memory, etc.), tapes (e.g., magnetic tape, magnetic cassette, etc.), and the like can be employed.
Where applicable, hardware, software, or a combination of hardware and software may be used to implement the various embodiments provided by the present invention. Moreover, where applicable, the various hardware components and/or software components set forth herein may be combined into composite components comprising software, hardware, and/or both without departing from the scope of the present invention. Where applicable, the various hardware components and/or software components set forth herein can be separated into sub-components comprising software, hardware, or both without departing from the scope of the present invention. Further, where applicable, it is contemplated that software components may be implemented as hardware components, and vice versa.
Software in accordance with the present invention, such as program code and/or data, may be stored on one or more computer storage media. It is also contemplated that the software identified herein may be implemented using one or more general-purpose or special-purpose computers and/or computer systems that are networked and/or otherwise. Where applicable, the order of the various steps described herein may be changed, combined into composite steps, and/or divided into sub-steps to provide features described herein.
The embodiments and examples set forth herein are presented to best explain the embodiments consistent with the invention and its particular application and to thereby enable those skilled in the art to make and use the invention. However, those skilled in the art will recognize that the foregoing description and examples have been presented for the purpose of illustration and example only. The description as set forth is not intended to cover various aspects of the invention or to limit the invention to the precise form disclosed.

Claims (9)

1. A vehicle anti-congestion method based on deep learning, characterized by comprising the following steps:
A. receiving a vehicle front image acquired at a first frequency from a vehicle-mounted stereo camera;
B. inputting the vehicle front image into a deep learning algorithm model, and identifying, using the deep learning algorithm model, the distance between the host vehicle and the preceding vehicle, traffic sign information, and traffic light information;
C. judging whether the driver of the preceding vehicle is in a distracted state based on the vehicle distance information, the traffic light information, and the host vehicle speed; and
D. if the driver of the preceding vehicle is in a distracted state, determining a prompting mode for the preceding vehicle based on the traffic sign information,
wherein step C includes:
C1. acquiring the real-time host vehicle speed from a vehicle speed sensor; and
C2. if the host vehicle speed is zero at a first moment, the vehicle distance information indicates that the distance between the preceding vehicle and the host vehicle is within a first range, and the traffic light information and/or navigation information acquired from a high-precision map indicates that no red or yellow light is present in front of the vehicle, judging after a preset time whether the driver of the preceding vehicle is in a distracted state, wherein judging in step C2 whether the driver of the preceding vehicle is in a distracted state after the preset time comprises: if the host vehicle speed is still zero at a second moment and the change in vehicle distance between the second moment and the first moment is less than or equal to a first distance, judging that the driver of the preceding vehicle is in a distracted state.
2. The method of claim 1, wherein step B includes the following operations based on the vehicle front image:
B1. generating, using the deep learning algorithm model, first judgment information indicating whether a preceding vehicle exists in the lane of the host vehicle; and
B2. if the first judgment information indicates that a preceding vehicle exists, determining the distance between the host vehicle and the preceding vehicle further based on parameters of the vehicle-mounted stereo camera.
3. The method of claim 1, wherein step B includes the following operations based on the vehicle front image:
B3. generating, using the deep learning algorithm model, second judgment information indicating whether a red or yellow light exists in front of the vehicle; and
B4. if the second judgment information indicates that no red or yellow light exists, further generating, using the deep learning algorithm model, third judgment information indicating whether a whistle prohibition sign exists in front of the vehicle.
4. The method of claim 1, wherein judging in step C2 whether the driver of the preceding vehicle is in a distracted state after the preset time further comprises:
if the host vehicle speed is less than or equal to a first speed at the second moment and the distance between the host vehicle and the preceding vehicle is less than or equal to a second distance, judging that the driver of the preceding vehicle is in a distracted state.
5. The method of claim 1, further comprising:
E. if the host vehicle speed remains zero from the first moment to the second moment and the change in vehicle distance between the second moment and the first moment is greater than or equal to a third distance, further judging whether the driver of the host vehicle is in a distracted state.
6. The method of claim 5, wherein
judging whether the driver of the host vehicle is in a distracted state includes: inputting an in-vehicle image acquired by an in-vehicle camera into the deep learning algorithm model, using the deep learning algorithm model to identify whether the driver's line of sight is on the road ahead, and, if it is not, judging that the driver of the host vehicle is in a distracted state; and/or
the method further comprises: if the driver of the host vehicle is in a distracted state, generating a third prompt signal to trigger an in-vehicle prompt device to remind the driver of the host vehicle.
7. The method of claim 1, wherein step D comprises:
D1. if the driver of the preceding vehicle is in a distracted state and the traffic sign information and/or navigation information obtained from the high-precision map indicates that no whistle prohibition sign exists in front of the vehicle, generating a first prompt signal to trigger the host vehicle's horn to remind the preceding vehicle; and
D2. if the driver of the preceding vehicle is in a distracted state and the traffic sign information and/or navigation information obtained from the high-precision map indicates that a whistle prohibition sign exists in front of the vehicle, generating a second prompt signal to trigger the headlights to remind the preceding vehicle.
8. A vehicle-mounted controller, characterized by comprising: a memory; a processor; and a computer program stored on the memory and executable on the processor, the execution of the computer program causing the method according to any one of claims 1-7 to be performed.
9. A computer storage medium, characterized in that it comprises instructions which, when executed, perform the method according to any of claims 1-7.
CN202211388110.0A 2022-11-08 2022-11-08 Vehicle anti-congestion method based on deep learning, controller and storage medium Active CN115447600B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211388110.0A CN115447600B (en) 2022-11-08 2022-11-08 Vehicle anti-congestion method based on deep learning, controller and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211388110.0A CN115447600B (en) 2022-11-08 2022-11-08 Vehicle anti-congestion method based on deep learning, controller and storage medium

Publications (2)

Publication Number Publication Date
CN115447600A 2022-12-09
CN115447600B 2023-05-16

Family

ID=84311327

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211388110.0A Active CN115447600B (en) 2022-11-08 2022-11-08 Vehicle anti-congestion method based on deep learning, controller and storage medium

Country Status (1)

Country Link
CN (1) CN115447600B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116092056B (en) * 2023-03-06 2023-07-07 安徽蔚来智驾科技有限公司 Target recognition method, vehicle control method, device, medium and vehicle

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN202320296U (en) * 2011-11-21 2012-07-11 长安大学 Automatic car following system during traffic jam
JP6915502B2 (en) * 2017-11-09 2021-08-04 トヨタ自動車株式会社 Driver status detector
CN110068302A (en) * 2019-03-07 2019-07-30 中科院微电子研究所昆山分所 A kind of vehicle odometry method based on deep neural network
EP3730332A1 (en) * 2019-04-26 2020-10-28 Zenuity AB Driver distraction determination
CN111976588A (en) * 2019-05-24 2020-11-24 王磊 Driving prompt device and vehicle
CN112644506A (en) * 2021-01-05 2021-04-13 江苏大学 Method for detecting driver driving distraction based on model long-time memory neural network LSTM-NN
CN113119992A (en) * 2021-04-30 2021-07-16 东风汽车集团股份有限公司 Intelligent front vehicle driving-away reminding method and system

Also Published As

Publication number Publication date
CN115447600A (en) 2022-12-09

Similar Documents

Publication Publication Date Title
CN109427199B (en) Augmented reality method and device for driving assistance
US11767024B2 (en) Augmented reality method and apparatus for driving assistance
US10262629B2 (en) Display device
US9070293B2 (en) Device and method for traffic sign recognition
CN110400478A (en) A kind of road condition notification method and device
US10671071B2 (en) Control method and control device of automatic driving vehicle
US10887568B2 (en) Image processing apparatus, and image processing method
EP2575078B1 (en) Front vehicle detecting method and front vehicle detecting apparatus
CN114375467B (en) System and method for detecting an emergency vehicle
US10552695B1 (en) Driver monitoring system and method of operating the same
JP2023010800A (en) Display device
US20200189459A1 (en) Method and system for assessing errant threat detection
US20170043720A1 (en) Camera system for displaying an area exterior to a vehicle
US20210316734A1 (en) Vehicle travel assistance apparatus
US20200198538A1 (en) Apparatus and method for around view monitoring using lidar
JP4951481B2 (en) Road marking recognition device
CN115447600B (en) Vehicle anti-congestion method based on deep learning, controller and storage medium
CN116438583A (en) Available parking space recognition device, available parking space recognition method, and program
CN113183758A (en) Auxiliary driving method and system based on augmented reality
KR20160133257A (en) Avoiding Collision Systemn using Blackbox Rear Camera for vehicle and Method thereof
JP6367531B2 (en) OBE
CN109895694B (en) Lane departure early warning method and device and vehicle
WO2021161614A1 (en) Image transmission system
CN115131749A (en) Image processing apparatus, image processing method, and computer-readable storage medium
CN113386771A (en) Road model generation method and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant