CN117994750A - Driving assistance system, method and storage medium based on visual recognition - Google Patents


Publication number: CN117994750A
Authority: CN (China)
Prior art keywords: vehicle, module, driver, abnormal, computing module
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Application number: CN202410142214.6A
Other languages: Chinese (zh)
Inventors: 赵宏远, 曹连雨, 宫团基, 刘振波
Assignees: Hyperai Cloud Technology Beijing Co ltd; Chengdu Southwest Information Control Research Institute Co ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Application filed by Hyperai Cloud Technology Beijing Co ltd and Chengdu Southwest Information Control Research Institute Co ltd; priority to CN202410142214.6A

Classification: Traffic Control Systems (AREA)

Abstract

The invention relates to the field of automatic driving, and particularly discloses a driving assistance system, method and storage medium based on visual recognition. The system comprises a cloud platform and vehicle-mounted equipment, and the vehicle-mounted equipment comprises a computing module, an external image acquisition module and an interaction module. The cloud platform is used for deploying an AI visual model to the computing module. The vehicle-mounted equipment is used for analyzing external images through the AI visual model, marking abnormal-behavior vehicles according to their behavior characteristics, and uploading information on these vehicles to the cloud platform. The cloud platform is also used for updating a cloud abnormal-vehicle database according to the received information, and for updating the computing module's local abnormal-vehicle database based on the cloud database. The computing module is also used for judging whether any currently recognized vehicle is recorded in the local abnormal-vehicle database and, if so, generating prompt information for that vehicle. With this technical scheme, early warning can be given in advance and safety is improved.

Description

Driving assistance system, method and storage medium based on visual recognition
Technical Field
The present invention relates to the field of automatic driving, and in particular, to a driving assistance system, method and storage medium based on visual recognition.
Background
In modern society, people's quality of life is continuously improving, and driving has become an essential skill whether or not one owns a vehicle. Driving brings people both enjoyment and great convenience in daily life. With the development of the automobile industry, vehicles are becoming increasingly intelligent, and various vehicle control and driving assistance systems have greatly improved the driving experience.
However, existing driving assistance systems mainly analyze and judge the real-time road conditions around the vehicle and then give the driver related prompts. When a sudden accident occurs, the reaction time left for the driver is short, which is unfavorable to the driver's decision-making.
Therefore, a driving assistance system, method and storage medium based on visual recognition are needed that can give early warning in advance and improve safety.
Disclosure of Invention
One of the purposes of the invention is to provide a driving assistance system based on visual recognition that can give early warning in advance and improve safety.
To solve the above technical problems, the application provides the following technical scheme:
A driving assistance system based on visual recognition comprises a cloud platform and vehicle-mounted equipment, wherein the vehicle-mounted equipment comprises a computing module, an external image acquisition module and an interaction module;
the cloud platform is used for deploying an AI visual model to the computing module;
the vehicle-mounted equipment is used for acquiring, from the external image acquisition module, external images collected while the vehicle is running; analyzing the external images through the AI visual model, recognizing vehicles and their behavior characteristics, and marking abnormal-behavior vehicles according to those characteristics; and uploading information on the abnormal-behavior vehicles to the cloud platform;
the cloud platform is also used for updating a cloud abnormal-vehicle database according to the received information on abnormal-behavior vehicles, and for updating the computing module's local abnormal-vehicle database based on the cloud database at preset intervals;
the computing module is also used for judging whether any currently recognized vehicle is recorded in the local abnormal-vehicle database and, if so, generating prompt information for that vehicle;
the interaction module is used for displaying the prompt information.
The basic principle and beneficial effects of this scheme are as follows:
In this scheme, the vehicle-mounted equipment obtains and deploys the AI visual model from the cloud platform and receives updates to the model. The external image acquisition module collects road images, and the AI visual model analyzes the behavior characteristics of surrounding vehicles in those images, such as each vehicle's type, position and driving direction, so as to recognize illegal or abnormal behavior by other vehicles on the road. Such vehicles are marked as abnormal-behavior vehicles and their information is uploaded to the cloud platform, and the abnormal-vehicle database is kept up to date through communication between the cloud platform and the vehicle-mounted equipment.
With a large amount of data, the abnormal-vehicle database can be continuously improved. When a vehicle recorded in the local abnormal-vehicle database is currently present, prompt information for that vehicle is generated and displayed. Compared with other vehicles, such a vehicle is more likely to behave abnormally, so the prompt helps identify potential traffic safety hazards in time during driving. When the driver is driving manually, the abnormal-vehicle reminder lets the driver pay attention to the abnormal vehicle in advance, shortening the reaction time; in automatic driving mode, an avoidance route can be planned in advance according to the reminder, shortening the processing time.
In summary, by constructing an abnormal-vehicle database that stores abnormal behavior that has already occurred, possible future behavior can be predicted, early warning can be given in advance, driving safety is improved, and the probability of accidents is reduced.
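As an illustration of the database flow described above, the sketch below shows how a local abnormal-vehicle database could be synchronized from the cloud copy and consulted for the currently recognized vehicles. All function names, the set-based database layout, and the plate numbers are hypothetical, not taken from the patent:

```python
# Hypothetical sketch (all names and plate numbers are illustrative, not from
# the patent): synchronize the local abnormal-vehicle database from the cloud
# copy, then look up the currently recognized vehicles in it.

def sync_local_db(local_db: set, cloud_db: set) -> set:
    """Mirror the cloud abnormal-vehicle database into the local copy."""
    local_db |= cloud_db
    return local_db

def prompts_for(recognized_plates, local_db):
    """Return a prompt for each recognized vehicle found in the local DB."""
    return [f"Warning: abnormal-behavior vehicle {p} nearby"
            for p in recognized_plates if p in local_db]

local = sync_local_db(set(), {"ABC123", "XYZ789"})   # preset-interval sync
alerts = prompts_for(["DEF456", "ABC123"], local)    # check current vehicles
```

A real implementation would key the database by license plate with timestamps and behavior records; the sketch only shows the membership test that triggers a prompt.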
Further, there are a plurality of interaction modules, arranged at different positions of the vehicle;
the vehicle-mounted equipment further comprises an internal image acquisition module; the computing module is also used for acquiring images of the vehicle interior from the internal image acquisition module and recognizing the driver and the driver's gaze direction based on those images;
the computing module is also used for judging whether any currently recognized vehicle is recorded in the local abnormal-vehicle database; if so, it determines the interaction module closest to the driver's gaze direction according to that direction and the pre-stored installation positions of the interaction modules, and pushes the prompt information for the vehicle to that interaction module.
By introducing the internal image acquisition module and the recognition of the driver and his gaze direction, the system can monitor the driver's state more comprehensively. By analyzing the gaze direction, the system can more accurately determine the driver's point of attention, so that when an abnormal vehicle is recognized, the prompt information is sent to the interaction module closest to the driver's gaze, ensuring that the driver obtains key information in a timely and effective manner and further improving driving safety and experience.
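The "closest to the gaze direction" selection could be sketched as an angular comparison against the pre-stored module positions. The bearings, module names, and the field-of-view threshold below are assumptions for illustration; the patent gives no numbers:

```python
# Illustrative sketch: choose the interaction module (display) closest to the
# driver's gaze direction. Module bearings and the field-of-view width are
# assumed values, not taken from the patent.

# Pre-stored installation positions as bearings in degrees (0 = straight ahead).
MODULES = {
    "driver_side": -30.0,
    "center": 0.0,
    "passenger_side": 30.0,
    "rear_left": 150.0,
    "rear_right": -150.0,
}

def nearest_module(gaze_deg: float, fov_deg: float = 60.0):
    """Return the module nearest the gaze bearing, or None when no module
    lies within the driver's assumed observation field of view."""
    def angdiff(a, b):
        # Smallest absolute angular difference, handling wrap-around.
        return abs((a - b + 180.0) % 360.0 - 180.0)
    in_view = {m: angdiff(gaze_deg, b) for m, b in MODULES.items()
               if angdiff(gaze_deg, b) <= fov_deg / 2.0}
    return min(in_view, key=in_view.get) if in_view else None
```

Returning `None` models the case, handled later in the disclosure, where no display lies in the driver's field of view and a fallback channel is needed.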
Further, the vehicle-mounted equipment further comprises an interconnection module for connecting to the driver's smartphone;
the computing module is further used for judging, when no interaction module closest to the driver's gaze direction is recognized, whether the driver's gaze is directed toward the smartphone; if so, the prompt information for the vehicle is pushed to the smartphone through the interconnection module.
The interconnection module allows the vehicle-mounted equipment to connect to the driver's smartphone, so the system can interact with the driver more flexibly. When the system does not recognize an interaction module closest to the driver's gaze direction, it analyzes whether the gaze is directed toward the smartphone and, if so, intelligently decides to send the prompt information there. This provides another channel for receiving prompts and lets the driver obtain important information more conveniently.
Further, the computing module is further used for determining the relative positional relationship between each currently recognized vehicle and the host vehicle, and classifying the vehicles accordingly into first-distance, second-distance and third-distance vehicles;
the computing module is also used for obtaining the driving mode of the host vehicle, the driving modes including a sport mode and a normal mode;
the computing module is also used for determining the corresponding classifications according to the driving mode, and judging whether the vehicles in those classifications are recorded in the local abnormal-vehicle database.
The computing module determines the relative positional relationship between each currently recognized vehicle and the host vehicle and classifies the vehicles accordingly, for example into first-distance, second-distance and third-distance vehicles. Such classification helps the system more accurately evaluate the potential impact of other vehicles on the host vehicle and thus plan driving strategies more effectively.
According to the driving mode, the system adjusts which classifications of recognized vehicles it checks, better adapting to different driving environments and needs. For example, in sport mode the vehicle accelerates and decelerates frequently, so abnormal vehicles at greater distances are shown in advance, helping the driver obtain information and receive early warning sooner, which enhances driving safety.
Further, the computing module is also configured to recognize the driver's concentration state from the vehicle-interior images, the state being either concentrating or not concentrating.
The computing module is also used for determining the corresponding classifications according to the driver's concentration state, and judging whether the vehicles in those classifications are recorded in the local abnormal-vehicle database.
Recognizing the driver's concentration state from the interior images helps the system more accurately understand the driver's behavior and adjust which classifications of recognized vehicles it checks accordingly. For example, when the driver is not concentrating, abnormal vehicles at greater distances are shown in advance, so the system adapts more accurately to the driver's actual condition, provides more intelligent driving assistance, and further enhances driving safety.
Further, the computing module is also used for recognizing the occupants of the host vehicle and their ages from the vehicle-interior images, recording the number of times each occupant has ridden in the vehicle, judging whether that number exceeds a threshold, and if so, marking the occupant as an auxiliary occupant;
the computing module is further used for judging, when no interaction module closest to the driver's gaze direction is recognized and the driver's gaze is not directed toward the smartphone, whether an interaction module exists at the auxiliary occupant's position, and if so, pushing the prompt information for the vehicle to that interaction module.
By recognizing occupants and their ages from the interior images and recording ride counts, the system gains a more complete picture of conditions inside the vehicle and can mark auxiliary occupants.
When the system recognizes neither an interaction module closest to the driver's gaze nor a gaze directed at the smartphone, it checks whether an interaction module exists at the auxiliary occupant's position and, if so, pushes the prompt information there so that the auxiliary occupant can remind the driver.
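The fallback order described above (nearest in-view display, then the driver's smartphone, then a display at an auxiliary occupant's seat) can be sketched as a small routing function. The parameter names, the unlocked-phone condition, and the driver-side default are assumptions drawn loosely from the embodiments, not claim language:

```python
# Illustrative routing of the prompt through the fallback chain described
# above. All names are hypothetical; the final driver-side default follows
# the first embodiment's practice of always showing the prompt on the
# driver's-side screen.

def route_prompt(nearest_display, gaze_on_phone, phone_unlocked,
                 aux_passenger_display):
    if nearest_display is not None:          # a display is in the gaze field
        return nearest_display
    if gaze_on_phone and phone_unlocked:     # phone is watched and usable
        return "smartphone"
    if aux_passenger_display is not None:    # auxiliary occupant's screen
        return aux_passenger_display
    return "driver_side"                     # assumed default target
```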
Another object of the present invention is to provide a driving assistance method based on visual recognition, using the above system, comprising:
S1, the cloud platform deploys the AI visual model to the computing module;
S2, the vehicle-mounted equipment acquires, from the external image acquisition module, external images collected while the vehicle is running; analyzes the external images through the AI visual model, recognizes vehicles and their behavior characteristics, and marks abnormal-behavior vehicles according to those characteristics; and uploads information on the abnormal-behavior vehicles to the cloud platform;
S3, the cloud platform updates the cloud abnormal-vehicle database according to the received information on abnormal-behavior vehicles, and updates the computing module's local abnormal-vehicle database based on the cloud database at preset intervals;
S4, the computing module judges whether any currently recognized vehicle is recorded in the local abnormal-vehicle database and, if so, generates prompt information for that vehicle;
S5, the interaction module displays the prompt information.
In this scheme, by constructing an abnormal-vehicle database that stores abnormal behavior that has already occurred and using it to predict possible future behavior, early warning can be given in advance, driving safety is improved, and the probability of accidents is reduced.
Further, in step S2, the computing module acquires images of the vehicle interior from the internal image acquisition module and recognizes the driver and the driver's gaze direction based on those images;
in step S4, the computing module judges whether any currently recognized vehicle is recorded in the local abnormal-vehicle database; if so, it determines the interaction module closest to the driver's gaze direction according to that direction and the pre-stored installation positions of the interaction modules, and pushes the prompt information for the vehicle to that interaction module.
Further, in step S4, when the computing module does not recognize an interaction module closest to the driver's gaze direction, it judges whether the gaze is directed toward the smartphone; if so, the prompt information for the vehicle is pushed to the smartphone through the interconnection module.
It is a further object of the present invention to provide a storage medium storing a computer program which, when executed by a processor, implements the steps of the above method.
Drawings
Fig. 1 is a logic block diagram of a first embodiment of a visual recognition-based driving assistance system.
Detailed Description
The following is a further detailed description of the embodiments:
Example 1
As shown in fig. 1, the driving assistance system based on visual recognition of the present embodiment includes a cloud platform and a vehicle-mounted device, where the vehicle-mounted device includes a computing module, an external image acquisition module, an internal image acquisition module, an interconnection module, and a plurality of interaction modules;
The cloud platform is used for deploying the AI visual model to the computing module. In this embodiment, the AI visual model employs a pre-trained YOLO model.
The vehicle-mounted equipment is used for acquiring, from the external image acquisition module, external images collected while the vehicle is running; analyzing the external images through the AI visual model, recognizing vehicles and their behavior characteristics, and marking abnormal-behavior vehicles according to those characteristics; and uploading information on the abnormal-behavior vehicles to the cloud platform. In this embodiment, when a vehicle is recognized, its license plate is recorded. Abnormal behavior includes violations of traffic regulations, such as changing lanes across a solid line, changing lanes without signaling, or failing to drive in the designated lane, as well as behavior that easily interferes with normally driving vehicles; the criteria for abnormal behavior are preset by the developers. In other embodiments, the vehicle's behavior characteristics and distance may also be comprehensively analyzed in combination with data from a vehicle-mounted lidar.
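The marking step could be sketched as preset rules applied to behavior features already extracted by the detection model (the YOLO-style detection itself is omitted). The feature names and rule definitions below are assumptions for illustration, not the patent's exact criteria:

```python
# Hedged sketch of the abnormal-behavior marking step. Feature names and
# rules are illustrative assumptions; in the embodiment the criteria are
# preset by the developers.

ABNORMAL_RULES = {
    "solid_line_lane_change": lambda f: f.get("crossed_solid_line", False),
    "no_turn_signal": lambda f: f.get("lane_change", False)
                                and not f.get("signal_on", True),
}

def mark_abnormal(plate: str, features: dict):
    """Return (plate, matched rule names) if any preset rule fires, else None."""
    hits = [name for name, rule in ABNORMAL_RULES.items() if rule(features)]
    return (plate, hits) if hits else None
```

A marked result would then be uploaded to the cloud platform to update the cloud abnormal-vehicle database.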
The cloud platform is also used for updating the cloud abnormal-vehicle database according to the received information on abnormal-behavior vehicles, and for updating the computing module's local abnormal-vehicle database based on the cloud database at preset intervals; in this example the update is daily.
The computing module is also used for judging whether any currently recognized vehicle is recorded in the local abnormal-vehicle database and, if so, generating prompt information for that vehicle.
The interaction modules are mounted at different locations in the vehicle. In this embodiment, the interaction modules are five display screens, installed respectively on the driver's side, the front passenger's side, the center of the instrument panel between them, the rear left, and the rear right.
The interaction module is used for displaying the prompt information. In this embodiment, recognized vehicles are displayed with different color marks as the prompt: for example, normal vehicles are displayed in white while the abnormal-behavior vehicle is displayed in red, making it easy for the driver to recognize quickly.
The vehicle-mounted equipment further comprises an internal image acquisition module; the computing module is also used for acquiring images of the vehicle interior from it and recognizing the driver and the driver's gaze direction based on those images. In this embodiment, a pose estimation model is employed, which provides information on body posture and gaze direction by detecting the position (bounding box) and key points (e.g., head, eyes, mouth) of the human body.
The computing module is also used for judging whether any currently recognized vehicle is recorded in the local abnormal-vehicle database; if so, it determines the interaction module closest to the driver's gaze direction according to that direction and the pre-stored installation positions of the interaction modules, and pushes the prompt information to that module. Specifically, the driver's current field of view is derived from the gaze direction and a preset field-of-view range; the system judges whether any interaction module lies within that field of view and, if so, takes the module closest to its center as the one closest to the driver's gaze. In this embodiment, if that module is not the driver's-side display, the prompt information is also shown on the driver's-side display.
The interconnection module is used for connecting to the driver's smartphone. The computing module is further used for judging, when no interaction module closest to the driver's gaze direction is recognized (that is, no display screen lies within the field of view), whether the driver's gaze is directed toward the smartphone (that is, whether the smartphone lies within the field of view); if so, and the smartphone is currently unlocked with its screen on, the prompt information is pushed to the smartphone through the interconnection module. The prompt is thus kept within the driver's field of view as far as possible, so that the driver can learn of possible risks in advance.
Based on the above system, the present embodiment further provides a driving assistance method based on visual recognition, including the following:
S1, the cloud platform deploys the AI visual model to the computing module;
S2, the vehicle-mounted equipment acquires an external image acquired in the running process of the vehicle from an external image acquisition module; analyzing the external image through an AI visual model, identifying the vehicle and behavior characteristics of the vehicle, and marking the abnormal behavior vehicle according to the behavior characteristics of the vehicle; uploading information of the abnormal behavior vehicle to a cloud platform; the computing module acquires an internal image of the vehicle from the internal image acquisition module, and identifies a driver and the sight orientation of the driver based on the internal image of the vehicle;
S3, the cloud platform updates a cloud abnormal vehicle database according to the received information of the abnormal behavior vehicle; updating a local abnormal vehicle database of the computing module based on the cloud abnormal vehicle database at intervals of preset time;
S4, the computing module judges whether any currently recognized vehicle is recorded in the local abnormal-vehicle database and, if so, generates prompt information for that vehicle; it then determines the interaction module closest to the driver's gaze direction according to that direction and the pre-stored installation positions of the interaction modules, and pushes the prompt information to that module.
When the computing module does not recognize an interaction module closest to the driver's gaze direction, it judges whether the gaze is directed toward the smartphone; if so, the prompt information is pushed to the smartphone through the interconnection module.
S5, the interaction module displays prompt information of the vehicle.
The above driving assistance method based on visual recognition may be stored in a storage medium if implemented in the form of a software functional unit and sold or used as a separate product. On this understanding, the present invention may implement all or part of the flow of the above method embodiment by instructing the relevant hardware through a computer program, which may be stored in a storage medium; when executed by a processor, the program implements the steps of the method embodiment. The computer program comprises computer program code, which may be in source-code, object-code, executable-file or some intermediate form. The readable medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random-access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth.
Example two
This embodiment differs from the first in that its computing module is further configured to determine the relative positional relationship between each currently recognized vehicle and the host vehicle, and to classify the vehicles accordingly into first-distance, second-distance and third-distance vehicles. In this embodiment, a first-distance vehicle is a vehicle adjacent to the host vehicle (on its left, right or front: for the left and right sides, the vehicle's head is ahead of the host vehicle's tail or its tail is not ahead of the host vehicle's head, judged by the specific positional relationship; a vehicle in front must be in the same lane) at a distance of less than 20 meters. A second-distance vehicle is a vehicle separated from the host vehicle by one other vehicle at a distance of less than 50 meters, or a vehicle adjacent to the host vehicle at a distance of 20 meters or more but less than 50 meters. A third-distance vehicle is any vehicle not covered by the above two cases.
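A minimal sketch of this three-way classification, with the adjacency and one-vehicle-apart geometric tests reduced to booleans for illustration (the real checks would use lane and bounding-box geometry):

```python
# Sketch of the three distance classes in this embodiment: adjacent and
# under 20 m -> first; one vehicle apart and under 50 m, or adjacent and
# 20-50 m -> second; everything else -> third. Adjacency tests simplified
# to boolean inputs.

def classify_distance(adjacent: bool, one_vehicle_apart: bool,
                      dist_m: float) -> int:
    if adjacent and dist_m < 20:
        return 1
    if (one_vehicle_apart and dist_m < 50) or (adjacent and 20 <= dist_m < 50):
        return 2
    return 3
```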
The computing module is also used for obtaining the driving mode of the host vehicle, the driving modes including a sport mode and a normal mode.
The computing module is also used for determining the corresponding classifications according to the driving mode, and judging whether the vehicles in those classifications are recorded in the local abnormal-vehicle database. In this embodiment, the normal mode corresponds to first-distance vehicles, and the sport mode corresponds to first-distance and second-distance vehicles. In sport mode the driver tends to drive aggressively, so prompting at a greater distance helps the driver decide in advance; in normal mode, reducing the prompting distance reduces the amount of information the driver receives, making driving more relaxed and the experience better.
The computing module is also configured to recognize the driver's concentration state from the vehicle-interior images, the state being either concentrating or not concentrating. In this embodiment, the concentrating state means that within a set time (for example, 1 minute), the number of times the gaze leaves the road ahead (a stay of more than 1 second counts as one occurrence; checks of the left and right rearview mirrors are not counted) is below a set value (for example, 3 times).
The computing module is also used for determining the corresponding classifications according to the driver's concentration state, and judging whether the vehicles in those classifications are recorded in the local abnormal-vehicle database. In this embodiment, the concentrating state corresponds to first-distance vehicles, and the non-concentrating state corresponds to first-distance and second-distance vehicles. When the driver is not concentrating, the reaction time is longer, so prompting at a greater distance helps the driver decide in advance and avoids the risks brought by that longer reaction time.
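The example values above (a 1-minute window, stays over 1 second, a threshold of 3, mirror checks excluded) can be sketched as follows; the event representation is an assumption made for illustration:

```python
# Sketch of the concentration test and the resulting distance classes to
# check, using the example values from this embodiment. Each gaze-off event
# is represented as (duration_s, is_mirror_check), an assumed encoding.

def is_concentrated(gaze_off_events, max_events: int = 3) -> bool:
    """True when gaze-off occurrences within the window stay below the
    set value; stays over 1 s count, mirror checks are excluded."""
    count = sum(1 for duration_s, is_mirror in gaze_off_events
                if duration_s > 1.0 and not is_mirror)
    return count < max_events

def classes_to_check(concentrated: bool) -> set:
    # Concentrating -> first-distance only; otherwise first and second.
    return {1} if concentrated else {1, 2}
```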
The computing module is also used for recognizing the occupants of the host vehicle and their ages from the vehicle-interior images, recording the number of times each occupant has ridden in the vehicle, judging whether that number exceeds a threshold, and if so, marking the occupant as an auxiliary occupant. In this embodiment, an occupant's age is estimated by an age-estimation algorithm; the preset age range is 20 to 50 years old, and the threshold is 2 rides.
The computing module is further used for judging, when no interaction module closest to the driver's gaze direction is recognized and the driver's gaze is not directed toward the smartphone, whether an interaction module exists at the auxiliary occupant's position, and if so, pushing the prompt information for the vehicle to that interaction module.
Occupants who are familiar with the driver and of a suitable age (with higher cognitive ability) can thus be screened out as auxiliary occupants, so that when the driver cannot see a screen, the auxiliary occupant reminds the driver, further improving safety.
The method of the present embodiment uses the above-described system.
Example III
This embodiment differs from the second in that its computing module is further configured to recognize, from the vehicle-interior images, the degree of attention an auxiliary occupant pays to driving, classified as attentive or not attentive. In this embodiment, the degree of attention is recognized from the proportion of time per unit time the auxiliary occupant's gaze is directed toward the road ahead, whether the occupant watches the rearview mirror or looks back when the host vehicle changes lanes, and the occupant's use of in-vehicle functions, which is classified as familiar or unfamiliar. For example, if the auxiliary occupant's forward-gaze proportion per unit time exceeds 70%, the occupant is recognized as attentive; if the proportion exceeds 50% and the occupant is familiar with in-vehicle functions, or the proportion exceeds 50% and the occupant checked the rearview mirror at least once when the host vehicle changed lanes, the occupant is also recognized as attentive; otherwise the occupant is recognized as not attentive. If, after entering the vehicle, the auxiliary occupant successfully adjusts 2 or more settings (for example, adjusting the air conditioner, playing music, or playing video on a display screen; window and seat adjustment are excluded), the occupant's use of in-vehicle functions is judged familiar, otherwise unfamiliar. Familiarity with in-vehicle functions reflects, to a certain extent, the occupant's understanding of the vehicle, and occupants interested in the vehicle pay more attention to driving.
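The threshold rules above (70% forward gaze, 50% plus familiarity or a mirror check, two adjusted settings for familiarity) might be sketched as follows; the function and parameter names are hypothetical:

```python
# Sketch of this embodiment's attention-degree rules for an auxiliary
# occupant. Names are hypothetical; thresholds follow the example values
# given in the text.

def functions_familiar(settings_adjusted: int) -> bool:
    """Familiar after successfully adjusting 2+ in-vehicle settings
    (window and seat adjustment excluded)."""
    return settings_adjusted >= 2

def is_attentive(forward_gaze_ratio: float, settings_adjusted: int,
                 mirror_checks_on_lane_change: int) -> bool:
    if forward_gaze_ratio > 0.70:
        return True
    if forward_gaze_ratio > 0.50 and (
            functions_familiar(settings_adjusted)
            or mirror_checks_on_lane_change >= 1):
        return True
    return False
```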
The computing module is further configured to, when no interaction module closest to the driver's line of sight is identified and the driver's line of sight is not directed toward the smartphone, judge whether an interaction module exists at the position corresponding to the auxiliary passenger; if so, it generates prompt information for the abnormal vehicle in a targeted manner according to the auxiliary passenger's degree of attention to driving and pushes it to the interaction module at that position. In this embodiment, when the vehicle carries a plurality of auxiliary passengers including one in the front passenger seat, the front-seat passenger is taken as the auxiliary passenger; if only rear-row auxiliary passengers are present, the one with the highest ride count is preferred, and a tie is broken by random selection.
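The auxiliary-passenger selection rule above (front seat preferred, then highest ride count, ties broken randomly) can be sketched as follows; the field names and data structure are hypothetical illustrations:

```python
import random


def select_auxiliary_passenger(occupants):
    """occupants: list of dicts like {'seat': 'front'|'rear', 'ride_count': int}.
    Returns the occupant chosen as the auxiliary passenger, or None."""
    # A front-seat passenger is always preferred.
    front = [o for o in occupants if o["seat"] == "front"]
    if front:
        return front[0]
    rear = [o for o in occupants if o["seat"] == "rear"]
    if not rear:
        return None
    # Otherwise prefer the rear passenger with the most rides;
    # if several are tied, pick one at random.
    max_rides = max(o["ride_count"] for o in rear)
    candidates = [o for o in rear if o["ride_count"] == max_rides]
    return random.choice(candidates)
```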
When the auxiliary passenger is in the front passenger seat and is attentive to driving, the prompt information is a demonstration animation in which the normal vehicles among the identified vehicles are displayed as white animated vehicles and the abnormal vehicle is displayed as a red animated vehicle. The forward view from the front passenger seat is good; with the brief reminder of the demonstration animation, an auxiliary passenger highly attentive to driving can independently observe the situation of the corresponding vehicle ahead and, adding his or her own experience and judgment, give the driver suggestions closer to the actual situation while filtering out unnecessary reminders, which helps the driver more.
When the auxiliary passenger is in the front passenger seat and is not attentive to driving, the prompt information is a demonstration animation of the road in which the normal vehicles among the identified vehicles are displayed as white animated vehicles and the abnormal vehicle is displayed as a red animated vehicle additionally marked with a prompt icon. The prompt icon corresponds to the historical abnormal behavior of the abnormal vehicle, for example icons for a solid-line lane change, a lane change without signaling, or failing to drive in the designated lane.
If the auxiliary passenger in the front passenger seat is not attentive to driving, the prompt icon helps the passenger understand the reason for the prompt before observing the scene, which helps the passenger give the driver a prompt closer to the actual situation and is of more help to the driver than prompt information merely displayed on the screen.
When the auxiliary passenger is in the rear row and is attentive to driving, the prompt information is a live-view picture in which the abnormal vehicle is framed in red and annotation text is also displayed, the annotation text being brief (describing the reason the vehicle is framed, such as its violation history). Compared with the UI of the demonstration animation, the live-view picture blends less well with the screen, its displayed content is more abrupt, and its appearance is inferior to that of the demonstration animation. However, the rear-row view is worse than the front-row view, so displaying the live scene helps the auxiliary passenger understand the external situation, and because the annotation text is brief, the passenger can combine it with his or her own observation to make more effective suggestions and filter out unnecessary reminders, which helps the driver more.
When the auxiliary passenger is in the rear row and is not attentive to driving, the prompt information is a live-view picture in which the abnormal vehicle is framed in red and annotation text is also displayed, the annotation text being detailed (specifically describing the reason the vehicle is framed, for example: "This vehicle has changed lanes across a solid line once; please watch its state and remind the driver if necessary"). Providing detailed annotation text for an auxiliary passenger who is not attentive to driving helps the passenger give the driver a correct reminder.
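The four prompt variants of this embodiment (seat position crossed with attention degree) can be summarized in a small lookup sketch; the labels are illustrative, not an actual API:

```python
def prompt_style(seat: str, attention: str) -> dict:
    """Map (seat, attention degree) to the prompt presentation described
    in this embodiment. seat: 'front'|'rear'; attention: 'attentive'|'inattentive'."""
    table = {
        # Front seat, attentive: brief demonstration animation only.
        ("front", "attentive"):   {"form": "animation", "icons": False, "annotation": None},
        # Front seat, inattentive: animation plus behavior-history icons.
        ("front", "inattentive"): {"form": "animation", "icons": True,  "annotation": None},
        # Rear row, attentive: live view with brief annotation text.
        ("rear",  "attentive"):   {"form": "live_view", "icons": False, "annotation": "brief"},
        # Rear row, inattentive: live view with detailed annotation text.
        ("rear",  "inattentive"): {"form": "live_view", "icons": False, "annotation": "detailed"},
    }
    return table[(seat, attention)]
```

The design intent stated in the embodiment is that less attentive or worse-positioned passengers receive progressively more explicit prompts.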
The method of the present embodiment uses the above-described system.
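As a rough illustration of the core check the computing module performs in every embodiment, currently identified vehicles are matched against the local abnormal-vehicle database synced from the cloud, and prompt information is generated for each hit. Names and the plate-keyed structure are hypothetical:

```python
def check_identified_vehicles(identified_plates, local_abnormal_db):
    """identified_plates: plates recognized by the AI visual model.
    local_abnormal_db: dict mapping plate -> list of historical abnormal
    behaviors, periodically updated from the cloud database.
    Returns one prompt record per identified abnormal vehicle."""
    prompts = []
    for plate in identified_plates:
        if plate in local_abnormal_db:
            prompts.append({
                "plate": plate,
                "history": local_abnormal_db[plate],
            })
    return prompts
```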
The foregoing is merely an embodiment of the present application, and the application is not limited to this embodiment; structures and features well known in the field are not described in detail here. Those skilled in the art are aware of the prior art before the filing or priority date of the application, possess the ordinary experimental means of the field, and can, in light of this application and their own abilities, complete and implement the present scheme, so typical known structures or known methods should not become an obstacle to practicing the application. It should be noted that modifications and improvements made by those skilled in the art without departing from the structure of the present application shall also fall within the protection scope of the application, without affecting the effect of its implementation or the utility of the patent. The protection scope of the present application is defined by the claims, and the detailed description and other parts of the specification may be used to interpret the content of the claims.

Claims (10)

1. A driving assistance system based on visual recognition, comprising a cloud platform and a vehicle-mounted device, characterized in that the vehicle-mounted device comprises a computing module, an external image acquisition module and an interaction module;
the cloud platform is used for deploying the AI visual model to the computing module;
The vehicle-mounted equipment is used for acquiring an external image acquired in the running process of the vehicle from the external image acquisition module; analyzing the external image through an AI visual model, identifying the vehicle and behavior characteristics of the vehicle, and marking the abnormal behavior vehicle according to the behavior characteristics of the vehicle; uploading information of the abnormal behavior vehicle to a cloud platform;
The cloud platform is also used for updating a cloud abnormal vehicle database according to the received information of the abnormal behavior vehicle; updating a local abnormal vehicle database of the computing module based on the cloud abnormal vehicle database at intervals of preset time;
the computing module is also used for judging whether a vehicle recorded in the local abnormal vehicle database exists in the currently identified vehicles, and generating prompt information of the vehicle if the vehicle exists;
the interaction module is used for displaying prompt information of the vehicle.
2. The visual recognition-based driving assistance system according to claim 1, wherein: a plurality of the interaction modules are provided and arranged at different positions of the vehicle;
the vehicle-mounted equipment further comprises an internal image acquisition module; the computing module is also used for acquiring an internal image of the vehicle from the internal image acquisition module and identifying a driver and the sight direction of the driver based on the internal image of the vehicle;
The computing module is also used for judging whether any currently identified vehicle is recorded in the local abnormal vehicle database; if so, it determines the interaction module closest to the driver's line of sight according to the driver's line-of-sight direction and the pre-stored installation positions of the interaction modules, and pushes the prompt information of the vehicle to that interaction module.
3. The visual recognition-based driving assistance system according to claim 2, wherein: the vehicle-mounted equipment further comprises an interconnection module; the interconnection module is used for connecting the smart phone of the driver;
the computing module is further used for judging, when no interaction module closest to the driver's line of sight is identified, whether the driver's line of sight is directed toward the smartphone; if the line of sight is directed toward the smartphone, the prompt information of the abnormal vehicle is pushed to the smartphone through the interconnection module.
4. The visual recognition-based driving assistance system according to claim 3, wherein: the computing module is also used for determining the relative positional relationship between each currently identified vehicle and the host vehicle and classifying the identified vehicles according to that relationship, the classification comprising first-distance vehicles, second-distance vehicles and third-distance vehicles;
the computing module is also used for acquiring a driving mode of the vehicle, wherein the driving mode comprises a motion mode and a normal mode;
the computing module is also used for determining the corresponding classification according to the driving mode, and judging whether the vehicles in the corresponding classification are recorded in the local abnormal vehicle database.
5. The visual recognition-based driving assistance system according to claim 4, wherein: the computing module is further configured to identify the driver's attention state, the attention state including an attentive state and an inattentive state;
the computing module is also used for determining the corresponding classification according to the driver's attention state, and judging whether the vehicles in the corresponding classification are recorded in the local abnormal vehicle database.
6. The visual recognition-based driving assistance system according to claim 5, wherein: the computing module is also used for identifying the occupants of the vehicle and their ages from the in-vehicle image, recording the number of rides, and judging whether the ride count of an occupant within a preset age range exceeds a threshold; if so, the occupant is marked as an auxiliary passenger;
The computing module is further used for judging, when no interaction module closest to the driver's line of sight is identified and the driver's line of sight is not directed toward the smartphone, whether an interaction module exists at the position corresponding to the auxiliary passenger, and if so, pushing the prompt information of the abnormal vehicle to the interaction module at that position.
7. A driving assistance method based on visual recognition, using the system according to any one of claims 1-6, characterized by comprising the following steps:
s1, the cloud platform deploys an AI visual model to a computing module;
S2, the vehicle-mounted equipment acquires an external image acquired in the running process of the vehicle from an external image acquisition module; analyzing the external image through an AI visual model, identifying the vehicle and behavior characteristics of the vehicle, and marking the abnormal behavior vehicle according to the behavior characteristics of the vehicle; uploading information of the abnormal behavior vehicle to a cloud platform;
S3, the cloud platform updates a cloud abnormal vehicle database according to the received information of the abnormal behavior vehicle; updating a local abnormal vehicle database of the computing module based on the cloud abnormal vehicle database at intervals of preset time;
S4, the calculation module judges whether a vehicle recorded in a local abnormal vehicle database exists in the currently identified vehicles, and if so, prompt information of the vehicle is generated;
S5, the interaction module displays prompt information of the vehicle.
8. The visual recognition-based driving assistance method according to claim 7, wherein: in the step S2, the computing module acquires an internal image of the vehicle from the internal image acquisition module, and identifies the driver and the sight orientation of the driver based on the internal image of the vehicle;
In step S4, the computing module determines whether a vehicle recorded in the local abnormal vehicle database exists in the current identified vehicle, if so, determines an interaction module closest to the driver 'S sight line orientation according to the driver' S sight line orientation and the pre-stored installation position of the interaction module, and pushes prompt information of the vehicle to the interaction module.
9. The visual recognition-based driving assistance method according to claim 8, wherein: in step S4, when the computing module does not recognize an interaction module closest to the driver's line of sight, it judges whether the driver's line of sight is directed toward the smartphone; if the line of sight is directed toward the smartphone, the prompt information of the abnormal vehicle is pushed to the smartphone through the interconnection module.
10. A storage medium storing a computer program which, when executed by a processor, implements the steps of the method of claim 7.
CN202410142214.6A 2024-02-01 2024-02-01 Driving assistance system, method and storage medium based on visual recognition Pending CN117994750A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410142214.6A CN117994750A (en) 2024-02-01 2024-02-01 Driving assistance system, method and storage medium based on visual recognition

Publications (1)

Publication Number Publication Date
CN117994750A true CN117994750A (en) 2024-05-07

Family

ID=90890315


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination