CN112991790B - Method, device, electronic equipment and medium for prompting user - Google Patents

Method, device, electronic equipment and medium for prompting user

Info

Publication number
CN112991790B
CN112991790B (application CN201911214750.8A)
Authority
CN
China
Prior art keywords
vehicle
parameters
target vehicle
parameter
driving
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911214750.8A
Other languages
Chinese (zh)
Other versions
CN112991790A (en)
Inventor
李清华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yulong Computer Telecommunication Scientific Shenzhen Co Ltd
Original Assignee
Yulong Computer Telecommunication Scientific Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yulong Computer Telecommunication Scientific Shenzhen Co Ltd filed Critical Yulong Computer Telecommunication Scientific Shenzhen Co Ltd
Priority to CN201911214750.8A
Publication of CN112991790A
Application granted
Publication of CN112991790B
Legal status: Active (Current)

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/09 Arrangements for giving variable traffic instructions
    • G08G1/0962 Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/0104 Measuring and analyzing of parameters relative to traffic conditions

Abstract

The application discloses a method, an apparatus, an electronic device and a medium for prompting a user. According to the method and the apparatus, after road parameters and vehicle parameters for a target vehicle are obtained, the running track of the target vehicle can be determined based on these driving parameters, and when the running track of the target vehicle is detected not to meet a preset condition, an early warning prompt is generated to remind the user that a potential safety hazard exists while driving the target vehicle. By applying this technical solution, the road parameters and vehicle parameters of the vehicle can be detected automatically while the vehicle is being driven, whether the running track of the vehicle deviates from the normal lane lines can be determined from these parameters, and prompt information indicating the potential safety hazard can be generated once a deviation is detected. This avoids the drawback in the related art that traffic accidents are easily caused by driver fatigue.

Description

Method, device, electronic equipment and medium for prompting user
Technical Field
The present application relates to data processing technologies, and in particular, to a method, an apparatus, an electronic device, and a medium for prompting a user.
Background
With the development of communication technology and society, more and more people travel by vehicle.
Further, a driver who drives a vehicle continuously for more than a certain period of time is said to be engaged in fatigue driving. Fatigue driving is prone to cause disorders of physiological and psychological functions, that is, the driver's physiological parameters change, and the driving behavior shows reduced driving skill, inattention, slow and sluggish operation of the vehicle, and the like. When the driver continues to drive after becoming fatigued, unsafe factors such as drowsiness and prolonged reaction time can arise, which may lead to traffic accidents.
Therefore, how to warn the driver in time when the driver is fatigued has become a problem to be solved by those skilled in the art.
Disclosure of Invention
The embodiment of the application provides a method, a device, electronic equipment and a medium for prompting a user.
According to an aspect of an embodiment of the present application, a method for prompting a user is provided, which includes:
acquiring driving parameters aiming at a target vehicle, wherein the driving parameters comprise road parameters and vehicle parameters;
determining a driving track of the target vehicle based on the driving parameters;
and when the running track of the target vehicle is detected not to meet a preset condition, generating an early warning prompt, wherein the early warning prompt is used for prompting the user that a potential safety hazard exists while driving the target vehicle.
Optionally, in another embodiment based on the above method of the present application, the determining the driving track of the target vehicle based on the driving parameter includes:
acquiring a road image by using a camera shooting acquisition device;
acquiring a lane line parameter corresponding to the road image based on the road image and a preset neural network model;
and determining the running track of the target vehicle by using the lane line parameters and the vehicle parameters.
Optionally, in another embodiment based on the above method of the present application, the determining the driving track of the target vehicle by using the lane line parameter and the vehicle parameter includes:
acquiring a running position of the target vehicle based on the vehicle parameters;
detecting whether the driving position is matched with the lane line parameter or not, and generating a matching result;
and determining the running track of the target vehicle based on the matching result.
Optionally, in another embodiment based on the foregoing method of the present application, the detecting whether the driving position is matched with the lane line parameter, and generating a matching result includes:
acquiring lane line positions based on the lane line parameters;
calculating a degree of departure of the target vehicle from the lane line based on the lane line position and the travel position;
and generating the matching result based on the deviation degree of the target vehicle and the lane line.
Optionally, in another embodiment based on the foregoing method of the present application, the detecting whether the driving position is matched with the lane line parameter, and generating a matching result includes:
acquiring a vehicle speed parameter and a vehicle configuration parameter corresponding to the target vehicle based on the vehicle parameter, wherein the vehicle configuration parameter is used for reflecting the driving state of the target vehicle;
and generating the matching result based on the vehicle speed parameter, the vehicle configuration parameter and whether the driving position is matched with the lane line parameter.
Optionally, in another embodiment based on the foregoing method of the present application, after the acquiring, by the camera acquiring device, a road image, the method further includes:
acquiring an environment image by using a camera shooting acquisition device;
acquiring an environment parameter corresponding to the environment image based on the environment image and a preset neural network model;
and determining the running track of the target vehicle by using the lane line parameters, the environment parameters and the vehicle parameters.
Optionally, in another embodiment based on the above method of the present application, the environmental parameters include weather parameters and road surface parameters:
determining the driving track of the target vehicle by using the lane line parameters, the environment parameters and the vehicle parameters comprises:
when a target object is determined to exist in the preset distance of the target vehicle based on the road surface parameters, acquiring a passing time parameter, wherein the passing time parameter is used for reflecting the time of the target vehicle for bypassing the target object;
determining a driving track of the target vehicle based on the weather parameter, the transit time parameter, the lane line parameter, and the vehicle parameter.
According to another aspect of the embodiments of the present application, there is provided an apparatus for prompting a user, including:
an acquisition module configured to acquire driving parameters for a target vehicle, the driving parameters including road parameters and vehicle parameters;
a determination module configured to determine a travel trajectory of the target vehicle based on the travel parameter;
the generating module is set to generate an early warning prompt when the running track of the target vehicle is detected to be not in accordance with a preset condition, and the early warning prompt is used for prompting that potential safety hazards exist when a user drives the target vehicle.
According to another aspect of the embodiments of the present application, there is provided an electronic device including:
a memory for storing executable instructions; and
a display configured to communicate with the memory to execute the executable instructions so as to perform the operations of any one of the above methods for prompting a user.
According to a further aspect of the embodiments of the present application, there is provided a computer-readable storage medium for storing computer-readable instructions, which when executed perform the operations of any one of the above methods for prompting a user.
In the present application, after road parameters and vehicle parameters for a target vehicle are obtained, the running track of the target vehicle can be determined based on these driving parameters, and when the running track of the target vehicle is detected not to meet a preset condition, an early warning prompt is generated to remind the user that a potential safety hazard exists while driving the target vehicle. By applying this technical solution, the road parameters and vehicle parameters of the vehicle can be detected automatically while the vehicle is being driven, whether the running track of the vehicle deviates from the normal lane lines can be determined from these parameters, and prompt information indicating the potential safety hazard can be generated once a deviation is detected. This avoids the drawback in the related art that traffic accidents are easily caused by driver fatigue.
The technical solution of the present application is further described in detail by the accompanying drawings and examples.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description, serve to explain the principles of the application.
The present application may be more clearly understood from the following detailed description with reference to the accompanying drawings, in which:
FIG. 1 is a schematic diagram of a system architecture for visually prompting a user according to the present application;
fig. 2 is a schematic diagram of a method for prompting a user according to the present application;
FIG. 3 is a schematic structural diagram of a device for prompting a user according to the present application;
fig. 4 is a schematic view of an electronic device according to the present application.
Detailed Description
Various exemplary embodiments of the present application will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present application unless specifically stated otherwise.
Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the application, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
In addition, technical solutions between the various embodiments of the present application may be combined with each other, but it must be based on the realization of the technical solutions by a person skilled in the art, and when the technical solutions are contradictory or cannot be realized, such a combination of technical solutions should be considered to be absent and not within the protection scope of the present application.
It should be noted that all the directional indicators (such as upper, lower, left, right, front and rear … …) in the embodiment of the present application are only used to explain the relative position relationship between the components, the motion situation, etc. in a specific posture (as shown in the drawings), and if the specific posture is changed, the directional indicator is changed accordingly.
A method for prompting a user according to an exemplary embodiment of the present application is described below in conjunction with fig. 1-3. It should be noted that the following application scenarios are merely illustrated for the convenience of understanding the spirit and principles of the present application, and the embodiments of the present application are not limited in this respect. Rather, embodiments of the present application may be applied to any scenario where applicable.
Fig. 1 shows a schematic diagram of an exemplary system architecture 100 to which a method or an apparatus for prompting a user according to an embodiment of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include one or more of terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. For example, server 105 may be a server cluster comprised of multiple servers, or the like.
A user may use terminal devices 101, 102, 103 to interact with a server 105 over a network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may be various electronic devices having a display screen, including but not limited to smart phones, tablet computers, portable computers, desktop computers, and the like.
The terminal apparatuses 101, 102, 103 in the present application may be terminal apparatuses that provide various services. For example, a user acquires driving parameters for a target vehicle through the terminal device 103 (which may also be the terminal device 101 or 102), where the driving parameters include road parameters and vehicle parameters; determines a running track of the target vehicle based on the driving parameters; and generates an early warning prompt when the running track of the target vehicle is detected not to meet a preset condition, wherein the early warning prompt is used for prompting the user that a potential safety hazard exists while driving the target vehicle.
It should be noted that the method for prompting a user provided in the embodiments of the present application may be executed by one or more of the terminal devices 101, 102, and 103, and/or the server 105, and accordingly, the apparatus for prompting a user provided in the embodiments of the present application is generally disposed in the corresponding terminal device and/or the server 105, but the present application is not limited thereto.
The application also provides a method, a device, a target terminal and a medium for prompting the user.
Fig. 2 schematically shows a flowchart of a method for prompting a user according to an embodiment of the present application. As shown in fig. 2, the method includes:
s101, acquiring driving parameters aiming at a target vehicle, wherein the driving parameters comprise road parameters and vehicle parameters.
It should be noted that the device for acquiring the driving parameters is not specifically limited in the present application, and may be, for example, an intelligent device or a server. The smart device may be a PC (Personal Computer), a smart phone, a tablet PC, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player for prompting a user, an MP4 (Moving Picture Experts Group Audio Layer IV) player for prompting a user, a portable computer, or another mobile terminal device having a display function.
Further, the present application does not specifically limit the target vehicle, and may be, for example, an electric vehicle, an automobile, a tricycle, etc.
In addition, the driving parameters of the target vehicle are not specifically limited in the present application; for example, the driving parameters may include road parameters and vehicle parameters. The road parameters may reflect the condition of the road on which the target vehicle travels, such as lane line positions, road attributes (freeway, provincial or town road, etc.), road surface flatness, and the like.
Further, the vehicle parameter in the present application may be a parameter reflecting a vehicle state, such as a vehicle model parameter, a driving speed parameter, a vehicle configuration parameter, a vehicle driving direction parameter, and the like. The method and the device can judge the running track of the target vehicle in real time according to the road parameters and the vehicle parameters of the target vehicle.
It should be noted that the timing for acquiring the driving parameters of the target vehicle is not specifically limited in the present application. For example, the corresponding driving parameters may be acquired when the target vehicle is detected to start driving, after the target vehicle is detected to have driven for a preset time period, or after the target vehicle is detected to have travelled a preset distance.
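As an illustrative sketch only (the application does not define any data structures, so the class and field names below are hypothetical groupings of the road parameters and vehicle parameters described above), the driving parameters could be represented as follows:

```python
# Hypothetical grouping of the driving parameters described in this application.
from dataclasses import dataclass
from typing import List

@dataclass
class RoadParameters:
    lane_line_positions: List[float]   # lateral positions of detected lane lines, in metres
    road_attribute: str                # e.g. "freeway", "provincial", "town"
    surface_flatness: float            # larger values mean a rougher road surface

@dataclass
class VehicleParameters:
    model: str                         # vehicle model parameter
    speed_kmh: float                   # driving speed parameter
    drive_configuration: str           # "four_wheel_drive" or "two_wheel_drive"
    heading_deg: float                 # vehicle driving direction parameter

@dataclass
class DrivingParameters:
    road: RoadParameters
    vehicle: VehicleParameters
```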
And S102, determining the running track of the target vehicle based on the running parameters.
In the present application, after the driving parameters of the target vehicle are acquired, the running track corresponding to the target vehicle can be determined according to the driving parameters. It can be understood that the present application can automatically determine whether the vehicle deviates from the normal track according to the running track of the vehicle, thereby determining whether the driver of the target vehicle is in a fatigue driving state.
Further, with the gradual improvement of people's living standards, the automobile has become a main means of daily travel. Automobiles have brought sweeping changes to people's work and life, but at the same time a large number of automobiles have appeared on the road, gradually causing many traffic problems, and human factors in particular have become the main cause of traffic accidents at present.
In the case of fatigue driving, a fatigued driver faces the risks of reduced judgment, delayed response and increased misoperation. When the driver is slightly fatigued, gear shifting becomes untimely and inaccurate; when the driver is moderately fatigued, operations become sluggish and are sometimes even forgotten; when the driver is severely fatigued, he or she may operate the vehicle subconsciously or fall asleep briefly, and in severe cases loses control of the vehicle, causing traffic accidents. Therefore, the present application can obtain the driving parameters of the target vehicle in real time, monitor the running track of the vehicle in real time, and determine whether the target vehicle is exposed to the potential safety hazard of fatigue driving.
S103, when it is detected that the running track of the target vehicle does not meet the preset condition, an early warning prompt is generated, and the early warning prompt is used for prompting the user that a potential safety hazard exists while driving the target vehicle.
It should be noted that the preset condition is not specifically limited in the present application, and may be, for example, deviation from a normal driving route, the number of times the vehicle swings per unit time reaching a threshold, or the reaction time taken by the vehicle to avoid an obstacle, and the like.
Likewise, the early warning prompt is not specifically limited. For example, the early warning prompt may be a voice prompt: it can be understood that when the device determines, based on the driving parameters, that the running track of the target vehicle does not meet the preset condition, a preset audio may be played to remind the driver of the current potential safety hazard. Alternatively, the early warning prompt may be a vibration prompt: when the device determines, based on the driving parameters, that the running track of the target vehicle does not meet the preset condition, a vibrator arranged in a seat or in the steering wheel inside the vehicle may be started to remind the driver of the current potential safety hazard through vibration. Still alternatively, the early warning prompt may be a prompt generated based on video information: when the device determines, based on the driving parameters, that the running track of the target vehicle does not meet the preset condition, a preset audio/video may be played on a display screen arranged inside the vehicle to remind the driver of the current potential safety hazard.
It should also be noted that the number of times the running track of the target vehicle is determined not to meet the preset condition is not specifically limited in the present application. For example, the corresponding warning prompt may be generated the first time the running track of the target vehicle is determined not to meet the preset condition. Alternatively, the corresponding warning prompt may be generated only after the running track of the target vehicle has been detected not to meet the preset condition a preset number of times.
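The following sketch illustrates one possible reading of steps S101 to S103; all of the helper callables and the default threshold of three deviations are hypothetical stand-ins and are not fixed by the application:

```python
# Sketch of steps S101-S103. All callables are hypothetical stand-ins for the
# operations described above; the default threshold of 3 deviations is an
# assumed value, not one fixed by the application.
def monitor_target_vehicle(acquire_params, determine_trajectory,
                           meets_condition, issue_warning,
                           offset_threshold=3):
    offset_count = 0
    while True:
        params = acquire_params()                  # S101: road and vehicle parameters
        trajectory = determine_trajectory(params)  # S102: running track of the target vehicle
        if meets_condition(trajectory):            # S103: check against the preset condition
            offset_count = 0
            continue
        offset_count += 1
        if offset_count >= offset_threshold:
            # The prompt may be delivered by voice, by vibration of the seat or
            # steering wheel, or as audio/video on an in-vehicle display.
            issue_warning("Potential safety hazard: please drive carefully.")
            offset_count = 0
```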
In the present application, after road parameters and vehicle parameters for a target vehicle are obtained, the running track of the target vehicle can be determined based on these driving parameters, and when the running track of the target vehicle is detected not to meet a preset condition, an early warning prompt is generated to remind the user that a potential safety hazard exists while driving the target vehicle. By applying this technical solution, the road parameters and vehicle parameters of the vehicle can be detected automatically while the vehicle is being driven, whether the running track of the vehicle deviates from the normal lane lines can be determined from these parameters, and prompt information indicating the potential safety hazard can be generated once a deviation is detected. This avoids the drawback in the related art that traffic accidents are easily caused by driver fatigue.
In another possible embodiment of the present application, in S102 (determining the travel track of the target vehicle based on the travel parameters), the following may be implemented:
acquiring a road image by using a camera shooting acquisition device;
acquiring a lane line parameter corresponding to a road image based on the road image and a preset neural network model;
and determining the running track of the target vehicle by using the lane line parameters and the vehicle parameters.
Furthermore, a camera acquisition device arranged at the front (head) of the target vehicle can be used to collect the corresponding road image. It can be understood that the road image is image information of the road on which the target vehicle is currently travelling. After the image is obtained, the neural network model can further be used to extract the corresponding lane line parameters from the image, and the corresponding running track of the target vehicle is then determined according to the lane line parameters.
Further, for the equipment, after the road image is acquired by the camera shooting acquisition device, the characteristic information of the road image is extracted by the neural network model, and then the corresponding lane line parameters are obtained. It should be noted that, the preset neural network model is not specifically limited in the present application, and in a possible implementation, the feature recognition may be performed on the road image by using a convolutional neural network model.
A Convolutional Neural Network (CNN) is a feedforward neural network that contains convolution calculations and has a deep structure, and is one of the representative algorithms of deep learning. A convolutional neural network has representation learning capability and can perform shift-invariant classification of input information according to its hierarchical structure. Owing to its powerful feature characterization capability for images, the CNN has achieved remarkable results in fields such as image classification, object detection and semantic segmentation.
Further, the present application may use the feature information of each road extracted from the road image by the CNN model. At least one road image is input into the preset convolutional neural network model, the output of the final fully connected layer (FC) of the convolutional neural network model is used as the feature data corresponding to the road image, and the feature recognition result (the lane line parameters) corresponding to the road image is then obtained from the feature data.
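As a non-authoritative sketch, the feature extraction described above could be approximated with an off-the-shelf convolutional network; the application does not specify a concrete architecture, so a torchvision ResNet-18 is used here only as a stand-in for the "preset convolutional neural network model", and the output dimension of the final fully connected layer is an assumed example:

```python
# Stand-in for the "preset convolutional neural network model": the application
# does not name an architecture, so a torchvision ResNet-18 is used here, and
# the 8-dimensional output of its final fully connected layer is treated as the
# feature data from which the lane line parameters would be read.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet18(weights=None)                # a lane-trained model is assumed in practice
model.fc = torch.nn.Linear(model.fc.in_features, 8)  # e.g. 2 lane lines x 4 curve coefficients (assumed)
model.eval()

preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor()])

def extract_lane_line_parameters(road_image_path: str) -> torch.Tensor:
    image = Image.open(road_image_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)           # shape (1, 3, 224, 224)
    with torch.no_grad():
        features = model(batch)                      # output of the final FC layer
    return features.squeeze(0)                       # hypothetical lane line parameters
```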
Further, after the road image is acquired by the camera acquisition device, the following steps may also be implemented:
acquiring an environment image by using a camera shooting acquisition device;
acquiring an environment parameter corresponding to the environment image based on the environment image and a preset neural network model;
and determining the running track of the target vehicle by using the lane line parameters, the environment parameters and the vehicle parameters.
Further, the probability of potential safety hazards during the running process of the vehicle on the road is increased due to the occurrence of severe weather. Therefore, the environment image outside the vehicle can be further acquired in the application. The environment image is not specifically limited, and for example, the environment image may be used to reflect a weather condition outside the vehicle, and the environment image may also be used to reflect a road obstacle or the like outside the vehicle.
Similarly, after the environment image is obtained, the preset convolutional neural network model can be used to extract features of the environment image, so as to determine the environmental parameter corresponding to the image and then determine the environmental state outside the vehicle from that parameter.
For example, after the environment image is obtained by the camera acquisition device and the environmental parameter corresponding to the environment image is further obtained from the neural network model, it may be determined from the environmental parameter that the current weather outside the vehicle is a rainstorm. It can be understood that, because the potential safety hazard is greatly increased when a driver drives a vehicle in heavy rain, the early warning prompt indicating the potential safety hazard can be generated immediately once the lane line parameters and the vehicle parameters show that the running track of the target vehicle is outside the normal driving range, thereby avoiding the drawback of traffic accidents caused by severe weather.
Further, the environment image may also reflect the illumination condition outside the vehicle. For example, after the environment image is obtained by the camera acquisition device and the environmental parameter corresponding to the environment image is further obtained from the neural network model, it may be determined from the environmental parameter that the illumination outside the vehicle is dark. Because the potential safety hazard is greatly increased when the driver drives the vehicle under low visibility, the early warning prompt can likewise be generated immediately once the lane line parameters and the vehicle parameters show that the running track of the target vehicle is outside the normal driving range, thereby avoiding the drawback of traffic accidents caused by low visibility.
Still further, the environment image may also reflect the temperature and humidity conditions outside the vehicle. For example, after the environment image is obtained by the camera acquisition device and the environmental parameter corresponding to the environment image is further obtained from the neural network model, it may be determined from the environmental parameter that the road outside the vehicle is currently wet and slippery. Because the potential safety hazard is greatly increased when the driver drives on a wet and slippery road section, the early warning prompt can be generated immediately once the lane line parameters and the vehicle parameters show that the running track of the target vehicle is outside the normal driving range, thereby avoiding the drawback of traffic accidents caused by road icing.
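A minimal sketch of how such environment parameters might tighten the warning decision is given below; the label names and the thresholds are assumptions, since the application only states that the prompt is generated immediately under these adverse conditions:

```python
# Minimal sketch: environment labels such as "rainstorm", "dark" or "slippery"
# (assumed to be decoded from the environmental parameter described above) make
# the warning fire on the first detected deviation instead of waiting for the
# usual repeated-deviation threshold. The threshold values are assumptions.
HIGH_RISK_ENVIRONMENTS = {"rainstorm", "dark", "slippery"}

def should_generate_warning(environment_label: str, deviation_count: int,
                            normal_threshold: int = 3) -> bool:
    if environment_label in HIGH_RISK_ENVIRONMENTS:
        return deviation_count >= 1          # warn immediately on the first deviation
    return deviation_count >= normal_threshold

print(should_generate_warning("rainstorm", 1))   # True
print(should_generate_warning("clear", 1))       # False
```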
Further optionally, the environmental parameters in the present application include weather parameters and road surface parameters:
determining a driving track of a target vehicle by using the lane line parameters, the environmental parameters and the vehicle parameters, comprising:
when a target object is determined to exist in the preset distance of the target vehicle based on the road surface parameters, acquiring a passing time parameter, wherein the passing time parameter is used for reflecting the time of the target vehicle for bypassing the target object;
and determining the running track of the target vehicle based on the weather parameter, the passing time parameter, the lane line parameter and the vehicle parameter.
Further, the probability of potential safety hazards occurring when the vehicle travels on the road increases due to the occurrence of bad weather, and is also affected by whether other low-speed traveling or stationary automobiles or other obstacles occur in front of the vehicle. Therefore, the environment image outside the vehicle for reflecting the road surface parameters can be further acquired. The target object is not specifically limited, for example, the target object may be an automobile, or may be a stone, a roadblock, or other passing obstacles.
Similarly, after the environment image is obtained, the preset convolutional neural network model can be used to extract features of the environment image, so as to determine the road surface parameters corresponding to the image and then determine the condition of obstacles outside the vehicle from those parameters.
For example, after the environment image is obtained by the camera acquisition device and the road surface parameters corresponding to the environment image are further obtained from the neural network model, it may be determined from the road surface parameters that, 500 meters ahead of the current vehicle, there is a roadblock prohibiting passage to the right front. It can be understood that, because the vehicle speed is very high when driving at high speed, reacting late to avoid the roadblock greatly increases the potential safety hazard. Therefore, after acquiring the passing time parameter reflecting the time taken by the target vehicle to bypass the target object, the present application can judge that the driver may be in a fatigue driving state when the passing time parameter exceeds a time threshold. Accordingly, when the lane line parameters and the vehicle parameters subsequently show that the running track of the target vehicle is outside the normal driving range, the early warning prompt indicating the potential safety hazard is generated immediately, thereby avoiding the drawback of traffic accidents caused by fatigue driving.
Similarly, taking the environment image as reflecting the road surface parameters outside the vehicle as an example: after the environment image is obtained by the camera acquisition device and the road surface parameters corresponding to the environment image are further obtained from the neural network model, it may be determined from the road surface parameters that there is a stone on the right side of the road 1000 meters ahead of the vehicle. Therefore, after acquiring the passing time parameter reflecting the time taken by the target vehicle to bypass the target object, the present application can judge that the driver's driving state is good when the passing time parameter does not exceed the time threshold. Accordingly, even when the lane line parameters and the vehicle parameters subsequently show that the running track of the target vehicle is outside the normal driving range, the corresponding early warning prompt may not be generated for the time being; it is generated only after the running track of the target vehicle has been detected to fall outside the normal driving range a preset number of times.
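A minimal sketch of the passing-time check described above is given below, assuming a hypothetical time threshold; the application does not fix a concrete value:

```python
# Sketch of the passing-time check: the passing time parameter reflects how long
# the target vehicle takes to begin bypassing the target object; the 2.0 s
# threshold is an assumed value, not one fixed by the application.
def fatigue_suspected(passing_time_s: float, time_threshold_s: float = 2.0) -> bool:
    return passing_time_s > time_threshold_s

print(fatigue_suspected(3.5))   # True  -> driver possibly fatigued, warn immediately
print(fatigue_suspected(1.2))   # False -> driving state judged to be good
```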
Further, in the present application, in the process of determining the driving track of the target vehicle by using the lane line parameters and the vehicle parameters, the following steps may be performed:
acquiring a running position of a target vehicle based on the vehicle parameters;
detecting whether the driving position is matched with the lane line parameters or not, and generating a matching result;
further optionally, in the process of generating the matching result, the method may be implemented by the following steps:
acquiring lane line positions based on the lane line parameters;
calculating the deviation degree of the target vehicle and the lane line based on the lane line position and the driving position;
and generating a matching result based on the deviation degree of the target vehicle and the lane line.
In the present application, after the lane line parameters of the road on which the target vehicle travels are obtained, the positions of the lane lines marked on the road can be obtained from those parameters. Further, after the lane line positions are obtained, whether the driving position of the target vehicle deviates from the lane line positions can be calculated. It can be understood that when the driving position of the vehicle is detected to be located in the middle of two lane line positions, the degree of deviation of the target vehicle from the lane lines is determined to be smaller than a preset threshold, and a matching result that the vehicle is not offset can then be generated. When the driving position of the vehicle is detected not to be located in the middle of the two lane line positions, the degree of deviation of the target vehicle from the lane lines can be determined to exceed the preset threshold, and a matching result that the vehicle is offset can then be generated.
Further, the method and the device can determine whether the vehicle deviates from the lane line according to whether the running position of the target vehicle is located between two non-adjacent lane lines. It will be appreciated that when the vehicle is normally traveling within a lane line, the position of the vehicle will necessarily also be intermediate the positions of the adjacent two lane lines. When the driver cannot normally operate the vehicle due to fatigue driving or the like, the position of the vehicle is located between two non-adjacent lane lines.
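The deviation computation described above might be sketched as follows; the coordinate convention, the example lane positions and the deviation threshold are assumptions for illustration only:

```python
# Sketch of the deviation-degree computation: lateral positions (in metres,
# hypothetical coordinate convention) of the detected lane lines and of the
# target vehicle are compared, and the vehicle is considered matched when it
# lies between two adjacent lane lines and close to their midpoint.
from typing import List, Tuple

def lane_match(lane_line_positions: List[float], vehicle_position: float,
               deviation_threshold: float = 0.5) -> Tuple[float, bool]:
    lanes = sorted(lane_line_positions)
    # find the pair of adjacent lane lines that should bound the vehicle
    for left, right in zip(lanes, lanes[1:]):
        if left <= vehicle_position <= right:
            midpoint = (left + right) / 2.0
            deviation = abs(vehicle_position - midpoint)
            return deviation, deviation < deviation_threshold
    # vehicle not between any pair of adjacent lane lines: treated as offset
    return float("inf"), False

deviation, matched = lane_match([-1.75, 1.75, 5.25], 0.3)
print(deviation, matched)   # 0.3 True -> matching result "not offset"
```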
Optionally, in the process of detecting whether the driving position is matched with the lane line parameter and generating the matching result, the method can further include the following steps:
acquiring a vehicle speed parameter and a vehicle configuration parameter corresponding to a target vehicle based on the vehicle parameter, wherein the vehicle configuration parameter is used for reflecting the driving state of the target vehicle;
and generating a matching result based on the vehicle speed parameter, the vehicle configuration parameter and whether the driving position is matched with the lane line parameter.
In the present application, after the lane line parameters of the road on which the target vehicle travels are obtained, the positions of the lane lines marked on the road can be obtained from those parameters. Further, after the lane line positions are obtained, the vehicle configuration parameters of the target vehicle can also be obtained. The vehicle configuration parameters are used for reflecting whether the vehicle is in a four-wheel drive state or a two-wheel drive state. It can be understood that when the vehicle is in the four-wheel drive state the vehicle is relatively stable, and when the vehicle is in the two-wheel drive state the vehicle is relatively unstable. Therefore, the corresponding matching result can be generated more accurately according to the driving state of the vehicle.
Further, the present application starts counting the number of vehicle deviations when a deviation between the driving position of the target vehicle and the lane line position is detected. It can be understood that, for example, when the target vehicle is in the two-wheel drive state, the matching result that the vehicle is offset may be generated when the vehicle deviation is detected 3 times; when the target vehicle is in the four-wheel drive state, the matching result that the vehicle is offset may be generated when the vehicle deviation is detected 5 times.
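A minimal sketch of generating the matching result from the vehicle configuration parameter and the deviation count follows; the counts of 3 and 5 come from the example above, while the speed handling is a hypothetical addition:

```python
# Sketch combining the vehicle configuration parameter with the deviation count,
# using the counts named in the text (3 deviations for two-wheel drive, 5 for
# four-wheel drive). The speed-based tightening is an assumed illustration only.
def offset_matching_result(drive_configuration: str, offset_count: int,
                           speed_kmh: float) -> str:
    threshold = 5 if drive_configuration == "four_wheel_drive" else 3
    if speed_kmh > 100:      # assumed: a higher speed tolerates fewer deviations
        threshold = max(1, threshold - 1)
    return "offset" if offset_count >= threshold else "not_offset"

print(offset_matching_result("two_wheel_drive", 3, 80.0))    # "offset"
print(offset_matching_result("four_wheel_drive", 3, 80.0))   # "not_offset"
```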
Based on the matching result, the travel locus of the target vehicle is determined.
Further, when a matching result that the vehicle is deviated is generated, it can be determined that the running track of the target vehicle deviates from the normal track. And when a matching result that the vehicle is not deviated is generated, the running track of the target vehicle can be determined to belong to the normal track.
In another embodiment of the present application, as shown in fig. 3, the present application further provides a device for prompting a user. The device comprises an acquisition module 201, a determination module 202 and a generation module 203, wherein:
an obtaining module 201 configured to obtain driving parameters for a target vehicle, where the driving parameters include road parameters and vehicle parameters;
a determination module 202 configured to determine a driving trajectory of the target vehicle based on the driving parameter;
the generating module 203 is configured to generate an early warning prompt when it is detected that the driving track of the target vehicle does not conform to a preset condition, where the early warning prompt is used for prompting a user that a potential safety hazard exists when the user drives the target vehicle.
In the present application, after road parameters and vehicle parameters for a target vehicle are obtained, the running track of the target vehicle can be determined based on these driving parameters, and when the running track of the target vehicle is detected not to meet a preset condition, an early warning prompt is generated to remind the user that a potential safety hazard exists while driving the target vehicle. By applying this technical solution, the road parameters and vehicle parameters of the vehicle can be detected automatically while the vehicle is being driven, whether the running track of the vehicle deviates from the normal lane lines can be determined from these parameters, and prompt information indicating the potential safety hazard can be generated once a deviation is detected. This avoids the drawback in the related art that traffic accidents are easily caused by driver fatigue.
In another embodiment of the present application, the determining module 202 further includes:
a determination module 202 configured to acquire a road image by using a camera acquisition device;
the determining module 202 is configured to obtain a lane line parameter corresponding to the road image based on the road image and a preset neural network model;
a determining module 202 configured to determine a driving trajectory of the target vehicle using the lane line parameter and the vehicle parameter.
In another embodiment of the present application, the determining module 202 further includes:
a determination module 202 configured to obtain a driving position of the target vehicle based on the vehicle parameter;
a determining module 202 configured to detect whether the driving position is matched with the lane line parameter, and generate a matching result;
a determination module 202 configured to determine a driving trajectory of the target vehicle based on the matching result.
In another embodiment of the present application, the generating module 203 further includes:
a generating module 203 configured to obtain lane line positions based on the lane line parameters;
a generating module 203 configured to calculate a degree of deviation of the target vehicle from the lane line based on the lane line position and the travel position;
a generating module 203 configured to generate the matching result based on a degree of deviation of the target vehicle from the lane line.
In another embodiment of the present application, the generating module 203 further includes:
the generating module 203 is configured to obtain a vehicle speed parameter and a vehicle configuration parameter corresponding to the target vehicle based on the vehicle parameter, wherein the vehicle configuration parameter is used for reflecting a driving state of the target vehicle;
a generating module 203 configured to generate the matching result based on the vehicle speed parameter and the vehicle configuration parameter, and whether the driving position is matched with the lane line parameter.
In another embodiment of the present application, the determining module 202 further includes:
a determining module 202 configured to acquire an environment image by using a camera acquisition device;
the determining module 202 is configured to obtain an environmental parameter corresponding to the environmental image based on the environmental image and a preset neural network model;
a determination module 202 configured to determine a driving trajectory of the target vehicle using the lane line parameter, the environmental parameter, and a vehicle parameter.
In another embodiment of the present application, the determining module 202 further includes:
a determination module 202 configured to determine a driving trajectory of the target vehicle using the lane line parameter, the environmental parameter, and a vehicle parameter, including:
a determining module 202 configured to, when it is determined that a target object exists within a preset distance of the target vehicle based on the road surface parameter, obtain a passing time parameter, where the passing time parameter is used for reflecting a time for the target vehicle to bypass the target object;
a determination module 202 configured to determine a driving trajectory of the target vehicle based on the weather parameter, the transit time parameter, the lane line parameter, and the vehicle parameter.
Fig. 4 is a block diagram illustrating a logical structure of an electronic device in accordance with an exemplary embodiment. For example, the electronic device 300 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 4, electronic device 300 may include one or more of the following components: a processor 301 and a memory 302.
The processor 301 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on. The processor 301 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 301 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 301 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed on the display screen. In some embodiments, the processor 301 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 302 may include one or more computer-readable storage media, which may be non-transitory. Memory 302 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, the non-transitory computer-readable storage medium in the memory 302 is configured to store at least one instruction, which is executed by the processor 301 to implement the method for prompting a user provided by the method embodiments of the present application.
In some embodiments, the electronic device 300 may further include: a peripheral interface 303 and at least one peripheral. The processor 301, memory 302 and peripheral interface 303 may be connected by a bus or signal lines. Each peripheral may be connected to the peripheral interface 303 by a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 304, touch display screen 305, camera 306, audio circuitry 307, positioning component 308, and power supply 309.
The peripheral interface 303 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 301 and the memory 302. In some embodiments, processor 301, memory 302, and peripheral interface 303 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 301, the memory 302 and the peripheral interface 303 may be implemented on a separate chip or circuit board, which is not limited by the embodiment.
The Radio Frequency circuit 304 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 304 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 304 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 304 comprises: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 304 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generation mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 304 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 305 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 305 is a touch display screen, the display screen 305 also has the ability to capture touch signals on or over the surface of the display screen 305. The touch signal may be input to the processor 301 as a control signal for processing. At this point, the display screen 305 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display screen 305 may be one, providing the front panel of the electronic device 300; in other embodiments, the display screens 305 may be at least two, respectively disposed on different surfaces of the electronic device 300 or in a folded design; in still other embodiments, the display 305 may be a flexible display disposed on a curved surface or on a folded surface of the electronic device 300. Even further, the display screen 305 may be arranged in a non-rectangular irregular figure, i.e. a shaped screen. The Display screen 305 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), and the like.
The camera assembly 306 is used to capture images or video. Optionally, camera assembly 306 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 306 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
Audio circuitry 307 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 301 for processing or inputting the electric signals to the radio frequency circuit 304 to realize voice communication. For the purpose of stereo sound collection or noise reduction, a plurality of microphones may be provided at different portions of the electronic device 300. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 301 or the radio frequency circuitry 304 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, audio circuitry 307 may also include a headphone jack.
The positioning component 308 is used to locate the current geographic location of the electronic device 300 to implement navigation or LBS (Location Based Service). The positioning component 308 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 309 is used to supply power to various components in the electronic device 300. The power source 309 may be alternating current, direct current, disposable batteries, or rechargeable batteries. When the power source 309 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the electronic device 300 also includes one or more sensors 310. The one or more sensors 310 include, but are not limited to: acceleration sensor 311, gyro sensor 312, pressure sensor 313, fingerprint sensor 314, optical sensor 315, and proximity sensor 316.
The acceleration sensor 311 may detect the magnitude of acceleration in three coordinate axes of a coordinate system established with the electronic device 300. For example, the acceleration sensor 311 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 301 may control the touch display screen 305 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 311. The acceleration sensor 311 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 312 may detect the body direction and rotation angle of the electronic device 300, and may cooperate with the acceleration sensor 311 to capture the user's 3D actions on the electronic device 300. Based on the data collected by the gyro sensor 312, the processor 301 may implement functions such as motion sensing (for example, changing the UI according to a tilting operation by the user), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 313 may be disposed on a side bezel of the electronic device 300 and/or beneath the touch display screen 305. When the pressure sensor 313 is disposed on the side bezel of the electronic device 300, it can detect the user's grip signal on the device, and the processor 301 performs left-hand/right-hand recognition or shortcut operations according to the grip signal collected by the pressure sensor 313. When the pressure sensor 313 is disposed beneath the touch display screen 305, the processor 301 controls operability controls on the UI according to the pressure applied by the user to the touch display screen 305. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 314 is used to collect the user's fingerprint, and the processor 301 identifies the user according to the fingerprint collected by the fingerprint sensor 314, or the fingerprint sensor 314 itself identifies the user from the collected fingerprint. When the user's identity is verified as trusted, the processor 301 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, and changing settings. The fingerprint sensor 314 may be disposed on the front, back, or side of the electronic device 300. When a physical button or vendor logo is provided on the electronic device 300, the fingerprint sensor 314 may be integrated with the physical button or vendor logo.
The optical sensor 315 is used to collect the ambient light intensity. In one embodiment, the processor 301 may control the display brightness of the touch display screen 305 based on the ambient light intensity collected by the optical sensor 315: when the ambient light intensity is high, the display brightness of the touch display screen 305 is increased; when the ambient light intensity is low, the display brightness is decreased. In another embodiment, the processor 301 may also dynamically adjust the shooting parameters of the camera assembly 306 according to the ambient light intensity collected by the optical sensor 315.
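As an illustrative aid only, the short Python sketch below maps an ambient light reading to a display brightness level in the manner just described; the calibration range, clamping, and function name are assumptions for the example and do not describe the actual device.

```python
def brightness_from_ambient(lux: float, min_level: int = 10, max_level: int = 255) -> int:
    """Map an ambient light reading (in lux) to a display brightness level.

    Brighter surroundings yield a higher backlight level, dimmer surroundings
    a lower one. The 0-10000 lux calibration range is an assumption.
    """
    lux = max(0.0, min(lux, 10000.0))      # clamp to the assumed sensor range
    ratio = lux / 10000.0                  # normalize to 0..1
    return int(min_level + ratio * (max_level - min_level))


# Example: a dim room (about 50 lux) produces a low brightness level.
print(brightness_from_ambient(50.0))
```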
The proximity sensor 316, also referred to as a distance sensor, is typically disposed on the front panel of the electronic device 300. The proximity sensor 316 is used to capture the distance between the user and the front of the electronic device 300. In one embodiment, when the proximity sensor 316 detects that the distance between the user and the front of the electronic device 300 gradually decreases, the processor 301 controls the touch display screen 305 to switch from the screen-on state to the screen-off state; when the proximity sensor 316 detects that the distance between the user and the front of the electronic device 300 gradually increases, the processor 301 controls the touch display screen 305 to switch from the screen-off state to the screen-on state.
Those skilled in the art will appreciate that the configuration shown in fig. 4 does not limit the electronic device 300, which may include more or fewer components than shown, combine certain components, or adopt a different arrangement of components.
In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium, such as the memory 304, including instructions executable by the processor 320 of the electronic device 300 to perform the above-described method of prompting a user, the method including: acquiring driving parameters for a target vehicle, the driving parameters including road parameters and vehicle parameters; determining a driving track of the target vehicle based on the driving parameters; and when it is detected that the driving track of the target vehicle does not meet a preset condition, generating an early warning prompt, wherein the early warning prompt is used to prompt the user that a potential safety hazard exists while driving the target vehicle. Optionally, the instructions may also be executed by the processor 320 of the electronic device 300 to perform other steps involved in the exemplary embodiments described above. For example, the non-transitory computer-readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
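To make the recited steps concrete, the following minimal Python sketch outlines one possible flow from driving parameters to an early warning prompt; all class and function names, thresholds, and data shapes are assumptions introduced for illustration and do not represent the actual implementation stored on the medium.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class DrivingParameters:
    road_image: object                # frame from the camera acquisition device
    speed_kmh: float                  # vehicle speed parameter
    position: Tuple[float, float]     # driving position of the target vehicle


def determine_driving_track(params: DrivingParameters) -> List[Tuple[float, float]]:
    """Placeholder: would combine lane line parameters (e.g. produced by a
    neural network applied to the road image) with the vehicle parameters to
    predict the target vehicle's driving track."""
    raise NotImplementedError


def track_meets_condition(track: List[Tuple[float, float]],
                          max_offset_m: float = 0.5) -> bool:
    """Assumed preset condition: every predicted point stays within
    max_offset_m of the lane center (x == 0 in this toy coordinate frame)."""
    return all(abs(x) <= max_offset_m for x, _ in track)


def prompt_user(params: DrivingParameters) -> Optional[str]:
    """Return an early warning message when the driving track does not meet
    the preset condition; otherwise return None."""
    track = determine_driving_track(params)
    if not track_meets_condition(track):
        return "Warning: potential safety hazard while driving the target vehicle."
    return None
```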
In an exemplary embodiment, there is also provided an application/computer program product including one or more instructions executable by the processor 320 of the electronic device 300 to perform the above-described method of prompting a user, the method including: acquiring driving parameters for a target vehicle, the driving parameters including road parameters and vehicle parameters; determining a driving track of the target vehicle based on the driving parameters; and when it is detected that the driving track of the target vehicle does not meet a preset condition, generating an early warning prompt, wherein the early warning prompt is used to prompt the user that a potential safety hazard exists while driving the target vehicle. Optionally, the instructions may also be executed by the processor 320 of the electronic device 300 to perform other steps involved in the exemplary embodiments described above. Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention that follow, in general, the principles of the application, including such departures from the present disclosure as come within known or customary practice in the art to which the invention pertains. It is intended that the specification and examples be considered exemplary only, with the true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (7)

1. A method of prompting a user, comprising:
acquiring driving parameters for a target vehicle, wherein the driving parameters comprise road parameters and vehicle parameters;
acquiring a road image by using a camera acquisition device;
acquiring a lane line parameter corresponding to the road image based on the road image and a preset neural network model;
acquiring a driving position of the target vehicle based on the vehicle parameters;
detecting whether the driving position matches the lane line parameter, and generating a matching result;
determining a driving track of the target vehicle based on the matching result;
when it is detected that the driving track of the target vehicle does not meet a preset condition, generating an early warning prompt, wherein the early warning prompt is used for prompting the user that a potential safety hazard exists in driving the target vehicle;
wherein the detecting whether the driving position matches the lane line parameter and generating a matching result comprises:
acquiring a vehicle speed parameter and a vehicle configuration parameter corresponding to the target vehicle based on the vehicle parameters, wherein the vehicle configuration parameter is used for reflecting the driving state of the target vehicle;
and generating the matching result based on the vehicle speed parameter, the vehicle configuration parameter, and whether the driving position matches the lane line parameter.
2. The method of claim 1, wherein the detecting whether the driving position matches the lane line parameter and generating a matching result comprises:
acquiring a lane line position based on the lane line parameter;
calculating a degree of departure of the target vehicle from the lane line based on the lane line position and the driving position;
and generating the matching result based on the degree of departure of the target vehicle from the lane line.
3. The method of claim 1, further comprising, after the acquiring a road image by using a camera acquisition device:
acquiring an environment image by using the camera acquisition device;
acquiring environment parameters corresponding to the environment image based on the environment image and a preset neural network model;
and determining the driving track of the target vehicle by using the lane line parameters, the environment parameters and the vehicle parameters.
4. The method of claim 3, wherein the environment parameters include weather parameters and road surface parameters, and
the determining the driving track of the target vehicle by using the lane line parameters, the environment parameters and the vehicle parameters comprises:
when it is determined, based on the road surface parameters, that a target object exists within a preset distance of the target vehicle, acquiring a passing time parameter, wherein the passing time parameter is used for reflecting the time required for the target vehicle to bypass the target object;
and determining the driving track of the target vehicle based on the weather parameters, the passing time parameter, the lane line parameters, and the vehicle parameters.
5. An apparatus for prompting a user, comprising:
an acquisition module configured to acquire driving parameters for a target vehicle, the driving parameters including road parameters and vehicle parameters;
a determining module configured to acquire a road image by using a camera acquisition device, acquire lane line parameters corresponding to the road image based on the road image and a preset neural network model, acquire a driving position of the target vehicle based on the vehicle parameters, detect whether the driving position matches the lane line parameters and generate a matching result, and determine a driving track of the target vehicle based on the matching result;
a generating module configured to generate an early warning prompt when it is detected that the driving track of the target vehicle does not meet a preset condition, wherein the early warning prompt is used for prompting the user that a potential safety hazard exists in driving the target vehicle;
wherein the determining module is further configured to acquire a vehicle speed parameter and a vehicle configuration parameter corresponding to the target vehicle based on the vehicle parameters, the vehicle configuration parameter being used for reflecting a driving state of the target vehicle, and to generate the matching result based on the vehicle speed parameter, the vehicle configuration parameter, and whether the driving position matches the lane line parameters.
6. An electronic device, comprising:
a memory for storing executable instructions; and
a processor configured to cooperate with the memory to execute the executable instructions to perform the operations of the method of prompting a user of any one of claims 1-4.
7. A computer-readable storage medium storing computer-readable instructions that, when executed, perform the operations of the method of prompting a user of any of claims 1-4.
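Purely as an illustrative sketch of the matching logic recited in claims 1 and 2 above, the Python fragment below computes a degree of departure from a lane line position and a driving position and folds in a vehicle speed parameter and an assumed vehicle configuration parameter; the coordinate convention, threshold values, and function name are assumptions and are not part of the claimed subject matter.

```python
from typing import Tuple


def lane_departure_match(lane_center_x: float,
                         driving_position: Tuple[float, float],
                         speed_kmh: float,
                         lane_keep_assist_on: bool,
                         max_offset_m: float = 0.5) -> bool:
    """Return True when the driving position matches the lane line parameter.

    The degree of departure is taken as the lateral distance between the
    driving position and the lane center; the vehicle speed parameter and an
    assumed vehicle configuration parameter (whether lane keep assist is on)
    also feed into the matching result.
    """
    departure = abs(driving_position[0] - lane_center_x)
    if speed_kmh < 10.0 or lane_keep_assist_on:
        # At very low speed, or with assistance engaged, tolerate more drift.
        max_offset_m *= 2.0
    return departure <= max_offset_m


# Example: a 0.8 m lateral drift at 80 km/h without assistance does not match.
print(lane_departure_match(0.0, (0.8, 12.0), 80.0, False))  # -> False
```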
CN201911214750.8A 2019-12-02 2019-12-02 Method, device, electronic equipment and medium for prompting user Active CN112991790B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911214750.8A CN112991790B (en) 2019-12-02 2019-12-02 Method, device, electronic equipment and medium for prompting user

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911214750.8A CN112991790B (en) 2019-12-02 2019-12-02 Method, device, electronic equipment and medium for prompting user

Publications (2)

Publication Number Publication Date
CN112991790A CN112991790A (en) 2021-06-18
CN112991790B true CN112991790B (en) 2022-06-07

Family

ID=76331130

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911214750.8A Active CN112991790B (en) 2019-12-02 2019-12-02 Method, device, electronic equipment and medium for prompting user

Country Status (1)

Country Link
CN (1) CN112991790B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000251171A (en) * 1999-02-26 2000-09-14 Toyota Motor Corp Lane deviation alarming device for vehicle
JP2002092794A (en) * 2000-09-19 2002-03-29 Toyota Motor Corp Warning device for vehicle
JP2005346269A (en) * 2004-06-01 2005-12-15 Toyota Motor Corp Lane deviation warning device
CN101032405A (en) * 2007-03-21 2007-09-12 汤一平 Safe driving auxiliary device based on omnidirectional computer vision
CN101984478A (en) * 2010-08-03 2011-03-09 浙江大学 Abnormal S-type driving warning method based on binocular vision lane marking detection
WO2012117505A1 (en) * 2011-02-28 2012-09-07 Toyota Motor Corp Travel assistance device and method
CN102963359A (en) * 2011-08-31 2013-03-13 罗伯特·博世有限公司 Method for monitoring lanes and lane monitoring system for a vehicle
KR20140006564A (en) * 2012-07-06 2014-01-16 현대자동차주식회사 Lane keeping assist method and apparatus for vehicles
CN103723147A (en) * 2013-12-24 2014-04-16 财团法人车辆研究测试中心 Method for warning vehicle deviation and estimating driver state
CN206757846U (en) * 2017-04-21 2017-12-15 深圳六合六医疗器械有限公司 A kind of fatigue driving four-dimension monitoring system
CN107958601A (en) * 2017-11-22 2018-04-24 华南理工大学 A kind of fatigue driving detecting system and method
CN109326085A (en) * 2018-11-08 2019-02-12 上海掌门科技有限公司 A kind of method and apparatus for the progress fatigue driving detection on vehicle arrangement
CN109866684A (en) * 2019-03-15 2019-06-11 江西江铃集团新能源汽车有限公司 Lane departure warning method, system, readable storage medium storing program for executing and computer equipment

Also Published As

Publication number Publication date
CN112991790A (en) 2021-06-18

Similar Documents

Publication Publication Date Title
CN110148294B (en) Road condition state determining method and device
CN108961681B (en) Fatigue driving reminding method and device and storage medium
WO2021082483A1 (en) Method and apparatus for controlling vehicle
CN110618800A (en) Interface display method, device, equipment and storage medium
CN110920631B (en) Method and device for controlling vehicle, electronic equipment and readable storage medium
CN111126276B (en) Lane line detection method, lane line detection device, computer equipment and storage medium
CN111010537B (en) Vehicle control method, device, terminal and storage medium
CN110775056B (en) Vehicle driving method, device, terminal and medium based on radar detection
CN110231049B (en) Navigation route display method, device, terminal and storage medium
CN109189068B (en) Parking control method and device and storage medium
CN111147738A (en) Police vehicle-mounted panoramic and coma system, device, electronic equipment and medium
CN110920614A (en) Lane change control method, apparatus, device and storage medium
CN112991790B (en) Method, device, electronic equipment and medium for prompting user
CN111583669B (en) Overspeed detection method, overspeed detection device, control equipment and storage medium
CN112863168A (en) Traffic grooming method and device, electronic equipment and medium
CN114506383B (en) Steering wheel alignment control method, device, terminal, storage medium and product
CN111717205B (en) Vehicle control method, device, electronic equipment and computer readable storage medium
CN110944294B (en) Movement track recording method, device, system, computer equipment and storage medium
CN111294513B (en) Photographing method and device, electronic equipment and storage medium
CN115959157A (en) Vehicle control method and apparatus
CN115576459A (en) Information display method and device, electronic equipment and computer readable storage medium
CN116170694A (en) Method, device and storage medium for displaying content
CN114537139A (en) Vehicle control method, device, terminal, storage medium and product
CN117227747A (en) Method, device, equipment and storage medium for detecting autopilot capability
CN116331196A (en) Automatic driving automobile data security interaction system, method, terminal and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant