CN111391861A - Vehicle driving assisting method and device - Google Patents


Publication number
CN111391861A
Authority
CN
China
Prior art keywords
vehicle
image
driving
state
images
Prior art date
Legal status
Pending
Application number
CN201811639135.7A
Other languages
Chinese (zh)
Inventor
贾可
Current Assignee
Beijing Qihoo Technology Co Ltd
Original Assignee
Beijing Qihoo Technology Co Ltd
Priority date
Application filed by Beijing Qihoo Technology Co Ltd filed Critical Beijing Qihoo Technology Co Ltd
Priority to CN201811639135.7A
Publication of CN111391861A

Classifications

    • B60W40/10: Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit, e.g. by using mathematical models, related to vehicle motion
    • B60R1/00: Optical viewing arrangements; real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R2300/105: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by the type of camera system used, using multiple cameras
    • B60R2300/303: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by the type of image processing, using joined images, e.g. multiple camera images
    • B60R2300/802: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by the intended use of the viewing arrangement, for monitoring and displaying vehicle exterior blind spot views

Abstract

The invention discloses a vehicle driving assistance method and device. In the method, images of the vehicle's surroundings are captured while it is driving; the driving state of the vehicle is judged from the images, and the safety of that state is evaluated. The captured images yield both the current driving state and the environmental information around the vehicle, so it can be confirmed whether the current driving state is safe, thereby assisting the user's driving and ensuring driving safety. Moreover, the cameras can capture images of areas that may be blind spots for the user, effectively widening the driving field of view and avoiding accidents.

Description

Vehicle driving assisting method and device
Technical Field
The invention relates to the technical field of vehicle driving, in particular to a vehicle driving assisting method and device.
Background
While driving, whether the vehicle is safe is generally judged by the driver from what is visible. However, a driver sitting inside the vehicle has the view blocked by the vehicle's structure and, even with the help of the rear-view mirrors, cannot see everything around the vehicle, so the driver cannot always conclude in time whether the vehicle is driving safely. In particular, when the vehicle turns, the driver's blind area is large and several road conditions must be watched at once, such as the traffic on the road being turned into and the traffic behind, making danger more likely. A corresponding solution is therefore needed.
Disclosure of Invention
In view of the above, the present invention has been made to provide a vehicle driving assistance method, apparatus, electronic device, and computer-readable storage medium that overcome or at least partially solve the above problems.
According to an aspect of the present invention, there is provided a vehicle driving assist method including:
shooting images around a vehicle during the running of the vehicle;
and judging the driving state of the vehicle according to the image, and evaluating the safety of the driving state.
Preferably, the capturing the image around the vehicle includes: shooting an image in front of the vehicle through a first camera;
the judging of the driving state of the vehicle according to the image includes: and identifying the image in front of the vehicle and judging whether the vehicle is in a steering state.
Preferably, the recognizing the image in front of the vehicle and determining whether the vehicle is in a turning state includes: identifying characteristic points from a plurality of continuous images in front of the vehicle, carrying out image matching according to the characteristic points, and judging whether the vehicle deviates according to a matching result.
Preferably, the identifying of feature points from the plurality of consecutive images in front of the vehicle includes: if an available reference object can be identified from the images, extracting the feature points corresponding to the available reference object.
Preferably, the determining the driving state of the vehicle from the image includes: ranging the available reference object;
and calculating the steering speed according to the acquisition speed of the image in front of the vehicle and the distance obtained by ranging.
Preferably, the capturing the image around the vehicle includes: shooting an image behind the vehicle through a second camera;
evaluating the safety of the driving state includes: and identifying the image behind the vehicle, and evaluating the safety of the driving state according to the identified road condition.
Preferably, the identified road conditions include one or more of the following:
no vehicle behind;
a vehicle driving straight behind;
a vehicle about to turn behind.
Preferably, the method further comprises: and generating corresponding prompt information according to the safety evaluation result.
According to another aspect of the present invention, there is provided a driving assistance apparatus for a vehicle, including:
an image capturing unit adapted to capture an image of the surroundings of the vehicle during the running of the vehicle;
and the evaluation unit is suitable for judging the running state of the vehicle according to the image and evaluating the safety of the running state.
Preferably, the image capturing unit is adapted to capture an image of the front of the vehicle through the first camera;
the evaluation unit is suitable for identifying the image in front of the vehicle and judging whether the vehicle is in a steering state.
Preferably, the evaluation unit is further adapted to identify feature points from a plurality of consecutive images in front of the vehicle, perform image matching according to the feature points, and determine whether the vehicle is offset according to a matching result.
Preferably, the evaluation unit is further adapted to extract feature points corresponding to the available reference objects if the available reference objects can be identified therefrom.
Preferably, the evaluation unit is further adapted to perform ranging on the available reference object;
and calculating the steering speed according to the acquisition speed of the image in front of the vehicle and the distance obtained by ranging.
Preferably, the image capturing unit is adapted to capture an image behind the vehicle through the second camera;
the evaluation unit is suitable for identifying the image behind the vehicle and evaluating the safety of the driving state according to the identified road condition.
Preferably, the identified road conditions include one or more of the following:
no vehicle behind;
a vehicle driving straight behind;
a vehicle about to turn behind.
Preferably, the apparatus further comprises: and the prompting unit is suitable for generating corresponding prompting information according to the safety evaluation result.
In accordance with still another aspect of the present invention, there is provided an electronic apparatus including: a processor; and a memory arranged to store computer executable instructions that, when executed, cause the processor to perform a method as any one of the above.
According to a further aspect of the invention, there is provided a computer readable storage medium, wherein the computer readable storage medium stores one or more programs which, when executed by a processor, implement a method as any one of the above.
As can be seen from the above, with the present invention images of the vehicle's surroundings are captured while it is driving; the driving state is judged from the images, and its safety is evaluated. The captured images yield both the current driving state and the environmental information around the vehicle, so it can be confirmed whether the current driving state is safe, thereby assisting the user's driving and ensuring driving safety. Moreover, the cameras can capture images of areas that may be blind spots for the user, effectively widening the driving field of view and avoiding accidents.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 shows a schematic flow diagram of a vehicle driving assistance method according to an embodiment of the invention;
fig. 2 is a schematic structural view showing a driving assistance apparatus for vehicle according to an embodiment of the invention;
FIG. 3 shows a schematic structural diagram of an electronic device according to one embodiment of the invention;
fig. 4 shows a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Fig. 1 shows a schematic flow diagram of a vehicle driving assistance method according to an embodiment of the invention. As shown in fig. 1, the method includes:
in step S110, an image of the surroundings of the vehicle is captured while the vehicle is traveling.
The vehicle is provided with cameras that capture images of its surroundings. For example, a camera at the front end captures images ahead of the vehicle, a camera at the side captures images beside it, and a camera at the rear end captures images behind it. The images are generally captured continuously over a period of time, yielding a sequence of consecutive images while the vehicle is driving.
And step S120, judging the running state of the vehicle according to the image, and evaluating the safety of the running state.
The current driving state of the vehicle can be judged from the images captured by the cameras. For example, if several consecutive images of the surroundings are identical, the vehicle is currently parked; if, in any two adjacent images, the image content in the left part of the earlier image matches the content in the right part of the later image, the vehicle is currently turning left. The captured images also reveal environmental information around the vehicle, such as whether a pedestrian or another vehicle is approaching. This information is extracted by processing the images with image recognition techniques. Based on the recognition results, the driving state is evaluated for safety in combination with the surrounding environment, confirming whether the current driving state is safe, thereby assisting the user and ensuring driving safety. The user here means the driver.
In one specific example, image recognition determines that the vehicle is currently turning left and that a pedestrian is a short distance to its left. The evaluation concludes that continuing the left turn at the current speed may be dangerous, so the user is prompted to slow down or stop and to resume the turn after the pedestrian has passed, avoiding the danger and ensuring driving safety.
With this technical solution, images of the vehicle's surroundings are captured while it is driving; the driving state is judged from the images, and its safety is evaluated. The captured images yield both the current driving state and the environmental information around the vehicle, so it can be confirmed whether the current driving state is safe, thereby assisting the user's driving and ensuring driving safety. Moreover, the cameras can capture images of areas that may be blind spots for the user, effectively widening the driving field of view and avoiding accidents.
In one embodiment of the present invention, as in the method shown in fig. 1, the capturing of the image of the surroundings of the vehicle in step S110 includes: an image in front of the vehicle is captured by the first camera. The determining of the driving state of the vehicle from the image in step S120 includes: and recognizing the image in front of the vehicle and judging whether the vehicle is in a steering state or not.
Generally, driving assistance is needed in special driving situations such as turning, making a U-turn, or reversing, where the user is likely to have blind areas. This embodiment provides a specific way to judge quickly and accurately whether the vehicle is in a turning state, so as to better assist the user's driving.
When a vehicle turns, its head deflects first; that is, if the vehicle turns, the image in front of it changes first. Whether the vehicle is turning can therefore be judged accurately and quickly by capturing and recognizing images in front of the vehicle. A first camera facing forward is mounted on the vehicle and captures a sequence of images in front of it while driving, for example 5 images within 1 s. These images are recognized to judge whether the vehicle is turning. For example, if, in any two adjacent images of those 5, the content in the left part of the earlier image matches the content in the right part of the later image, the vehicle is currently turning left.
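The adjacent-frame comparison described above can be sketched in a few lines. The sketch below is illustrative only: frames are reduced to 1-D rows of pixel values, and the function name and overlap width are assumptions, not details from the patent.

```python
def detect_turn(prev_row, next_row, overlap):
    """Judge the turning state from two consecutive frames (1-D pixel rows).

    If the left part of the previous frame reappears as the right part of
    the next frame, the scene has shifted right, i.e. the vehicle turned left.
    """
    if prev_row == next_row:
        return "straight"                              # no change between frames
    if prev_row[:overlap] == next_row[-overlap:]:
        return "left turn"                             # scene shifted right
    if prev_row[-overlap:] == next_row[:overlap]:
        return "right turn"                            # scene shifted left
    return "unknown"

# Toy frames: the scene shifts two pixels to the right between frames, so
# the previous frame's left part reappears on the right of the next one.
prev = [1, 2, 3, 4, 5, 6]
nxt = [9, 8, 1, 2, 3, 4]
print(detect_turn(prev, nxt, overlap=4))  # left turn
```

Real frames would be 2-D arrays compared with a tolerance rather than exact equality, but the structure of the decision is the same.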
It should be noted that capturing and image recognition generally run in parallel. For example, if 5 images are to be captured within the 1 s window but the first 3 already show that the vehicle is turning left, the current recognition or capturing task can end early.
In one embodiment of the present invention, the method, wherein recognizing the image in front of the vehicle and determining whether the vehicle is in a turning state, comprises: and identifying characteristic points from a plurality of continuous images in front of the vehicle, performing image matching according to the characteristic points, and judging whether the vehicle deviates according to a matching result.
The above embodiments explained that whether the vehicle is turning can be determined from the images in front of it. To improve recognition efficiency and judgment accuracy, this embodiment makes the determination from feature points in those images. Specifically, the same feature points are identified in several consecutive images in front of the vehicle and matched across the images to judge whether the vehicle has deviated. If the same feature point appears at different positions in different images and its position changes regularly across the consecutive images, for example moving gradually from left to right, the vehicle has deviated. A deviating vehicle is in a turning state; a vehicle that has not deviated is not. Matching by feature points makes the image recognition more targeted and the judgment of the driving state more accurate.
In one specific example, the same feature point, the road centerline, is identified in several consecutive images. When the vehicle drives straight normally, the centerline lies to its left. If, when the centerlines are matched across the consecutive images, the centerline stays at the same position in each image, the vehicle has not deviated and remains driving straight. If the centerline moves toward the right of the image, the vehicle has deviated to the left and is turning left; if it moves toward the left of the image, the vehicle has deviated to the right and is turning right.
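The regular movement of a feature point such as the road centerline can be checked with a simple monotonicity test. This is a minimal sketch under assumed conventions (x grows to the right; all names are illustrative):

```python
def offset_direction(xs):
    """Classify vehicle deviation from the x-coordinates of one feature
    point (e.g. the road centerline) in consecutive frames."""
    deltas = [b - a for a, b in zip(xs, xs[1:])]
    if all(d == 0 for d in deltas):
        return "no offset"        # point stays put: driving straight
    if all(d > 0 for d in deltas):
        return "offset left"      # point drifts right, vehicle turning left
    if all(d < 0 for d in deltas):
        return "offset right"     # point drifts left, vehicle turning right
    return "irregular"            # no consistent trend across the frames

print(offset_direction([100, 100, 100]))  # no offset
print(offset_direction([100, 120, 140]))  # offset left
print(offset_direction([140, 120, 100]))  # offset right
```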
In an embodiment of the present invention, the method, wherein the identifying the feature points from the consecutive images in front of the vehicle comprises: if the available reference object can be identified, the feature points corresponding to the available reference object are extracted.
This embodiment provides a specific way to identify the feature points. An available reference object is designated, and during recognition the corresponding feature points are extracted from it, which improves recognition efficiency and accuracy. In one specific example, a vehicle is used as the available reference object; during recognition, feature points such as the license plate, lamps, or exhaust pipe are extracted from it, completing the identification of the feature points.
In one embodiment of the present invention, the method described above, wherein determining the running state of the vehicle from the image includes: measuring the distance of the available reference object; and calculating the steering speed according to the acquisition speed of the image in front of the vehicle and the distance obtained by ranging.
The steering speed of the vehicle can also be calculated from the available reference object of the above embodiments. Taking a vehicle as the reference object, the offset of the reference vehicle between two images is computed while recognizing the images in front, giving the distance traveled during the turn; the steering speed is then calculated from that distance and the acquisition rate of the images.
In a specific example, while the host vehicle turns left, the reference vehicle ahead shifts to the right within the image, and the distance it shifts in the image is converted into a real distance according to a set scale, giving the distance traveled during the left turn. For example, in the first image the preceding vehicle is at the center of the image; in the second it is one third of the way in from the right edge. That is, the preceding vehicle has shifted right by one sixth of the image width, which at a scale of 1:6 converts to a real distance of 1 meter: the host vehicle has traveled 1 meter during the left turn. With an acquisition interval of 0.1 s between the two images, the speed formula gives a left-turn speed of 10 m/s, which is within the safe range.
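The arithmetic of that example can be written out directly. The scale parameter below is an assumed calibration (meters of real travel per full image width), not something the patent specifies:

```python
def steering_speed(shift_fraction, metres_per_image_width, interval_s):
    """Convert a reference object's in-image shift into a steering speed.

    shift_fraction: fraction of the image width the reference object moved
    metres_per_image_width: assumed scale calibration (a 1:6 scale -> 6 m)
    interval_s: acquisition interval between the two frames in seconds
    """
    distance_m = shift_fraction * metres_per_image_width  # real turn distance
    return distance_m / interval_s                        # speed = d / t

# The example above: a 1/6 image-width shift at scale 1:6 is 1 m of travel;
# with frames 0.1 s apart the left-turn speed is about 10 m/s.
print(steering_speed(1 / 6, 6, 0.1))
```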
In one embodiment of the present invention, as in the method shown in fig. 1, the capturing of the image of the surroundings of the vehicle in step S110 includes: and shooting an image behind the vehicle through the second camera. The evaluation of the safety of the running state in step S120 includes: and recognizing the image behind the vehicle, and evaluating the safety of the driving state according to the recognized road condition.
A rearward-facing second camera is provided to capture images behind the vehicle, obtain the road condition behind it, and judge whether the current driving state of the vehicle is safe according to that condition. The road condition may include which vehicles are traveling on the road behind, the driving state of each of them, and so on.
In a specific example, the second camera captures an image behind the vehicle, and image recognition shows that the vehicle on the road behind is turning right; the current vehicle's left-turning driving state is therefore confirmed to be safe.
In one embodiment of the present invention, in the method, the identified road condition includes one or more of the following: the rear part has no vehicle; a straight-driving vehicle is arranged behind the front wheel; the rear part is provided with a vehicle to be steered.
The road condition behind the vehicle, and the driving state of each vehicle on it, can be confirmed by recognizing the vehicles and their turn signals. For example, if image recognition finds vehicle feature points such as a windshield, rear-view mirror, wiper, or air intake, a vehicle traveling on the road behind is identified, and the state of its turn signals is then recognized: if no turn signal is lit, the vehicle is driving straight; if its left turn signal is on, it is about to turn left; if its right turn signal is on, it is about to turn right. If no vehicle feature points are found, it is confirmed that no vehicle is traveling on the road behind.
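The turn-signal decision tree above maps directly onto a small classifier. A minimal sketch; the labels and parameter names are illustrative, not taken from the patent:

```python
def classify_rear_road(vehicle_detected, turn_signal=None):
    """Map rear-image recognition results to the three road conditions:
    no vehicle behind, a vehicle driving straight, or a vehicle about to turn.

    turn_signal: None if no signal is lit, else "left" or "right".
    """
    if not vehicle_detected:
        return "no vehicle behind"  # no vehicle feature points were found
    if turn_signal in ("left", "right"):
        return "vehicle behind about to turn " + turn_signal
    return "vehicle behind driving straight"  # no turn signal lit

print(classify_rear_road(False))         # no vehicle behind
print(classify_rear_road(True, "left"))  # vehicle behind about to turn left
print(classify_rear_road(True))          # vehicle behind driving straight
```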
In one embodiment of the present invention, as in the method shown in fig. 1, the method further comprises: and generating corresponding prompt information according to the safety evaluation result.
From the safety evaluation result it can be confirmed whether the current driving state of the vehicle is safe, and corresponding prompt information is generated to prompt the user. In a specific example, if the safety evaluation of a left turn is that it is dangerous, the steering wheel vibrates or a danger tone is played to remind the user to pay attention, adjust the driving state in time, and abandon the left turn.
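The prompting step can likewise be sketched as a mapping from the evaluation result to driver alerts. The alert names below are illustrative placeholders for whatever actuators (steering-wheel vibrator, speaker) the vehicle provides:

```python
def prompts_for(evaluation):
    """Return the driver prompts for a safety evaluation result."""
    if evaluation == "dangerous":
        # Remind the user to adjust the driving state in time.
        return ["vibrate steering wheel", "play danger tone"]
    return []  # a safe driving state needs no prompt

print(prompts_for("dangerous"))  # ['vibrate steering wheel', 'play danger tone']
print(prompts_for("safe"))       # []
```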
Fig. 2 shows a schematic configuration diagram of a driving assistance apparatus for vehicle according to an embodiment of the present invention. As shown in fig. 2, the apparatus 200 includes:
the image capturing unit 210 is adapted to capture an image of the surroundings of the vehicle while the vehicle is running.
The vehicle is provided with cameras that capture images of its surroundings. For example, a camera at the front end captures images ahead of the vehicle, a camera at the side captures images beside it, and a camera at the rear end captures images behind it. The images are generally captured continuously over a period of time, yielding a sequence of consecutive images while the vehicle is driving.
An evaluation unit 220 adapted to determine the driving state of the vehicle from the image and to evaluate the safety of the driving state.
The current driving state of the vehicle can be judged from the images captured by the cameras. For example, if several consecutive images of the surroundings are identical, the vehicle is currently parked; if, in any two adjacent images, the image content in the left part of the earlier image matches the content in the right part of the later image, the vehicle is currently turning left. The captured images also reveal environmental information around the vehicle, such as whether a pedestrian or another vehicle is approaching. This information is extracted by processing the images with image recognition techniques. Based on the recognition results, the driving state is evaluated for safety in combination with the surrounding environment, confirming whether the current driving state is safe, thereby assisting the user and ensuring driving safety. The user here means the driver.
In one specific example, image recognition determines that the vehicle is currently turning left and that a pedestrian is a short distance to its left. The evaluation concludes that continuing the left turn at the current speed may be dangerous, so the user is prompted to slow down or stop and to resume the turn after the pedestrian has passed, avoiding the danger and ensuring driving safety.
With this technical solution, images of the vehicle's surroundings are captured while it is driving; the driving state is judged from the images, and its safety is evaluated. The captured images yield both the current driving state and the environmental information around the vehicle, so it can be confirmed whether the current driving state is safe, thereby assisting the user's driving and ensuring driving safety. Moreover, the cameras can capture images of areas that may be blind spots for the user, effectively widening the driving field of view and avoiding accidents.
In one embodiment of the present invention, as in the apparatus 200 shown in fig. 2, the image capturing unit 210 is adapted to capture an image in front of the vehicle through the first camera. The evaluation unit 220 is adapted to recognize an image in front of the vehicle and determine whether the vehicle is in a turning state.
Generally, driving assistance needs to be provided to the user in special driving situations, such as turning, making a U-turn, or reversing. In these situations the user is likely to have a blind spot, so driving assistance is required. This embodiment provides a specific way to quickly and accurately judge whether the vehicle is in a turning state, so as to better assist the user's driving.
When a vehicle turns, the head of the vehicle deflects first; that is, if the vehicle turns, the image in front of the vehicle changes first. Therefore, whether the vehicle is in a turning state can be judged accurately and quickly by capturing and recognizing images in front of the vehicle. A first camera facing forward is arranged on the vehicle and captures a sequence of images in front of the vehicle while the vehicle is driving, for example 5 images within 1 s. These images are recognized to judge whether the vehicle is in a turning state. For example, if, in any two adjacent images of the 5, the image information in the left part of the earlier image matches the image information in the right part of the later image, the vehicle is currently turning left.
It should be noted that capturing and recognition are generally performed simultaneously. For example, if 5 images are captured within the above-mentioned 1 s and the first 3 already show that the vehicle is turning left, the current recognition or capturing task can be ended early.
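The adjacent-frame check described above can be sketched as follows. Images are modeled as 2D lists of grayscale values; during a left turn the scene pans so that content from the left part of an earlier frame reappears in the right part of the next frame. Exact equality is used only for simplicity (a real implementation would match with a tolerance), and the region fraction is a hypothetical parameter:

```python
# Sketch, under the simplifying assumptions named above: compare the left
# region of each frame with the right region of the following frame.

def left_region(img, frac=0.5):
    w = len(img[0])
    return [row[: int(w * frac)] for row in img]

def right_region(img, frac=0.5):
    w = len(img[0])
    return [row[int(w * (1 - frac)):] for row in img]

def is_turning_left(frames, frac=0.5):
    """True if, for every adjacent pair of frames, the left part of the
    earlier frame equals the right part of the later frame."""
    return all(
        left_region(a, frac) == right_region(b, frac)
        for a, b in zip(frames, frames[1:])
    )
```

For instance, a frame whose left half reappears as the right half of the next frame yields a left-turn verdict, while a repeated identical frame does not.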
In an embodiment of the present invention, in the apparatus 200, the evaluation unit 220 is further adapted to identify feature points from a plurality of consecutive images in front of the vehicle, perform image matching according to the feature points, and judge whether the vehicle has deviated according to the matching result.
The above embodiment has explained that whether the vehicle is in a turning state can be determined from the images in front of the vehicle. To improve recognition efficiency and judgment accuracy, this embodiment makes the judgment specifically through feature points in those images. The same feature points are identified in several consecutive images in front of the vehicle, the images are matched on those feature points, and whether the vehicle has deviated is judged from the matching result. If the same feature point appears at different positions in different images and its position changes regularly across the consecutive images, for example gradually moving from left to right, the vehicle has deviated. A deviation indicates that the vehicle is in a turning state; no deviation indicates that it is not. Matching on feature points makes the image recognition more targeted and the judgment of the driving state more accurate.
In one specific example, the road centerline is identified as the same feature point in several consecutive images. When the vehicle travels straight normally, the road centerline lies to its left. If, when the centerlines are matched across the consecutive images, the centerline stays at the same image position, the vehicle has not deviated and remains in a straight-driving state. If the centerline's position moves toward the right of the image, the vehicle has deviated leftward and is in a left-turning state; if it moves toward the left of the image, the vehicle has deviated rightward and is in a right-turning state.
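The feature-point drift judgment can be sketched as follows, assuming an upstream matcher has already produced the horizontal pixel position of the same feature (e.g. the road centerline) in each consecutive frame; the function name, return labels, and tolerance parameter are illustrative, not part of the patent:

```python
# Classify a sequence of horizontal positions of one matched feature.
# A regular move to the right suggests a left turn, to the left a right
# turn, and no movement a straight course.

def classify_drift(xs, tol=0):
    """xs: horizontal positions of one matched feature across frames."""
    deltas = [b - a for a, b in zip(xs, xs[1:])]
    if all(abs(d) <= tol for d in deltas):
        return "straight"          # feature stays put: no deviation
    if all(d > tol for d in deltas):
        return "turning_left"      # feature moves right => vehicle deviates left
    if all(d < -tol for d in deltas):
        return "turning_right"     # feature moves left => vehicle deviates right
    return "inconclusive"          # irregular movement: no regular change
```

This mirrors the centerline example: positions drifting toward the right of the image yield a left-turn verdict.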
In an embodiment of the present invention, in the apparatus 200, the evaluation unit 220 is further adapted to extract, if an available reference object can be identified from the images, the feature points corresponding to that reference object.
This embodiment provides a specific way to identify feature points. An available reference object is set, and during recognition the corresponding feature points are extracted from it, which improves recognition efficiency and accuracy. In one specific example, a vehicle is set as the available reference object; during recognition, corresponding feature points such as the license plate, lamps, or exhaust pipe are extracted from it to complete the feature-point identification.
In an embodiment of the present invention, in the apparatus 200, the evaluation unit 220 is further adapted to perform ranging on the available reference object, and to calculate the steering speed according to the acquisition speed of the images in front of the vehicle and the distance obtained by the ranging.
The steering speed of the vehicle can also be calculated from the available reference object of the above embodiments. Taking a vehicle as the reference object, during recognition of the images in front of the vehicle, the offset distance of the reference vehicle between two images is calculated to obtain the distance traveled during the turn, and the steering speed is then calculated from that distance and the acquisition rate of the images.
In a specific example, while the host vehicle turns left, the reference vehicle ahead shifts to the right in the image, and the distance it shifts in the image is converted into a real distance according to a set scale, which is the distance driven after the vehicle turned left. For example, in the first image the vehicle ahead is at the center of the image, and in the second image it is one third of the way in from the right edge; that is, it has shifted rightward by one sixth of the image width, which converts to a real distance of 1 meter at a scale of 1:6. In other words, the host vehicle has turned left and traveled a distance of 1 meter. With an acquisition interval of 0.1 s between the two images, the speed formula gives a leftward steering speed of 10 m/s, which is within the safe range.
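The worked numbers above can be reproduced with a small helper; the scale parameter (metres of lateral travel per full image width) stands in for the set, hypothetical calibration constant of the example:

```python
# Convert a pixel offset of the reference object between two frames into a
# lateral steering speed, using an assumed image-width-to-metres scale.

def steering_speed(offset_px, image_width_px, scale_m_per_width, frame_interval_s):
    """Lateral steering speed in m/s.

    offset_px:          pixel offset of the reference object between two frames
    image_width_px:     image width in pixels
    scale_m_per_width:  real lateral distance corresponding to one image width
    frame_interval_s:   time between the two frames
    """
    real_distance_m = (offset_px / image_width_px) * scale_m_per_width
    return real_distance_m / frame_interval_s
```

With the figures from the example (a one-sixth image-width shift, a scale mapping one image width to 6 m, and a 0.1 s interval) this gives 10 m/s.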
In one embodiment of the present invention, as in the apparatus 200 shown in fig. 2, the image capturing unit 210 is adapted to capture an image behind the vehicle through the second camera. The evaluation unit 220 is adapted to recognize the image behind the vehicle and evaluate the safety of the driving state according to the recognized road condition.
A second camera facing rearward is arranged to capture images behind the vehicle, acquire the road condition behind it, and judge from that road condition whether the current driving state of the vehicle is safe. The road condition may include which vehicles are traveling on the road behind and the driving state of each of them.
In a specific example, an image behind the vehicle is captured by the second camera and the road condition is obtained by image recognition: a vehicle on the road behind is driving in a right-turn direction, so the current left-turning driving state of the host vehicle is confirmed to be safe.
In an embodiment of the present invention, in the apparatus 200, the identified road condition includes one or more of the following: no vehicle behind; a vehicle going straight behind; a vehicle about to turn behind.
The road condition behind the vehicle and the driving state of the vehicles on it can be confirmed by recognizing those vehicles and their turn signals. For example, if image recognition finds vehicle feature points such as a windshield, rear-view mirror, wiper, or air intake, a vehicle traveling on the road behind is recognized, and the state of its turn signals is then identified. If neither turn signal is on, the vehicle is going straight; if its left turn signal is on, the vehicle is about to turn left; if its right turn signal is on, it is about to turn right. If no vehicle feature points exist, it is confirmed that no vehicle is traveling on the road behind.
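The reduction from recognized turn-signal states to the three road conditions listed above can be sketched as follows; the detection of rear vehicles and their signals is assumed to come from an upstream recognizer, and the state strings are illustrative assumptions, not a prescribed interface:

```python
# Map the turn-signal states of detected rear vehicles to one of the three
# road conditions of this embodiment.

def rear_road_condition(rear_turn_signals):
    """rear_turn_signals: one entry per detected rear vehicle,
    each 'off', 'left' or 'right'."""
    if not rear_turn_signals:
        return "no_vehicle_behind"
    if any(sig in ("left", "right") for sig in rear_turn_signals):
        return "vehicle_behind_about_to_turn"
    return "vehicle_behind_going_straight"
```

An empty detection list corresponds to no feature points being found, i.e. no vehicle on the road behind.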
In one embodiment of the present invention, as shown in the apparatus 200 of fig. 2, the apparatus 200 further comprises a prompting unit adapted to generate corresponding prompt information according to the safety evaluation result.
From the safety evaluation result, whether the current driving state of the vehicle is safe can be confirmed, and corresponding prompt information is generated to remind the user. In a specific example, if the safety evaluation of a left turn is dangerous, the steering wheel is vibrated or a warning tone is played to alert the user to adjust the driving state in time and abandon the left turn.
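A minimal sketch of the prompting unit's mapping from evaluation result to prompt information; the action names (steering-wheel vibration, warning tone) are illustrative assumptions taken from the example, not a prescribed interface:

```python
# Map a safety evaluation result to prompt actions and a message.

def make_prompt(evaluation_result):
    """evaluation_result: 'safe' or 'dangerous'."""
    if evaluation_result == "dangerous":
        return {
            "actions": ["vibrate_steering_wheel", "play_warning_tone"],
            "message": "Danger: adjust the driving state and abandon the turn",
        }
    return {"actions": [], "message": "Current driving state is safe"}
```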
In summary, according to the technical scheme of the invention, images of the vehicle's surroundings are captured while the vehicle is driving, the driving state of the vehicle is judged from the images, and the safety of that state is evaluated. From the captured images, both the current driving state and the surrounding environmental information can be obtained, and whether the current driving state is safe can be confirmed, thereby assisting the user's driving and ensuring driving safety. Moreover, the camera can capture images of the user's possible blind spots, widening the effective field of view and helping to avoid dangerous accidents.
It should be noted that:
the algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose devices may be used with the teachings herein. The required structure for constructing such a device will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. It will be appreciated by those skilled in the art that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of a vehicle driving assistance apparatus, electronic device, and computer-readable storage medium in accordance with embodiments of the present invention. The present invention may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
For example, fig. 3 shows a schematic structural diagram of an electronic device according to an embodiment of the invention. The electronic device comprises a processor 310 and a memory 320 arranged to store computer executable instructions (computer readable program code). The memory 320 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read only memory), an EPROM, a hard disk, or a ROM. The memory 320 has a storage space 330 storing computer readable program code 331 for performing any of the method steps described above. For example, the storage space 330 for storing the computer readable program code may comprise respective computer readable program codes 331 for respectively implementing various steps in the above method. The computer readable program code 331 may be read from or written to one or more computer program products. These computer program products comprise a program code carrier such as a hard disk, a Compact Disc (CD), a memory card or a floppy disk. Such a computer program product is typically a computer readable storage medium such as described in fig. 4. Fig. 4 shows a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present invention. The computer readable storage medium 400 has stored thereon a computer readable program code 331 for performing the steps of the method according to the invention, readable by a processor 310 of the electronic device 300, which computer readable program code 331, when executed by the electronic device 300, causes the electronic device 300 to perform the steps of the method described above, in particular the computer readable program code 331 stored on the computer readable storage medium may perform the method shown in any of the embodiments described above. The computer readable program code 331 may be compressed in a suitable form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second and third, etcetera do not indicate any ordering. These words may be interpreted as names.
The invention provides A1, a vehicle driving assistance method, comprising:
shooting images around a vehicle during the running of the vehicle;
and judging the driving state of the vehicle according to the images and evaluating the safety of the driving state.
A2, the method of A1, wherein the capturing the image of the surroundings of the vehicle includes: shooting an image in front of the vehicle through a first camera;
the judging of the driving state of the vehicle according to the image includes: identifying the image in front of the vehicle and judging whether the vehicle is in a turning state.
A3, the method of A2, wherein the recognizing the image of the front of the vehicle and determining whether the vehicle is in a turning state comprises:
identifying feature points from a plurality of consecutive images in front of the vehicle, performing image matching according to the feature points, and judging whether the vehicle has deviated according to the matching result.
A4, the method as in A3, wherein the identifying feature points from the consecutive images in front of the vehicle comprises:
if the available reference object can be identified, the feature points corresponding to the available reference object are extracted.
A5, the method of A4, wherein the determining the driving state of the vehicle according to the image comprises:
ranging the available reference object;
and calculating the steering speed according to the acquisition speed of the images in front of the vehicle and the distance obtained by the ranging.
A6, the method of A1, wherein the capturing the image of the surroundings of the vehicle includes: shooting an image behind the vehicle through a second camera;
evaluating the safety of the driving state includes: identifying the image behind the vehicle, and evaluating the safety of the driving state according to the identified road condition.
A7, the method of A6, wherein the identified road conditions include one or more of:
no vehicle behind;
a vehicle going straight behind;
a vehicle about to turn behind.
A8, the method of A1, wherein the method further comprises:
generating corresponding prompt information according to the safety evaluation result.
The present invention also provides B9, a vehicle driving assist device, comprising:
an image capturing unit adapted to capture an image of the surroundings of the vehicle during the running of the vehicle;
and an evaluation unit adapted to judge the driving state of the vehicle according to the image and evaluate the safety of the driving state.
B10, the device as in B9, wherein the image capturing unit is adapted to capture an image in front of the vehicle through a first camera;
the evaluation unit is adapted to identify the image in front of the vehicle and judge whether the vehicle is in a turning state.
B11, the device as in B10, wherein the evaluation unit is further adapted to identify feature points from a plurality of consecutive images in front of the vehicle, perform image matching according to the feature points, and judge whether the vehicle has deviated according to the matching result.
B12, the device according to B11, wherein the evaluation unit is further adapted to extract feature points corresponding to the available reference if an available reference can be identified therefrom.
B13, the device according to B12, wherein the evaluation unit is further adapted to perform ranging on the available reference object;
and to calculate the steering speed according to the acquisition speed of the images in front of the vehicle and the distance obtained by the ranging.
B14, the device as in B9, wherein the image capturing unit is adapted to capture an image behind the vehicle through a second camera;
the evaluation unit is adapted to identify the image behind the vehicle and evaluate the safety of the driving state according to the identified road condition.
B15, the apparatus of B14, wherein the identified road conditions include one or more of:
no vehicle behind;
a vehicle going straight behind;
a vehicle about to turn behind.
B16, the apparatus of B9, wherein the apparatus further comprises:
a prompting unit adapted to generate corresponding prompt information according to the safety evaluation result.
The invention also provides C17, an electronic device, wherein the electronic device comprises: a processor; and a memory arranged to store computer executable instructions that, when executed, cause the processor to perform the method of any one of A1-A8.
The invention also provides D18, a computer readable storage medium, wherein the computer readable storage medium stores one or more programs which, when executed by a processor, implement the method of any of A1-A8.

Claims (10)

1. A vehicle driving assist method comprising:
shooting images around a vehicle during the running of the vehicle;
and judging the driving state of the vehicle according to the images and evaluating the safety of the driving state.
2. The method of claim 1, wherein said capturing an image of the surroundings of the vehicle comprises: shooting an image in front of the vehicle through a first camera;
the judging of the driving state of the vehicle according to the image includes: identifying the image in front of the vehicle and judging whether the vehicle is in a turning state.
3. The method of claim 2, wherein the identifying the image in front of the vehicle and determining whether the vehicle is in a turning state comprises:
identifying feature points from a plurality of consecutive images in front of the vehicle, performing image matching according to the feature points, and judging whether the vehicle has deviated according to the matching result.
4. The method of claim 3, wherein the identifying feature points from the consecutive images of the front of the vehicle comprises:
if the available reference object can be identified, the feature points corresponding to the available reference object are extracted.
5. The method of claim 4, wherein said determining a driving state of the vehicle from the image comprises:
ranging the available reference object;
and calculating the steering speed according to the acquisition speed of the images in front of the vehicle and the distance obtained by the ranging.
6. The method of claim 1, wherein said capturing an image of the surroundings of the vehicle comprises: shooting an image behind the vehicle through a second camera;
evaluating the safety of the driving state includes: identifying the image behind the vehicle, and evaluating the safety of the driving state according to the identified road condition.
7. A vehicle driving assist device comprising:
an image capturing unit adapted to capture an image of the surroundings of the vehicle during the running of the vehicle;
and an evaluation unit adapted to judge the driving state of the vehicle according to the image and evaluate the safety of the driving state.
8. The apparatus of claim 7, wherein the image capturing unit is adapted to capture an image of the front of the vehicle through the first camera;
the evaluation unit is adapted to identify the image in front of the vehicle and judge whether the vehicle is in a turning state.
9. An electronic device, wherein the electronic device comprises: a processor; and a memory arranged to store computer-executable instructions that, when executed, cause the processor to perform the method of any one of claims 1-6.
10. A computer readable storage medium, wherein the computer readable storage medium stores one or more programs which, when executed by a processor, implement the method of any of claims 1-6.
CN201811639135.7A 2018-12-29 2018-12-29 Vehicle driving assisting method and device Pending CN111391861A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811639135.7A CN111391861A (en) 2018-12-29 2018-12-29 Vehicle driving assisting method and device


Publications (1)

Publication Number Publication Date
CN111391861A true CN111391861A (en) 2020-07-10

Family

ID=71418720

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811639135.7A Pending CN111391861A (en) 2018-12-29 2018-12-29 Vehicle driving assisting method and device

Country Status (1)

Country Link
CN (1) CN111391861A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11891085B2 (en) 2020-12-15 2024-02-06 Apollo Intelligent Connectivity (Beijing) Technology Co., Ltd. Method and apparatus of controlling driverless vehicle and electronic device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101122799A (en) * 2006-08-10 2008-02-13 比亚迪股份有限公司 Automobile tail-catching prealarming device and method
DE102011106052A1 (en) * 2010-07-06 2012-01-12 Gm Global Technology Operations Llc (N.D.Ges.D. Staates Delaware) Shadow removal in an image captured by a vehicle based camera using a nonlinear illumination invariant core
CN102778223A (en) * 2012-06-07 2012-11-14 沈阳理工大学 License number cooperation target and monocular camera based automobile anti-collision early warning method
CN102963299A (en) * 2012-10-31 2013-03-13 樊红娟 High-reliability and low-false alarm rate highway automobile anti-collision system and method
CN103395391A (en) * 2013-07-03 2013-11-20 北京航空航天大学 Lane changing warning device and lane changing state identifying method for vehicle
CN106846369A (en) * 2016-12-14 2017-06-13 广州市联奥信息科技有限公司 Vehicular turn condition discrimination method and device based on binocular vision
DE102017205630A1 (en) * 2017-04-03 2018-10-04 Conti Temic Microelectronic Gmbh Camera apparatus and method for detecting a surrounding area of a vehicle



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200710