CN116176625A - Vehicle running control method, system, device and medium based on machine vision - Google Patents

Vehicle running control method, system, device and medium based on machine vision

Info

Publication number
CN116176625A
Authority
CN
China
Prior art keywords
vehicle
running
lamp
information
tail lamp
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310198258.6A
Other languages
Chinese (zh)
Inventor
郑少强
马虎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
GAC Honda Automobile Co Ltd
Guangqi Honda Automobile Research and Development Co Ltd
Original Assignee
GAC Honda Automobile Co Ltd
Guangqi Honda Automobile Research and Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by GAC Honda Automobile Co Ltd, Guangqi Honda Automobile Research and Development Co Ltd filed Critical GAC Honda Automobile Co Ltd
Priority to CN202310198258.6A
Publication of CN116176625A
Legal status: Pending

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W60/00 Drive control systems specially adapted for autonomous road vehicles
    • B60W60/001 Planning or execution of driving tasks
    • B60W60/0015 Planning or execution of driving tasks specially adapted for safety
    • B60W60/0017 Planning or execution of driving tasks specially adapted for safety of other traffic participants
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00 Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W30/08 Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
    • B60W30/095 Predicting travel path or likelihood of collision
    • B60W30/0956 Predicting travel path or likelihood of collision the prediction being responsive to traffic or environmental parameters
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/02 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to ambient conditions
    • B60W40/04 Traffic conditions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2420/00 Indexing codes relating to the type of sensors based on the principle of their operation
    • B60W2420/40 Photo, light or radio wave sensitive means, e.g. infrared sensors
    • B60W2420/403 Image sensing, e.g. optical camera
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2554/00 Input parameters relating to objects
    • B60W2554/40 Dynamic objects, e.g. animals, windblown objects
    • B60W2554/404 Characteristics
    • B60W2554/4041 Position
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2554/00 Input parameters relating to objects
    • B60W2554/40 Dynamic objects, e.g. animals, windblown objects
    • B60W2554/404 Characteristics
    • B60W2554/4042 Longitudinal speed
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08 Detecting or categorising vehicles
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Mechanical Engineering (AREA)
  • Transportation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Automation & Control Theory (AREA)
  • Medical Informatics (AREA)
  • Mathematical Physics (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Lighting Device Outwards From Vehicle And Optical Signal (AREA)

Abstract

The invention discloses a vehicle running control method, system, device and medium based on machine vision. The method comprises the following steps: acquiring road image information of the area in front of a target vehicle, performing edge detection on the road image information to obtain a plurality of continuous contours, and extracting a plurality of images to be identified from the road image information according to the continuous contours; inputting the images to be identified into a pre-trained vehicle tail lamp identification model to obtain a tail lamp state identification result; determining the running behavior of the vehicle to be monitored in the area in front of the target vehicle according to the tail lamp state identification result; and determining first position information and first speed information of the vehicle to be monitored, and performing running control on the target vehicle according to the first position information, the first speed information and the vehicle running behavior. By capturing changes in the tail lamps of the vehicle ahead in real time while driving and adjusting the running strategy accordingly, the invention improves the accuracy of vehicle control and the safety of vehicle running, and can be widely applied in the technical field of vehicle control.

Description

Vehicle running control method, system, device and medium based on machine vision
Technical Field
The invention relates to the technical field of vehicle control, in particular to a vehicle running control method, system, device and medium based on machine vision.
Background
With the development of intelligent connected vehicles, vehicle monitoring and control technologies are becoming increasingly intelligent. Existing autonomous driving and advanced driver assistance systems often determine the position change of the vehicle ahead through a single detection means and fail to capture key information such as the state of its tail lamps, which limits the accuracy of vehicle control and the safety of vehicle running. There is therefore a need for a method that captures changes in the tail lamps of the vehicle ahead in real time during driving and adjusts the vehicle's running strategy accordingly, so as to improve the accuracy of vehicle control and the safety of vehicle running.
Disclosure of Invention
The present invention aims to solve at least one of the technical problems existing in the prior art to a certain extent.
Therefore, an object of an embodiment of the present invention is to provide a machine vision-based vehicle driving control method, which improves accuracy of vehicle control and safety of vehicle driving.
It is another object of an embodiment of the present invention to provide a machine vision-based vehicle travel control system.
In order to achieve the above technical purposes, the technical solution adopted by the embodiments of the invention comprises the following aspects:
in a first aspect, an embodiment of the present invention provides a machine vision-based vehicle driving control method, including the steps of:
acquiring road image information of the area in front of a target vehicle, performing edge detection on the road image information to obtain a plurality of continuous contours, and extracting a plurality of images to be identified from the road image information according to the continuous contours;
inputting the image to be identified into a pre-trained vehicle tail lamp identification model to obtain a tail lamp state identification result;
determining the vehicle running behavior of the vehicle to be monitored in the front area of the target vehicle according to the tail lamp state identification result;
and determining first position information and first speed information of the vehicle to be monitored, and performing running control on the target vehicle according to the first position information, the first speed information and the vehicle running behavior.
Further, in an embodiment of the present invention, the step of obtaining road image information of a region in front of the target vehicle, performing edge detection on the road image information to obtain a plurality of continuous contours, and extracting a plurality of images to be identified from the road image information according to the continuous contours specifically includes:
acquiring the road image information of a region in front of the target vehicle by an image pickup device mounted in advance on the target vehicle;
performing edge detection on the road image information through a Canny operator to obtain first image information, and determining continuous contours in the first image information;
and carrying out image segmentation on the road image information according to the continuous contour to obtain the image to be identified.
Further, in one embodiment of the present invention, the machine vision-based vehicle driving control method further includes a step of pre-training the vehicle tail light recognition model, which specifically includes:
acquiring a plurality of preset vehicle tail lamp sample images, and determining a plurality of tail lamp state labels corresponding to each vehicle tail lamp sample image;
constructing a training data set according to the vehicle tail lamp sample image and the tail lamp state label;
inputting the training data set into a convolutional neural network constructed in advance for training to obtain a trained vehicle tail lamp recognition model;
the tail lamp state labels comprise: left turn signal on, left turn signal off, right turn signal on, right turn signal off, reverse lamp on, reverse lamp off, brake lamp on, and brake lamp off.
Further, in one embodiment of the present invention, the step of inputting the training data set into a pre-constructed convolutional neural network for training to obtain a trained vehicle tail lamp recognition model specifically includes:
inputting the training data set into the convolutional neural network to obtain a tail lamp state prediction result;
determining a loss value of the convolutional neural network according to the tail lamp state prediction result and the tail lamp state label;
updating model parameters of the convolutional neural network through a back propagation algorithm according to the loss value, and returning to the step of inputting the training data set into the convolutional neural network;
and stopping training when the loss value reaches a preset first threshold value or the iteration number reaches a preset second threshold value, and obtaining a trained vehicle tail lamp recognition model.
Further, in one embodiment of the present invention, the step of determining the vehicle running behavior of the vehicle to be monitored in the area in front of the target vehicle according to the tail light state recognition result specifically includes:
when the tail lamp state identification result indicates that the left turn signal is on while the right turn signal, reverse lamp and brake lamp are all off, determining that the vehicle running behavior is a left turn;
when the tail lamp state identification result indicates that the right turn signal is on while the left turn signal, reverse lamp and brake lamp are all off, determining that the vehicle running behavior is a right turn;
when the tail lamp state identification result indicates that the reverse lamp is on while the left turn signal, right turn signal and brake lamp are all off, determining that the vehicle running behavior is reversing;
and when the tail lamp state identification result indicates that the brake lamp is on, determining that the vehicle running behavior is braking.
Further, in one embodiment of the present invention, the step of determining the first position information and the first speed information of the vehicle to be monitored specifically includes:
acquiring azimuth information of the vehicle to be monitored and distance information between the vehicle to be monitored and the target vehicle through a radar detection device pre-installed on the target vehicle;
and acquiring real-time position information of the target vehicle, and determining the first position information and the first speed information according to the real-time position information, the azimuth information and the distance information.
Further, in one embodiment of the present invention, the step of performing travel control on the target vehicle according to the first position information, the first speed information, and the vehicle travel behavior specifically includes:
predicting trajectory information of the vehicle to be monitored according to the first position information, the first speed information and the vehicle running behavior;
and acquiring running state information of the target vehicle, determining a running strategy for the target vehicle according to the running state information and the trajectory information, and performing running control on the target vehicle according to the running strategy.
In a second aspect, an embodiment of the present invention provides a machine vision-based vehicle running control system, including:
the image extraction module is used for acquiring road image information of the area in front of a target vehicle, performing edge detection on the road image information to obtain a plurality of continuous contours, and extracting a plurality of images to be identified from the road image information according to the continuous contours;
the tail lamp state recognition module is used for inputting the images to be recognized into a pre-trained vehicle tail lamp recognition model to obtain a tail lamp state recognition result;
the driving behavior determining module is used for determining the vehicle running behavior of the vehicle to be monitored in the area in front of the target vehicle according to the tail lamp state identification result;
and the running control module is used for determining the first position information and the first speed information of the vehicle to be monitored and performing running control on the target vehicle according to the first position information, the first speed information and the vehicle running behavior.
In a third aspect, an embodiment of the present invention provides a machine vision-based vehicle travel control apparatus, including:
at least one processor;
at least one memory for storing at least one program;
the at least one program, when executed by the at least one processor, causes the at least one processor to implement a machine vision-based vehicle travel control method as described above.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium in which a processor-executable program is stored, which when executed by a processor is configured to perform a machine vision-based vehicle running control method as described above.
The advantages and benefits of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
According to the embodiments of the invention, road image information of the area in front of the target vehicle is acquired, edge detection is performed on it to obtain a plurality of continuous contours, and a plurality of images to be identified are extracted from the road image information according to those contours. The images to be identified are then input into a pre-trained vehicle tail lamp identification model to obtain a tail lamp state identification result, from which the running behavior of the vehicle to be monitored in the area in front of the target vehicle is determined. At the same time, the first position information and first speed information of the vehicle to be monitored are determined, so that running control of the target vehicle can be performed according to the first position information, the first speed information and the vehicle running behavior. By acquiring road images of the area ahead and identifying the lamp states of the leading vehicle, the embodiments of the invention can accurately determine the running behavior of the vehicle ahead; by simultaneously capturing its position and speed in real time, the movement trajectory of the vehicle ahead can be accurately predicted from its running behavior, position and speed, and the running strategy of the target vehicle adjusted accordingly, improving the accuracy of vehicle control and the safety of vehicle running.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required by the embodiments are described below. It should be understood that the following drawings illustrate only some embodiments of the invention, and that those skilled in the art may obtain other drawings from them without inventive effort.
Fig. 1 is a flowchart of steps of a machine vision-based vehicle driving control method according to an embodiment of the present invention;
FIG. 2 is a block diagram of a machine vision-based vehicle travel control system according to an embodiment of the present invention;
fig. 3 is a block diagram of a vehicle running control device based on machine vision according to an embodiment of the present invention.
Detailed Description
Embodiments of the present invention are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative only and are not to be construed as limiting the invention. The step numbers in the following embodiments are set for convenience of illustration only, and the order between the steps is not limited in any way, and the execution order of the steps in the embodiments may be adaptively adjusted according to the understanding of those skilled in the art.
In the description of the present invention, "a plurality" means two or more; where "first" and "second" are used to distinguish technical features, they should not be construed as indicating or implying relative importance, implicitly indicating the number of the indicated technical features, or implicitly indicating their order. Furthermore, unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art.
Referring to fig. 1, an embodiment of the present invention provides a vehicle driving control method based on machine vision, which specifically includes the following steps:
s101, acquiring road image information of a region in front of a target vehicle, performing edge detection on the road image information to obtain a plurality of continuous outlines, and extracting a plurality of images to be identified from the road image information according to the continuous outlines.
Specifically, in the embodiment of the invention, the image information of the road in front is acquired in real time through the image pickup device in the running process of the target vehicle, and a plurality of images to be identified are extracted from the image information to be used for identifying the state of the tail lamp of the vehicle in front. It can be understood that the image pickup device of the embodiment of the invention is arranged on the target vehicle, so that the target vehicle can continuously identify the state of the tail lamp of the vehicle in front during the running process, thereby ensuring the running safety of the target vehicle, namely, the embodiment of the invention is applied to the target vehicle and is used for identifying the state of the tail lamp of the vehicle in front of the target vehicle. The step S101 specifically includes the following steps:
s1011, acquiring road image information of a surrounding area of a target vehicle through an imaging device pre-installed on the target vehicle;
s1012, carrying out edge detection on the road image information through a Canny operator to obtain first image information, and determining continuous contours in the first image information;
s1013, performing image segmentation on the road image information according to the continuous contour to obtain an image to be identified.
Specifically, the image pickup device may be mounted on the front or roof of the target vehicle. An image edge is a local region of the image where brightness changes markedly; in a grayscale image, it corresponds to a region where the gray value changes abruptly, jumping within a small transition zone from one gray level to another with a large difference. The embodiment of the invention adopts the Canny operator for edge detection, which improves sensitivity to the edges of the vehicle tail while suppressing noise.
The first image information extracted by the Canny operator is traversed from the upper-left corner of the image, left to right and top to bottom, to locate each independent continuous contour. A randomized Hough transform (RHT) is applied to each contour, contours that are straight lines or whose area is too small or too large are removed, and trapezoidal contours whose area falls within a preset range are screened out; the contours satisfying these conditions are taken as the contours of the tail of the vehicle ahead.
Image segmentation is then performed on the road image information according to the selected continuous contours, and the image regions enclosed by the contours are extracted as the images to be identified.
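The patent gives no reference implementation, but steps S1011 to S1013 map naturally onto standard OpenCV primitives. The following Python sketch is illustrative only: the function name, Canny thresholds, and contour-area bounds are assumptions not given in the patent, and the polygon approximation stands in for the randomized-Hough screening of trapezoidal contours.

```python
import cv2

def extract_candidate_regions(road_image, min_area=1500, max_area=120000):
    """Illustrative pipeline for S1011-S1013 (names and thresholds assumed)."""
    gray = cv2.cvtColor(road_image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 80, 160)  # the "first image information"

    # Locate each independent continuous contour in the edge map; the
    # top-left-to-bottom-right traversal the patent describes is covered
    # by findContours.
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    crops = []
    for contour in contours:
        area = cv2.contourArea(contour)
        if not (min_area <= area <= max_area):
            continue  # discard undersized or oversized contours
        # Polygon approximation as a stand-in for the randomized-Hough
        # screening: keep only roughly trapezoidal (four-sided) contours.
        perimeter = cv2.arcLength(contour, True)
        approx = cv2.approxPolyDP(contour, 0.02 * perimeter, True)
        if len(approx) != 4:
            continue  # discard straight lines and non-quadrilateral shapes
        x, y, w, h = cv2.boundingRect(contour)
        crops.append(road_image[y:y + h, x:x + w])  # an "image to be identified"
    return crops
```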
S102, inputting an image to be recognized into a pre-trained vehicle tail lamp recognition model to obtain a tail lamp state recognition result.
Specifically, the vehicle tail lamp recognition model of the embodiment of the invention is obtained through convolutional neural network training, and the image to be recognized is input into the vehicle tail lamp recognition model, so that the corresponding tail lamp state recognition result can be obtained.
Further as an optional embodiment, the machine vision-based vehicle driving control method further includes a step of pre-training a vehicle tail lamp recognition model, which specifically includes:
a1, acquiring a plurality of preset vehicle tail lamp sample images, and determining a plurality of tail lamp state labels corresponding to each vehicle tail lamp sample image;
a2, constructing a training data set according to the vehicle tail lamp sample image and the tail lamp state label;
a3, inputting the training data set into a pre-constructed convolutional neural network for training to obtain a trained vehicle tail lamp recognition model;
the tail lamp state labels comprise: left turn signal on, left turn signal off, right turn signal on, right turn signal off, reverse lamp on, reverse lamp off, brake lamp on, and brake lamp off.
Specifically, when the training data set is constructed, vehicle tail lamp sample images covering all lamp state types are acquired, with multiple sample images for each lamp state type, and the label information of each sample image is determined according to its lamp state type. It will be appreciated that the number of labels per sample image is not limited: for example, four labels (left turn signal on, right turn signal off, reverse lamp off, brake lamp off) may be attached simultaneously, two labels (left turn signal on and right turn signal off) may be attached, or only a single label may be attached.
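The eight label states reduce to an on/off flag for each of four lamps, which suggests a multi-label formulation. The sketch below uses hypothetical names (TailLampDataset, LAMPS) not taken from the patent; treating an unannotated lamp as off is an assumption motivated by the remark above that a sample may carry anywhere from one to four labels.

```python
import torch
from torch.utils.data import Dataset

# Four lamps, each on (1) or off (0), covering the eight label states above.
LAMPS = ("left_turn", "right_turn", "reverse", "brake")  # hypothetical ordering

class TailLampDataset(Dataset):
    """Hypothetical pairing of tail lamp crops with multi-label targets."""

    def __init__(self, samples):
        # samples: list of (image_tensor, labels) where labels is e.g.
        # {"left_turn": 1, "brake": 0}; unlisted lamps default to off
        # (an assumption, since the patent allows 1-4 labels per image).
        self.samples = samples

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        image, labels = self.samples[idx]
        target = torch.tensor([float(labels.get(lamp, 0)) for lamp in LAMPS])
        return image, target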
Further as an optional implementation manner, step A3 of inputting the training data set into a pre-constructed convolutional neural network for training to obtain a trained vehicle tail lamp recognition model specifically includes:
a31, inputting the training data set into a convolutional neural network to obtain a tail lamp state prediction result;
a32, determining a loss value of the convolutional neural network according to the tail lamp state prediction result and the tail lamp state label;
a33, updating model parameters of the convolutional neural network through a back propagation algorithm according to the loss value, and returning to the step of inputting the training data set into the convolutional neural network;
and A34, stopping training when the loss value reaches a preset first threshold value or the iteration number reaches a preset second threshold value, and obtaining a trained vehicle tail lamp recognition model.
Specifically, after the data in the training data set are input into the initialized convolutional neural network model, the recognition result output by the model, namely the tail lamp state prediction result, is obtained, and the accuracy of the model's predictions can be evaluated against the label information in order to update the model parameters. For the vehicle tail lamp recognition model, prediction accuracy is measured by a loss function (Loss Function), which is defined on a single training sample and measures its prediction error: the loss value is determined from the sample's label and the model's prediction for that sample. Since a training data set contains many samples, a cost function (Cost Function), defined over the entire training data set, is generally used to measure the overall error as the average of the prediction errors of all samples, which better reflects the model's prediction performance. For a general machine learning model, the training objective function can be formed from the cost function plus a regularization term measuring model complexity, and the loss value over the whole training data set is obtained from this objective function. Many loss functions are in common use, such as the 0-1 loss, squared loss, absolute loss, logarithmic loss, and cross-entropy loss, any of which can serve as the loss function of a machine learning model and which are not detailed here; in the embodiment of the invention, one such loss function is selected to determine the training loss value. Based on the training loss value, the model parameters are updated using a backpropagation algorithm, and after several rounds of iteration the trained vehicle tail lamp identification model is obtained. The number of iteration rounds may be preset, or training may be considered complete when the model meets an accuracy requirement on a test set.
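Steps A31 to A34 describe a standard supervised training loop with two stopping criteria. The following PyTorch sketch assumes the multi-label encoding above; the choice of BCEWithLogitsLoss and the Adam optimizer are illustrative, since the patent deliberately leaves the loss function open.

```python
import torch
import torch.nn as nn

def train_taillight_model(model, loader, loss_threshold=0.05, max_epochs=50):
    """Sketch of A31-A34; the loss function and optimizer are assumed."""
    criterion = nn.BCEWithLogitsLoss()  # one binary loss per lamp
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    for epoch in range(max_epochs):            # second threshold: iteration count
        epoch_loss = 0.0
        for images, targets in loader:
            logits = model(images)              # A31: tail lamp state prediction
            loss = criterion(logits, targets)   # A32: loss from prediction vs. label
            optimizer.zero_grad()
            loss.backward()                     # A33: backpropagation
            optimizer.step()                    # A33: update model parameters
            epoch_loss += loss.item()
        if epoch_loss / len(loader) <= loss_threshold:
            break                               # A34: first threshold reached
    return model
```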
S103, determining the vehicle running behavior of the vehicle to be monitored in the area in front of the target vehicle according to the tail lamp state identification result.
Specifically, the vehicle running behavior of the front vehicle can be judged according to the tail lamp state identification result and the lighting logic. Step S103 specifically includes the following steps:
s1031, when the tail lamp state identification result is that the left steering lamp is on, and the right steering lamp, the reversing lamp and the brake lamp are all dark, determining that the running behavior of the vehicle is left steering running;
s1032, when the tail lamp state identification result is that the right steering lamp is on, and the left steering lamp, the reversing lamp and the brake lamp are all dark, determining that the running behavior of the vehicle is right steering running;
s1033, when the tail lamp state identification result is that the reversing light is on, and the left steering light, the right steering light and the brake light are all dark, determining that the vehicle running behavior is reversing running;
s1034, when the tail lamp state recognition result is that the brake lamp is on, determining that the running behavior of the vehicle is braking.
Specifically, when only the left turn signal, right turn signal, or reverse lamp is on, the running behavior of the vehicle ahead is determined to be a left turn, a right turn, or reversing, respectively; the brake lamp takes priority: whenever it is on, regardless of the other lamps, the vehicle ahead is determined to be braking.
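This lamp logic is a small decision table. A direct transcription, assuming boolean lamp states from the recognition model (the function and value names are hypothetical):

```python
def classify_behavior(left_turn, right_turn, reverse, brake):
    """Map boolean lamp states (True = on) to a running behavior.

    The brake lamp is checked first because, per the text above, braking
    is determined whenever the brake lamp is on, regardless of the others.
    """
    if brake:
        return "braking"               # S1034
    if left_turn and not (right_turn or reverse):
        return "left_turn"             # S1031
    if right_turn and not (left_turn or reverse):
        return "right_turn"            # S1032
    if reverse and not (left_turn or right_turn):
        return "reversing"             # S1033
    return "unknown"                   # no rule matched
```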
S104, determining first position information and first speed information of the vehicle to be monitored, and performing running control on the target vehicle according to the first position information, the first speed information and the vehicle running behavior.
Further as an optional embodiment, the step of determining the first position information and the first speed information of the vehicle to be monitored specifically includes:
s1041, acquiring azimuth information of a vehicle to be monitored and distance information between the vehicle to be monitored and a target vehicle through a radar detection device pre-installed on the target vehicle;
s1042, acquiring real-time position information of a target vehicle, and determining first position information and first speed information according to the real-time position information, the azimuth information and the distance information.
Specifically, the radar detection device senses changes in the azimuth and distance of the vehicle ahead by continuously emitting radar waves, and the first position information and first speed information of the vehicle ahead can be calculated accurately by combining these measurements with the real-time position of the target vehicle.
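One way to realize S1041 and S1042 is to convert the radar's polar reading (azimuth, distance) into an absolute position using the ego vehicle's real-time position, and to estimate speed by differencing successive positions. The sketch below assumes a planar coordinate frame with the azimuth measured from the ego heading and a fixed radar sample period dt; none of these conventions are specified in the patent.

```python
import math

def front_vehicle_state(ego_xy, azimuth_rad, distance_m, prev_xy=None, dt=0.1):
    """Position from ego position + radar azimuth/distance; speed by differencing."""
    x = ego_xy[0] + distance_m * math.sin(azimuth_rad)
    y = ego_xy[1] + distance_m * math.cos(azimuth_rad)
    position = (x, y)                      # first position information

    speed = None                           # first speed information
    if prev_xy is not None:                # needs two successive detections
        speed = math.hypot(x - prev_xy[0], y - prev_xy[1]) / dt
    return position, speed
```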
Further as an alternative embodiment, the step of performing travel control on the target vehicle according to the first position information, the first speed information, and the vehicle travel behavior specifically includes:
s1043, predicting the driving track information of the vehicle to be monitored according to the first position information, the first speed information and the driving behavior of the vehicle;
s1044, acquiring running state information of the target vehicle, determining a running strategy of the target vehicle according to the running state information and the running track information, and further performing running control on the target vehicle according to the running strategy.
Specifically, from the position and speed information of the vehicle ahead detected by the radar and the vehicle running behavior obtained through lamp state identification, the trajectory of the vehicle to be monitored can be predicted, including a left-turn trajectory, a right-turn trajectory, a reversing trajectory, and a decelerating-forward trajectory. Combined with the current running state of the target vehicle, it can then be judged whether the target vehicle is at risk of colliding with or scraping the vehicle ahead, and when such a risk exists the running strategy of the target vehicle is adjusted automatically, realizing intelligent obstacle avoidance for autonomous driving or advanced driver assistance systems.
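The patent does not specify a motion model for the trajectory prediction in S1043 and S1044. As a minimal longitudinal sketch, the following rolls both vehicles forward under constant-speed assumptions, adjusts the leading vehicle's speed according to its recognized behavior, and requests deceleration when the predicted gap falls below a safety margin; all thresholds are illustrative assumptions, and the lateral (left-turn and right-turn) cases are omitted for brevity.

```python
def plan_action(front_pos_m, front_speed_mps, behavior, ego_pos_m, ego_speed_mps,
                horizon_s=3.0, step_s=0.5, safe_gap_m=8.0):
    """Roll both vehicles forward longitudinally and check the predicted gap.

    The braking deceleration factor, the sign flip for reversing, and all
    thresholds are illustrative assumptions, not taken from the patent.
    """
    # Adjust the leading vehicle's assumed speed from its recognized behavior.
    if behavior == "braking":
        front_speed_mps *= 0.5             # assume the vehicle ahead is slowing
    elif behavior == "reversing":
        front_speed_mps = -abs(front_speed_mps)

    t = step_s
    while t <= horizon_s:
        front_ahead = front_pos_m + front_speed_mps * t
        ego_ahead = ego_pos_m + ego_speed_mps * t
        if front_ahead - ego_ahead < safe_gap_m:
            return "decelerate"            # adjust the running strategy
        t += step_s
    return "maintain"                      # no predicted conflict
```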
The method steps of the embodiments of the invention are described above. It can be understood that, by acquiring road images of the area in front of the target vehicle and identifying the lamp states of the vehicle ahead, the embodiments of the invention can accurately determine the running behavior of the vehicle ahead; by capturing its position and speed information in real time at the same time, the movement trajectory of the vehicle ahead can be accurately predicted from its running behavior, position and speed, and the running strategy of the target vehicle adjusted accordingly, improving the accuracy of vehicle control and the safety of vehicle running.
Referring to fig. 2, an embodiment of the present invention provides a machine vision-based vehicle travel control system, including:
the image extraction module is used for acquiring road image information of the area in front of a target vehicle, performing edge detection on the road image information to obtain a plurality of continuous contours, and extracting a plurality of images to be identified from the road image information according to the continuous contours;
the tail lamp state recognition module is used for inputting the images to be recognized into a pre-trained vehicle tail lamp recognition model to obtain a tail lamp state recognition result;
the driving behavior determining module is used for determining the vehicle running behavior of the vehicle to be monitored in the area in front of the target vehicle according to the tail lamp state identification result;
and the running control module is used for determining the first position information and the first speed information of the vehicle to be monitored and performing running control on the target vehicle according to the first position information, the first speed information and the vehicle running behavior.
The content of the method embodiment above is applicable to this system embodiment; the functions specifically implemented by the system embodiment are the same as those of the method embodiment, and the same beneficial effects are achieved.
Referring to fig. 3, an embodiment of the present invention provides a machine vision-based vehicle travel control apparatus, including:
at least one processor;
at least one memory for storing at least one program;
the at least one program, when executed by the at least one processor, causes the at least one processor to implement a machine vision-based vehicle travel control method as described above.
The content of the method embodiment above is applicable to this apparatus embodiment; the functions specifically implemented by the apparatus embodiment are the same as those of the method embodiment, and the same beneficial effects are achieved.
The embodiment of the present invention also provides a computer-readable storage medium in which a processor-executable program is stored, which when executed by a processor, is for performing the above-described machine vision-based vehicle running control method.
The computer-readable storage medium of the embodiment of the invention can execute the machine vision-based vehicle running control method provided by the method embodiments above, can execute any combination of the implementation steps of those embodiments, and has the corresponding functions and beneficial effects of the method.
Embodiments of the present invention also disclose a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The computer instructions may be read from a computer-readable storage medium by a processor of a computer device, and executed by the processor, to cause the computer device to perform the method shown in fig. 1.
In some alternative embodiments, the functions/acts noted in the block diagrams may occur out of the order noted in the operational illustrations. For example, two blocks shown in succession may in fact be executed substantially concurrently or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Furthermore, the embodiments presented and described in the flowcharts of the present invention are provided by way of example in order to provide a more thorough understanding of the technology. The disclosed methods are not limited to the operations and logic flows presented herein. Alternative embodiments are contemplated in which the order of various operations is changed, and in which sub-operations described as part of a larger operation are performed independently.
Furthermore, while the present invention has been described in the context of functional modules, it should be appreciated that, unless otherwise indicated, one or more of the functions and/or features described above may be integrated in a single physical device and/or software module or one or more of the functions and/or features may be implemented in separate physical devices or software modules. It will also be appreciated that a detailed discussion of the actual implementation of each module is not necessary to an understanding of the present invention. Rather, the actual implementation of the various functional modules in the apparatus disclosed herein will be apparent to those skilled in the art from consideration of their attributes, functions and internal relationships. Accordingly, one of ordinary skill in the art can implement the invention as set forth in the claims without undue experimentation. It is also to be understood that the specific concepts disclosed are merely illustrative and are not intended to be limiting upon the scope of the invention, which is to be defined in the appended claims and their full scope of equivalents.
The above functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, which includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods of the various embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Logic and/or steps represented in the flowcharts or otherwise described herein, for example an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, a processor-containing system, or another system that can fetch instructions from the instruction execution system, apparatus, or device and execute them. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
More specific examples (a non-exhaustive list) of the computer-readable medium include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). The computer-readable medium may even be paper or another suitable medium on which the program is printed, as the program can be electronically captured, for instance by optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and stored in a computer memory.
It is to be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, they may be implemented using any one or a combination of the following techniques well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits having suitable combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and the like.
In the description of this specification, reference to the terms "one embodiment/example", "another embodiment/example", "certain embodiments/examples", and the like means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic representations of these terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present invention have been shown and described, it will be understood by those of ordinary skill in the art that: many changes, modifications, substitutions and variations may be made to the embodiments without departing from the spirit and principles of the invention, the scope of which is defined by the claims and their equivalents.
While the preferred embodiment of the present invention has been described in detail, the present invention is not limited to the above embodiments, and various equivalent modifications and substitutions can be made by those skilled in the art without departing from the spirit of the present invention, and these equivalent modifications and substitutions are intended to be included in the scope of the present invention as defined in the appended claims.

Claims (10)

1. A machine vision-based vehicle travel control method, comprising the steps of:
acquiring road image information of a region in front of a target vehicle, performing edge detection on the road image information to obtain a plurality of continuous contours, and extracting a plurality of images to be identified from the road image information according to the continuous contours;
inputting the image to be identified into a pre-trained vehicle tail lamp identification model to obtain a tail lamp state identification result;
determining the vehicle running behavior of the vehicle to be monitored in the front area of the target vehicle according to the tail lamp state identification result;
and determining first position information and first speed information of the vehicle to be monitored, and performing running control on the target vehicle according to the first position information, the first speed information and the vehicle running behavior.
2. The machine vision-based vehicle running control method according to claim 1, wherein the steps of acquiring road image information of a region in front of a target vehicle, performing edge detection on the road image information to obtain a plurality of continuous contours, and extracting a plurality of images to be identified from the road image information according to the continuous contours, specifically include:
acquiring the road image information of a region in front of the target vehicle by an image pickup device mounted in advance on the target vehicle;
performing edge detection on the road image information through a Canny operator to obtain first image information, and determining continuous contours in the first image information;
and carrying out image segmentation on the road image information according to the continuous contour to obtain the image to be identified.
3. The machine vision-based vehicle running control method according to claim 2, further comprising a step of training the vehicle tail lamp recognition model in advance, specifically comprising:
acquiring a plurality of preset vehicle tail lamp sample images, and determining a plurality of tail lamp state labels corresponding to each vehicle tail lamp sample image;
constructing a training data set according to the vehicle tail lamp sample image and the tail lamp state label;
inputting the training data set into a convolutional neural network constructed in advance for training to obtain a trained vehicle tail lamp recognition model;
the tail lamp state labels comprise: left turn signal on, left turn signal off, right turn signal on, right turn signal off, reverse lamp on, reverse lamp off, brake lamp on, and brake lamp off.
4. A machine vision-based vehicle driving control method according to claim 3, wherein the step of inputting the training data set into a convolutional neural network constructed in advance to perform training, and obtaining a trained vehicle tail lamp recognition model specifically comprises the following steps:
inputting the training data set into the convolutional neural network to obtain a tail lamp state prediction result;
determining a loss value of the convolutional neural network according to the tail lamp state prediction result and the tail lamp state label;
updating model parameters of the convolutional neural network through a back propagation algorithm according to the loss value, and returning to the step of inputting the training data set into the convolutional neural network;
and stopping training when the loss value reaches a preset first threshold value or the iteration number reaches a preset second threshold value, and obtaining a trained vehicle tail lamp recognition model.
5. A machine vision-based vehicle running control method according to claim 3, characterized in that the step of determining the vehicle running behavior of the vehicle to be monitored in the area in front of the target vehicle based on the tail light state recognition result specifically comprises:
when the tail lamp state identification result indicates that the left turn signal is on while the right turn signal, reverse lamp and brake lamp are all off, determining that the vehicle running behavior is a left turn;
when the tail lamp state identification result indicates that the right turn signal is on while the left turn signal, reverse lamp and brake lamp are all off, determining that the vehicle running behavior is a right turn;
when the tail lamp state identification result indicates that the reverse lamp is on while the left turn signal, right turn signal and brake lamp are all off, determining that the vehicle running behavior is reversing;
and when the tail lamp state identification result indicates that the brake lamp is on, determining that the vehicle running behavior is braking.
6. The machine vision-based vehicle running control method according to claim 1, wherein the step of determining the first position information and the first speed information of the vehicle to be monitored specifically includes:
acquiring azimuth information of the vehicle to be monitored and distance information between the vehicle to be monitored and the target vehicle through a radar detection device pre-installed on the target vehicle;
and acquiring real-time position information of the target vehicle, and determining the first position information and the first speed information according to the real-time position information, the azimuth information and the distance information.
7. The machine vision-based vehicle running control method according to any one of claims 1 to 6, characterized in that the step of running control of the target vehicle according to the first position information, the first speed information, and the vehicle running behavior specifically includes:
predicting trajectory information of the vehicle to be monitored according to the first position information, the first speed information and the vehicle running behavior;
and acquiring running state information of the target vehicle, determining a running strategy for the target vehicle according to the running state information and the trajectory information, and performing running control on the target vehicle according to the running strategy.
8. A machine vision-based vehicle travel control system, comprising:
the image extraction module is used for acquiring road image information of a region in front of a target vehicle, performing edge detection on the road image information to obtain a plurality of continuous contours, and extracting a plurality of images to be identified from the road image information according to the continuous contours;
the tail lamp state recognition module is used for inputting the images to be recognized into a pre-trained vehicle tail lamp recognition model to obtain a tail lamp state recognition result;
the driving behavior determining module is used for determining the vehicle running behavior of the vehicle to be monitored in the region in front of the target vehicle according to the tail lamp state identification result;
and the running control module is used for determining the first position information and the first speed information of the vehicle to be monitored and performing running control on the target vehicle according to the first position information, the first speed information and the vehicle running behavior.
9. A machine vision-based vehicle travel control apparatus, comprising:
at least one processor;
at least one memory for storing at least one program;
when the at least one program is executed by the at least one processor, the at least one processor is caused to implement a machine vision-based vehicle travel control method as claimed in any one of claims 1 to 7.
10. A computer-readable storage medium in which a processor-executable program is stored, characterized in that the processor-executable program is for performing a machine vision-based vehicle running control method according to any one of claims 1 to 7 when being executed by a processor.
CN202310198258.6A 2023-03-01 2023-03-01 Vehicle running control method, system, device and medium based on machine vision Pending CN116176625A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310198258.6A CN116176625A (en) 2023-03-01 2023-03-01 Vehicle running control method, system, device and medium based on machine vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310198258.6A CN116176625A (en) 2023-03-01 2023-03-01 Vehicle running control method, system, device and medium based on machine vision

Publications (1)

Publication Number Publication Date
CN116176625A 2023-05-30

Family

ID=86450519

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310198258.6A Pending CN116176625A (en) 2023-03-01 2023-03-01 Vehicle running control method, system, device and medium based on machine vision

Country Status (1)

Country Link
CN (1) CN116176625A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117935173A (en) * 2024-03-21 2024-04-26 安徽蔚来智驾科技有限公司 Target vehicle identification method, field end server and readable storage medium


Similar Documents

Publication Publication Date Title
US9449236B2 (en) Method for object size calibration to aid vehicle detection for video-based on-street parking technology
CN110533950A (en) Detection method, device, electronic equipment and the storage medium of parking stall behaviour in service
US20230177796A1 (en) Methods and systems for video processing
EP3584742A1 (en) System and method for traffic sign recognition
CN113505671B (en) Machine vision-based carriage congestion degree determination method, system, device and medium
CN111723625A (en) Traffic light image recognition processing method and device, auxiliary traffic system and storage medium
CN114926441A (en) Defect detection method and system for machining and molding injection molding part
CN113343837A (en) Intelligent driving method, system, device and medium based on vehicle lamp language recognition
CN113361299B (en) Abnormal parking detection method and device, storage medium and electronic equipment
CN116176625A (en) Vehicle running control method, system, device and medium based on machine vision
CN115482672B (en) Method, device, terminal equipment and storage medium for detecting vehicle reverse running
CN113470385B (en) Traffic light control method, system and device based on machine vision and storage medium
Dragaš et al. Development and Implementation of Lane Departure Warning System on ADAS Alpha Board
CN116704475A (en) High beam recognition processing method, system, device and medium based on machine vision
CN115546744A (en) Lane detection using DBSCAN
CN116189146A (en) Vehicle running safety monitoring method, system, device and storage medium
CN114022848A (en) Control method and system for automatic illumination of tunnel
CN116946172A (en) Vehicle driving safety early warning method, system, device and storage medium
CN111814559A (en) Parking state identification method and system
CN115272984B (en) Method, system, computer and readable storage medium for detecting lane occupation operation
EP4386693A1 (en) Methods and systems for determining a conversion rule
CN116385949B (en) Mobile robot region detection method, system, device and medium
CN116188444A (en) Charging socket control method, system, device and medium based on machine vision
US20220309799A1 (en) Method for Automatically Executing a Vehicle Function, Method for Evaluating a Computer Vision Method and Evaluation Circuit for a Vehicle
CN115345838A (en) Pantograph disease detection method, system, device and medium based on machine vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination