CN112232314A - Vehicle control method and device for target detection based on deep learning - Google Patents

Vehicle control method and device for target detection based on deep learning

Info

Publication number
CN112232314A
CN112232314A (application CN202011434379.9A)
Authority
CN
China
Prior art keywords
vehicle
frame
training
road
model
Prior art date
Legal status
Pending
Application number
CN202011434379.9A
Other languages
Chinese (zh)
Inventor
吴志洋
朱磊
孟绍旭
黄自瑞
Current Assignee
Zhidao Network Technology Beijing Co Ltd
Original Assignee
Zhidao Network Technology Beijing Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhidao Network Technology Beijing Co Ltd filed Critical Zhidao Network Technology Beijing Co Ltd
Priority to CN202011434379.9A
Publication of CN112232314A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00 Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06F18/232 Non-hierarchical techniques
    • G06F18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00 Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W2050/0062 Adapting control system settings
    • B60W2050/0075 Automatic parameter input, automatic initialising or calibrating means
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2556/00 Input parameters relating to data
    • B60W2556/45 External transmission of data to or from the vehicle
    • B60W2556/55 External transmission of data to or from the vehicle using telemetry
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Abstract

The invention provides a vehicle control method and device for target detection based on deep learning. In the process of training an initial model to obtain the preset machine learning model, the initial model generates the identification frames used to mark physical objects in the training samples according to the frame information of clustering sample frames, and the clustering sample frames are determined by a clustering operation on the labeled sample identification frames. Because the initial model can generate identification frames with the clustering sample frames as a reference, instead of trying identification frames of various sizes, the convergence rate of model training is increased and the computation time of the training process is reduced. This makes it possible to train a required model for an unmanned vehicle, or to update its existing model, in a timely manner, so that the road-going requirements of the unmanned vehicle are met.

Description

Vehicle control method and device for target detection based on deep learning
Technical Field
The invention relates to the technical field of machine learning and automatic driving, in particular to a vehicle control method and device for target detection based on deep learning.
Background
With the rapid development of technologies such as big data, cloud computing and 5G, artificial intelligence has gradually matured and unmanned driving has become possible. Automatic driving technology can not only reduce the burden on the driver but also effectively reduce the occurrence of traffic accidents. Combining the internet with transportation means applying new internet technologies to the intelligent transportation field, so that a machine can identify a series of traffic targets such as vehicles, pedestrians, signal lights and traffic barriers, providing infrastructure for intelligent transportation and automatic driving. The most important technical point of automatic driving is environment perception: image recognition is used to perceive the surrounding environment and detect road vehicles, obstacles, traffic lights and traffic signs. In recent years, detection techniques based on video images have developed rapidly; they acquire images of a detection target in real time with an image acquisition apparatus and analyze the acquired data.
In the prior art, models trained by machine learning are mostly adopted to identify physical objects (such as people, other vehicles and signal lights) in the environment in front of the vehicle, and these models need to be trained on a large number of training samples. However, existing training approaches involve a large amount of computation and converge slowly, so the required model cannot be trained for an unmanned vehicle in time, or the vehicle's existing model cannot be updated in time, and the road-going requirements of the unmanned vehicle cannot be met promptly.
Disclosure of Invention
The invention provides a vehicle control method and device for target detection based on deep learning, which solve the prior-art problem that the existing model of an unmanned vehicle cannot be updated in time, or a required model cannot be trained in time, so that the road-going requirements of the unmanned vehicle cannot be met promptly.
In view of the above technical problems, a first aspect of the present invention provides a vehicle control method for performing target detection based on deep learning, including:
acquiring a front environment image acquired in the running process of a vehicle;
inputting the front environment image into a preset machine learning model to obtain a detection result output by the preset machine learning model; wherein at least one physical object existing in the environment in front of the vehicle is marked in the detection result through an identification frame;
adjusting the driving path of the vehicle according to the position of each entity object appearing in the detection result;
the preset machine learning model is obtained by training an initial model, and the initial model generates an identification frame for marking an entity object in a training sample according to frame information of a clustering sample frame; the clustering sample box is determined by clustering operation on a sample identification box marked in the training sample; the frame information at least includes any one of the following information: frame shape, dimensions of each side of the frame.
The invention provides a vehicle control method for target detection based on deep learning, and on the basis of the method, the training of a preset machine learning model comprises the following steps:
acquiring a plurality of training samples, and performing a clustering operation on the sample identification boxes marked in the training samples to obtain at least one clustering sample box; the training sample takes any front environment image acquired in the running process of the vehicle as input, and takes an image obtained by marking the physical objects in the any front environment image through sample identification frames as expected output;
training the initial model through the training samples to obtain the preset machine learning model; wherein, in each round of training, the initial model generates the identification frames for marking the physical objects in the front environment image of the training sample according to the frame information of the clustering sample frame.
The invention provides a vehicle control method for target detection based on deep learning, and on the basis of the method, the adjusting of the driving path of the vehicle according to the position of each entity object appearing in the detection result comprises the following steps:
for any entity object marked in the detection result, determining the orientation information of the entity object relative to the vehicle according to the position of the entity object in the front environment image;
determining the relative distance of any entity object relative to the vehicle according to a ranging signal sent in the orientation determined by the orientation information;
and controlling the vehicle to run according to the relative position of any entity object relative to the vehicle, which is determined by the orientation information and the relative distance, and the road on which the vehicle is located.
The present invention provides a vehicle control method for detecting a target based on deep learning, wherein the method controls the vehicle to travel according to a relative position of any one of the physical objects with respect to the vehicle, which is determined by the orientation information and the relative distance, and a road on which the vehicle is located, and includes:
judging whether any entity object is in the road according to the relative position of the entity object relative to the vehicle and the edge line of the road where the vehicle is located;
controlling the vehicle to run according to at least one of the following information: whether the any physical object is within the road, object content of the any physical object, relative distance of the any physical object with respect to the vehicle.
The invention provides a vehicle control method for detecting a target based on deep learning, which is based on the method and controls the vehicle to run according to at least one of the following information: whether the any physical object is within the road, object content of the any physical object, relative distance of the any physical object with respect to the vehicle, including:
when a first physical object in the road is included in the front environment image, if the first physical object is a movable entity, controlling the vehicle to stop when the relative distance between the first physical object and the vehicle is smaller than or equal to a first preset distance; wherein the movable entities include people, animals, and vehicles;
if the first entity object is a movable entity, controlling the vehicle to decelerate when the relative distance between the first entity object and the vehicle is greater than the first preset distance and less than or equal to a second preset distance;
and if the first entity object is a road signal lamp, controlling the vehicle to continuously run, decelerate or stop according to a signal sent by the road signal lamp.
The invention provides a vehicle control method for detecting a target based on deep learning, which is based on the method and controls the vehicle to run according to at least one of the following information: whether the any physical object is within the road, object content of the any physical object, relative distance of the any physical object with respect to the vehicle, including:
when a second entity object outside the road is included in the front environment image, if the second entity object is a movable entity, controlling the vehicle to decelerate when the relative distance between the second entity object and the vehicle is less than or equal to a third preset distance; wherein the movable entities include people, animals, and vehicles;
and if the second entity object is a road signal lamp, controlling the vehicle to continuously run, decelerate or stop according to a signal sent by the road signal lamp.
The invention provides a vehicle control method for target detection based on deep learning, on the basis of the method, the step of inputting the front environment image into a preset machine learning model comprises the following steps:
and adjusting the collected front environment image according to a preset size and a preset resolution, and inputting the adjusted front environment image into the preset machine learning model.
In a second aspect, the present invention provides a vehicle control device that performs target detection based on deep learning, including:
the acquisition module is used for acquiring a front environment image acquired in the running process of the vehicle;
the detection module is used for inputting the front environment image into a preset machine learning model to obtain a detection result output by the preset machine learning model; wherein at least one physical object existing in the environment in front of the vehicle is marked in the detection result through an identification frame;
the control module is used for adjusting the running path of the vehicle according to the position of each entity object appearing in the detection result;
the preset machine learning model is obtained by training an initial model, and the initial model generates an identification frame for marking an entity object in a training sample according to frame information of a clustering sample frame; the clustering sample box is determined by clustering operation on a sample identification box marked in the training sample; the frame information at least includes any one of the following information: frame shape, dimensions of each side of the frame.
In a third aspect, the present invention provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor when executing the program implements the steps of the method for vehicle control based on deep learning for target detection as described in any of the above.
In a fourth aspect, the invention provides a non-transitory readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the vehicle control method for object detection based on deep learning as set forth in any one of the above.
The invention provides a vehicle control method and device for target detection based on deep learning. In the process of training an initial model to obtain the preset machine learning model, the initial model generates the identification frames used to mark physical objects in the training samples according to the frame information of clustering sample frames, and the clustering sample frames are determined by a clustering operation on the labeled sample identification frames. Because the initial model can generate identification frames with the clustering sample frames as a reference, instead of trying identification frames of various sizes, the convergence rate of model training is increased and the computation time of the training process is reduced. This makes it possible to train a required model for an unmanned vehicle, or to update its existing model, in a timely manner, so that the road-going requirements of the unmanned vehicle are met.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
FIG. 1 is a schematic flow chart of a vehicle control method for target detection based on deep learning according to the present invention;
FIG. 2 is a schematic diagram of the model training provided by the present invention and a process for controlling vehicle driving by the trained model;
FIG. 3 is a block diagram of a vehicle control apparatus for performing target detection based on deep learning according to the present invention;
fig. 4 is a schematic physical structure diagram of an electronic device provided in the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a schematic flow chart of a vehicle control method for performing target detection based on deep learning according to the present invention, and referring to fig. 1, the vehicle control method for performing target detection based on deep learning includes:
step 101: and acquiring a front environment image acquired in the running process of the vehicle.
Step 102: inputting the front environment image into a preset machine learning model to obtain a detection result output by the preset machine learning model; wherein at least one physical object existing in the environment in front of the vehicle is marked in the detection result through an identification frame;
step 103: adjusting the driving path of the vehicle according to the position of each entity object appearing in the detection result;
the preset machine learning model is obtained by training an initial model, and the initial model generates an identification frame for marking an entity object in a training sample according to frame information of a clustering sample frame; the clustering sample box is determined by clustering operation on a sample identification box marked in the training sample; the frame information at least includes any one of the following information: frame shape, dimensions of each side of the frame.
The front environment image can be collected by a camera mounted at the front of the vehicle. The image is collected in real time while the vehicle is running, and steps 101 to 103 are executed to control the driving of the vehicle, avoid physical objects such as pedestrians, animals and other vehicles, and ensure driving safety.
The physical object may be a pedestrian, a vehicle, a road signal lamp, etc., which is not specifically limited in this embodiment.
It should be noted that, in the method provided in this embodiment, the clustering sample frames serve as a reference for generating identification frames during model training, so the initial model does not need to adjust its parameters for identification frames of various sizes; it can generate identification frames directly with reference to the clustering sample frames. This reduces the computation time of the training process, improves the convergence rate of the model, and accelerates the training of the preset machine learning model from the initial model.
Further, for an unmanned vehicle, the method provided by this embodiment can quickly train the model the vehicle requires, or update the model the unmanned vehicle currently uses in a timely manner, so that the unmanned vehicle can identify physical objects in front of it promptly and accurately, its road-going requirements are met, and driving safety is improved.
This embodiment provides a vehicle control method for target detection based on deep learning, which detects the physical objects present in the environment in front of the vehicle through a preset machine learning model while the vehicle is running and then controls the vehicle accordingly. In the process of training an initial model to obtain the preset machine learning model, the initial model generates the identification frames used to mark physical objects in the training samples according to the frame information of clustering sample frames, and the clustering sample frames are determined by a clustering operation on the labeled sample identification frames. Because the initial model can generate identification frames with the clustering sample frames as a reference, instead of trying identification frames of various sizes, the convergence rate of model training is increased and the computation time of the training process is reduced. This makes it possible to train a required model for an unmanned vehicle, or to update its existing model, in a timely manner, so that the road-going requirements of the unmanned vehicle are met.
Further, on the basis of the above embodiment, the training of the preset machine learning model includes:
acquiring a plurality of training samples, and performing a clustering operation on the sample identification boxes marked in the training samples to obtain at least one clustering sample box; the training sample takes any front environment image acquired in the running process of the vehicle as input, and takes an image obtained by marking the physical objects in the any front environment image through sample identification frames as expected output;
training the initial model through the training samples to obtain the preset machine learning model; wherein, in each round of training, the initial model generates the identification frames for marking the physical objects in the front environment image of the training sample according to the frame information of the clustering sample frame.
The sample identification boxes in the training samples can be clustered with the K-means clustering algorithm. The training samples can be formed by manually labeling 14 classes of traffic targets in advance, including vehicles, pedestrians, traffic lights, traffic signs and traffic barriers. The at least one clustering sample box may specifically be nine clustering sample boxes.
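Purely as an illustration of this clustering operation, the following Python sketch clusters the widths and heights of the labeled sample identification boxes into nine clustering sample boxes with ordinary K-means. The helper name and the dummy box data are illustrative, and the embodiment does not fix the distance metric; YOLO-style pipelines often use an IoU-based variant instead.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_sample_boxes(boxes_wh, n_clusters=9):
    """Cluster labeled box sizes (width, height) into clustering sample boxes.

    boxes_wh: array of shape (N, 2) with the width and height of every sample
    identification box marked in the training samples.
    Returns the n_clusters cluster centers sorted by area (smallest first).
    """
    kmeans = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(boxes_wh)
    centers = kmeans.cluster_centers_
    return centers[np.argsort(centers[:, 0] * centers[:, 1])]

# Dummy data standing in for the labeled data set (box sizes in pixels).
boxes = np.abs(np.random.randn(1000, 2)) * 80 + 20
anchors = cluster_sample_boxes(boxes, n_clusters=9)
print(anchors)  # frame information (per-side dimensions) passed to the initial model
```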
In the embodiment, model training is performed through the labeled training samples, and the clustering sample frame obtained through clustering operation accelerates the convergence rate of model training and improves the training efficiency of the model.
Fig. 2 is a schematic diagram of a process of training a model and controlling a vehicle to run through the trained model according to this embodiment, and referring to fig. 2, the process includes the following steps:
1. Traffic road images from multiple cities are collected from the traffic road videos shot by the vehicle-mounted camera, and preprocessing such as preliminary screening is carried out on the images.
2. 14 classes of traffic targets, including vehicles, pedestrians, traffic lights, traffic signs and traffic barriers, are labeled manually.
3. K-means clustering is performed on the target boxes in the labeled data set, yielding 9 anchor boxes of different scales.
4. The configuration file and parameter file of the YOLOv5 model are modified, and training is started to obtain the model for target detection.
5. The preset machine learning model is deployed at the vehicle end, where it performs real-time multi-target detection on the video data and outputs the position information and category information of the targets (see the inference sketch below).
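As an illustration of step 5, a minimal vehicle-side inference sketch follows. It assumes the trained weights were saved to a file named best.pt and that the open-source YOLOv5 hub interface is available; the weight path and the use of torch.hub are assumptions, not details given in this embodiment.

```python
import torch

# Load custom-trained YOLOv5 weights (the path is a placeholder for the deployed model file).
model = torch.hub.load('ultralytics/yolov5', 'custom', path='best.pt')

def detect_frame(frame_bgr):
    """Run multi-target detection on one front-environment video frame.

    Returns a list of (class_name, confidence, (x1, y1, x2, y2)) tuples, i.e. the
    category information and position information mentioned in step 5.
    """
    frame_rgb = frame_bgr[..., ::-1]   # OpenCV frames are BGR; the model expects RGB
    results = model(frame_rgb)         # forward pass with built-in pre/post-processing
    detections = []
    for *xyxy, conf, cls in results.xyxy[0].tolist():
        detections.append((model.names[int(cls)], conf, tuple(xyxy)))
    return detections
```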
Further, on the basis of the foregoing embodiments, the adjusting the driving path of the vehicle according to the position of each physical object appearing in the detection result includes:
for any entity object marked in the detection result, determining the orientation information of the entity object relative to the vehicle according to the position of the entity object in the front environment image;
determining the relative distance of any entity object relative to the vehicle according to the ranging signal sent in the orientation determined by the orientation information;
and controlling the vehicle to run according to the relative position of any physical object relative to the vehicle, which is determined by the orientation information and the relative distance, and the road on which the vehicle is located.
It should be noted that a distance measuring device (for example, an infrared or radar ranging device) is also installed on the unmanned vehicle and is used to send a ranging signal toward the front of the vehicle to measure the distance between a physical object ahead and the vehicle.
The camera installed at the front of the vehicle is fixed in position, so for each physical object appearing in the image it collects, the orientation information of that object relative to the vehicle can be determined from the object's position in the front environment image. The orientation information may include the angle between the vehicle's center line along the road and the line from the physical object to a predetermined position on the vehicle (for example, the intersection of that center line with the line connecting the two front headlights).
After the orientation information of a physical object in the front environment image relative to the vehicle is determined, the relative distance between the vehicle and that object can be determined from the ranging signal transmitted in the orientation given by the orientation information. The relative position of the physical object and the vehicle can then be determined from the orientation information and the relative distance, and the vehicle is controlled accordingly, realizing control of the vehicle based on the physical objects identified in the front environment image.
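As a sketch of how the orientation information and the measured distance might be combined into a relative position, the following assumes a simple pinhole-camera approximation with a known horizontal field of view; the function names, field of view and example numbers are illustrative and not taken from this embodiment.

```python
import math

def object_bearing(u_pixel, image_width, horizontal_fov_deg):
    """Approximate the orientation information (bearing angle relative to the vehicle's
    longitudinal center line) from the object's horizontal pixel position."""
    offset = (u_pixel - image_width / 2) / (image_width / 2)   # normalized to -1 .. 1
    return offset * (horizontal_fov_deg / 2)                   # degrees, positive = right of center

def relative_position(bearing_deg, range_m):
    """Combine the bearing with the ranging-device distance into a relative position
    (lateral, longitudinal) in the vehicle frame, in meters."""
    rad = math.radians(bearing_deg)
    return range_m * math.sin(rad), range_m * math.cos(rad)

# Example: object centered at pixel column 1100 of a 1920-wide image, 90 degree FOV, 18 m away.
bearing = object_bearing(1100, 1920, 90.0)
lateral, longitudinal = relative_position(bearing, 18.0)
```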
Further, on the basis of the above embodiments, the controlling the vehicle to travel according to the relative position of the any physical object with respect to the vehicle, which is determined by the orientation information and the relative distance, and the road on which the vehicle is located includes:
judging whether any entity object is in the road according to the relative position of the entity object relative to the vehicle and the edge line of the road where the vehicle is located;
controlling the vehicle to run according to at least one of the following information: whether the any physical object is within the road, object content of the any physical object, relative distance of the any physical object with respect to the vehicle.
The edge lines of the road can be determined from lane markings or road edges recognized in the image; alternatively, the line obtained by extending outward from the vehicle's left-side tires by a preset distance and the line obtained by extending outward from the right-side tires by a preset distance can be used as the edge lines of the road on which the vehicle is located.
In this embodiment, the vehicle is controlled more precisely according to whether the physical object is within the vehicle's road and what the object is, ensuring driving safety.
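A minimal in-road test is sketched below under the assumption that the edge lines have already been reduced to lateral offsets, in meters, relative to the vehicle's center line; the parameter names are illustrative.

```python
def is_object_in_road(lateral_offset_m, left_edge_m, right_edge_m):
    """Return True if the object's lateral position lies between the road edge lines.

    lateral_offset_m: the object's lateral offset from the vehicle center line
    (positive = right), e.g. the lateral component of the relative position above.
    left_edge_m / right_edge_m: lateral offsets of the left and right edge lines,
    obtained from lane-marking recognition or by extending the tire positions
    outward by a preset distance.
    """
    return left_edge_m <= lateral_offset_m <= right_edge_m
```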
Further, on the basis of the above embodiments, the vehicle is controlled to run according to at least one of the following information: whether the any physical object is within the road, object content of the any physical object, relative distance of the any physical object with respect to the vehicle, including:
when a first physical object in the road is included in the front environment image, if the first physical object is a movable entity, controlling the vehicle to stop when the relative distance between the first physical object and the vehicle is smaller than or equal to a first preset distance; wherein the movable entities include people, animals, and vehicles;
if the first entity object is a movable entity, controlling the vehicle to decelerate when the relative distance between the first entity object and the vehicle is greater than the first preset distance and less than or equal to a second preset distance;
and if the first entity object is a road signal lamp, controlling the vehicle to continuously run, decelerate or stop according to a signal sent by the road signal lamp.
A movable entity is an object that can move by itself, such as a person (mainly a pedestrian on the road), an animal or a vehicle. In addition to movable entities, vehicle travel may also be controlled based on non-movable entities, i.e., objects that cannot move on their own, such as road signal lights, trees and buildings. The object content of a physical object can be determined through image recognition.
Because a movable entity can move on its own, in order to ensure the safety of both the vehicle and the object, a first physical object located within the road can be detected in the environment ahead, and when that first physical object is a moving object, the vehicle is controlled according to the first preset distance and the second preset distance to ensure driving safety.
It can be understood that when the first physical object is a traffic light, the color of the traffic light needs to be recognized through the image to control the vehicle to run.
In this embodiment, control of the vehicle with respect to a first physical object appearing in the front environment is realized through the relative distance, ensuring driving safety.
Further, on the basis of the above embodiments, the vehicle is controlled to run according to at least one of the following information: whether the any physical object is within the road, object content of the any physical object, relative distance of the any physical object with respect to the vehicle, including:
when a second entity object outside the road is included in the front environment image, if the second entity object is a movable entity, controlling the vehicle to decelerate when the relative distance between the second entity object and the vehicle is less than or equal to a third preset distance; wherein the movable entities include people, animals, and vehicles;
and if the second entity object is a road signal lamp, controlling the vehicle to continuously run, decelerate or stop according to a signal sent by the road signal lamp.
When a second physical object outside the road is detected in the environment in front of the vehicle, then in order to prevent a collision should that object intrude into the road, the vehicle is controlled to slow down once it is detected to be close to the second physical object, so that the vehicle can stop in time if an emergency occurs (for example, the second physical object suddenly enters the road) and an accident is avoided.
In this embodiment, deceleration with respect to physical objects outside the road is controlled through the third preset distance, so that the vehicle can respond in time in an emergency and driving safety is ensured.
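Putting the in-road and out-of-road cases together, the following sketch maps one detected object to a driving action. The concrete threshold values and class names are placeholders; the embodiments only require that the first preset distance be smaller than the second.

```python
from enum import Enum

class Action(Enum):
    CONTINUE = 0
    DECELERATE = 1
    STOP = 2

# Illustrative preset distances in meters (the embodiments do not fix these values).
FIRST_PRESET_M = 10.0    # stop threshold for movable entities inside the road
SECOND_PRESET_M = 30.0   # deceleration threshold for movable entities inside the road
THIRD_PRESET_M = 15.0    # deceleration threshold for movable entities outside the road

MOVABLE = {'person', 'animal', 'vehicle'}

def decide(object_class, in_road, distance_m, light_signal=None):
    """Map one detected physical object to a driving action.

    object_class: detected category, e.g. 'person', 'vehicle' or 'traffic_light'.
    in_road: whether the object lies between the road edge lines.
    light_signal: 'red', 'yellow' or 'green' when the object is a road signal light.
    """
    if object_class == 'traffic_light':
        return {'red': Action.STOP, 'yellow': Action.DECELERATE}.get(light_signal, Action.CONTINUE)
    if object_class in MOVABLE and in_road:
        if distance_m <= FIRST_PRESET_M:
            return Action.STOP
        if distance_m <= SECOND_PRESET_M:
            return Action.DECELERATE
    elif object_class in MOVABLE and distance_m <= THIRD_PRESET_M:
        return Action.DECELERATE
    return Action.CONTINUE
```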
Further, on the basis of the foregoing embodiments, the inputting the front environment image into a preset machine learning model includes:
and adjusting the collected front environment image according to a preset size and a preset resolution, and inputting the adjusted front environment image into the preset machine learning model.
Furthermore, the size of the convolution kernels in the preset machine learning model can be adjusted to reduce the amount of computation.
In this embodiment, both during model training and during the subsequent identification of physical objects, the collected front environment image is processed first so that all front environment images follow a unified standard, which improves the accuracy of the subsequent identification of physical objects from these images.
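A minimal preprocessing sketch with OpenCV follows; the 640 by 640 preset size is an assumption (a common choice for YOLO-style models) rather than a value fixed by this description.

```python
import cv2

PRESET_SIZE = (640, 640)   # illustrative preset width and height in pixels

def preprocess_front_image(image_bgr):
    """Resize the collected front environment image to the preset size so that every
    image fed to the preset machine learning model follows the same standard."""
    return cv2.resize(image_bgr, PRESET_SIZE, interpolation=cv2.INTER_LINEAR)

# Usage: frame = cv2.imread('front.jpg'); model_input = preprocess_front_image(frame)
```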
Fig. 3 is a block diagram of a vehicle control device for target detection based on deep learning according to the present embodiment, and referring to fig. 3, the vehicle control device for target detection based on deep learning includes an acquisition module 301, a detection module 302 and a control module 303, wherein,
the acquisition module 301 is used for acquiring a front environment image acquired in the running process of a vehicle;
the detection module 302 is configured to input the front environment image into a preset machine learning model to obtain a detection result output by the preset machine learning model; wherein at least one physical object existing in the environment in front of the vehicle is marked in the detection result through an identification frame;
a control module 303, configured to adjust a driving path of the vehicle according to a position of each of the physical objects appearing in the detection result;
the preset machine learning model is obtained by training an initial model, and the initial model generates an identification frame for marking an entity object in a training sample according to frame information of a clustering sample frame; the clustering sample box is determined by clustering operation on a sample identification box marked in the training sample; the frame information at least includes any one of the following information: frame shape, dimensions of each side of the frame.
The vehicle control device for performing target detection based on deep learning provided in this embodiment is suitable for the vehicle control method for performing target detection based on deep learning provided in each embodiment, and is not described herein again.
This embodiment provides a vehicle control device for target detection based on deep learning, which detects the physical objects present in the environment in front of the vehicle through a preset machine learning model while the vehicle is running and then controls the vehicle accordingly. In the process of training an initial model to obtain the preset machine learning model, the initial model generates the identification frames used to mark physical objects in the training samples according to the frame information of clustering sample frames, and the clustering sample frames are determined by a clustering operation on the labeled sample identification frames. Because the initial model can generate identification frames with the clustering sample frames as a reference, instead of trying identification frames of various sizes, the convergence rate of model training is increased and the amount of computation in the training process is reduced. This makes it possible to train a required model for an unmanned vehicle, or to update its existing model, in a timely manner, so that the road-going requirements of the unmanned vehicle are met.
Optionally, the training of the preset machine learning model includes:
acquiring a plurality of training samples, and performing a clustering operation on the sample identification boxes marked in the training samples to obtain at least one clustering sample box; the training sample takes any front environment image acquired in the running process of the vehicle as input, and takes an image obtained by marking the physical objects in the any front environment image through sample identification frames as expected output;
training the initial model through the training samples to obtain the preset machine learning model; wherein, in each round of training, the initial model generates the identification frames for marking the physical objects in the front environment image of the training sample according to the frame information of the clustering sample frame.
Optionally, the adjusting the driving path of the vehicle according to the position of each physical object appearing in the detection result includes:
for any entity object marked in the detection result, determining the orientation information of the entity object relative to the vehicle according to the position of the entity object in the front environment image;
determining the relative distance of any entity object relative to the vehicle according to the ranging signal sent in the orientation determined by the orientation information;
and controlling the vehicle to run according to the relative position of any physical object relative to the vehicle, which is determined by the orientation information and the relative distance, and the road on which the vehicle is located.
Optionally, the controlling the vehicle to run according to the relative position of the any physical object determined by the orientation information and the relative distance relative to the vehicle and the road on which the vehicle is located includes:
judging whether any entity object is in the road according to the relative position of the entity object relative to the vehicle and the edge line of the road where the vehicle is located;
controlling the vehicle to run according to at least one of the following information: whether the any physical object is within the road, object content of the any physical object, relative distance of the any physical object with respect to the vehicle.
Optionally, the vehicle is controlled to run according to at least one of the following information: whether the any physical object is within the road, object content of the any physical object, relative distance of the any physical object with respect to the vehicle, including:
when a first physical object in the road is included in the front environment image, if the first physical object is a movable entity, controlling the vehicle to stop when the relative distance between the first physical object and the vehicle is smaller than or equal to a first preset distance; wherein the movable entities include people, animals, and vehicles;
if the first entity object is a movable entity, controlling the vehicle to decelerate when the relative distance between the first entity object and the vehicle is greater than the first preset distance and less than or equal to a second preset distance;
and if the first entity object is a road signal lamp, controlling the vehicle to continuously run, decelerate or stop according to a signal sent by the road signal lamp.
Optionally, the vehicle is controlled to run according to at least one of the following information: whether the any physical object is within the road, object content of the any physical object, relative distance of the any physical object with respect to the vehicle, including:
when a second entity object outside the road is included in the front environment image, if the second entity object is a movable entity, controlling the vehicle to decelerate when the relative distance between the second entity object and the vehicle is less than or equal to a third preset distance; wherein the movable entities include people, animals, and vehicles;
and if the second entity object is a road signal lamp, controlling the vehicle to continuously run, decelerate or stop according to a signal sent by the road signal lamp.
Optionally, the inputting the front environment image into a preset machine learning model includes:
and adjusting the collected front environment image according to a preset size and a preset resolution, and inputting the adjusted front environment image into the preset machine learning model.
Fig. 4 illustrates a physical structure diagram of an electronic device, which may include, as shown in fig. 4: a processor (processor)401, a communication Interface (communication Interface)402, a memory (memory)403 and a communication bus 404, wherein the processor 401, the communication Interface 402 and the memory 403 complete communication with each other through the communication bus 404. Processor 401 may invoke logic instructions in memory 403 to perform a method of vehicle control for deep learning based target detection, the method comprising: acquiring a front environment image acquired in the running process of a vehicle; inputting the front environment image into a preset machine learning model to obtain a detection result output by the preset machine learning model; wherein at least one physical object existing in the environment in front of the vehicle is marked in the detection result through an identification frame; adjusting the driving path of the vehicle according to the position of each entity object appearing in the detection result; the preset machine learning model is obtained by training an initial model, and the initial model generates an identification frame for marking an entity object in a training sample according to frame information of a clustering sample frame; the clustering sample box is determined by clustering operation on a sample identification box marked in the training sample; the frame information at least includes any one of the following information: frame shape, dimensions of each side of the frame.
In addition, the logic instructions in the memory 403 may be implemented in the form of software functional units and stored in a computer readable storage medium when the software functional units are sold or used as independent products. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
Further, an embodiment of the present invention discloses a computer program product, the computer program product includes a computer program stored on a non-transitory readable storage medium, the computer program includes program instructions, when the program instructions are executed by a computer, the computer can execute the vehicle control method for target detection based on deep learning provided by the above method embodiments, the method includes: acquiring a front environment image acquired in the running process of a vehicle; inputting the front environment image into a preset machine learning model to obtain a detection result output by the preset machine learning model; wherein at least one physical object existing in the environment in front of the vehicle is marked in the detection result through an identification frame; adjusting the driving path of the vehicle according to the position of each entity object appearing in the detection result; the preset machine learning model is obtained by training an initial model, and the initial model generates an identification frame for marking an entity object in a training sample according to frame information of a clustering sample frame; the clustering sample box is determined by clustering operation on a sample identification box marked in the training sample; the frame information at least includes any one of the following information: frame shape, dimensions of each side of the frame.
In another aspect, an embodiment of the present invention further provides a non-transitory readable storage medium, on which a computer program is stored, where the computer program is implemented to, when executed by a processor, perform the method for controlling a vehicle based on deep learning for target detection provided in the foregoing embodiments, where the method includes: acquiring a front environment image acquired in the running process of a vehicle; inputting the front environment image into a preset machine learning model to obtain a detection result output by the preset machine learning model; wherein at least one physical object existing in the environment in front of the vehicle is marked in the detection result through an identification frame; adjusting the driving path of the vehicle according to the position of each entity object appearing in the detection result; the preset machine learning model is obtained by training an initial model, and the initial model generates an identification frame for marking an entity object in a training sample according to frame information of a clustering sample frame; the clustering sample box is determined by clustering operation on a sample identification box marked in the training sample; the frame information at least includes any one of the following information: frame shape, dimensions of each side of the frame.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding, the above technical solutions may be embodied in the form of a software product, which may be stored in a readable storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A vehicle control method for performing target detection based on deep learning, characterized by comprising:
acquiring a front environment image acquired in the running process of a vehicle;
inputting the front environment image into a preset machine learning model to obtain a detection result output by the preset machine learning model; wherein at least one physical object existing in the environment in front of the vehicle is marked in the detection result through an identification frame;
adjusting the driving path of the vehicle according to the position of each entity object appearing in the detection result;
the preset machine learning model is obtained by training an initial model, and the initial model generates an identification frame for marking an entity object in a training sample according to frame information of a clustering sample frame; the clustering sample box is determined by clustering operation on a sample identification box marked in the training sample; the frame information at least includes any one of the following information: frame shape, dimensions of each side of the frame.
2. The vehicle control method for performing target detection based on deep learning according to claim 1, wherein the training of the preset machine learning model comprises:
acquiring a plurality of training samples, and performing a clustering operation on the sample identification boxes marked in the training samples to obtain at least one clustering sample box; the training sample takes any front environment image acquired in the running process of the vehicle as input, and takes an image obtained by marking the physical objects in the any front environment image through sample identification frames as expected output;
training the initial model through the training samples to obtain the preset machine learning model; wherein, in each round of training, the initial model generates the identification frames for marking the physical objects in the front environment image of the training sample according to the frame information of the clustering sample frame.
3. The method according to claim 1, wherein the adjusting the travel path of the vehicle according to the position of each physical object appearing in the detection result comprises:
for any entity object marked in the detection result, determining the orientation information of the entity object relative to the vehicle according to the position of the entity object in the front environment image;
determining the relative distance of any entity object relative to the vehicle according to the ranging signal sent in the orientation determined by the orientation information;
and controlling the vehicle to run according to the relative position of any physical object relative to the vehicle, which is determined by the orientation information and the relative distance, and the road on which the vehicle is located.
4. The deep learning-based target detection vehicle control method according to claim 3, wherein the controlling of the vehicle driving according to the relative position of the any one of the physical objects with respect to the vehicle, which is determined by the orientation information and the relative distance, and the road on which the vehicle is located, includes:
judging whether any entity object is in the road according to the relative position of the entity object relative to the vehicle and the edge line of the road where the vehicle is located;
controlling the vehicle to run according to at least one of the following information: whether the any physical object is within the road, object content of the any physical object, relative distance of the any physical object with respect to the vehicle.
5. The deep learning-based target detection vehicle control method according to claim 4, wherein the vehicle is controlled to travel according to at least one of: whether the any physical object is within the road, object content of the any physical object, relative distance of the any physical object with respect to the vehicle, including:
when a first physical object in the road is included in the front environment image, if the first physical object is a movable entity, controlling the vehicle to stop when the relative distance between the first physical object and the vehicle is smaller than or equal to a first preset distance; wherein the movable entities include people, animals, and vehicles;
if the first entity object is a movable entity, controlling the vehicle to decelerate when the relative distance between the first entity object and the vehicle is greater than the first preset distance and less than or equal to a second preset distance;
and if the first entity object is a road signal lamp, controlling the vehicle to continuously run, decelerate or stop according to a signal sent by the road signal lamp.
6. The deep learning-based target detection vehicle control method according to claim 4, wherein the vehicle is controlled to travel according to at least one of: whether the any physical object is within the road, object content of the any physical object, relative distance of the any physical object with respect to the vehicle, including:
when a second entity object outside the road is included in the front environment image, if the second entity object is a movable entity, controlling the vehicle to decelerate when the relative distance between the second entity object and the vehicle is less than or equal to a third preset distance; wherein the movable entities include people, animals, and vehicles;
and if the second entity object is a road signal lamp, controlling the vehicle to continuously run, decelerate or stop according to a signal sent by the road signal lamp.
7. The vehicle control method for target detection based on deep learning of claim 1, wherein the inputting the front environment image into a preset machine learning model comprises:
and adjusting the collected front environment image according to a preset size and a preset resolution, and inputting the adjusted front environment image into the preset machine learning model.
8. A vehicle control apparatus that performs target detection based on deep learning, characterized by comprising:
the acquisition module is used for acquiring a front environment image acquired in the running process of the vehicle;
the detection module is used for inputting the front environment image into a preset machine learning model to obtain a detection result output by the preset machine learning model; wherein at least one physical object existing in the environment in front of the vehicle is marked in the detection result through an identification frame;
the control module is used for adjusting the running path of the vehicle according to the position of each entity object appearing in the detection result;
the preset machine learning model is obtained by training an initial model, and the initial model generates an identification frame for marking an entity object in a training sample according to frame information of a clustering sample frame; the clustering sample box is determined by clustering operation on a sample identification box marked in the training sample; the frame information at least includes any one of the following information: frame shape, dimensions of each side of the frame.
9. An electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the program, implements the steps of the vehicle control method for target detection based on deep learning according to any one of claims 1 to 7.
10. A non-transitory readable storage medium having a computer program stored thereon, characterized in that the computer program, when executed by a processor, implements the steps of the vehicle control method for target detection based on deep learning according to any one of claims 1 to 7.
CN202011434379.9A 2020-12-11 2020-12-11 Vehicle control method and device for target detection based on deep learning Pending CN112232314A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011434379.9A CN112232314A (en) 2020-12-11 2020-12-11 Vehicle control method and device for target detection based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011434379.9A CN112232314A (en) 2020-12-11 2020-12-11 Vehicle control method and device for target detection based on deep learning

Publications (1)

Publication Number Publication Date
CN112232314A true CN112232314A (en) 2021-01-15

Family

ID=74124498

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011434379.9A Pending CN112232314A (en) 2020-12-11 2020-12-11 Vehicle control method and device for target detection based on deep learning

Country Status (1)

Country Link
CN (1) CN112232314A (en)



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190258878A1 (en) * 2018-02-18 2019-08-22 Nvidia Corporation Object detection and detection confidence suitable for autonomous driving
CN110271544A (en) * 2018-03-15 2019-09-24 本田技研工业株式会社 Controller of vehicle, control method for vehicle and storage medium
CN110728770A (en) * 2019-09-29 2020-01-24 深圳市大拿科技有限公司 Vehicle running monitoring method, device and system and electronic equipment
CN111553387A (en) * 2020-04-03 2020-08-18 上海物联网有限公司 Yolov 3-based personnel target detection method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HAN Jianghong et al.: "Pedestrian visual positioning algorithm in underground roadways based on deep learning", Journal of Computer Applications *
LONG Xiang et al.: "Development and test verification of an autonomous driving vehicle system architecture", Journal of Chongqing University of Technology (Natural Science) *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112896042A (en) * 2021-03-02 2021-06-04 广州通达汽车电气股份有限公司 Vehicle driving early warning method, device, equipment and storage medium
CN112949595A (en) * 2021-04-01 2021-06-11 哈尔滨理工大学 Improved pedestrian and vehicle safety distance detection algorithm based on YOLOv5
CN112985462A (en) * 2021-04-21 2021-06-18 智道网联科技(北京)有限公司 Method and device for correcting inertial measurement data based on convolutional neural network model
CN113581199A (en) * 2021-06-30 2021-11-02 银隆新能源股份有限公司 Vehicle control method and device
CN113807407A (en) * 2021-08-25 2021-12-17 西安电子科技大学广州研究院 Target detection model training method, model performance detection method and device
CN113807407B (en) * 2021-08-25 2023-04-18 西安电子科技大学广州研究院 Target detection model training method, model performance detection method and device
CN113689491A (en) * 2021-09-02 2021-11-23 广州市奥威亚电子科技有限公司 Target positioning method, multi-target tracking method and device
CN113689491B (en) * 2021-09-02 2023-12-01 广州市奥威亚电子科技有限公司 Target positioning method, multi-target tracking method and device
CN114018215A (en) * 2022-01-04 2022-02-08 智道网联科技(北京)有限公司 Monocular distance measuring method, device, equipment and storage medium based on semantic segmentation

Similar Documents

Publication Publication Date Title
CN112232314A (en) Vehicle control method and device for target detection based on deep learning
US10885777B2 (en) Multiple exposure event determination
CN107346612B (en) Vehicle anti-collision method and system based on Internet of vehicles
CN108647638B (en) Vehicle position detection method and device
US20190143992A1 (en) Self-driving learning apparatus and method using driving experience information
CN109085829B (en) Dynamic and static target identification method
US20200117912A1 (en) System and method for determining vehicle data set familiarity
CN113147752A (en) Unmanned driving method and system
CN112793567A (en) Driving assistance method and system based on road condition detection
CN114694060B (en) Road casting detection method, electronic equipment and storage medium
CN111145554B (en) Scene positioning method and device based on automatic driving AEB
CN113071515A (en) Movable carrier control method, device, movable carrier and storage medium
CN110727269B (en) Vehicle control method and related product
CN116767281A (en) Auxiliary driving method, device, equipment, vehicle and medium
WO2021199584A1 (en) Detecting debris in a vehicle path
CN113370991A (en) Driving assistance method, device, equipment, storage medium and computer program product
CN113353087A (en) Driving assistance method, device and system
CN112232312A (en) Automatic driving method and device based on deep learning and electronic equipment
CN116811884B (en) Intelligent driving environment perception analysis method and system
US20240127694A1 (en) Method for collision warning, electronic device, and storage medium
JP7323716B2 (en) Image processing device and image processing method
US10831194B2 (en) Method and device that recognizes road users in an environment of a vehicle
CN115147795A (en) Bus station water-splashing prevention method, device, equipment and medium based on image recognition
CN117227760A (en) Vehicle running control method, device, equipment and storage medium
Singh et al. Computer Vision Based Approach for Overspeeding Problem in Smart Traffic System

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (Application publication date: 20210115)