CN114648504A - Automatic driving method, device, electronic equipment and storage medium - Google Patents

Automatic driving method, device, electronic equipment and storage medium

Info

Publication number
CN114648504A
Authority
CN
China
Prior art keywords
detected
images
target object
frames
road condition
Prior art date
Legal status
Granted
Application number
CN202210278849.XA
Other languages
Chinese (zh)
Other versions
CN114648504B (en)
Inventor
刘霖
Current Assignee
Xiaomi Automobile Technology Co Ltd
Original Assignee
Xiaomi Automobile Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Xiaomi Automobile Technology Co Ltd filed Critical Xiaomi Automobile Technology Co Ltd
Priority to CN202210278849.XA priority Critical patent/CN114648504B/en
Publication of CN114648504A publication Critical patent/CN114648504A/en
Application granted granted Critical
Publication of CN114648504B publication Critical patent/CN114648504B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30248 Vehicle exterior or interior
    • G06T 2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Quality & Reliability (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure relates to an automatic driving method, apparatus, electronic device, and storage medium. The method includes: acquiring multiple frames of images to be detected that are continuously captured by a camera module of a vehicle; performing recognition processing on the frames to determine whether a target object exists in each frame of image to be detected and, if so, the position of the target object in that image, the target object being a designated accessory of a front vehicle; in a case where the target object exists in at least two frames of images to be detected, determining road condition information according to the positions of the target object in those frames; and controlling the vehicle to execute a corresponding driving action according to the road condition information.

Description

Automatic driving method, device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of automatic driving technologies, and in particular, to an automatic driving method, an automatic driving apparatus, an electronic device, and a storage medium.
Background
In recent years, automobiles have developed rapidly, and an increasing number of vehicles incorporate automatic driving technology, which may be fully automatic driving technology or semi-automatic driving technology (also called assisted driving technology). Semi-automatic driving technology can assist the driver's work in various situations, thereby improving driving safety. In the related art, research on automatic driving technology has centered on driving safety and has rarely addressed riding comfort, so the riding comfort of a vehicle using automatic driving technology is poor.
Disclosure of Invention
To overcome the problems in the related art, embodiments of the present disclosure provide an automatic driving method, an automatic driving apparatus, an electronic device, and a storage medium to address the shortcomings of the related art.
According to a first aspect of an embodiment of the present disclosure, there is provided an automatic driving method including:
acquiring multiple frames of images to be detected that are continuously captured by a camera module of a vehicle;
performing recognition processing on the multiple frames of images to be detected, and determining whether a target object exists in each frame of image to be detected and, if so, the position of the target object in that image, wherein the target object is a designated accessory of a front vehicle;
in a case where the target object exists in at least two frames of images to be detected, determining road condition information according to the positions of the target object in the at least two frames of images to be detected; and
controlling the vehicle to execute a corresponding driving action according to the road condition information.
In an embodiment, the performing recognition processing on the multiple frames of images to be detected and determining whether a target object exists in each frame of image to be detected and the position of the target object in the image to be detected includes:
inputting each frame of image to be detected into a pre-trained neural network recognition model, wherein the neural network recognition model outputs a recognition result for each frame of image to be detected, and the recognition result includes whether a target object exists in the image to be detected and, if so, the position of the target object in the image to be detected.
In one embodiment, the method further comprises:
inputting at least one training image in a set of training images into the neural network recognition model, wherein the neural network recognition model outputs a recognition result of the training image, the training image has a label, and the label comprises whether a target object exists in the training image and the position of the existing target object in the training image;
determining a network loss value according to the recognition result of the training image and the label of the training image;
and adjusting the network parameters of the neural network recognition model according to the network loss value until the neural network recognition model converges.
In one embodiment, the position of the target object comprises a position of the target object in a preset direction of the image to be detected;
wherein, in a case where the target object exists in the at least two frames of images to be detected, the determining road condition information according to the positions of the target object in the at least two frames of images to be detected includes:
determining that the road condition information is a flat road condition in a case where the position difference of the target object between every two adjacent frames of the at least two frames of images to be detected does not exceed an error range; and
determining that the road condition information is a non-flat road condition in a case where the position difference of the target object between two adjacent frames of the at least two frames of images to be detected exceeds the error range.
In one embodiment, the method further comprises:
in a case where the target object exists in the at least two frames of images to be detected, determining the moving speed of the front vehicle according to depth values of the target object in the at least two frames of images to be detected; and
determining the error range according to the moving speed of the front vehicle and the time difference between two adjacent frames of images to be detected.
In one embodiment, the method further comprises:
acquiring suspension shake information of the vehicle;
wherein, in a case where the target object exists in the at least two frames of images to be detected, the determining road condition information according to the positions of the target object in the at least two frames of images to be detected includes:
determining the road condition information according to the positions of the target object in the at least two frames of images to be detected in a case where the suspension shake information of the vehicle indicates no shaking and the target object exists in the at least two frames of images to be detected.
In one embodiment, the controlling the vehicle to execute a corresponding driving action according to the road condition information includes:
controlling the vehicle to reduce the hardness of the suspension shock absorbers to a preset hardness value in a case where the road condition information is a non-flat road condition; and/or
controlling the vehicle to reduce its moving speed by a preset proportion in a case where the road condition information is a non-flat road condition.
In one embodiment, the controlling the vehicle to reduce the hardness of the suspension shock absorbers to a preset hardness value in a case where the road condition information is a non-flat road condition includes:
in a case where the road condition information is a non-flat road condition, acquiring, according to the at least two frames of images to be detected and the subsequent frames of images to be detected, the image to be detected at the moment the front vehicle leaves the non-flat road condition;
determining a target time at which the vehicle will reach the non-flat road condition according to the depth value of the target object in that image and the moving speed of the vehicle; and
controlling the vehicle to reduce the hardness of the suspension shock absorbers to the preset hardness value when the target time is reached.
In one embodiment, the acquiring, according to the at least two frames of images to be detected and the subsequent frames of images to be detected, the image to be detected at the moment the front vehicle leaves the non-flat road condition includes:
in a case where, across the at least two frames of images to be detected and the subsequent frames of images to be detected, the position of the target object goes through a preset position change process twice in succession, acquiring the last frame of the second position change process as the image to be detected at the moment the front vehicle leaves the non-flat road condition, wherein the position change process comprises starting from an initial position, passing through at least one other position, and returning to the initial position.
According to a second aspect of the embodiments of the present disclosure, there is provided an automatic driving apparatus including:
an acquisition module configured to acquire multiple frames of images to be detected that are continuously captured by a camera module of a vehicle;
a recognition module configured to perform recognition processing on the multiple frames of images to be detected, and to determine whether a target object exists in each frame of image to be detected and, if so, the position of the target object in that image, wherein the target object is a designated accessory of a front vehicle;
a road condition determining module configured to determine, in a case where the target object exists in at least two frames of images to be detected, road condition information according to the positions of the target object in the at least two frames of images to be detected; and
a driving control module configured to control the vehicle to execute a corresponding driving action according to the road condition information.
In one embodiment, the recognition module is specifically configured to:
input each frame of image to be detected into a pre-trained neural network recognition model, wherein the neural network recognition model outputs a recognition result for each frame of image to be detected, and the recognition result includes whether a target object exists in the image to be detected and, if so, the position of the target object in the image to be detected.
In one embodiment, the apparatus further comprises a training module configured to:
inputting at least one training image in a set of training images into the neural network recognition model, wherein the neural network recognition model outputs a recognition result of the training image, the training image has a label, and the label comprises whether a target object exists in the training image and the position of the existing target object in the training image;
determining a network loss value according to the recognition result of the training image and the label of the training image;
and adjusting the network parameters of the neural network recognition model according to the network loss value until the neural network recognition model converges.
In one embodiment, the position of the target object comprises a position of the target object in a preset direction of the image to be detected;
the road condition determining module is specifically configured to:
determine that the road condition information is a flat road condition in a case where the position difference of the target object between every two adjacent frames of the at least two frames of images to be detected does not exceed an error range; and
determine that the road condition information is a non-flat road condition in a case where the position difference of the target object between two adjacent frames of the at least two frames of images to be detected exceeds the error range.
In one embodiment, the road condition determining module is further configured to:
in a case where the target object exists in the at least two frames of images to be detected, determine the moving speed of the front vehicle according to depth values of the target object in the at least two frames of images to be detected; and
determine the error range according to the moving speed of the front vehicle and the time difference between two adjacent frames of images to be detected.
In one embodiment, the apparatus is further configured to:
acquire suspension shake information of the vehicle;
wherein the determining road condition information according to the positions of the target object in the at least two frames of images to be detected, in a case where the target object exists in the at least two frames of images to be detected, includes:
determining the road condition information according to the positions of the target object in the at least two frames of images to be detected in a case where the suspension shake information of the vehicle indicates no shaking and the target object exists in the at least two frames of images to be detected.
In one embodiment, the driving control module is specifically configured to:
control the vehicle to reduce the hardness of the suspension shock absorbers to a preset hardness value in a case where the road condition information is a non-flat road condition; and/or
control the vehicle to reduce its moving speed by a preset proportion in a case where the road condition information is a non-flat road condition.
In one embodiment, when controlling the vehicle to reduce the hardness of the suspension shock absorbers to a preset hardness value in a case where the road condition information is a non-flat road condition, the driving control module is specifically configured to:
in a case where the road condition information is a non-flat road condition, acquire, according to the at least two frames of images to be detected and the subsequent frames of images to be detected, the image to be detected at the moment the front vehicle leaves the non-flat road condition;
determine a target time at which the vehicle will reach the non-flat road condition according to the depth value of the target object in that image and the moving speed of the vehicle; and
control the vehicle to reduce the hardness of the suspension shock absorbers to the preset hardness value when the target time is reached.
In one embodiment, when acquiring, according to the at least two frames of images to be detected and the subsequent frames of images to be detected, the image to be detected at the moment the front vehicle leaves the non-flat road condition, the driving control module is specifically configured to:
in a case where, across the at least two frames of images to be detected and the subsequent frames of images to be detected, the position of the target object goes through a preset position change process twice in succession, acquire the last frame of the second position change process as the image to be detected at the moment the front vehicle leaves the non-flat road condition, wherein the position change process comprises starting from an initial position, passing through at least one other position, and returning to the initial position.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic device comprising a memory and a processor, the memory being configured to store computer instructions executable on the processor, and the processor being configured to perform the automatic driving method according to the first aspect when executing the computer instructions.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method of the first aspect.
The technical solutions provided by the embodiments of the present disclosure may have the following beneficial effects:
the method acquires multiple frames of images to be detected that are continuously captured by a camera module of a vehicle and performs recognition processing on them, so as to determine whether a target object exists in each frame of image to be detected and, if so, the position of the target object in that image. Because the target object is a designated accessory of a front vehicle, in a case where the target object exists in at least two frames of images to be detected, road condition information can be determined according to the positions of the target object in those frames, and the vehicle can then be controlled to execute a corresponding driving action according to the road condition information. Since the position of the designated accessory of the front vehicle in the image to be detected can represent the road condition ahead, the method can accurately determine the road condition ahead, and the driving action executed based on that road condition can increase riding comfort.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a flow chart illustrating an automated driving method according to an exemplary embodiment of the present disclosure;
FIG. 2 is a flow chart illustrating an automated driving method according to another exemplary embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a following scenario shown in an exemplary embodiment of the present disclosure;
FIG. 4 is a schematic diagram illustrating the structure of an autopilot device according to an exemplary embodiment of the present disclosure;
fig. 5 is a block diagram illustrating an electronic device according to an exemplary embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. The following description refers to the accompanying drawings in which the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure. The word "if," as used herein, may be interpreted as "when" or "upon" or "in response to determining," depending on the context.
In a first aspect, at least one embodiment of the present disclosure provides an automatic driving method; please refer to fig. 1, which illustrates the flow of the method, including steps S101 to S104.
The method is applied to a vehicle-mounted terminal of a vehicle, for example to an automatic driving system, a semi-automatic driving system, an assisted driving system, a safe driving system, or the like installed on the vehicle-mounted terminal. The vehicle may be equipped with a camera module, such as a binocular camera or a monocular camera. The capture direction of the camera module may be forward, that is, the camera module captures images in front of the vehicle, and these images may include roads, vehicles, and the like. It can be understood that the vehicle may also be provided with other camera modules with other capture directions.
The method can be applied to scenes such as car-following driving, that is, scenes in which a front vehicle is within the image capture range of the camera module; in other words, a front vehicle is present in the images captured by the camera module.
In step S101, a plurality of frames of images to be detected continuously collected by the camera module of the vehicle are acquired.
The camera module may continuously capture images to be detected at a certain frequency, and this step acquires the captured images; alternatively, the camera module may record video, and this step takes the recorded video frames as images to be detected frame by frame or at a certain sampling frequency.
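As a minimal illustration of the second option, the following Python sketch samples video frames at a fixed stride; the OpenCV capture source and the stride value are illustrative assumptions, not part of the disclosure.

```python
# Minimal sketch, assuming OpenCV as the frame source; any camera or video
# interface would do. The stride stands in for "a certain frequency".
import cv2

def sample_frames(video_path: str, stride: int = 3):
    cap = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % stride == 0:
            yield frame  # one "image to be detected"
        index += 1
    cap.release()
```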
Because the field of view of the camera module is fixed relative to the vehicle, the real scene corresponding to the image to be detected is also fixed relative to the vehicle. In other words, the image to be detected shows the real scene within a certain range in front of the vehicle at the capture moment.
In step S102, recognition processing is performed on the multiple frames of images to be detected, and it is determined whether a target object exists in each frame of image to be detected and, if so, the position of the target object in the image to be detected, where the target object is a designated accessory of the front vehicle.
The recognition processing may be performed on each frame of image to be detected as soon as it is acquired; alternatively, a batch size (for example, 2 frames, 5 frames, or 10 frames) may be set in advance, and once the acquired images to be detected reach the batch size, they are recognized in a batch.
The designated accessory may be a universal part of vehicles, that is, a part that all vehicles have (abnormal vehicles such as faulty vehicles are excluded). The designated accessory may be a part at the tail of the vehicle, because when a front vehicle appears in the images captured by the camera module, it is most likely the tail of the front vehicle that is visible. For example, the designated accessory may be a tail light, that is, the target object is a tail light of the front vehicle.
The position of the target object in the image to be detected can be represented by the positions of the pixels it occupies. The position may be taken along a preset direction of the image to be detected, for example along the width direction or along the height direction. Illustratively, when the position is taken along the height direction, it is represented by the range of rows occupied by the target object, for example rows n to n+5 of the image to be detected. Alternatively, when the position is taken along the width direction, it is represented by the range of columns occupied by the target object, for example columns m to m+10 of the image to be detected.
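A small sketch of this representation, assuming the recognizer yields a pixel bounding box (x1, y1, x2, y2); the box format is an assumption for illustration.

```python
# Minimal sketch: reduce a pixel bounding box (x1, y1, x2, y2) to the
# row range the target object occupies in the image height direction.
def box_to_row_range(box):
    x1, y1, x2, y2 = box
    return int(y1), int(y2)  # e.g. rows n .. n+5 occupied by the tail light
```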
It can be understood that the position of the target object along the height direction of the image to be detected reflects the up-and-down position of the front vehicle, and changes in that up-and-down position are usually related to the road condition. Therefore, this step can determine the position of the target object along the height direction of the image to be detected, which facilitates determining the road condition information in step S103.
In one possible embodiment, the recognition processing may be performed on the images to be detected in the following manner: each frame of image to be detected is input into a pre-trained neural network recognition model, and the neural network recognition model outputs a recognition result for each frame, the recognition result including whether a target object exists in the image to be detected and, if so, the position of the target object in the image to be detected.
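A minimal per-frame inference sketch, assuming a PyTorch model that returns an objectness score and a bounding box; the model interface and the score threshold are illustrative assumptions, not the patented model itself.

```python
# Minimal sketch, assuming a PyTorch model mapping one image tensor to
# (objectness score, tail-light bounding box).
import torch

def recognize_frame(model, frame_tensor, score_threshold=0.5):
    """frame_tensor: (3, H, W) float tensor for one image to be detected."""
    model.eval()
    with torch.no_grad():
        score, box = model(frame_tensor.unsqueeze(0))
    if score.item() < score_threshold:
        return False, None  # no target object in this frame
    return True, box.squeeze(0).tolist()  # (x1, y1, x2, y2) in pixels
```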
The neural network recognition model can be trained in advance in the following manner: first, at least one training image from a training image set is input into the neural network recognition model, and the model outputs a recognition result for the training image; the training image carries a label, and the label includes whether a target object exists in the training image and, if so, the position of the target object in the training image. Then, a network loss value is determined according to the recognition result of the training image and its label. Finally, the network parameters of the neural network recognition model are adjusted according to the network loss value until the model converges.
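A training-step sketch under stated assumptions: the label carries an objectness flag plus a box, and the loss terms and optimizer step below are illustrative choices, since the disclosure does not fix them.

```python
# Minimal sketch, assuming labels of the form (objectness flag, box);
# the BCE + Smooth-L1 combination is an illustrative choice of loss.
import torch
import torch.nn as nn

def train_step(model, optimizer, images, obj_labels, box_labels):
    bce = nn.BCEWithLogitsLoss()  # "does a target object exist?"
    l1 = nn.SmoothL1Loss()        # "where is it?" (positives only)
    optimizer.zero_grad()
    scores, boxes = model(images)
    loss = bce(scores, obj_labels)
    positive = obj_labels.bool()
    if positive.any():
        loss = loss + l1(boxes[positive], box_labels[positive])
    loss.backward()               # network loss value drives the update
    optimizer.step()              # adjust parameters until convergence
    return loss.item()
```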
Taking the target object being a vehicle tail light as an example, a large number (at least tens of thousands) of images of tail lights of different vehicle models, captured from different angles, can be combined into the training image set.
The neural network recognition model may employ a VGG16 model.
In step S103, in a case where the target object exists in at least two frames of images to be detected, road condition information is determined according to the positions of the target object in the at least two frames of images to be detected.
If the target object does not exist in the images to be detected, no front vehicle is present, and the vehicle is not in a car-following scene, so the road condition cannot be judged from a front vehicle. If the target object exists in the images to be detected, a front vehicle is present and the vehicle is in a car-following scene, so the road condition can be judged from the front vehicle. The judgment relies on the positional relationship of the target object across different images to be detected, which is why the target object must exist in at least two frames: when the positions of the target object in different images to be detected are the same, the front vehicle is not shaking and the road ahead is relatively flat; when the positions differ, the front vehicle is shaking and the road ahead is not flat. It can be understood that the positional relationship of the target object in images captured close together in time represents most accurately whether the front vehicle is shaking, so the positional relationship in adjacent frames is the most reliable indicator.
It should be noted that shaking of the ego vehicle itself may change the field of view of the camera module and thus also make the position of the target object differ between frames. For this reason, suspension shake information of the vehicle, such as height information collected by an air spring height sensor in the vehicle suspension, can be acquired to judge whether the vehicle itself is shaking. When judging the road condition, the road condition information is then determined according to the positions of the target object in the at least two frames of images to be detected only in a case where the suspension shake information indicates no shaking and the target object exists in the at least two frames. This removes the interference of ego-vehicle shake from the road condition judgment and improves its accuracy.
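A sketch of this gating step; the air-spring height samples and the steadiness tolerance are illustrative assumptions.

```python
# Minimal sketch, assuming air-spring height samples in millimetres; the
# tolerance and the judge callable are illustrative, not from the disclosure.
def suspension_is_steady(height_samples_mm, tolerance_mm=3.0):
    return max(height_samples_mm) - min(height_samples_mm) <= tolerance_mm

def maybe_judge_road(height_samples_mm, positions, judge):
    """judge: a callable such as judge_road_condition in a later sketch."""
    if suspension_is_steady(height_samples_mm) and len(positions) >= 2:
        return judge(positions)
    return None  # ego-vehicle shake would contaminate the judgment
```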
In one possible embodiment, the road condition information includes a flat road condition and a non-flat road condition. A non-flat road condition may be a speed bump, an obstacle, a pothole, or the like on the road.
When a target object exists in a certain frame of image to be detected, the tracking algorithm in YOLOv5 can be adopted to obtain the position of the target object in each frame of image to be detected.
In step S104, the vehicle is controlled to execute a corresponding driving action according to the road condition information.
In a case where the road condition information is a flat road condition, the vehicle can be controlled not to add any driving action, that is, to keep the driving actions triggered by the driver, by the automatic driving system, or by both.
In a case where the road condition information is a non-flat road condition, the vehicle can be controlled to reduce the hardness of the suspension shock absorbers to a preset hardness value on the basis of its current driving actions; and/or the vehicle can be controlled to reduce its moving speed by a preset proportion on the basis of its current driving actions.
The hardness of a suspension shock absorber is reduced by adjusting an oil proportional valve in the shock absorber piston, so the hardness can be represented by the proportional value of the oil proportional valve: an oil proportion corresponding to the preset hardness value is set in advance, and in this step the proportional value of the oil proportional valve is adjusted to that oil proportion. Reducing the hardness of the suspension shock absorbers softens the suspension, so the vehicle absorbs shock to a greater degree when passing over a non-flat road condition, giving a more comfortable riding experience.
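A sketch of the hardness-to-valve mapping described above; the preset values and the actuator hook are assumptions, since the disclosure only states that the valve proportion represents damper hardness.

```python
# Minimal sketch, assuming a vehicle-specific actuator hook; the preset
# hardness-to-valve-proportion table is purely illustrative.
PRESET_VALVE_PROPORTION = {"soft": 0.3, "normal": 0.6, "firm": 0.9}

def set_damper_hardness(send_valve_command, preset="soft"):
    """send_valve_command: hypothetical hook that sets the oil proportional valve."""
    send_valve_command(PRESET_VALVE_PROPORTION[preset])
```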
Controlling the vehicle to reduce its moving speed by a preset proportion means the vehicle passes over the non-flat road condition at a lower speed, avoiding a large jolt and preserving riding comfort as much as possible. It should be noted that the preset deceleration proportion must be chosen so that the deceleration process itself does not harm riding comfort or riding safety.
According to the method, multiple frames of images to be detected that are continuously captured by a camera module of a vehicle are acquired and recognized, so that whether a target object exists in each frame of image to be detected and, if so, the position of the target object in that image can be determined. Because the target object is a designated accessory of a front vehicle, in a case where the target object exists in at least two frames of images to be detected, road condition information can be determined according to the positions of the target object in those frames, and the vehicle can then be controlled to execute a corresponding driving action according to the road condition information. Since the position of the designated accessory of the front vehicle in the image to be detected can represent the road condition ahead, the method can accurately determine the road condition ahead, and the driving action executed based on that road condition can increase riding comfort.
In a car-following scene, the driver or the camera module may be unable to see a non-flat road condition ahead, such as a speed bump. With the method provided by the present disclosure, the non-flat road condition can be discovered accurately and a corresponding driving action taken in time, improving riding comfort.
In driving environments with dim light or haze, it is difficult for the driver to notice changes in the road condition ahead. With the method provided by the present disclosure, non-flat road conditions can be discovered accurately and corresponding driving actions taken in time, improving riding safety and comfort.
When a vehicle passes over a pothole, its chassis can very easily bottom out. The method can accurately detect the pothole and reduce the vehicle speed, preventing chassis impact and effectively protecting the vehicle.
In some embodiments of the present disclosure, the position of the target object includes a position of the target object along a preset direction of the image to be detected, where the preset direction may be the height direction of the image. In a case where the target object exists in at least two frames of images to be detected, the road condition information is determined according to the positions of the target object in the at least two frames of images to be detected in the following manner: the road condition information is determined to be a flat road condition in a case where the position difference of the target object between every two adjacent frames of the at least two frames of images to be detected does not exceed an error range; and the road condition information is determined to be a non-flat road condition in a case where the position difference of the target object between two adjacent frames of the at least two frames of images to be detected exceeds the error range.
As analyzed in the above embodiment, the position change of the target object between two adjacent frames of images to be detected can accurately represent the shaking of the front vehicle, and thus accurately represent the road condition ahead.
Since the position of the target object is its position along the preset direction, the position difference of the target object is its position difference along the preset direction, for example along the height direction of the image to be detected. The difference can be expressed as the difference between the pixel ranges occupied by the target object: for example, if the target object occupies rows n to n+5 in one frame of image to be detected and rows n+10 to n+15 in the adjacent following frame, the position difference of the target object between the two adjacent frames is 10 rows of pixels.
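A sketch of the flat/non-flat decision from these row positions; the row-based representation follows the example above, while the default error range is an illustrative placeholder (the disclosure derives it from speed, as described below).

```python
# Minimal sketch: compare the target object's top-row position across
# adjacent frames against an error range expressed in pixel rows.
def judge_road_condition(row_positions, error_range_rows=3):
    """row_positions: list of (first_row, last_row) per frame, in time order."""
    for (top_a, _), (top_b, _) in zip(row_positions, row_positions[1:]):
        if abs(top_b - top_a) > error_range_rows:
            return "non-flat"  # the front vehicle visibly bounced
    return "flat"
```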
When the front vehicle shakes, the position difference of the target object between two adjacent frames of images to be detected reflects the amplitude of the shake.
The error range can be a fixed value set in advance. For example, when the position difference of the target object is expressed as the difference between the pixel ranges it occupies, the error range may be set to several rows or several columns of pixels.
It can be understood that both the moving speed of the front vehicle and the time difference between two adjacent frames of images to be detected affect the apparent amplitude of the front vehicle's shake. For accurate road condition judgment, the shake amplitudes of a vehicle over different moving distances can therefore be measured in advance, and amplitude thresholds corresponding to different moving distances can be set; these amplitude thresholds serve as the error ranges. Accordingly, in a case where the target object exists in the at least two frames of images to be detected, the moving speed of the front vehicle is determined according to the depth values of the target object in the at least two frames of images to be detected, and the error range is then determined according to the moving speed of the front vehicle and the time difference between two adjacent frames: the moving distance is calculated from the moving speed and the time difference, and the amplitude threshold corresponding to that moving distance is taken as the error range.
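A sketch of this lookup; the calibration table format is an assumption, and estimating the front vehicle's speed from depth values is shown in the simplest finite-difference form (adding the ego speed for an absolute value is also an assumption).

```python
# Minimal sketch: front-vehicle speed from depth values, then an error
# range from a pre-calibrated (distance -> row threshold) table (assumed).
def front_vehicle_speed(depth_a_m, depth_b_m, frame_gap_s, ego_speed_mps):
    # Change in depth gives relative speed; add ego speed for absolute speed.
    return ego_speed_mps + (depth_b_m - depth_a_m) / frame_gap_s

def error_range_rows(front_speed_mps, frame_gap_s, amplitude_table):
    """amplitude_table: list of (max_distance_m, threshold_rows), ascending."""
    distance = front_speed_mps * frame_gap_s
    for max_distance_m, threshold_rows in amplitude_table:
        if distance <= max_distance_m:
            return threshold_rows
    return amplitude_table[-1][1]
```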
In this embodiment, the shake information of the front vehicle is accurately determined from the position difference of the target object between adjacent frames among the at least two frames of images to be detected in which the target object exists, so the road condition ahead is determined accurately, conveniently and quickly.
In some embodiments of the present disclosure, when the road condition information is a non-flat road condition, the vehicle may be controlled to reduce the hardness of the suspension shock absorbers to a preset hardness value in the manner shown in fig. 2, including steps S201 to S203.
In step S201, in a case where the road condition information is a non-flat road condition, the image to be detected at the moment the front vehicle leaves the non-flat road condition is obtained according to the at least two frames of images to be detected and the subsequent frames of images to be detected.
The at least two frames of images to be detected correspond to the initial stage of the front vehicle's shaking, so frames of images to be detected can continue to be acquired, and each frame continues to be recognized to obtain the position of the target object in it; the recognition processing can be performed in the manner introduced in the above embodiment.
When a vehicle passes over a non-flat road condition such as a speed bump, an obstacle, or a pothole, it shakes twice, once as the front wheels pass and once as the rear wheels pass, and the end of the second shake indicates that the vehicle has cleared the speed bump or other cause. Based on this common knowledge, in the at least two frames of images to be detected and the subsequent frames of images to be detected, in a case where the position of the target object goes through a preset position change process twice in succession, the last frame of the second position change process is the image to be detected at the moment the front vehicle leaves the non-flat road condition, where the position change process comprises starting from an initial position, passing through at least one other position, and returning to the initial position. Illustratively, the initial position of the target object is rows n to n+5 of the image to be detected; its position is rows n+10 to n+15 in the first frame, rows n+20 to n+25 in the second frame, rows n+30 to n+35 in the third frame, rows n+20 to n+25 in the fourth frame, rows n+10 to n+15 in the fifth frame, and rows n to n+5 in the sixth frame, which completes one position change process.
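A sketch of detecting the two successive position change processes from per-frame row positions; the tolerance on "returned to the initial position" is an assumption.

```python
# Minimal sketch: detect two successive "leave initial position and return"
# cycles (front wheels, then rear wheels) in the target object's row position.
def frame_leaving_bump(top_rows, tolerance_rows=1):
    """top_rows: target object's first pixel row per frame, in time order."""
    initial = top_rows[0]
    cycles_done, in_cycle = 0, False
    for i, row in enumerate(top_rows[1:], start=1):
        if not in_cycle and abs(row - initial) > tolerance_rows:
            in_cycle = True  # position left the initial row
        elif in_cycle and abs(row - initial) <= tolerance_rows:
            in_cycle, cycles_done = False, cycles_done + 1
            if cycles_done == 2:
                return i     # index of the last frame of the second cycle
    return None              # the front vehicle has not yet cleared the bump
```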
In step S202, a target time at which the vehicle will reach the non-flat road condition is determined according to the depth value of the target object in the image to be detected at the moment the front vehicle leaves the non-flat road condition and the moving speed of the vehicle.
Referring to fig. 3, the distance S1 from the installation position of the camera module to the target object (e.g., the tail light) of the front vehicle is determined according to the depth value of the target object in the image to be detected at the moment the front vehicle leaves the non-flat road condition. Then the pre-calibrated distance S2 between the front wheel of the vehicle and the camera module, the distance S3 between the rear wheel of the front vehicle and its target object (e.g., the tail light), and the width S4 of the non-flat road condition (e.g., the width of a speed bump) are obtained, and the distance S0 between the front wheel of the vehicle and the non-flat road condition is calculated as: S0 = S1 - S2 + S3 - S4. The moving speed V of the vehicle is acquired (for example, from the driving system on the vehicle-mounted terminal), and the target time is calculated as: t = S0 / V.
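The arithmetic above as a sketch; the parameter names mirror S1 through S4 and V, and the example values in the usage comment are made up.

```python
# Minimal sketch of the arrival-time computation: S0 = S1 - S2 + S3 - S4,
# then t = S0 / V. Distances in metres, speed in metres per second.
def target_time_s(s1_cam_to_taillight, s2_frontwheel_to_cam,
                  s3_rearwheel_to_taillight, s4_bump_width, ego_speed_v):
    s0 = (s1_cam_to_taillight - s2_frontwheel_to_cam
          + s3_rearwheel_to_taillight - s4_bump_width)
    return s0 / ego_speed_v

# Hypothetical usage: 12 m to the tail light, 1.5 m wheel-to-camera,
# 0.8 m wheel-to-tail-light, 0.4 m bump width, 10 m/s ego speed:
# target_time_s(12.0, 1.5, 0.8, 0.4, 10.0) -> about 1.09 s
```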
In step S203, when the target time is reached, the vehicle is controlled to reduce the hardness of the suspension shock absorbers to the preset hardness value.
In this embodiment, the target time at which the vehicle will reach the non-flat road condition is accurately calculated from the at least two frames and the subsequent frames of images to be detected together with the moving speed of the vehicle, and the shock absorber hardness is reduced once the target time is reached. The shock-absorbing driving action is thus executed exactly when the vehicle passes over the non-flat road condition, giving a more comfortable riding experience and avoiding the extra wear on the vehicle that executing it at other times would cause.
According to a second aspect of the embodiments of the present disclosure, there is provided an automatic driving apparatus, referring to fig. 4, the apparatus including:
an acquisition module 401 configured to acquire multiple frames of images to be detected that are continuously captured by a camera module of a vehicle;
a recognition module 402 configured to perform recognition processing on the multiple frames of images to be detected, and to determine whether a target object exists in each frame of image to be detected and, if so, the position of the target object in that image, wherein the target object is a designated accessory of a front vehicle;
a road condition determining module 403 configured to determine, in a case where the target object exists in at least two frames of images to be detected, road condition information according to the positions of the target object in the at least two frames of images to be detected; and
a driving control module 404 configured to control the vehicle to execute a corresponding driving action according to the road condition information.
In some embodiments of the disclosure, the recognition module is specifically configured to:
input each frame of image to be detected into a pre-trained neural network recognition model, wherein the neural network recognition model outputs a recognition result for each frame of image to be detected, and the recognition result includes whether a target object exists in the image to be detected and, if so, the position of the target object in the image to be detected.
In some embodiments of the present disclosure, the apparatus further includes a training module configured to:
inputting at least one training image in a set of training images into the neural network recognition model, wherein the neural network recognition model outputs a recognition result of the training image, the training image has a label, and the label comprises whether a target object exists in the training image and the position of the existing target object in the training image;
determining a network loss value according to the recognition result of the training image and the label of the training image;
and adjusting the network parameters of the neural network recognition model according to the network loss value until the neural network recognition model converges.
In some embodiments of the present disclosure, the position of the target object includes a position of the target object in a preset direction of the image to be detected;
the road condition determining module is specifically configured to:
determine that the road condition information is a flat road condition in a case where the position difference of the target object between every two adjacent frames of the at least two frames of images to be detected does not exceed an error range; and
determine that the road condition information is a non-flat road condition in a case where the position difference of the target object between two adjacent frames of the at least two frames of images to be detected exceeds the error range.
In some embodiments of the present disclosure, the road condition determining module is further configured to:
in a case where the target object exists in the at least two frames of images to be detected, determine the moving speed of the front vehicle according to depth values of the target object in the at least two frames of images to be detected; and
determine the error range according to the moving speed of the front vehicle and the time difference between two adjacent frames of images to be detected.
In some embodiments of the present disclosure, the apparatus is further configured to:
acquire suspension shake information of the vehicle;
wherein the determining road condition information according to the positions of the target object in the at least two frames of images to be detected, in a case where the target object exists in the at least two frames of images to be detected, includes:
determining the road condition information according to the positions of the target object in the at least two frames of images to be detected in a case where the suspension shake information of the vehicle indicates no shaking and the target object exists in the at least two frames of images to be detected.
In some embodiments of the present disclosure, the driving control module is specifically configured to:
control the vehicle to reduce the hardness of the suspension shock absorbers to a preset hardness value in a case where the road condition information is a non-flat road condition; and/or
control the vehicle to reduce its moving speed by a preset proportion in a case where the road condition information is a non-flat road condition.
In some embodiments of the present disclosure, when controlling the vehicle to reduce the hardness of the suspension shock absorbers to a preset hardness value in a case where the road condition information is a non-flat road condition, the driving control module is specifically configured to:
in a case where the road condition information is a non-flat road condition, acquire, according to the at least two frames of images to be detected and the subsequent frames of images to be detected, the image to be detected at the moment the front vehicle leaves the non-flat road condition;
determine a target time at which the vehicle will reach the non-flat road condition according to the depth value of the target object in that image and the moving speed of the vehicle; and
control the vehicle to reduce the hardness of the suspension shock absorbers to the preset hardness value when the target time is reached.
In some embodiments of the present disclosure, when acquiring, according to the at least two frames of images to be detected and the subsequent frames of images to be detected, the image to be detected at the moment the front vehicle leaves the non-flat road condition, the driving control module is specifically configured to:
in a case where, across the at least two frames of images to be detected and the subsequent frames of images to be detected, the position of the target object goes through a preset position change process twice in succession, acquire the last frame of the second position change process as the image to be detected at the moment the front vehicle leaves the non-flat road condition, wherein the position change process comprises starting from an initial position, passing through at least one other position, and returning to the initial position.
With regard to the apparatus in the above-mentioned embodiments, the specific manner in which each module performs the operation has been described in detail in the first aspect with respect to the embodiment of the method, and will not be elaborated here.
According to a third aspect of the embodiments of the present disclosure, please refer to fig. 5, which schematically illustrates a block diagram of an electronic device. For example, the apparatus 500 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 5, the apparatus 500 may include one or more of the following components: processing component 502, memory 505, power component 506, multimedia component 508, audio component 510, input/output (I/O) interface 512, sensor component 515, and communications component 516.
The processing component 502 generally controls overall operation of the device 500, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing elements 502 may include one or more processors 520 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 502 can include one or more modules that facilitate interaction between the processing component 502 and other components. For example, the processing component 502 can include a multimedia module to facilitate interaction between the multimedia component 508 and the processing component 502.
Memory 505 is configured to store various types of data to support operation at device 500. Examples of such data include instructions for any application or method operating on device 500, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 505 may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power components 506 provide power to the various components of device 500. The power components 506 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the apparatus 500.
The multimedia component 508 includes a screen that provides an output interface between the device 500 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 508 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 500 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 510 is configured to output and/or input audio signals. For example, audio component 510 includes a Microphone (MIC) configured to receive external audio signals when apparatus 500 is in operating modes, such as call mode, record mode, and voice recognition mode. The received audio signal may further be stored in memory 505 or transmitted via communications component 516. In some embodiments, audio component 510 further includes a speaker for outputting audio signals.
The I/O interface 512 provides an interface between the processing component 502 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 515 includes one or more sensors for providing various aspects of status assessment for the device 500. For example, the sensor assembly 515 may detect the open/closed state of the device 500 and the relative positioning of components, such as the display and keypad of the device 500; it may also detect a change in position of the device 500 or of a component of the device 500, the presence or absence of user contact with the device 500, the orientation or acceleration/deceleration of the device 500, and a change in temperature of the device 500. The sensor assembly 515 may also include a proximity sensor configured to detect the presence of a nearby object without any physical contact, and a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 515 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 516 is configured to facilitate wired or wireless communication between the apparatus 500 and other devices. The apparatus 500 may access a wireless network based on a communication standard, such as WiFi, 2G, 3G, 4G, 5G, or a combination thereof. In an exemplary embodiment, the communication component 516 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 516 further includes a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 500 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components for performing the above-described automatic driving method.
In a fourth aspect, the present disclosure also provides, in an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 505 comprising instructions, executable by the processor 520 of the apparatus 500 to perform the above-described automatic driving method. For example, the non-transitory computer-readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (20)

1. An automatic driving method, characterized by comprising:
acquiring multiple frames of images to be detected, which are continuously acquired by a camera module of a vehicle;
identifying the multiple frames of images to be detected, and determining whether a target object exists in each frame of image to be detected and the position of the existing target object in the image to be detected, wherein the target object is a designated accessory of a front vehicle;
determining road condition information according to the positions of the target object in at least two frames of images to be detected under the condition that the target object exists in at least two frames of images to be detected;
and controlling the vehicle to execute corresponding driving actions according to the road condition information.
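By way of non-limiting illustration, the following Python sketch strings the four claimed steps together; the camera and vehicle interfaces and the helper functions (detect_target, infer_road_condition, apply_driving_action, sketched under claims 2, 4 and 7 below) are hypothetical assumptions, not recited in the claim:

    def autodrive_step(camera, vehicle, model, n_frames=5, error_range=8.0):
        # Step 1: multiple consecutively captured frames (camera API assumed).
        frames = [camera.capture() for _ in range(n_frames)]
        # Step 2: per-frame recognition; detect_target is sketched under claim 2.
        positions = [p for p in (detect_target(f, model) for f in frames) if p]
        # Steps 3 and 4: with at least two detections, judge the road and act.
        if len(positions) >= 2:
            vertical = [y for (_x, y) in positions]
            road = infer_road_condition(vertical, error_range)  # claim 4 sketch
            apply_driving_action(vehicle, road)                 # claim 7 sketch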
2. The automatic driving method according to claim 1, wherein the identifying the plurality of frames of images to be detected to determine whether a target object exists in each frame of images to be detected and a position of the existing target object in the images to be detected comprises:
inputting each frame of image to be detected into a pre-trained neural network recognition model, wherein the neural network recognition model correspondingly outputs a recognition result of each frame of image to be detected, and the recognition result comprises whether a target object exists in the image to be detected and the position of the existing target object in the image to be detected.
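As a minimal sketch of such an inference step, assuming a PyTorch-style model whose single output row is (presence score, x, y), a layout the claim does not specify:

    import torch

    def detect_target(image_tensor, model, threshold=0.5):
        # Run the pre-trained recognition model on one frame; returns the
        # (x, y) position of the target object, or None if it is absent.
        with torch.no_grad():
            score, x, y = model(image_tensor.unsqueeze(0))[0]
        return (float(x), float(y)) if float(score) >= threshold else None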
3. The automatic driving method according to claim 2, further comprising:
inputting at least one training image in a set of training images into the neural network recognition model, wherein the neural network recognition model outputs a recognition result of the training image, the training image has a label, and the label comprises whether a target object exists in the training image and the position of the existing target object in the training image;
determining a network loss value according to the recognition result of the training image and the label of the training image;
and adjusting the network parameters of the neural network recognition model according to the network loss value until the neural network recognition model converges.
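A minimal training-step sketch consistent with this claim, assuming the same (score, x, y) output layout and a composite loss (binary cross-entropy for presence, L1 for position) that the claim leaves unspecified:

    import torch
    import torch.nn as nn

    def train_step(model, optimizer, image, label_present, label_xy):
        # Forward pass: recognition result for one labelled training image.
        score, x, y = model(image.unsqueeze(0))[0]
        # Network loss value from the recognition result and the label.
        loss = nn.functional.binary_cross_entropy_with_logits(
            score.unsqueeze(0), label_present.unsqueeze(0))
        if label_present.item() == 1.0:   # position term only when a target exists
            loss = loss + nn.functional.l1_loss(torch.stack([x, y]), label_xy)
        # Adjust the network parameters according to the network loss value.
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()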
4. The automatic driving method according to claim 1, wherein the position of the target object includes a position of the target object in a preset direction of the image to be detected;
under the condition that the target object exists in the at least two frames of images to be detected, determining road condition information according to the positions of the target object in the at least two frames of images to be detected, comprising the following steps:
determining that the road condition information is a flat road condition in the case that the position difference of the target object between any two adjacent frames of images to be detected among the at least two frames of images to be detected does not exceed an error range;
and determining that the road condition information is a non-flat road condition in the case that the position difference of the target object between two adjacent frames of images to be detected among the at least two frames of images to be detected exceeds the error range.
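For illustration, a sketch of this determination, assuming the position is reduced to a scalar coordinate in the preset direction (for example, the vertical pixel coordinate):

    def infer_road_condition(positions, error_range):
        # Flat if every adjacent-frame position difference stays within
        # the error range; non-flat as soon as one difference exceeds it.
        for prev, curr in zip(positions, positions[1:]):
            if abs(curr - prev) > error_range:
                return "non_flat"
        return "flat"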
5. The automatic driving method according to claim 4, further comprising:
under the condition that the target object exists in the at least two frames of images to be detected, determining the moving speed of the front vehicle according to the depth values of the target object in the at least two frames of images to be detected;
and determining the error range according to the moving speed of the front vehicle and the time difference between the two adjacent frames of images to be detected.
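A sketch of one plausible reading of this claim; the pixel scale and safety margin are illustrative assumptions the claim does not recite:

    def lead_vehicle_speed(depth_prev_m, depth_curr_m, frame_dt_s, ego_speed_mps):
        # Range rate from the depth values of the target object, plus the
        # ego speed, approximates the front vehicle's moving speed.
        return ego_speed_mps + (depth_curr_m - depth_prev_m) / frame_dt_s

    def error_range_px(lead_speed_mps, frame_dt_s, pixels_per_meter, margin=1.2):
        # Expected inter-frame image displacement of the front vehicle,
        # widened by a margin, bounds normal (flat-road) motion.
        return lead_speed_mps * frame_dt_s * pixels_per_meter * margin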
6. The automatic driving method according to claim 1 or 4, further comprising:
acquiring suspension shaking information of the vehicle;
under the condition that the target object exists in the at least two frames of images to be detected, determining road condition information according to the positions of the target object in the at least two frames of images to be detected, comprising the following steps:
and determining the road condition information according to the positions of the target object in the at least two frames of images to be detected in the case that the suspension shaking information of the vehicle indicates no shaking and the target object exists in the at least two frames of images to be detected.
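A one-line gate capturing this condition, for illustration only:

    def should_evaluate_road(suspension_shaking, detections):
        # Trust the camera-based judgement only while the ego suspension is
        # quiet, so self-induced shake is not read as front-vehicle motion.
        return (not suspension_shaking) and len(detections) >= 2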
7. The automatic driving method according to claim 4, wherein the controlling the vehicle to execute the corresponding driving action according to the road condition information comprises:
controlling the vehicle to reduce the hardness of the suspension shock absorber to a preset hardness value in the case that the road condition information is a non-flat road condition; and/or,
controlling the vehicle to reduce its moving speed by a preset proportion in the case that the road condition information is a non-flat road condition.
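A sketch of the two (possibly combined) actions, with illustrative preset values and a hypothetical vehicle interface:

    def apply_driving_action(vehicle, road_condition,
                             preset_hardness=0.3, preset_ratio=0.8):
        if road_condition == "non_flat":
            vehicle.set_damper_stiffness(preset_hardness)            # soften suspension
            vehicle.set_target_speed(vehicle.speed * preset_ratio)   # slow down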
8. The automatic driving method according to claim 7, wherein the controlling the vehicle to reduce the hardness of the suspension shock absorber to the preset hardness value in the case that the road condition information is a non-flat road condition comprises:
acquiring, according to the at least two frames of images to be detected and subsequent frames of images to be detected, the image to be detected at the moment the front vehicle leaves the non-flat road condition;
determining the target time at which the vehicle reaches the non-flat road condition according to the depth value of the target object in that image and the moving speed of the vehicle;
and controlling the vehicle to reduce the hardness of the suspension shock absorber to the preset hardness value when the target time is reached.
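One way to realise the timing, sketched with a simple polling loop; the vehicle interface and the minimum-speed clamp are assumptions:

    import time

    def schedule_damper_softening(vehicle, depth_m, preset_hardness=0.3):
        # The rough patch lies roughly depth_m ahead (where the front
        # vehicle just left it); soften the dampers on arrival.
        target_time = time.monotonic() + depth_m / max(vehicle.speed, 0.1)
        while time.monotonic() < target_time:
            time.sleep(0.01)
        vehicle.set_damper_stiffness(preset_hardness)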
9. The automatic driving method according to claim 8, wherein the acquiring the image to be detected when the front vehicle leaves the non-flat road condition according to the at least two frames of images to be detected and the subsequent frames of images to be detected comprises:
in the at least two frames of images to be detected and the subsequent frames of images to be detected, in the case that the position of the target object goes through a preset position change process twice in succession, acquiring the last frame image of the position change processes as the image to be detected when the front vehicle leaves the non-flat road condition, wherein the position change process comprises starting from an initial position, passing through at least one other position, and returning to the initial position.
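A sketch of this detection, assuming each axle of the front vehicle crossing the bump produces one excursion away from and back to a baseline position:

    def last_frame_leaving_bump(positions, error_range):
        # Count excursions that leave the baseline by more than the error
        # range and return to it; the frame closing the second excursion is
        # taken as the front vehicle leaving the non-flat road condition.
        baseline, excursions, away = positions[0], 0, False
        for i, p in enumerate(positions[1:], start=1):
            if abs(p - baseline) > error_range:
                away = True
            elif away:
                excursions += 1
                away = False
                if excursions == 2:
                    return i
        return None                  # fewer than two complete excursions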
10. An automatic driving apparatus, characterized by comprising:
the acquisition module is used for acquiring multiple frames of images to be detected, which are continuously acquired by the camera module of the vehicle;
the identification module is used for identifying the multiple frames of images to be detected and determining whether a target object exists in each frame of image to be detected and the position of the existing target object in the image to be detected, wherein the target object is a designated accessory of a front vehicle;
the road condition determining module is used for determining road condition information according to the positions of the target objects in the at least two frames of images to be detected under the condition that the target objects exist in the at least two frames of images to be detected;
and the driving control module is used for controlling the vehicle to execute corresponding driving actions according to the road condition information.
11. The automatic driving apparatus according to claim 10, wherein the identification module is specifically configured to:
inputting each frame of image to be detected into a pre-trained neural network identification model, wherein the neural network identification model correspondingly outputs an identification result of each frame of image to be detected, and the identification result comprises whether a target object exists in the image to be detected and the position of the existing target object in the image to be detected.
12. The automatic driving apparatus according to claim 11, further comprising a training module configured to:
inputting at least one training image in a set of training images into the neural network recognition model, wherein the neural network recognition model outputs a recognition result of the training image, the training image has a label, and the label comprises whether a target object exists in the training image and the position of the existing target object in the training image;
determining a network loss value according to the recognition result of the training image and the label of the training image;
and adjusting the network parameters of the neural network identification model according to the network loss value until the neural network identification model converges.
13. The automatic driving apparatus according to claim 10, wherein the position of the target object includes a position of the target object in a preset direction of the image to be detected;
the road condition determining module is specifically configured to:
determine that the road condition information is a flat road condition in the case that the position difference of the target object between any two adjacent frames of images to be detected among the at least two frames of images to be detected does not exceed an error range;
and determine that the road condition information is a non-flat road condition in the case that the position difference of the target object between two adjacent frames of images to be detected among the at least two frames of images to be detected exceeds the error range.
14. The automatic driving apparatus according to claim 13, wherein the road condition determining module is further configured to:
under the condition that the target object exists in the at least two frames of images to be detected, determining the moving speed of the front vehicle according to the depth values of the target object in the at least two frames of images to be detected;
and determining the error range according to the moving speed of the front vehicle and the time difference between the two adjacent frames of images to be detected.
15. The automatic driving apparatus according to claim 10 or 13, wherein the acquisition module is further configured to acquire suspension shaking information of the vehicle; and
the road condition determining module is specifically configured to:
determine the road condition information according to the positions of the target object in the at least two frames of images to be detected in the case that the suspension shaking information of the vehicle indicates no shaking and the target object exists in the at least two frames of images to be detected.
16. The automatic driving apparatus according to claim 13, wherein the driving control module is specifically configured to:
control the vehicle to reduce the hardness of the suspension shock absorber to a preset hardness value in the case that the road condition information is a non-flat road condition; and/or,
control the vehicle to reduce its moving speed by a preset proportion in the case that the road condition information is a non-flat road condition.
17. The automatic driving apparatus according to claim 16, wherein, when controlling the vehicle to reduce the hardness of the suspension shock absorber to the preset hardness value in the case that the road condition information is a non-flat road condition, the driving control module is specifically configured to:
acquire, according to the at least two frames of images to be detected and subsequent frames of images to be detected, the image to be detected at the moment the front vehicle leaves the non-flat road condition;
determine the target time at which the vehicle reaches the non-flat road condition according to the depth value of the target object in that image and the moving speed of the vehicle;
and control the vehicle to reduce the hardness of the suspension shock absorber to the preset hardness value when the target time is reached.
18. The automatic driving apparatus according to claim 17, wherein, when acquiring the image to be detected at the moment the front vehicle leaves the non-flat road condition according to the at least two frames of images to be detected and the subsequent frames of images to be detected, the driving control module is specifically configured to:
in the at least two frames of images to be detected and the subsequent frames of images to be detected, in the case that the position of the target object goes through a preset position change process twice in succession, acquire the last frame image of the position change processes as the image to be detected when the front vehicle leaves the non-flat road condition, wherein the position change process comprises starting from an initial position, passing through at least one other position, and returning to the initial position.
19. An electronic device, comprising: a memory for storing computer instructions executable on a processor; and a processor configured to implement the automatic driving method according to any one of claims 1 to 9 when executing the computer instructions.
20. A computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, carries out the method of any one of claims 1 to 9.
CN202210278849.XA 2022-03-17 2022-03-17 Automatic driving method, device, electronic equipment and storage medium Active CN114648504B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210278849.XA CN114648504B (en) 2022-03-17 2022-03-17 Automatic driving method, device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114648504A true CN114648504A (en) 2022-06-21
CN114648504B CN114648504B (en) 2022-12-02

Family

ID=81995683

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210278849.XA Active CN114648504B (en) 2022-03-17 2022-03-17 Automatic driving method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114648504B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105844222A (en) * 2016-03-18 2016-08-10 上海欧菲智能车联科技有限公司 System and method for front vehicle collision early warning based on visual sense
CN110023722A (en) * 2017-02-28 2019-07-16 松下知识产权经营株式会社 Load instrument and load measurement method
CN109017785A (en) * 2018-08-09 2018-12-18 北京智行者科技有限公司 Vehicle lane-changing running method
CN111169381A (en) * 2019-10-10 2020-05-19 中国第一汽车股份有限公司 Vehicle image display method and device, vehicle and storage medium
CN111523385A (en) * 2020-03-20 2020-08-11 北京航空航天大学合肥创新研究院 Stationary vehicle detection method and system based on frame difference method
CN112634611A (en) * 2020-12-15 2021-04-09 北京百度网讯科技有限公司 Method, device, equipment and storage medium for identifying road conditions
CN113066285A (en) * 2021-03-15 2021-07-02 北京百度网讯科技有限公司 Road condition information determining method and device, electronic equipment and storage medium
CN112926510A (en) * 2021-03-25 2021-06-08 深圳市商汤科技有限公司 Abnormal driving behavior recognition method and device, electronic equipment and storage medium
CN113807167A (en) * 2021-08-03 2021-12-17 深圳市商汤科技有限公司 Vehicle collision detection method and device, electronic device and storage medium
CN113619608A (en) * 2021-09-16 2021-11-09 东软睿驰汽车技术(大连)有限公司 Vehicle driving method and device based on driving assistance system and electronic equipment
CN113561977A (en) * 2021-09-22 2021-10-29 国汽智控(北京)科技有限公司 Vehicle adaptive cruise control method, device, equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LIU, J. et al.: "Combined speed and steering control in high-speed autonomous ground vehicles for obstacle avoidance using model predictive control", IEEE Transactions on Vehicular Technology *
XIONG, Lu et al.: "Review of the Development Status of Motion Control for Driverless Vehicles", Journal of Mechanical Engineering (机械工程学报) *

Also Published As

Publication number Publication date
CN114648504B (en) 2022-12-02

Similar Documents

Publication Publication Date Title
CN108596116B (en) Distance measuring method, intelligent control method and device, electronic equipment and storage medium
CN105810007B (en) Balance car parking method and device
EP3232343A1 (en) Method and apparatus for managing video data, terminal, and server
US9997197B2 (en) Method and device for controlling playback
US10217487B2 (en) Method and device for controlling playback
US11104345B2 (en) Methods, systems, and media for determining characteristics of roads
JP2021509516A (en) Collision control methods and devices, electronic devices and storage media
US10937319B2 (en) Information provision system, server, and mobile terminal
CN108447146B (en) Shooting direction deviation detection method and device
CN105160898A (en) Vehicle speed limiting method and vehicle speed limiting device
CN109484304B (en) Rearview mirror adjusting method and device, computer readable storage medium and rearview mirror
JP2013109639A (en) Image processing system, server, portable terminal device and image processing method
CN114648504B (en) Automatic driving method, device, electronic equipment and storage medium
CN112833880A (en) Vehicle positioning method, positioning device, storage medium, and computer program product
CN109919126B (en) Method and device for detecting moving object and storage medium
US11043126B2 (en) Vehicle, vehicle control method, and vehicle control program
CN113460092A (en) Method, device, equipment, storage medium and product for controlling vehicle
CN112277948B (en) Method and device for controlling vehicle, storage medium and electronic equipment
CN109733411B (en) Vehicle speed control method and device
KR20200082317A (en) Video processing apparatus and operating method for the same
CN116424243B (en) Intelligent vehicle-mounted multimedia system control method and device
CN115743098B (en) Parking method, device, storage medium, electronic equipment and vehicle
US20230227075A1 (en) Method and apparatus for setting a driving mode of a vehicle
CN113928073A (en) Active suspension adjusting method, device and equipment
CN118015571A (en) Lane prediction method, lane prediction device, electronic device, storage medium, and program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant