CN115995075A - Vehicle self-adaptive navigation method and device, electronic equipment and storage medium


Info

Publication number
CN115995075A
CN115995075A (Application CN202310091682.0A)
Authority
CN
China
Prior art keywords
detection
vehicle
information
types
network module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310091682.0A
Other languages
Chinese (zh)
Inventor
张宏宇
冯革楠
徐志明
陶训强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xinli Intelligent Technology Shanghai Co ltd
Original Assignee
Xinli Intelligent Technology Shanghai Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xinli Intelligent Technology Shanghai Co ltd
Priority to CN202310091682.0A
Publication of CN115995075A
Legal status: Pending

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a vehicle self-adaptive navigation method and device, an electronic device, and a storage medium. The method comprises the following steps: acquiring a running environment image of a vehicle; performing target detection on the running environment image based on a preset target detection model to obtain detection results of a plurality of types of detection objects; determining an overhead traveling detection result of the vehicle based on the detection results of the plurality of types of detection objects; and performing navigation processing on the vehicle according to the overhead traveling detection result. By processing vehicle running environment images, constructing a training data set, applying a deep learning algorithm to road running environment images to determine the target detection model, determining the overhead traveling detection result from the model, and navigating the vehicle adaptively, the method solves the problem that vehicle navigation cannot automatically judge the road because it cannot be accurately positioned in an overhead driving scene, realizes automatic real-time dynamic path planning, and improves the accuracy of vehicle navigation and the safety of vehicle driving.

Description

Vehicle self-adaptive navigation method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of vehicle control technologies, and in particular, to a vehicle adaptive navigation method, apparatus, electronic device, and storage medium.
Background
With economic development, road construction has advanced rapidly: the number of roads connecting cities and rural areas keeps growing, vehicle driving environments are becoming increasingly complex, and the demands placed on vehicle navigation are correspondingly higher.
Vehicle navigation technology is an important component of intelligent transportation systems. It typically relies on an electronic map to provide, in real time, navigation information such as the vehicle's position, speed, heading, and surrounding geographic environment, guiding the driver to reach the destination quickly, accurately, and safely. For a vehicle navigation system to navigate accurately, it must reliably determine whether the vehicle is in an overhead (elevated-road) area. In existing schemes, this judgment is usually completed by collecting road environment image information and recognizing elevated-entrance signs and ramp information. Because the data collected in this way is limited, the accuracy of judging whether the vehicle is in an overhead area is low.
Disclosure of Invention
The embodiments of the invention provide a vehicle self-adaptive navigation method and device, an electronic device, and a storage medium, to solve the problem that a current in-vehicle system cannot automatically judge the road because navigation software alone cannot sense height information well.
According to an aspect of the present invention, there is provided a vehicle adaptive navigation method including:
acquiring a running environment image of a vehicle;
performing target detection on the running environment image based on a preset target detection model to obtain detection results of a plurality of types of detection objects;
determining an overhead traveling detection result of the vehicle based on the detection results of the plurality of types of detection objects;
and performing navigation processing on the vehicle according to the overhead traveling detection result.
Optionally, performing target detection on the driving environment image based on a preset target detection model to obtain detection results of multiple types of detection objects, including:
carrying out feature extraction on the running environment image based on the feature extraction network module to obtain multi-layer image feature information;
performing feature fusion processing on the multi-layer image feature information based on the neck network module to obtain fusion feature information;
and detecting the fusion characteristic information by a detection network module based on various types of detection objects to obtain detection results of the detection objects of the various types.
Optionally, after performing feature extraction on the driving environment image based on the feature extraction network module to obtain the multi-layer image feature information, the method further includes:
fusing corresponding layers of the multi-layer image feature information at the current moment with the multi-layer image feature information at the previous moment to obtain updated multi-layer image feature information;
correspondingly, performing feature fusion processing on the multi-layer image feature information based on the neck network module to obtain fusion feature information comprises the following steps:
and carrying out feature fusion processing on the updated multi-layer image feature information based on the neck network module to obtain fusion feature information.
Optionally, the target detection model includes a plurality of parallel sub-models, and any sub-model includes a feature extraction network module, a neck network module, and a detection network module of a type of detection object, which are sequentially connected;
or,
the target detection model comprises a feature extraction network module, a neck network module and a plurality of types of detection network modules of detection objects, wherein the detection network modules of the types of detection objects are respectively connected with the neck network module.
Optionally, the detection object includes a vehicle, a pedestrian, a lane line and an indicator light;
determining an overhead traveling detection result of the vehicle based on detection results of a plurality of types of detection objects, comprising:
determining the confidence coefficient of the overhead driving scene based on detection results respectively corresponding to the vehicle, the pedestrian, the lane line and the indicator lamp;
And determining an overhead traveling detection result of the vehicle based on the overhead traveling scene confidence.
Optionally, the training method of the target detection model includes:
acquiring a sample image, wherein the sample image comprises marking information of a plurality of types of detection objects;
iteratively executing the following training steps until the training ending condition is met, so as to obtain a trained target detection model;
inputting the sample image into a target detection model to be trained to obtain training detection results of a plurality of types of detection objects;
and obtaining a loss value based on the training detection result and the marking information, and adjusting model parameters of the target detection model based on the loss value.
Optionally, the mark information of the plurality of types of detection objects includes mark type and mark position information of each detection object;
the detection result of any type of detection object comprises the detection type and detection position information of the detection object;
obtaining a loss value based on the training detection result and the marking information, including:
obtaining a type loss item based on the mark type of each detection object in the mark information and the detection type of the detection object in the training detection result;
obtaining a position loss item based on the mark position information of each detection object in the mark information and the detection position information of the detection object in the training detection result;
A penalty value is derived based on the type penalty term and the location penalty term.
According to another aspect of the present invention, there is provided a vehicle adaptive navigation apparatus including:
the driving environment image acquisition module is used for acquiring driving environment images of the vehicle;
the detection result determining module is used for carrying out target detection on the running environment image based on a preset target detection model to obtain detection results of a plurality of types of detection objects;
an overhead traveling detection result determining module for determining an overhead traveling detection result of the vehicle based on detection results of the plurality of types of detection objects;
and the navigation processing module is used for performing navigation processing on the vehicle according to the overhead traveling detection result.
According to another aspect of the present invention, there is provided an electronic apparatus including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the vehicle adaptive navigation method of any one of the embodiments of the present invention.
According to another aspect of the present invention, there is provided a computer readable storage medium storing computer instructions for causing a processor to execute a vehicle adaptive navigation method according to any one of the embodiments of the present invention.
According to the technical scheme of the embodiments of the invention, the vehicle running environment image is processed, a training data set is constructed, a deep learning algorithm is applied to the road running environment images to determine the target detection model, the overhead traveling detection result is determined from the model, and the vehicle is adaptively navigated. This solves the problem that vehicle navigation cannot automatically judge the road because it cannot be accurately positioned in an overhead driving scene, realizes automatic real-time dynamic path planning, and improves both the accuracy of vehicle navigation and the safety of vehicle driving.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the invention or to delineate the scope of the invention. Other features of the present invention will become apparent from the description that follows.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a method for adaptive navigation of a vehicle according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the structure of a target detection model according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of feature extraction and feature fusion of a detection model according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a target detection model according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a target detection model according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a model training process for a target detection model according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of the model training process of the object detection model according to an embodiment of the present invention;
Fig. 8 is a schematic structural diagram of a vehicle adaptive navigation device according to a second embodiment of the present invention;
fig. 9 is a schematic structural diagram of an electronic device implementing a vehicle adaptive navigation method according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, a technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in which it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
Fig. 1 is a flowchart of a vehicle adaptive navigation method according to an embodiment of the present invention. The method may be performed by a vehicle adaptive navigation device, which may be implemented in hardware and/or software and may be configured in a vehicle control system. As shown in fig. 1, the method includes:
S110, acquiring a running environment image of the vehicle.
The running environment image is an image of the environment around the vehicle, in any direction, captured during driving. It may take the form of a single picture, and a sequence of running environment images may form a video. Information in the running environment image includes, but is not limited to, vehicles, pedestrians, lanes, traffic signs, and buildings. The running environment image can be acquired in real time by an information acquisition device, which includes, but is not limited to, an on-board camera or a video camera.
Specifically, after the vehicle system is started, the on-board camera can directly collect environment images during driving, gathering road environment images in front of, behind, and to the left and right of the vehicle. Further, distance information between the vehicle and objects in the surrounding environment can be collected by the vehicle's radar system to serve as auxiliary information for judging the vehicle's driving scene.
In some embodiments, after the running environment image of the vehicle is acquired, overhead signs may first be identified from the image information. Combined with GPS data from the vehicle system and position information from map software, whether the vehicle tends to enter or exit an overhead area can be judged in advance. If the vehicle tends to enter an overhead area, a precise determination of whether the subsequent driving environment is in the overhead area is made; if not, that precise determination need not be started, reducing unnecessary data processing.
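For illustration only, this gating step might be sketched as follows; it is a minimal sketch in which the function and parameter names are hypothetical and the decision rule is an assumption rather than logic fixed by the disclosure:

```python
from typing import Callable

def should_run_overhead_detection(
    image,
    near_overhead_on_map: bool,               # pre-judgment from GPS + map software
    sign_detector: Callable[[object], bool],  # hypothetical overhead-sign recognizer
) -> bool:
    """Start the heavier target detection model only when an overhead sign is
    seen or GPS/map data suggests the vehicle is near an overhead area;
    otherwise skip it to reduce unnecessary data processing."""
    return sign_detector(image) or near_overhead_on_map
```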
In this embodiment, the image acquisition device collects environmental image information in front of, behind, beside, and beneath the vehicle, providing a data foundation for the subsequent judgment of the vehicle driving scene and thereby improving the accuracy of that judgment.
Further, the acquired running environment image of the vehicle is preprocessed, i.e., the data is processed before image feature extraction. The image data is handled mainly by methods such as scaling, flipping, mean normalization, hue adjustment, and gray-level interpolation, which eliminate irrelevant information in the image, filter out interference and noise, recover the useful real information, and enhance the detectability of the relevant information, optimizing the data as far as possible and thus improving the reliability of feature extraction.
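As one possible realization of this preprocessing step, a sketch using torchvision follows; the concrete operations and parameter values are illustrative assumptions, not values fixed by the disclosure:

```python
import torchvision.transforms as T

# Illustrative preprocessing pipeline covering the operations named above:
# scaling, flipping, hue change, and mean normalization. All values assumed.
preprocess = T.Compose([
    T.Resize((640, 640)),                      # scaling to the model input size
    T.RandomHorizontalFlip(p=0.5),             # flipping (training-time augmentation)
    T.ColorJitter(hue=0.1),                    # tone/hue change
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406],    # mean normalization
                std=[0.229, 0.224, 0.225]),
])
```

At inference time the random augmentations would normally be dropped, keeping only resizing and normalization.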
And S120, performing target detection on the running environment image based on a preset target detection model to obtain detection results of a plurality of types of detection objects.
The target detection model is a detection model obtained by deep learning over a training data set; the training data can be learned in advance to obtain the corresponding model. The training data set includes, but is not limited to, running environment images of vehicles. A detection object is an item recognized from the running environment images in the training data set, including, but not limited to, vehicles, pedestrians, lane lines, indicator lights, and traffic signs.
Specifically, target detection with the preset target detection model takes the running environment image as the model's input and, through the model's processing, outputs detection results corresponding to a plurality of types of detection objects.
Optionally, the detection objects include, but are not limited to, vehicles, pedestrians, lane lines, and indicator lights. Optionally, performing target detection on the driving environment image based on a preset target detection model to obtain detection results of multiple types of detection objects, including: carrying out feature extraction on the running environment image based on the feature extraction network module to obtain multi-layer image feature information; performing feature fusion processing on the multi-layer image feature information based on the neck network module to obtain fusion feature information; and detecting the fusion characteristic information by a detection network module based on various types of detection objects to obtain detection results of the detection objects of the various types.
The object detection model mainly comprises a feature extraction network module, a neck network module, and a detection network module; the connections between the modules are shown in fig. 2, a schematic structural diagram of the object detection model. The feature extraction network module extracts information from pictures and generates abstract semantic features, and its parameters can be fine-tuned according to the actual feature extraction requirements so that the module meets them; commonly used feature extraction networks include, but are not limited to, ResNet, GoogLeNet, and VGGNet. The neck network module strengthens the feature extraction network and fuses the feature information it extracts: it further processes the output feature layers generated by the feature extraction network module to extract more complex feature information and performs multi-scale fusion through an FPN (Feature Pyramid Network), as shown in fig. 3, a schematic diagram of feature extraction and feature fusion of the detection model. Feature fusion can be understood as jointly modeling features with different characteristics; the fusion may be serial or parallel and is mainly used to improve the model's ability to capture nonlinearity. The detection network module detects the fusion feature information through a detection head network and predicts through several convolution layers. Outputs at three scales are generally adopted to detect targets of different sizes, for example 1/8, 1/16, and 1/32 of the original input size, with the head network detecting large, medium, and small targets from the fusion features at the three scales, respectively. The output of the head network generally includes a detection bounding box, a bounding-box offset, a confidence, and a type, and each scale outputs three bounding boxes of different sizes.
Specifically, in performing target detection on an environment image with the preset target detection model, the feature extraction network module first extracts features from the running environment image. The features generated by the feature extraction network are conventionally grouped by stage and denoted C1, C2, C3, C4, C5, C6, C7, and so on, where the number matches the stage index and indicates how many times the resolution has been halved: for example, C2 denotes the feature map output by stage 2, with resolution 1/4 of the input picture, and C5 denotes the feature map output by stage 5, with resolution 1/32 of the input picture. ResNet, GoogLeNet, VGGNet, and the like are often used as feature extraction networks in practice. The feature extraction network module outputs the multi-layer image feature information corresponding to the detection objects in the running environment image. The neck network module then fuses the multi-layer image feature information; the commonly used fusion structure is the FPN, which takes the multi-layer image feature information as input and outputs the fused features, conventionally denoted with P. If the inputs to the FPN are C2, C3, C4, C5, and C6, the fused outputs are P2, P3, P4, P5, and P6. Finally, the detection network module of each type of detection object detects the fusion feature information, decoupling the type and position information of the detection objects in the image from the fused features, thereby obtaining the detection results of the plurality of types of detection objects.
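This C-to-P naming can be made concrete with a short sketch; it assumes a ResNet-50 backbone and torchvision's FPN, so the channel counts and the 640x640 input are properties of that assumed setup rather than values given by the disclosure:

```python
from collections import OrderedDict
import torch
from torchvision.models import resnet50
from torchvision.models.feature_extraction import create_feature_extractor
from torchvision.ops import FeaturePyramidNetwork

# Backbone exposing the stage outputs C2..C5 described above
# (resolutions 1/4, 1/8, 1/16, 1/32 of the input picture).
backbone = create_feature_extractor(
    resnet50(weights=None),
    return_nodes={"layer1": "c2", "layer2": "c3", "layer3": "c4", "layer4": "c5"},
)

# Neck: the FPN fuses the multi-layer features C2..C5 into P2..P5.
fpn = FeaturePyramidNetwork(in_channels_list=[256, 512, 1024, 2048], out_channels=256)

image = torch.randn(1, 3, 640, 640)        # one running environment image
c_feats = backbone(image)                  # dict of feature maps c2..c5
p_feats = fpn(OrderedDict(c_feats))        # fused feature maps p2..p5
print({name: tuple(f.shape) for name, f in p_feats.items()})
```

For a 640x640 input this prints p2 at 160x160 (1/4 resolution) down to p5 at 20x20 (1/32), matching the stage numbering described above.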
Optionally, the object detection model includes a plurality of parallel sub-models, and any sub-model includes a feature extraction network module, a neck network module, and a detection network module of a type of detection object connected in sequence. Or the target detection model comprises a feature extraction network module, a neck network module and a plurality of types of detection network modules of detection objects, wherein the detection network modules of the types of detection objects are respectively connected with the neck network module.
Specifically, as shown in fig. 4, a schematic diagram of the target detection model, one model can be trained per type; the target detection model then includes a plurality of parallel sub-models, each comprising a feature extraction network module, a neck network module, and a detection network module of one type of detection object, connected in sequence. For example, for the vehicle type the corresponding sub-model includes its own feature extraction network module, neck network module, and vehicle detection network module; for the pedestrian type the corresponding sub-model includes its own feature extraction network module, neck network module, and pedestrian detection network module; and so on. Alternatively, as shown in fig. 5, also a schematic diagram of the target detection model, all types can share one feature extraction network module and one neck network module; the target detection model then includes the shared feature extraction network module, the shared neck network module, and a detection network module per type of detection object, with each detection network module connected to the neck network module.
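A minimal sketch of the shared variant of fig. 5 follows; the per-type head structure (a single 1x1 convolution producing three anchors per scale) and the output layout are illustrative assumptions:

```python
import torch
import torch.nn as nn

class MultiHeadDetector(nn.Module):
    """Shared-backbone variant (fig. 5): one feature extraction network and
    one neck, with a separate detection head per detection-object type.
    The head structure here is an illustrative assumption."""

    def __init__(self, backbone: nn.Module, neck: nn.Module, feat_channels: int = 256):
        super().__init__()
        self.backbone = backbone
        self.neck = neck
        # 3 anchors per scale, each predicting box (4) + confidence (1) + type (1).
        self.heads = nn.ModuleDict({
            name: nn.Conv2d(feat_channels, 3 * (4 + 1 + 1), kernel_size=1)
            for name in ("vehicle", "pedestrian", "lane_line", "indicator_light")
        })

    def forward(self, image: torch.Tensor) -> dict:
        feats = self.neck(self.backbone(image))    # fused multi-scale features
        # Every per-type head runs on every fused scale.
        return {name: [head(f) for f in feats.values()]
                for name, head in self.heads.items()}
```

With the backbone and fpn from the previous sketch, model = MultiHeadDetector(backbone, fpn) wires the shared modules to the four per-type heads; the parallel-sub-model variant of fig. 4 would instead instantiate one full backbone-neck-head pipeline per type.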
Optionally, after performing feature extraction on the driving environment image based on the feature extraction network module to obtain the multi-layer image feature information, the method further includes: and carrying out fusion processing of the corresponding characteristic information on the multi-layer image characteristic information at the current moment and the multi-layer image characteristic information at the previous moment to obtain updated multi-layer image characteristic information. Correspondingly, the neck network module is based on carrying out feature fusion processing on the multi-layer image feature information to obtain fusion feature information, and the method comprises the following steps: and carrying out feature fusion processing on the updated multi-layer image feature information based on the neck network module to obtain fusion feature information.
Specifically, after the multi-layer image feature information is obtained, time-series information is introduced: the multi-layer image feature information corresponding to the current time t is fused, layer by layer, with the multi-layer image feature information corresponding to the previous time t-1, using an RNN/LSTM/GRU structure, to obtain the updated multi-layer image feature information. Correspondingly, the neck network module performs the feature fusion processing on the updated multi-layer image feature information, obtaining the corresponding fusion feature information.
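As a sketch of this temporal step, the module below fuses one feature map at time t with the corresponding map at time t-1 through a single convolutional update gate; this simplified gate stands in for the full RNN/LSTM/GRU structure mentioned above and is an assumption, not the disclosed design:

```python
import torch
import torch.nn as nn

class TemporalFeatureFusion(nn.Module):
    """GRU-style single-gate fusion of a feature map at time t with the
    corresponding feature map at time t-1 (one instance per pyramid level).
    A full ConvGRU/ConvLSTM could be substituted for this minimal sketch."""

    def __init__(self, channels: int):
        super().__init__()
        # Update gate computed from the concatenated current/previous features.
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, feat_t: torch.Tensor, feat_prev: torch.Tensor) -> torch.Tensor:
        z = self.gate(torch.cat([feat_t, feat_prev], dim=1))
        return z * feat_t + (1.0 - z) * feat_prev   # updated feature map
```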
Optionally, the training method of the target detection model includes: acquiring a sample image, wherein the sample image comprises marking information of a plurality of types of detection objects; and iteratively executing training step 1 and training step 2 below until the training ending condition is met, to obtain a trained target detection model.
Step 1: inputting the sample image into a target detection model to be trained to obtain training detection results of a plurality of types of detection objects;
Step 2: obtaining a loss value based on the training detection result and the marking information, and adjusting the model parameters of the target detection model based on the loss value.
Wherein the loss value is calculated by a loss function. A loss function is an operational function used to measure the degree of difference between the training detection result F of the model and the marking information Y; it is a non-negative real-valued function, generally denoted L(Y, F). The smaller the loss value, the better the model performs. The loss function is used mainly in the training stage: after each batch of training data is fed into the model, the training detection result is output through forward propagation, and the loss function then computes the difference between that result and the marking information, i.e., the loss value. Given the loss value, the model updates its parameters through back propagation so as to reduce the loss between the training detection result and the marking information, moving the model's output toward the marking information and thus achieving learning. Commonly used loss functions include, but are not limited to, the mean square error loss, the cross-entropy loss, and the absolute value loss.
Specifically, road environment images around the vehicle during driving are obtained with an on-board camera and used as the sample images required for training the target detection model. The road environment images are annotated with a marking tool, labeling detection objects such as vehicles, pedestrians, lane lines, and indicator lights and establishing a training data set; for example, the type information and position information of each detection object can be recorded. Furthermore, data enhancement such as scaling, flipping, mean normalization, and hue adjustment can be applied to preprocess the images, which increases the diversity and clarity of the training data, improves image quality, and reduces errors caused by inaccurate image acquisition. Training steps 1 and 2 are then executed iteratively on the preprocessed data: after step 1 yields the training detection results of the plurality of types of detection objects, the loss value between those results and the marking information is determined, and the model parameters of the target detection model are adjusted according to the loss value.
During this continual adjustment, when the loss value has converged or approaches a certain value, the current training result can be judged to meet the training ending condition. Alternatively, a training-iteration threshold can be set: when the number of iterations of training steps 1 and 2 reaches the threshold, the training ending condition is met. Training then ends, the current model parameters are taken as the parameters of the target detection model, and the trained target detection model is determined.
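The two ending conditions can be sketched in one training loop; the optimizer choice, learning rate, tolerance, and iteration cap are assumptions for illustration:

```python
import torch

def train(model, loader, loss_fn, max_iters: int = 10_000, tol: float = 1e-4):
    """Iterate training steps 1 and 2 until an ending condition is met:
    the loss converges (change below tol) or max_iters is reached."""
    opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
    prev_loss, it = float("inf"), 0
    while it < max_iters:
        for images, labels in loader:
            preds = model(images)             # step 1: training detection results
            loss = loss_fn(preds, labels)     # step 2: loss from results + marks
            opt.zero_grad()
            loss.backward()                   # back-propagate the error
            opt.step()                        # adjust the model parameters
            it += 1
            if abs(prev_loss - loss.item()) < tol or it >= max_iters:
                return model                  # training ending condition met
            prev_loss = loss.item()
    return model
```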
Optionally, the mark information of the plurality of types of detection objects includes mark type and mark position information of each detection object; the detection result of any type of detection object comprises the detection type and detection position information of the detection object; obtaining a loss value based on the training detection result and the marking information, including: obtaining a type loss item based on the mark type of each detection object in the mark information and the detection type of the detection object in the training detection result; obtaining a position loss item based on the mark position information of each detection object in the mark information and the detection position information of the detection object in the training detection result; a penalty value is derived based on the type penalty term and the location penalty term.
Specifically, the object detection model involves two kinds of loss: detection type loss (classification) and detection position loss (regression). Both are computed at the last part of the detection model: the type loss term and the position loss term are determined from the model output (type and position) and the actual annotation boxes (type and position), and the two terms are fused to obtain the loss value. The fusion may use, for example, a plain sum or a weighted sum, which is not limited here. From the loss value, back-propagation of the error yields the gradient of each parameter in the network model; an optimization algorithm then updates the network parameters, and the trained target model is finally output.
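A minimal sketch of this two-term loss follows; it assumes predictions have already been matched to annotation boxes and picks cross-entropy for the type term and smooth L1 for the position term, two of the loss families named above (the disclosure does not fix the choice or the fusion weights):

```python
import torch
import torch.nn.functional as F

def detection_loss(pred_logits, pred_boxes, target_types, target_boxes,
                   w_type: float = 1.0, w_pos: float = 1.0) -> torch.Tensor:
    """Type loss (classification) plus position loss (regression),
    fused by a weighted sum."""
    type_loss = F.cross_entropy(pred_logits, target_types)   # detection type loss
    pos_loss = F.smooth_l1_loss(pred_boxes, target_boxes)    # detection position loss
    return w_type * type_loss + w_pos * pos_loss
```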
For example, referring to fig. 6 and 7, fig. 6 and 7 are schematic diagrams of model training processes of the object detection model, respectively.
In this embodiment, the training process of the target detection model is completed through the feature extraction network module, the neck network module, the detection network module, and related modules, yielding a target model with well-optimized network parameters; driving environment images acquired in real time can then be recognized by the trained model to determine the detection results of multiple types of detection objects. Before starting the target detection model, the collected running environment images can be screened in advance: whether the vehicle tends to drive into an overhead area is judged from road sign information in the picture, GPS information, and map software information. Only if such a tendency exists is the target detection model started to detect whether the driving environment is in an overhead area; this reduces unnecessary image detection and improves the efficiency of judging the vehicle's driving environment.
S130, determining an overhead traveling detection result of the vehicle based on detection results of a plurality of types of detection objects.
Specifically, detection results corresponding to a plurality of types of detection objects in the driving environment image are determined through the target detection model, and whether the driving environment of the vehicle is in an overhead driving area is determined according to the detection results.
Optionally, determining the overhead traveling detection result of the vehicle based on the detection results of the plurality of types of detection objects includes: determining the confidence coefficient of the overhead driving scene based on detection results respectively corresponding to the vehicle, the pedestrian, the lane line and the indicator lamp; and determining an overhead traveling detection result of the vehicle based on the overhead traveling scene confidence.
Confidence here denotes the probability, or weight, that the detection results indicate a specified detection outcome.
Specifically, the confidence that the vehicle is in an overhead driving scene is determined from the detection results corresponding to the detection-object types of vehicles, pedestrians, lane lines, and indicator lights. When the obtained overhead-driving-scene confidence meets a preset confidence threshold, the overhead traveling detection result is that the vehicle is in an overhead driving scene; otherwise, the result is that the vehicle is not.
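For illustration, the confidence fusion and threshold decision might look as follows; the weighted average, the weights, and the 0.5 threshold are assumptions, since the disclosure does not specify the fusion formula:

```python
def overhead_confidence(evidence: dict, weights: dict) -> float:
    """Fuse per-type overhead-scene evidence into one confidence value.
    evidence[t] is the confidence, derived from type t's detections, that
    the scene is overhead (e.g., few pedestrians gives high evidence)."""
    total = sum(weights.values())
    return sum(weights[t] * e for t, e in evidence.items()) / total

# Example decision (all numbers illustrative):
evidence = {"vehicle": 0.7, "pedestrian": 0.9, "lane_line": 0.8, "indicator_light": 0.9}
weights = {"vehicle": 1.0, "pedestrian": 1.0, "lane_line": 2.0, "indicator_light": 1.0}
on_overhead = overhead_confidence(evidence, weights) >= 0.5   # True here (0.82)
```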
In the present embodiment, the confidence of the overhead driving scene is determined from the detection results of the plurality of types of detection objects, and the overhead traveling detection result of the vehicle is determined from that confidence. Judging comprehensively across several detection-object types improves the accuracy of the overhead traveling detection result, and with it the accuracy of vehicle navigation.
And S140, performing navigation processing on the vehicle according to the overhead traveling detection result.
Specifically, if the detection result of the overhead driving scene indicates that the driving environment of the vehicle is in the overhead area, the vehicle navigation is automatically corrected, and the path planning is performed in real time. If the detection result of the overhead traveling scene indicates that the traveling environment of the vehicle is not in the overhead area, the vehicle navigation is not corrected.
According to the technical scheme of this embodiment, the vehicle running environment images are processed, a training data set is constructed, a deep learning algorithm is applied to the road running environment images to determine the target detection model, the overhead traveling detection result is determined from the model, and the vehicle is navigated accordingly. This solves the problem that current mainstream in-vehicle systems cannot automatically judge the road because navigation software alone cannot accurately determine whether the vehicle is in an overhead driving scene, realizes automatic real-time dynamic path planning, and improves both the accuracy of vehicle navigation and the safety of vehicle driving.
Example two
Fig. 8 is a schematic structural diagram of a vehicle adaptive navigation device according to a second embodiment of the present invention. As shown in fig. 8, the apparatus includes:
a driving environment image acquisition module 210 for acquiring a driving environment image of the vehicle;
The detection result determining module 220 is configured to perform target detection on the driving environment image based on a preset target detection model, so as to obtain detection results of multiple types of detection objects;
an overhead traveling detection result determining module 230 for determining an overhead traveling detection result of the vehicle based on detection results of a plurality of types of detection objects;
the navigation processing module 240 is configured to perform navigation processing on the vehicle according to the overhead traveling detection result.
Optionally, the detection result determining module 220 is specifically configured to:
performing target detection on the running environment image based on a preset target detection model to obtain detection results of a plurality of types of detection objects, wherein the detection results comprise:
carrying out feature extraction on the running environment image based on the feature extraction network module to obtain multi-layer image feature information;
performing feature fusion processing on the multi-layer image feature information based on the neck network module to obtain fusion feature information;
and detecting the fusion characteristic information by a detection network module based on various types of detection objects to obtain detection results of the detection objects of the various types.
After feature extraction is performed on the running environment image based on the feature extraction network module to obtain the multi-layer image feature information, the method further comprises: fusing corresponding layers of the multi-layer image feature information at the current moment with the multi-layer image feature information at the previous moment to obtain updated multi-layer image feature information;
Correspondingly, performing feature fusion processing on the multi-layer image feature information based on the neck network module to obtain fusion feature information comprises: performing the feature fusion processing on the updated multi-layer image feature information based on the neck network module to obtain the fusion feature information.
The target detection model comprises a plurality of parallel sub-models, and any sub-model comprises a feature extraction network module, a neck network module and a detection network module of a type of detection object which are connected in sequence; or the target detection model comprises a feature extraction network module, a neck network module and a plurality of types of detection network modules of detection objects, wherein the detection network modules of the types of detection objects are respectively connected with the neck network module.
The detection objects comprise vehicles, pedestrians, lane lines and indicator lights.
The training method of the target detection model comprises the following steps:
acquiring a sample image, wherein the sample image comprises marking information of a plurality of types of detection objects;
iteratively executing the following training steps until the training ending condition is met, so as to obtain a trained target detection model;
inputting the sample image into a target detection model to be trained to obtain training detection results of a plurality of types of detection objects;
And obtaining a loss value based on the training detection result and the marking information, and adjusting model parameters of the target detection model based on the loss value.
The mark information of the plurality of types of detection objects includes mark type and mark position information of each detection object;
the detection result of any type of detection object comprises the detection type and detection position information of the detection object;
obtaining a loss value based on the training detection result and the marking information, including:
obtaining a type loss item based on the mark type of each detection object in the mark information and the detection type of the detection object in the training detection result;
obtaining a position loss item based on the mark position information of each detection object in the mark information and the detection position information of the detection object in the training detection result;
a penalty value is derived based on the type penalty term and the location penalty term.
Optionally, the overhead driving detection result determining module 230 is specifically configured to:
determining an overhead traveling detection result of the vehicle based on detection results of a plurality of types of detection objects, comprising: determining the confidence coefficient of the overhead driving scene based on detection results respectively corresponding to the vehicle, the pedestrian, the lane line and the indicator lamp; and determining an overhead traveling detection result of the vehicle based on the overhead traveling scene confidence.
The vehicle self-adaptive navigation device provided by the embodiment of the invention can execute the vehicle self-adaptive navigation method provided by any embodiment of the invention, and has the functional modules and beneficial effects corresponding to the executed method.
Example III
Fig. 9 is a schematic structural diagram of an electronic device according to a third embodiment of the present invention. The electronic device 10 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic equipment may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
As shown in fig. 9, the electronic device 10 includes at least one processor 11, and a memory, such as a Read Only Memory (ROM) 12, a Random Access Memory (RAM) 13, etc., communicatively connected to the at least one processor 11, in which the memory stores a computer program executable by the at least one processor, and the processor 11 may perform various appropriate actions and processes according to the computer program stored in the Read Only Memory (ROM) 12 or the computer program loaded from the storage unit 18 into the Random Access Memory (RAM) 13. In the RAM 13, various programs and data required for the operation of the electronic device 10 may also be stored. The processor 11, the ROM 12 and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to bus 14.
Various components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, etc.; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, an optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The processor 11 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of processor 11 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, digital Signal Processors (DSPs), and any suitable processor, controller, microcontroller, etc. The processor 11 performs the various methods and processes described above, such as a vehicle adaptive navigation method.
In some embodiments, the vehicle adaptive navigation method may be implemented as a computer program tangibly embodied on a computer-readable storage medium, such as the storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into RAM 13 and executed by processor 11, one or more steps of the vehicle adaptive navigation method described above may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform the vehicle adaptive navigation method in any other suitable way (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs, which may be executed and/or interpreted on a programmable system including at least one programmable processor; the programmable processor may be special-purpose or general-purpose and may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
The computer program for implementing the vehicle adaptive navigation method of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be implemented. The computer program may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
Example IV
The fourth embodiment of the present invention also provides a computer readable storage medium storing computer instructions for causing a processor to execute a vehicle adaptive navigation method, the method comprising:
acquiring a running environment image of a vehicle;
performing target detection on the running environment image based on a preset target detection model to obtain detection results of a plurality of types of detection objects;
determining an overhead traveling detection result of the vehicle based on the detection results of the plurality of types of detection objects;
and performing navigation processing on the vehicle according to the overhead traveling detection result.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), blockchain networks, and the internet.
The computing system may include clients and servers. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or a cloud host, and is a host product in a cloud computing service system, so that the defects of high management difficulty and weak service expansibility in the traditional physical hosts and VPS service are overcome.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present invention may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solution of the present invention are achieved, and the present invention is not limited herein.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.

Claims (10)

1. A method of adaptive navigation for a vehicle, comprising:
acquiring a running environment image of a vehicle;
performing target detection on the running environment image based on a preset target detection model to obtain detection results of a plurality of types of detection objects;
determining an overhead traveling detection result of the vehicle based on the detection results of the plurality of types of detection objects;
and carrying out navigation processing on the vehicle according to the overhead traveling detection result.
2. The method according to claim 1, wherein performing object detection on the running environment image based on a preset object detection model to obtain detection results of a plurality of types of detection objects, comprises:
extracting features of the running environment image based on a feature extraction network module to obtain multi-layer image feature information;
performing feature fusion processing on the multi-layer image feature information based on a neck network module to obtain fusion feature information;
and detecting the fusion characteristic information by a detection network module based on various types of detection objects to obtain detection results of the plurality of types of detection objects.
3. The method according to claim 2, wherein after the feature extraction network module performs feature extraction on the driving environment image to obtain multi-layer image feature information, the method further comprises:
fusing corresponding layers of the multi-layer image feature information at the current moment with the multi-layer image feature information at the previous moment to obtain updated multi-layer image feature information;
correspondingly, the neck network module is based on the feature fusion processing of the multi-layer image feature information to obtain fusion feature information, which comprises the following steps:
and carrying out feature fusion processing on the updated multi-layer image feature information based on the neck network module to obtain fusion feature information.
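Claim 3 fixes what is fused (the per-layer features at the current and previous moments) but not how. The sketch below uses an element-wise blend as one assumed fusion operator.

```python
import torch
from typing import List, Optional

def fuse_temporal(current: List[torch.Tensor],
                  previous: Optional[List[torch.Tensor]],
                  alpha: float = 0.5) -> List[torch.Tensor]:
    """Fuse each feature layer at time t with its counterpart at t-1
    (claim 3). The blend weight and the linear form are assumptions;
    the patent does not fix the fusion function."""
    if previous is None:          # first frame: nothing to fuse yet
        return current
    return [alpha * cur + (1.0 - alpha) * prev
            for cur, prev in zip(current, previous)]
```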
4. The method of claim 2, wherein the object detection model comprises a plurality of parallel sub-models, any one of the sub-models comprising a feature extraction network module, a neck network module, and a detection network module of one type of detection object connected in sequence;
or,
the target detection model comprises a feature extraction network module, a neck network module, and detection network modules for a plurality of types of detection objects, wherein the detection network module of each type of detection object is connected to the neck network module.
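Claim 4 admits two topologies: N independent backbone-neck-head pipelines run in parallel, or one shared backbone and neck feeding N heads (the hypothetical `TinyDetector` sketch after claim 2 illustrates the shared variant). The parallel variant, reusing that same sketch with one head per sub-model:

```python
import torch.nn as nn

class ParallelDetector(nn.Module):
    """First topology of claim 4: one complete backbone + neck + single-type
    head pipeline per object type, run side by side. Reuses the hypothetical
    TinyDetector class sketched under claim 2."""

    def __init__(self, num_types: int = 4):
        super().__init__()
        self.submodels = nn.ModuleList(TinyDetector(num_types=1)
                                       for _ in range(num_types))

    def forward(self, x):
        # Each sub-model returns a one-element list; collect one map per type.
        return [sub(x)[0] for sub in self.submodels]
```

The trade-off is the usual one: the parallel form lets each type's pipeline be trained and deployed independently, while the shared form amortizes backbone computation across all detection heads.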
6. The method of claim 1, wherein the plurality of types of detection objects comprise a vehicle, a pedestrian, a lane line, and an indicator light;
the determining the overhead traveling detection result of the vehicle based on the detection results of the plurality of types of detection objects comprises:
determining an overhead traveling scene confidence based on the detection results corresponding to the vehicle, the pedestrian, the lane line and the indicator light, respectively;
and determining the overhead traveling detection result of the vehicle based on the overhead traveling scene confidence.
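One way to realize claim 5's confidence is a weighted combination of per-type cues. The cues, weights, and linear form below are assumptions for illustration; the patent does not disclose a concrete formula.

```python
from typing import Dict, List

def overhead_confidence(detections: Dict[str, List]) -> float:
    """Combine per-type detections into an elevated-road confidence (claim 5).
    Heuristic: elevated roads tend to show lane lines and vehicles but
    no pedestrians or traffic lights. Weights are assumed, not disclosed."""
    has_lanes = min(len(detections.get("lane_line", [])), 4) / 4.0
    no_pedestrians = 0.0 if detections.get("pedestrian") else 1.0
    no_lights = 0.0 if detections.get("indicator_light") else 1.0
    some_vehicles = min(len(detections.get("vehicle", [])), 5) / 5.0
    return (0.35 * has_lanes + 0.30 * no_pedestrians
            + 0.25 * no_lights + 0.10 * some_vehicles)

def overhead_result(detections: Dict[str, List], threshold: float = 0.6) -> bool:
    """Threshold the confidence to get the overhead traveling detection result."""
    return overhead_confidence(detections) >= threshold
```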
6. The method of claim 1, wherein the training method of the object detection model comprises:
acquiring a sample image, wherein the sample image comprises marking information of a plurality of types of detection objects;
iteratively executing the following training steps until a training ending condition is met, to obtain a trained target detection model:
inputting the sample image into a target detection model to be trained to obtain training detection results of the plurality of types of detection objects;
and obtaining a loss value based on the training detection result and the marking information, and adjusting model parameters of the target detection model based on the loss value.
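A minimal training loop matching claim 6. The Adam optimizer and the loss-threshold stopping rule are assumed choices for the unspecified "training ending condition"; `loss_fn` is sketched under claim 7 below.

```python
import torch

def train(model, loader, loss_fn, epochs: int = 10,
          lr: float = 1e-3, target_loss: float = 0.05):
    """Iterative training per claim 6: forward pass, loss against the
    mark information, parameter update, stop when the condition is met."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss = None
    for _ in range(epochs):
        for images, marks in loader:          # marks: per-type label info
            preds = model(images)             # training detection results
            loss = loss_fn(preds, marks)      # compare against mark info
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()                  # adjust model parameters
        if loss is not None and loss.item() < target_loss:
            break                             # training ending condition met
    return model
```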
7. The method according to claim 6, wherein the mark information of the plurality of types of detection objects comprises a mark type and mark position information of each detection object;
the detection result of any type of detection object comprises a detection type and detection position information of the detection object;
the obtaining a loss value based on the training detection result and the marking information includes:
obtaining a type loss term based on the mark type of each detection object in the mark information and the detection type of the detection object in the training detection result;
obtaining a position loss term based on the mark position information of each detection object in the mark information and the detection position information of the detection object in the training detection result;
and deriving the loss value based on the type loss term and the position loss term.
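Claim 7 decomposes the loss into a type term and a position term but names no specific functions. Cross-entropy and smooth-L1 are common choices, used here only as a sketch:

```python
import torch
import torch.nn.functional as F

def detection_loss(pred_logits: torch.Tensor, pred_boxes: torch.Tensor,
                   mark_types: torch.Tensor, mark_boxes: torch.Tensor,
                   box_weight: float = 1.0) -> torch.Tensor:
    """Type loss term + position loss term, matching the decomposition in
    claim 7. The specific losses and the weight are assumptions."""
    type_loss = F.cross_entropy(pred_logits, mark_types)   # mark type vs detection type
    pos_loss = F.smooth_l1_loss(pred_boxes, mark_boxes)    # mark vs detection positions
    return type_loss + box_weight * pos_loss
```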
8. A vehicle adaptive navigation device, characterized by comprising:
the driving environment image acquisition module is used for acquiring driving environment images of the vehicle;
the detection result determining module is used for carrying out target detection on the running environment image based on a preset target detection model to obtain detection results of a plurality of types of detection objects;
an overhead traveling detection result determining module configured to determine an overhead traveling detection result of the vehicle based on detection results of the plurality of types of detection objects;
and the navigation processing module is used for performing navigation processing on the vehicle according to the overhead traveling detection result.
9. An electronic device, the electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the vehicle adaptive navigation method of any one of claims 1-7.
10. A computer readable storage medium storing computer instructions for causing a processor to implement the vehicle adaptive navigation method of any one of claims 1-7 when executed.
CN202310091682.0A 2023-02-03 2023-02-03 Vehicle self-adaptive navigation method and device, electronic equipment and storage medium Pending CN115995075A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310091682.0A CN115995075A (en) 2023-02-03 2023-02-03 Vehicle self-adaptive navigation method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310091682.0A CN115995075A (en) 2023-02-03 2023-02-03 Vehicle self-adaptive navigation method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115995075A true CN115995075A (en) 2023-04-21

Family

ID=85990139

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310091682.0A Pending CN115995075A (en) 2023-02-03 2023-02-03 Vehicle self-adaptive navigation method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115995075A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117589177A (en) * 2024-01-18 2024-02-23 青岛创新奇智科技集团股份有限公司 Autonomous navigation method based on industrial large model
CN117589177B (en) * 2024-01-18 2024-04-05 青岛创新奇智科技集团股份有限公司 Autonomous navigation method based on industrial large model

Similar Documents

Publication Publication Date Title
CN113902897B (en) Training of target detection model, target detection method, device, equipment and medium
CN109087510B (en) Traffic monitoring method and device
CN110798805B (en) Data processing method and device based on GPS track and storage medium
CN115797736B (en) Training method, device, equipment and medium for target detection model and target detection method, device, equipment and medium
CN115410173B (en) Multi-mode fused high-precision map element identification method, device, equipment and medium
CN115719436A (en) Model training method, target detection method, device, equipment and storage medium
CN115100741B (en) Point cloud pedestrian distance risk detection method, system, equipment and medium
CN114037966A (en) High-precision map feature extraction method, device, medium and electronic equipment
CN114926791A (en) Method and device for detecting abnormal lane change of vehicles at intersection, storage medium and electronic equipment
CN115861959A (en) Lane line identification method and device, electronic equipment and storage medium
CN115995075A (en) Vehicle self-adaptive navigation method and device, electronic equipment and storage medium
CN116469073A (en) Target identification method, device, electronic equipment, medium and automatic driving vehicle
CN113379719A (en) Road defect detection method, road defect detection device, electronic equipment and storage medium
CN117636307A (en) Object detection method and device based on semantic information and automatic driving vehicle
CN117373285A (en) Risk early warning model training method, risk early warning method and automatic driving vehicle
CN114429631B (en) Three-dimensional object detection method, device, equipment and storage medium
CN116226782A (en) Sensor data fusion method, device, equipment and storage medium
CN115761698A (en) Target detection method, device, equipment and storage medium
CN113514053B (en) Method and device for generating sample image pair and method for updating high-precision map
CN114998387A (en) Object distance monitoring method and device, electronic equipment and storage medium
CN114998863A (en) Target road identification method, target road identification device, electronic equipment and storage medium
CN114495049A (en) Method and device for identifying lane line
CN113569803A (en) Multi-mode data fusion lane target detection method and system based on multi-scale convolution
CN116663650B (en) Training method of deep learning model, target object detection method and device
CN116168366B (en) Point cloud data generation method, model training method, target detection method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination