CN113673403A - Driving environment detection method, system, device, computer equipment, computer-readable storage medium and automobile


Info

Publication number
CN113673403A
Authority: CN (China)
Prior art keywords: real, image, area, driving, judged
Prior art date
Legal status: Granted
Application number: CN202110926976.1A
Other languages: Chinese (zh)
Other versions: CN113673403B (en)
Inventors: 戴勇, 阳春分, 黄永红
Current Assignee: SHENZHEN PERCHERRY TECHNOLOGY CO LTD
Original Assignee: SHENZHEN PERCHERRY TECHNOLOGY CO LTD
Priority date
Filing date
Publication date
Application filed by SHENZHEN PERCHERRY TECHNOLOGY CO LTD
Priority to CN202110926976.1A
Publication of CN113673403A
Application granted
Publication of CN113673403B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/052 Detecting movement of traffic to be counted or controlled with provision for determining speed or overspeed

Abstract

The invention relates to a driving environment detection method, system, device, computer equipment, computer-readable storage medium and automobile, belonging to the technical field of communication. The method comprises the following steps: acquiring real-time vehicle speed information and comparing it with a preset threshold; when the real-time vehicle speed information is less than the preset threshold, comparing a real-time road surface image of an area to be judged with a reference image of each driving scene respectively; when the real-time vehicle speed information is greater than or equal to the preset threshold, acquiring real-time driving information, determining the area to be judged according to the real-time driving information, acquiring real-time road surface images of the area to be judged and of an already-driven area, comparing the two images, and, when the comparison result meets a preset similarity condition, judging the area to be judged to be a drivable area. The invention can improve safety during driving.

Description

Driving environment detection method, system, device, computer equipment, computer readable storage medium and automobile
Technical Field
The invention relates to the technical field of communication, and in particular to a driving environment detection method, system, device, computer equipment, computer-readable storage medium and automobile.
Background
At present, with the rapid development of image and computer vision technology, more and more such technologies are applied in the field of automotive electronics. A traditional image-based reversing system installs a camera only at the tail of the car and can cover only a limited area around the tail, while blind areas around the sides and head of the car undoubtedly add hidden dangers to safe driving; collisions and scrapes occur from time to time in narrow, congested urban areas and parking lots.
To enlarge the driver's field of view, the driver needs to be able to perceive the environment in all directions through 360 degrees. For this purpose, 360° surround-view systems were developed; such a system visually presents the position of the vehicle and its surroundings, greatly extending the driver's perception of the surrounding environment.
A common 360° surround-view system acquires road images around the vehicle simultaneously through four or more cameras and presents them to the user as a stitched top view. Because the top view usually covers only a small range, the system typically also provides a front view, left view, right view, rear view and so on. In actual driving, the driving environment is variable and complex; when the vehicle enters different scenes, the driver must judge the scene and then switch the view accordingly.
With respect to the above related art, the inventors believe that, because the range of human sight is limited, the driver easily misjudges the driving scene; moreover, at higher speeds it is inconvenient for the driver to observe vehicles, obstacles and the like in blind spots in time, which makes driving safety low.
Disclosure of Invention
The invention provides a driving environment detection method, system, device, computer equipment, computer-readable storage medium and automobile, aiming at improving safety during driving.
In a first aspect, the present invention provides a driving environment detection method, which adopts the following technical solution:
a running environment detection method includes,
comparing the current vehicle speed information with a preset threshold value, judging whether the current vehicle speed information is smaller than the preset threshold value, if so, acquiring a real-time road surface image and a reference image set of an area to be judged, wherein the reference image set comprises pre-stored reference images of a plurality of driving scenes, respectively comparing the real-time road surface image of the area to be judged with the reference images of the driving scenes, and if the comparison result meets a preset similarity condition, taking the driving scene corresponding to the reference image of the driving scene as the driving scene of the area to be judged;
if not, acquiring real-time driving information, determining a to-be-determined area according to the real-time driving information, acquiring a real-time pavement image of the to-be-determined area and a real-time pavement image of a driven area, comparing the real-time pavement image of the to-be-determined area with the real-time pavement image of the driven area, if the comparison result meets a preset similarity condition, obtaining that the determination result of the to-be-determined area is a driven area, and if the comparison result does not meet the preset similarity condition, obtaining that the determination result of the to-be-determined area is an undriven area; the real-time driving information comprises steering information and gear information.
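The two-branch judgment above can be sketched in code. This is a minimal illustration rather than the patent's implementation: the threshold values, the region labels, and the trivial `compare_images` stand-in are all assumptions.

```python
# Minimal sketch of the two-branch detection flow described above.
# SPEED_THRESHOLD, SIMILARITY_THRESHOLD, and compare_images() are
# hypothetical placeholders, not values from the patent.

SPEED_THRESHOLD = 20.0       # km/h; assumed value
SIMILARITY_THRESHOLD = 0.8   # assumed similarity condition

def compare_images(img_a, img_b):
    """Stand-in for the trained image comparison model; returns a
    similarity score in [0, 1]. Trivial so the sketch is runnable."""
    return 1.0 if img_a == img_b else 0.0

def detect_driving_environment(speed, region_image, reference_images,
                               traveled_image):
    """Branch on vehicle speed, as the method describes."""
    if speed < SPEED_THRESHOLD:
        # Low speed: match the region image against each stored scene.
        for scene, ref in reference_images.items():
            if compare_images(region_image, ref) >= SIMILARITY_THRESHOLD:
                return ("scene", scene)
        return ("scene", None)
    # High speed: compare against the already-driven road surface.
    if compare_images(region_image, traveled_image) >= SIMILARITY_THRESHOLD:
        return ("region", "drivable")
    return ("region", "non-drivable")
```

The first element of the returned tuple marks which branch fired; a real system would return a scene label or a drivability verdict to downstream display and alarm logic.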
By adopting this technical solution, during actual driving the current vehicle speed information is compared with a preset threshold to judge whether it is less than the threshold. If so, the real-time road surface image of the area to be judged is compared with the pre-stored reference image of each driving scene, and if the comparison result meets the preset similarity condition, the driving scene of the area to be judged can be determined, so the driving scene is judged accurately. If the real-time vehicle speed information is greater than or equal to the preset threshold, the area to be judged is determined according to the real-time driving information of the vehicle, its real-time road surface image is compared with that of the already-driven area, and whether the area is drivable is decided by whether the comparison result meets the preset similarity condition. The driver can thus learn the road conditions around the vehicle in real time and observe vehicles, obstacles and the like in blind spots promptly, improving safety during driving and reducing driving risk.
Optionally, the step of taking the driving scene corresponding to the reference image of the driving scene as the driving scene of the area to be judged is further followed by,
obtaining a view corresponding to the driving scene based on preset logic according to the driving scene of the area to be judged, and sending the view to a display module for display.
By adopting this technical solution, after the driving scene of the area to be judged is determined, the view corresponding to that scene is obtained according to the preset logic and sent to the display module for display, making it convenient for the driver to check the road conditions around the vehicle, reducing the complexity of manual switching, improving driving comfort, and increasing safety and timeliness.
Optionally, the step of obtaining the judgment result that the area to be judged is a non-drivable area is further followed by,
sending an alarm signal, wherein the alarm signal is used to remind the driver that the area to be judged is a non-drivable area.
By adopting this technical solution, once the area to be judged is found to be a non-drivable area, the alarm signal serves as an auxiliary reminder when the vehicle needs to change lanes, improving driving safety.
Optionally, the method of comparing the real-time road surface image of the area to be judged with the reference image of each driving scene, and/or with the real-time road surface image of the already-driven area, comprises,
comparing the images through a pre-trained image comparison model to obtain a comparison result.
By adopting this technical solution, the pre-trained image comparison model performs the comparison automatically and generates the comparison result, which is convenient and fast, improves comparison efficiency and enhances accuracy.
Optionally, the method for generating the pre-trained image comparison model includes,
acquiring an image data set, wherein the image data set comprises a plurality of image data pairs labelled with first comparison results; each image data pair consists of two images, the image data comprise road surface images around the vehicle during driving, and the first comparison result is obtained by manually comparing the two images in the pair;
selecting a plurality of image data pairs labelled with first comparison results from the image data set, inputting them respectively into a predefined deep neural network model, and determining the second comparison result corresponding to each image data pair;
obtaining a deviation value for each image data pair from its first and second comparison results, and determining a loss function according to the deviation values; and
training the predefined deep neural network model based on the loss function to obtain the image comparison model.
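The patent does not specify which loss function is derived from the deviation between the first (manual) and second (model) comparison results. A common choice for such pairwise comparison models is the contrastive loss over the embedding distance, sketched below under that assumption; `e0` and `e1` stand for the embeddings Gw(x0) and Gw(x1).

```python
import numpy as np

def contrastive_loss(e0, e1, label, margin=1.0):
    """Contrastive loss for one labelled pair (an assumed choice, not
    the patent's stated loss). label 1 = similar pair, 0 = dissimilar;
    e0, e1 are the embedding vectors of the two road surface images."""
    d = np.linalg.norm(e0 - e1)  # Euclidean distance between embeddings
    if label == 1:
        return 0.5 * d ** 2                      # pull similar pairs together
    return 0.5 * max(margin - d, 0.0) ** 2       # push dissimilar pairs apart
```

Averaged over a batch of labelled pairs, this loss is minimized by gradient descent to fit the deep neural network's parameters, matching the supervised training procedure outlined above.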
By adopting this technical solution, the predefined deep neural network model is trained in a supervised learning manner; the model parameters are optimized over a large amount of training image data to obtain an optimal image comparison model, improving the accuracy of image comparison.
In a second aspect, the present invention provides a driving environment detection system, which adopts the following technical solutions:
a running environment detection system, the detection system comprising,
the first processor is used for comparing the current vehicle speed information with a preset threshold value, judging whether the current vehicle speed information is smaller than the preset threshold value, if so, outputting a first judgment result, and if not, outputting a second judgment result;
the second processor is used for responding to the first judgment result, acquiring a real-time road surface image and a reference image set of a region to be judged, wherein the reference image set comprises pre-stored reference images of a plurality of driving scenes, comparing the real-time road surface image of the region to be judged with the reference image of each driving scene respectively, and taking the driving scene corresponding to the reference image of the driving scene as the driving scene of the region to be judged if the comparison result meets a preset similarity condition;
the third processor is used for responding to the second judgment result, acquiring real-time driving information, determining a region to be judged according to the real-time driving information, acquiring a real-time pavement image of the region to be judged and a real-time pavement image of a driven region, comparing the real-time pavement image of the region to be judged and the real-time pavement image of the driven region, if the comparison result meets a preset similarity condition, obtaining that the judgment result of the region to be judged is the driven region, and if the comparison result does not meet the preset similarity condition, obtaining that the judgment result of the region to be judged is the non-driven region; the real-time driving information comprises steering information and gear information;
a memory for storing a set of reference images.
By adopting this technical solution, during actual driving the first processor compares the current vehicle speed information with a preset threshold and judges whether the real-time vehicle speed information is less than the threshold. If so, it outputs a first judgment result; the second processor responds to the first judgment result, acquires the real-time road surface image of the area to be judged and the reference image set, and compares the real-time road surface image with the reference image of each driving scene in the reference image set; if the comparison result meets the preset similarity condition, the driving scene of the area to be judged can be determined, so the driving scene is judged accurately. If not, a second judgment result is output; the third processor responds to the second judgment result, acquires real-time driving information, determines the area to be judged accordingly, compares the real-time road surface image of the area to be judged with that of the already-driven area, and decides whether the area to be judged is drivable by whether the comparison result meets the preset similarity condition. The driver can thus learn the road conditions around the vehicle in real time and observe vehicles, obstacles and the like in blind spots promptly, improving safety during driving and reducing driving risk.
In a third aspect, the present invention provides a driving environment detection apparatus, which adopts the following technical solution:
a running environment detection apparatus comprising an image pickup device, an alarm module, a display module, and a running environment detection system as in the second aspect;
the image acquisition equipment is used for acquiring a road surface image around the vehicle in real time when the vehicle drives;
the alarm module is used for receiving an alarm signal and reminding a driver that the area to be judged is the non-driving area;
the display module is used for displaying the view;
the driving environment detection system according to the second aspect is in communication connection with the image acquisition device, the alarm module and the display module respectively.
In a fourth aspect, the present invention provides a computer device, which adopts the following technical solutions:
a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method as in the first aspect when executing the program.
In a fifth aspect, the present invention provides a computer-readable storage medium, which adopts the following technical solutions:
a computer readable storage medium storing a computer program that can be loaded by a processor and execute the method as in the first aspect.
In a sixth aspect, the present invention provides an automobile, which adopts the following technical solution:
An automobile comprising the driving environment detection apparatus of the third aspect.
In summary, the invention provides at least one of the following beneficial technical effects. During actual driving, the current vehicle speed information is compared with a preset threshold to judge whether it is less than the threshold; if so, the real-time road surface image of the area to be judged is compared with the pre-stored reference image of each driving scene, and if the comparison result meets the preset similarity condition, the driving scene of the area to be judged can be determined, so the driving scene is judged accurately. If the real-time vehicle speed information is greater than or equal to the preset threshold, the area to be judged is determined according to the real-time driving information of the vehicle, its real-time road surface image is compared with that of the already-driven area, and whether the area is drivable is decided by whether the comparison result meets the preset similarity condition. The driver can thus learn the road conditions around the vehicle in real time and observe vehicles, obstacles and the like in blind spots promptly, improving safety during driving and reducing driving risk.
Drawings
Fig. 1 is a flowchart illustrating a driving environment detection method according to an embodiment of the present invention.
FIG. 2 is a schematic view of a vehicle driving area in accordance with one embodiment of the present invention.
FIG. 3 is a schematic diagram of model learning training according to an embodiment of the present invention.
FIG. 4 is a schematic flow chart of generating the pre-trained image comparison model according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to fig. 1-4 and the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Intelligent driving environment perception is a key basis for vehicle decision-making and control, and how to detect the driving environment is an important research topic.
Traditional machine vision methods generally segment the road surface based on color, texture, edges, or a road model. Color-based road surface recognition manually labels a road surface color data set to separate road and non-road regions; road and non-road areas are annotated during manual labelling, and the segmentation result is obtained through machine learning. Texture extraction uses Gabor filters; Gabor features are sensitive to edges, can extract edge directions, are little affected by illumination, and have scale invariance. The edge-information method relies on clear road edge boundaries: by extracting the edge boundaries of the road, the road surface can be segmented; common edge detection operators include Sobel, Prewitt and the like. The road-model-based segmentation method relies on large contour features that remain unchanged as the vehicle advances and the region transforms; these contour features are summarized as the trend of the road, mainly straight sections, turns and the like.
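As a concrete illustration of the edge-information method mentioned above, the Sobel operator convolves the image with two small kernels to estimate horizontal and vertical gradients. This minimal NumPy sketch computes the gradient magnitude of a grayscale image; it is loop-based for clarity, not speed, and is not the patent's own algorithm.

```python
import numpy as np

# Sobel kernels for horizontal (x) and vertical (y) gradients.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def sobel_magnitude(img):
    """Gradient magnitude of a 2-D grayscale image via the Sobel
    operator, computed on the valid interior region (no padding)."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx = np.sum(patch * SOBEL_X)   # horizontal gradient
            gy = np.sum(patch * SOBEL_Y)   # vertical gradient
            out[i, j] = np.hypot(gx, gy)   # edge strength
    return out
```

Thresholding the resulting magnitude map gives the road edge boundaries that the edge-information segmentation method builds on.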
In the actual detection of the vehicle's driving environment, on one hand the traffic environment is complex and changeable, which makes driving-scene judgment difficult; on the other hand, intelligent driving places high demands on the accuracy and real-time performance of the recognition algorithm, so the conditions for practical application are very demanding, and recognition of the drivable area is difficult to bring to an ideal level.
The embodiment of the invention discloses a driving environment detection method.
Referring to fig. 1, a driving environment detection method includes,
comparing current vehicle speed information with a preset threshold and judging whether the current vehicle speed information is less than the preset threshold; if so, acquiring a real-time road surface image of an area to be judged and a reference image set, wherein the reference image set comprises pre-stored reference images of a plurality of driving scenes; comparing the real-time road surface image of the area to be judged with the reference image of each driving scene respectively; and, if the comparison result meets a preset similarity condition, taking the driving scene corresponding to that reference image as the driving scene of the area to be judged;
if not, acquiring real-time driving information, determining the area to be judged according to the real-time driving information, acquiring a real-time road surface image of the area to be judged and a real-time road surface image of an already-driven area, and comparing the two; if the comparison result meets the preset similarity condition, the judgment result is that the area to be judged is a drivable area; if not, the judgment result is that the area to be judged is a non-drivable area; the real-time driving information comprises steering information and gear information.
In this embodiment, during actual driving, the current vehicle speed information is compared with a preset threshold to judge whether it is less than the threshold. If so, the real-time road surface image of the area to be judged is compared with the pre-stored reference image of each driving scene, and if the comparison result meets the preset similarity condition, the driving scene of the area to be judged can be determined, so the driving scene is judged accurately. If the real-time vehicle speed information is greater than or equal to the preset threshold, the area to be judged is determined according to the real-time driving information of the vehicle, its real-time road surface image is compared with that of the already-driven area, and whether the area is drivable is decided by whether the comparison result meets the preset similarity condition. The driver can thus learn the road conditions around the vehicle in real time and observe vehicles, obstacles and the like in blind spots promptly, improving safety during driving and reducing driving risk.
As an embodiment of the real-time vehicle speed information and the preset threshold, the real-time vehicle speed information, i.e., the running speed of the current vehicle, can be acquired through a speed sensor on the vehicle; the preset threshold can be calculated from historical driving data or preset according to actual conditions.
As an embodiment of acquiring the real-time road surface image of the area to be judged and the reference image set, the area to be judged may be the left and/or right area of the vehicle. The reference image set includes pre-stored reference images of a plurality of driving scenes; the driving scenes may be toll booths, gas stations, zebra crossings, narrow lanes and the like, and each driving scene has at least one corresponding road surface image, which improves the accuracy and adaptability of driving scene judgment. The area to be judged may also be set as another area around the vehicle according to actual conditions; selecting the left and/or right area of the vehicle is only one embodiment.
As a further embodiment of the driving environment detection method, the step of taking the driving scene corresponding to the reference image of the driving scene as the driving scene of the area to be judged is further followed by,
obtaining a view corresponding to the driving scene based on preset logic according to the driving scene of the area to be judged, and sending the view to a display module for display.
The display module can be an on-board display screen in the vehicle.
In this embodiment, after the driving scene of the area to be judged is determined, the view corresponding to that scene is obtained according to the preset logic and sent to the display module for display, making it convenient for the driver to check the road conditions around the vehicle, reducing the complexity of manual switching, improving driving comfort, and increasing safety and timeliness.
As an embodiment of the view corresponding to the driving scene, the view may be preset according to preset logic. Referring to fig. 2, for example, when the driving scene of the area to be judged is a narrow scene such as a toll booth, a gas station or a narrow lane, the corresponding views may be set as the left-side and right-side views of the vehicle, so the driver can conveniently check the clearance on both sides and avoid scratching the vehicle body; when the driving scene is a zebra crossing, the corresponding view may be set as the front view of the vehicle, so the driver can check the distance between the front wheels and the zebra crossing and, to a certain extent, avoid stopping on the lines.
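The preset logic mapping scenes to views can be illustrated as a simple lookup table. The scene and view names below are hypothetical, chosen only to match the examples in the paragraph above.

```python
# Hypothetical preset-logic table: detected driving scene -> view(s)
# sent to the display module. Names are illustrative assumptions.
SCENE_VIEWS = {
    "toll_booth": ["left_side", "right_side"],
    "gas_station": ["left_side", "right_side"],
    "narrow_lane": ["left_side", "right_side"],
    "zebra_crossing": ["front"],
}

def views_for_scene(scene, default=("top",)):
    """Return the preset views for a scene, falling back to the
    stitched top view when the scene is unknown."""
    return SCENE_VIEWS.get(scene, list(default))
```

Keeping the mapping in a table means new scenes can be added without touching the display-switching code.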
As an embodiment of the real-time driving information, the real-time driving information includes steering information and gear information. The steering information is used to judge the direction of the vehicle head and can be obtained from the turn signal: when a left-turn signal is received, the vehicle's steering information is judged to be a left turn, and when a right-turn signal is received, it is judged to be a right turn. The gear information is used to judge whether the vehicle is reversing and can be obtained from the gear signal: when the received gear signal is a reverse signal, the vehicle is judged to be in a reverse state, and when it is a non-reverse signal, the vehicle is judged to be in a non-reverse state.
As an embodiment of determining the area to be judged according to the real-time driving information, the area to be judged is a blind spot that is inconvenient for the driver to observe while driving. Referring to fig. 2, for example: when the received turn signal is a left-turn signal and the gear signal is a non-reverse signal, the vehicle is judged to be turning left, and the left-rear area of the vehicle is selected as the area to be judged; when the turn signal is a right-turn signal and the gear signal is a non-reverse signal, the vehicle is judged to be turning right, and the right-rear area is selected; when the gear signal is a non-reverse signal and no turn signal is received, the vehicle is judged to be moving forward, and the area in front of the vehicle is selected; when the gear signal is a reverse signal and no turn signal is received, the vehicle is judged to be reversing, and the area behind the vehicle is selected; when the turn signal is a left-turn signal and the gear signal is a reverse signal, the vehicle is judged to be reversing to the left rear, and the left-rear area is selected; and when the turn signal is a right-turn signal and the gear signal is a reverse signal, the vehicle is judged to be reversing to the right rear, and the right-rear area is selected.
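The six cases enumerated above amount to a small decision function. A sketch, with hypothetical signal and region names:

```python
def region_to_judge(turn_signal, reverse_gear):
    """Map the turn-signal state ('left', 'right', or None) and the
    reverse-gear flag to the blind-spot region to be judged, following
    the cases enumerated in the text. Region names are illustrative."""
    if turn_signal == "left":
        return "rear_left"       # left turn, whether forward or reversing
    if turn_signal == "right":
        return "rear_right"      # right turn, whether forward or reversing
    return "rear" if reverse_gear else "front"  # no turn signal
```

Note the turn signal dominates: with a left or right signal the selected region is the same rear-side area regardless of gear, which matches the pairing of cases in the paragraph above.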
As an embodiment of the already-driven area, the already-driven area is an area the vehicle has already traveled over. For example, while the vehicle is moving at some speed, if the gear signal is a reverse signal, the vehicle is judged to be reversing and the area in front of the vehicle can be regarded as the already-driven area; if the gear signal is a non-reverse signal, the vehicle is judged to be in a non-reverse state and the area behind the vehicle can be regarded as the already-driven area.
As an embodiment of acquiring the real-time road surface images of the area to be judged and the already-driven area, the images can be acquired by an image acquisition device which, based on a vehicle-mounted 360° surround-view system, can collect road surface images around the vehicle in real time while the vehicle is driving.
As a further embodiment of the running environment detection method, the step of obtaining the second determination result that the area to be determined is the non-running area may be followed by a step of,
and sending an alarm signal, wherein the alarm signal is used for reminding a driver that the area to be judged is the non-driving area.
The form of the alarm signal can be that an alarm lamp on the vehicle exterior rearview mirror flickers or that a buzzer in the vehicle sounds.
In this embodiment, after the area to be determined is found to be a non-drivable area, the alarm signal serves as an auxiliary reminder when the vehicle needs to change lanes, improving driving safety.
As a further embodiment of the driving environment detection method, a comparison method for comparing the real-time road surface image of the area to be determined with the reference image of each driving scene, and/or a comparison method for comparing the real-time road surface image of the area to be determined with the real-time road surface image of the driven area comprises,
and comparing the images through a pre-trained image comparison model to obtain a comparison result.
The pre-trained image comparison model extracts feature vectors X0 and X1 from the two road surface images to be compared and calculates the weighted Euclidean distance between the two feature vectors; whether the result meets the preset similarity threshold is then judged: if so, the comparison result is output as similar; if not, the comparison result is output as dissimilar. The preset similarity threshold can be set manually in advance according to actual conditions.
Referring to fig. 3, as an embodiment of calculating the weighted Euclidean distance between the two feature vectors X0 and X1, the calculation formula is: Ew(x0, x1) = ||Gw(x0) − Gw(x1)||, where Gw(·) denotes the feature-extraction mapping of the model.
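The distance computation can be sketched as follows. The feature vectors stand in for the model outputs Gw(x0) and Gw(x1); we assume the usual convention that a smaller distance means "similar", and the threshold value 0.5 is purely illustrative.

```python
import numpy as np

# Sketch of Ew(x0, x1) = ||Gw(x0) - Gw(x1)||. The inputs stand in for the
# extracted feature vectors Gw(x0), Gw(x1); we ASSUME the usual convention
# that a smaller distance means "similar", and the 0.5 threshold is illustrative.

def euclidean_distance(gx0, gx1):
    """Euclidean norm of the difference between two feature vectors."""
    return float(np.linalg.norm(np.asarray(gx0, dtype=float) -
                                np.asarray(gx1, dtype=float)))

def compare(gx0, gx1, threshold=0.5):
    """Output 'similar' when the embedding distance is within the threshold."""
    return "similar" if euclidean_distance(gx0, gx1) <= threshold else "dissimilar"
```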
In the embodiment, the image comparison model trained in advance is used for carrying out automatic comparison and generating the comparison result, so that the method is convenient and fast, and the comparison efficiency is improved.
As a further embodiment of the driving environment detection method, referring to fig. 4, the method of generating the pre-trained image comparison model includes,
step S101, an image data set is obtained, and the image data set comprises a plurality of image data pairs marked with first comparison results; the image data pairs are composed of every two image data, the image data comprise road surface image data around the vehicle when the vehicle runs, and the first comparison result is obtained by manually comparing the two image data in the image data pairs;
the vehicle-mounted 360-degree all-around vision system based image acquisition equipment can acquire road surface images around the vehicle in real time in the driving process of the vehicle; in addition, the collected vehicles can be unmanned vehicles or manned vehicles;
step S102, selecting a plurality of image data pairs marked with first comparison results in an image data set, respectively inputting the image data pairs into a predefined deep neural network model, and determining second comparison results corresponding to the image data pairs;
the pre-defined deep neural network model is a trained model, and most of training time can be saved because part of parameters are ideal; the second comparison result is an output result obtained by comparing the image data with a predefined deep neural network model;
step S103, obtaining deviation values corresponding to the image data pairs according to the first comparison result and the second comparison result of each image data pair, and determining a loss function according to the deviation values corresponding to the image data pairs;
the loss function is a non-negative real value function, and is often used for measuring the inconsistency degree of the predicted value and the real value of the model, and generally speaking, the smaller the loss function is, the better the robustness of the model is;
and step S104, training a predefined deep neural network model based on the loss function to obtain an image comparison model.
In the above embodiment, a model training is performed on the predefined deep neural network model in a supervised learning manner, and model parameters are optimized through a large amount of image data training to obtain an optimal image comparison model, so that the accuracy of image comparison is improved.
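Steps S101–S104 can be sketched end to end with a toy stand-in for the deep network; the one-layer tanh model, the synthetic data, the learning rate, and the iteration count below are all illustrative assumptions, not the patent's actual network or training configuration.

```python
import numpy as np

# Toy end-to-end sketch of S101-S104: labelled pairs -> forward pass ->
# deviation/loss -> gradient-descent update. A one-layer tanh model stands
# in for the predefined deep neural network; all numbers are illustrative.

rng = np.random.default_rng(0)

# S101: dataset of pair features with first comparison results Yp in {+1, -1}
X = rng.normal(size=(64, 4))
true_w = np.array([1.0, -2.0, 0.5, 0.0])
Y = np.sign(X @ true_w)

w = np.zeros(4)                              # predefined model parameters
for _ in range(200):
    Op = np.tanh(X @ w)                      # S102: second comparison result
    deviation = Op - Y                       # S103: deviation value per pair
    loss = 0.5 * np.mean(deviation ** 2)     # S103: loss from the deviations
    grad = (deviation * (1 - Op ** 2)) @ X / len(X)
    w -= 0.5 * grad                          # S104: parameter update

accuracy = float(np.mean(np.sign(np.tanh(X @ w)) == Y))
```

After training, the model's predicted labels agree with the manual labels on most pairs, which is the supervised-learning loop the embodiment describes.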
To improve the diversity of the training data and obtain a more accurate image comparison model, the environmental factors encountered in real driving should be considered as fully as possible. Therefore, in step S101, the image data set includes image data collected in driving environments covering various climatic conditions and various road conditions on different regional road segments. For example, the collection scenes may be: n road surface images collected on a highway in clear weather with good road conditions; n road surface images collected on a highway in cloudy weather with poor road conditions; n road surface images collected on urban roads in clear weather with good road conditions; n road surface images collected on urban roads in cloudy weather with poor road conditions; and n road surface images collected in an underground parking lot. Here n may be any natural number and can be set according to actual needs; the larger the value of n, the higher the data diversity, and the higher the accuracy and robustness of the model.
In step S101, the image data set is obtained by preprocessing a plurality of image data, as an embodiment of preprocessing, the preprocessing includes,
and data marking, wherein every two image data form an image data pair, the image data pair is marked with a first comparison result, and the first comparison result is obtained by manually comparing the two image data in the image data pair.
After two image data in the image data pair are manually compared, a first comparison result can be marked on the image data pair, the representation form of the first comparison result can be 1 or-1, 1 represents that the two image data are similar, and-1 represents that the two image data are not similar.
Data enhancement, the methods of which include, but are not limited to, mirroring, rotating, scaling, and cropping the image data; through data enhancement, data diversity can be increased and model robustness improved even when the amount of data is limited.
Data normalization, the methods of which include, but are not limited to, scaling normalization, standard normalization, and whitening; because the features of raw image data differ in source and measurement unit in each dimension, their value distributions can differ widely. Normalization maps each feature dimension into the same value interval and removes correlations between different features, so that during gradient-descent solving the gradient direction of each step points essentially toward the minimum, which greatly improves training efficiency.
And configuring an image training set and an image testing set, and configuring the image data set into the image training set and the image testing set according to a preset proportion.
The preset ratio can be set in advance as required; the ratio between the image training set and the image test set has an important influence on the model training result, and when the image data in the data set are limited, a common ratio is 3:1. The predefined deep neural network model can be trained with the image training set to determine the model parameters, and the performance of the resulting image comparison model can then be evaluated with the image test set.
It should be noted that, in the preprocessing step, the data enhancement step and the data normalization step are not essential, and the order of the two steps can be adjusted as required.
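The preprocessing steps above (data enhancement, scaling normalization, and the 3:1 train/test split) can be sketched as follows; the toy image, the shuffle seed, and the exact augmentation set are illustrative assumptions.

```python
import random
import numpy as np

# Illustrative sketch of the preprocessing steps: data enhancement,
# scaling normalization, and the 3:1 training/test split. The toy 3x3
# "image", seed, and split ratio are assumptions for demonstration.

def augment(img):
    """Data enhancement: mirrored and rotated variants of an image."""
    return [np.fliplr(img), np.flipud(img), np.rot90(img)]

def scale_normalize(img):
    """Scaling normalization: map pixel values into the interval [0, 1]."""
    img = img.astype(float)
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)

def split_dataset(pairs, train_ratio=0.75, seed=42):
    """Configure the training set and test set at a preset ratio (3:1 here)."""
    pairs = list(pairs)
    random.Random(seed).shuffle(pairs)
    cut = int(len(pairs) * train_ratio)
    return pairs[:cut], pairs[cut:]

img = np.arange(9).reshape(3, 3)
variants = augment(img)
norm = scale_normalize(img)
train_set, test_set = split_dataset(range(100))
```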
In step S102, after an image data pair Xp is input into the predefined deep neural network model, the model extracts the features of each image; for example, the network may extract a 512-dimensional vector that represents most features of the image data.
In step S102, the second comparison result corresponding to an image data pair is determined as follows: an image data pair labeled with a first comparison result is selected from the image data set and denoted (Xp, Yp), where Xp is the image data pair and Yp is the first comparison result; the pair Xp is input into the predefined deep neural network model, and the corresponding actual output is calculated as the second comparison result Op according to the following formula:
Op = Fn(…(F2(F1(Xp·W1)W2)…)Wn);
wherein F is a mapping function and W is a weight matrix. In this stage, information is transferred from the input layer to the output layer through stepwise transformations; this is also the process performed when the network runs normally after training is completed. In this process, the predefined deep neural network model multiplies the input Xp by the weight matrix of each layer in turn, thereby obtaining the final output result Op.
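The layer-by-layer formula can be read as the following loop: each layer multiplies by its weight matrix W and applies the mapping function F. The tanh mapping and the random layer sizes are illustrative stand-ins, not the patent's actual F and W.

```python
import numpy as np

# Forward pass matching Op = Fn(...(F2(F1(Xp*W1)W2)...)Wn): each layer
# multiplies by its weight matrix W_k and applies the mapping function F_k.
# tanh and the 4->8->8->1 layer sizes are illustrative assumptions.

def forward(xp, weights, f=np.tanh):
    out = xp
    for w in weights:
        out = f(out @ w)   # F_k applied to (previous output) * W_k
    return out

rng = np.random.default_rng(1)
weights = [rng.normal(size=(4, 8)),
           rng.normal(size=(8, 8)),
           rng.normal(size=(8, 1))]
op = forward(rng.normal(size=(1, 4)), weights)
```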
As an implementation of the predefined deep neural network model, the deep neural network model in the present embodiment has resnet18 as an infrastructure network structure.
In step S103, the specific implementation manner of obtaining the deviation value corresponding to each image data pair according to the first comparison result and the second comparison result of each image data pair is as follows: and calculating the difference value between the second comparison result Op and the first comparison result Yp of the image data pair, wherein the difference value is the corresponding deviation value of the image data pair.
The deviation value obtained from the difference between the second comparison result Op and the first comparison result Yp reflects the degree to which the model's prediction deviates, in each dimension, from the true result; the parameters of the predefined deep neural network model can be updated according to the predicted deviation in each dimension, thereby reducing the prediction deviation and improving the judgment accuracy.
As an embodiment of determining the loss function according to the deviation values corresponding to the plurality of image data pairs, the loss function is usually used to measure the deviation degree of the model prediction, and the loss function can be determined according to the deviation values corresponding to the plurality of image data pairs, and the calculation formula of the loss function S is as follows:
S = (1/2) Σp (Op − Yp)²; that is, half the sum of the squared deviation values over all image data pairs.
as a further embodiment of the driving environment detection method, in step S104, training a predefined deep neural network model based on a loss function, obtaining an image comparison model includes,
and optimizing the loss function, and updating the parameters of the predefined deep neural network model until the loss function meets the preset conditions or the iteration times of the predefined deep neural network model reach the preset times, so as to obtain an image comparison model.
When the loss function is used to update the parameters of the predefined deep neural network model, methods such as gradient descent are generally adopted; that is, the model parameters can be updated by computing the gradient of the loss function. As the predefined deep neural network model is repeatedly trained on the image data set, the loss function becomes smaller and smaller and the prediction accuracy of the model correspondingly increases. When the loss function meets the preset condition, or the number of iterations of the predefined deep neural network model reaches the preset number, the current model parameters are taken as the parameters of the image comparison model, and the image comparison model is obtained accordingly.
It should be noted that the preset condition is a preset condition for measuring the mathematical characteristics of the loss function; in this embodiment, the predetermined condition may be that the predetermined condition is satisfied when the loss function tends to converge or the loss function reaches the minimum value.
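The stopping rule (loss converges toward its minimum, or the preset iteration count is reached) can be isolated as a small skeleton; the tolerance, iteration limit, and the simulated halving loss in the demo are illustrative assumptions.

```python
# Skeleton of the training stopping rule described above: iterate until the
# loss converges (the preset condition) or a preset iteration count is
# reached. Tolerance and limits are illustrative assumptions.

def train_until(loss_step, max_iters=1000, tol=1e-6):
    """loss_step() performs one parameter update and returns the new loss."""
    prev = float("inf")
    for i in range(1, max_iters + 1):
        loss = loss_step()
        if abs(prev - loss) < tol:   # loss has effectively converged
            return loss, i
        prev = loss
    return prev, max_iters           # preset iteration count reached

# Demo: a loss that halves on every step converges well before the limit.
state = {"loss": 1.0}
def halving_step():
    state["loss"] *= 0.5
    return state["loss"]

final_loss, iters = train_until(halving_step, max_iters=100)
```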
The embodiment of the invention also discloses a running environment detection system.
The running environment detection system includes a running environment detection unit,
the first processor is used for comparing the current vehicle speed information with a preset threshold value, judging whether the current vehicle speed information is smaller than the preset threshold value, if so, outputting a first judgment result, and if not, outputting a second judgment result;
the second processor is used for responding to the first judgment result, acquiring a real-time road surface image and a reference image set of the area to be judged, wherein the reference image set comprises pre-stored reference images of a plurality of driving scenes, comparing the real-time road surface image of the area to be judged with the reference image of each driving scene respectively, and taking the driving scene corresponding to the reference image of the driving scene as the driving scene of the area to be judged if the comparison result meets a preset similarity condition;
the third processor is used for responding to the second judgment result, acquiring real-time driving information, determining a region to be judged according to the real-time driving information, acquiring a real-time pavement image of the region to be judged and a real-time pavement image of a driven region, comparing the real-time pavement image of the region to be judged and the real-time pavement image of the driven region, if the comparison result meets a preset similarity condition, obtaining that the judgment result of the region to be judged is the driven region, and if the comparison result does not meet the preset similarity condition, obtaining that the judgment result of the region to be judged is the non-driven region; the real-time driving information comprises steering information and gear information;
a memory for storing a set of reference images.
In the above embodiment, during actual driving, the first processor compares the current vehicle speed information with a preset threshold and judges whether it is smaller than the threshold. If so, it outputs the first judgment result; the second processor responds to the first judgment result, acquires the real-time road surface image of the area to be determined and the reference image set, and compares the real-time road surface image with the reference images of the driving scenes in the set; if a comparison result meets the preset similarity condition, the driving scene of the area to be determined can be identified accurately. If not, the first processor outputs the second judgment result; the third processor responds to it, acquires real-time driving information, determines the area to be judged accordingly, and compares the real-time road surface image of that area with the real-time road surface image of the driven area, judging from whether the comparison result meets the preset similarity condition whether the area to be judged is a drivable area. In this way the driver can know the road conditions around the vehicle in real time and promptly notice vehicles, obstacles, and the like in the blind area of view, which improves driving safety and reduces driving risk.
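The two processor paths above can be summarized as a single dispatch function; the helper callables are hypothetical stand-ins for the second and third processors, not components named by the patent.

```python
# Dispatch sketch of the system's control flow: low speed -> scene matching
# (second processor), otherwise -> driven-area comparison (third processor).
# The callables and return shapes are hypothetical stand-ins.

def detect(speed, threshold, match_scene, compare_with_driven_area):
    if speed < threshold:
        # first judgment result: identify the driving scene via reference images
        return ("scene", match_scene())
    # second judgment result: drivable only if the area matches the driven area
    return ("drivable", compare_with_driven_area())
```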
As a further embodiment of the running environment detection system, the running environment detection system further includes,
and the view generation module is used for obtaining a view corresponding to the driving scene based on preset logic according to the driving scene of the area to be judged, and sending the view to the display module for displaying.
In the embodiment, after the driving scene of the area to be judged is judged, the view corresponding to the driving scene is generated by the view generation module and is sent to the display module to be displayed, so that a driver can conveniently check the road conditions around the vehicle, the complexity of manual switching is reduced, the driving experience comfort is improved, and the safety and the timeliness are increased.
As a further embodiment of the running environment detection system, the running environment detection system further includes,
and the alarm signal sending module is used for sending an alarm signal, and the alarm signal is used for reminding a driver that the area to be judged is the non-driving area.
In the embodiment, after the area to be determined is the non-driving area, the alarm signal sending module is used for sending the alarm signal, so that the auxiliary reminding function is conveniently played when the vehicle needs to change lanes, and the driving safety is improved.
As a further embodiment of the running environment detection system, the running environment detection system further includes,
and the pre-trained image comparison model is used for comparing the real-time road surface image of the area to be judged with the reference image of each driving scene respectively, and/or is used for comparing the real-time road surface image of the area to be judged with the real-time road surface image of the driven area to obtain a comparison result.
In the embodiment, the image comparison model trained in advance is used for carrying out automatic comparison and generating the comparison result, the method and the device are convenient and fast, the comparison efficiency is improved, and the accuracy is enhanced.
The driving environment detection system can realize any one of the driving environment detection methods, and the specific working process of the driving environment detection system can refer to the corresponding process in the embodiment of the method.
The embodiment of the invention also discloses a running environment detection device.
The driving environment detection device comprises image acquisition equipment, an alarm module, a display module and the driving environment detection system; the image acquisition equipment is used for acquiring road surface images around the vehicle in real time when the vehicle runs, the alarm module is used for receiving alarm signals and reminding a driver that an area to be judged is an area which cannot run, the display module is used for displaying a view, and the running environment detection system is respectively in communication connection with the image acquisition equipment, the alarm module and the display module.
As an implementation of the image acquisition device, the device may consist of four or more monocular or multi-lens cameras that simultaneously acquire road surface images around the vehicle based on a 360-degree surround-view system.
As an embodiment of the alarm module, the alarm module may be an alarm lamp mounted on an outside rearview mirror of a vehicle, and the alarm lamp may be a light emitting diode, and reminds a driver by flashing light; the alarm module can also be a buzzer alarm arranged in the vehicle, and the alarm module can remind a driver by making a sound.
As an embodiment of the display module, the display module may be an on-board display screen on the vehicle for displaying a view of an area around the vehicle.
The embodiment of the invention also discloses computer equipment.
A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method of detecting a driving environment as described above when executing the program.
The embodiment of the invention also discloses a computer readable storage medium.
A computer-readable storage medium storing a computer program that can be loaded by a processor and executes any one of the above-described running environment detection methods.
Wherein the computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device; program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
The embodiment of the invention also discloses an automobile which comprises the running environment detection device.
Most driving environment detection methods on the market are based on traditional road surface segmentation algorithms. The invention takes a different approach: an image comparison model is obtained by training a deep neural network model, and for driving scenes with obvious characteristics, the driving scene of the area to be determined is identified by comparison, so there is no need to train a separate detection model for each kind of scene, which is more flexible and convenient. The drivable area is judged by the trained image comparison model through real-time image comparison, and because the sampling reference comes from the real-time environment, the judgment accuracy is greatly improved.
It should be noted that, in the foregoing embodiments, descriptions of the respective embodiments have respective emphasis, and reference may be made to relevant descriptions of other embodiments for parts that are not described in detail in a certain embodiment.
In the several embodiments provided by the present invention, it should be understood that the disclosed method, system, and apparatus can be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into modules is only a division by logical function, and other divisions may be used in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through interfaces, devices, or units, and may be electrical, mechanical, or in another form.
The foregoing is a preferred embodiment of the present invention and is not intended to limit the scope of the invention in any way, and any feature disclosed in this specification (including the abstract and drawings) may be replaced by alternative features serving equivalent or similar purposes, unless expressly stated otherwise. That is, unless expressly stated otherwise, each feature is only an example of a generic series of equivalent or similar features.

Claims (10)

1. A running environment detection method characterized by: the detection method comprises the following steps of,
comparing the current vehicle speed information with a preset threshold value, judging whether the current vehicle speed information is smaller than the preset threshold value, if so, acquiring a real-time road surface image and a reference image set of an area to be judged, wherein the reference image set comprises pre-stored reference images of a plurality of driving scenes, respectively comparing the real-time road surface image of the area to be judged with the reference images of the driving scenes, and if the comparison result meets a preset similarity condition, taking the driving scene corresponding to the reference image of the driving scene as the driving scene of the area to be judged;
if not, acquiring real-time driving information, determining a to-be-determined area according to the real-time driving information, acquiring a real-time pavement image of the to-be-determined area and a real-time pavement image of a driven area, comparing the real-time pavement image of the to-be-determined area with the real-time pavement image of the driven area, if the comparison result meets a preset similarity condition, obtaining that the determination result of the to-be-determined area is a driven area, and if the comparison result does not meet the preset similarity condition, obtaining that the determination result of the to-be-determined area is an undriven area; the real-time driving information comprises steering information and gear information.
2. The running environment detection method according to claim 1, characterized in that: the step of using the driving scene corresponding to the reference image of the driving scene as the driving scene of the region to be determined further comprises,
and obtaining a view corresponding to the driving scene based on preset logic according to the driving scene of the area to be judged, and sending the view to a display module for displaying.
3. The running environment detection method according to claim 1, characterized in that: the step of obtaining that the second determination result is that the area to be determined is a non-drivable area further comprises,
and sending an alarm signal, wherein the alarm signal is used for reminding a driver that the area to be judged is the non-driving area.
4. The running environment detecting method according to any one of claims 1 to 3, characterized in that: the comparison method for comparing the real-time road surface image of the area to be judged with the reference image of each driving scene respectively and/or comparing the real-time road surface image of the area to be judged with the real-time road surface image of the driven area comprises,
and comparing the images through a pre-trained image comparison model to obtain a comparison result.
5. The running environment detection method according to claim 4, characterized in that: the method for generating the pre-trained image comparison model comprises the following steps,
acquiring an image data set, wherein the image data set comprises a plurality of image data pairs marked with first comparison results; the image data pairs consist of every two image data, the image data comprise road surface image data around the vehicle when the vehicle runs, and the first comparison result is obtained by manually comparing the two image data in the image data pairs;
selecting a plurality of image data pairs marked with first comparison results from the image data set, respectively inputting the image data pairs into a predefined deep neural network model, and determining second comparison results corresponding to the image data pairs;
obtaining deviation values corresponding to the image data pairs according to the first comparison result and the second comparison result of each image data pair, and determining a loss function according to the deviation values corresponding to the image data pairs; and the number of the first and second groups,
and training the predefined deep neural network model based on the loss function to obtain the image comparison model.
6. A running environment detection system characterized in that: the detection system comprises a detection device and a detection device,
the first processor is used for comparing the current vehicle speed information with a preset threshold value, judging whether the current vehicle speed information is smaller than the preset threshold value, if so, outputting a first judgment result, and if not, outputting a second judgment result;
the second processor is used for responding to the first judgment result, acquiring a real-time road surface image and a reference image set of a region to be judged, wherein the reference image set comprises pre-stored reference images of a plurality of driving scenes, comparing the real-time road surface image of the region to be judged with the reference image of each driving scene respectively, and taking the driving scene corresponding to the reference image of the driving scene as the driving scene of the region to be judged if the comparison result meets a preset similarity condition;
the third processor is used for responding to the second judgment result, acquiring real-time driving information, determining a region to be judged according to the real-time driving information, acquiring a real-time pavement image of the region to be judged and a real-time pavement image of a driven region, comparing the real-time pavement image of the region to be judged and the real-time pavement image of the driven region, if the comparison result meets a preset similarity condition, obtaining that the judgment result of the region to be judged is the driven region, and if the comparison result does not meet the preset similarity condition, obtaining that the judgment result of the region to be judged is the non-driven region; the real-time driving information comprises steering information and gear information;
a memory for storing a set of reference images.
7. A running environment detection device characterized in that: the detection device comprises image acquisition equipment, an alarm module, a display module and a driving environment detection system according to claim 6;
the image acquisition equipment is used for acquiring a road surface image around the vehicle in real time when the vehicle drives;
the alarm module is used for receiving an alarm signal and reminding a driver that the area to be judged is the non-driving area;
the display module is used for displaying the view;
and the driving environment detection system is in communication connection with the image acquisition device, the alarm module, and the display module respectively.
8. A computer device, characterized by: comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method according to any of claims 1 to 5 when executing the program.
9. A computer-readable storage medium characterized by: a computer program which can be loaded by a processor and which executes the method according to any of claims 1 to 5.
10. An automobile, characterized in that: the automobile comprises the running environment detection device according to claim 7.
CN202110926976.1A 2021-08-12 2021-08-12 Driving environment detection method, system, device, computer equipment, computer readable storage medium and automobile Active CN113673403B (en)

Publications (2)

Publication Number Publication Date
CN113673403A true CN113673403A (en) 2021-11-19
CN113673403B CN113673403B (en) 2022-10-11

Family

ID=78542591

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110926976.1A Active CN113673403B (en) 2021-08-12 2021-08-12 Driving environment detection method, system, device, computer equipment, computer readable storage medium and automobile

Country Status (1)

Country Link
CN (1) CN113673403B (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106202379A (en) * 2016-07-09 2016-12-07 兰州交通大学 A kind of matching inquiry method based on spatial scene similarity
CN107609502A (en) * 2017-09-05 2018-01-19 百度在线网络技术(北京)有限公司 Method and apparatus for controlling automatic driving vehicle
CN109919144A (en) * 2019-05-15 2019-06-21 长沙智能驾驶研究院有限公司 Drivable region detection method, device, computer storage medium and drive test visual apparatus
CN110084137A (en) * 2019-04-04 2019-08-02 百度在线网络技术(北京)有限公司 Data processing method, device and computer equipment based on Driving Scene
CN110893858A (en) * 2018-09-12 2020-03-20 华为技术有限公司 Intelligent driving method and intelligent driving system
CN111767839A (en) * 2020-06-28 2020-10-13 平安科技(深圳)有限公司 Vehicle driving track determining method, device, equipment and medium
CN112508310A (en) * 2021-02-01 2021-03-16 智道网联科技(北京)有限公司 Driving track simulation method and device and storage medium
CN112985440A (en) * 2021-02-20 2021-06-18 北京嘀嘀无限科技发展有限公司 Method, device, storage medium and program product for detecting deviation of driving track


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114913202A (en) * 2022-04-07 2022-08-16 北京拙河科技有限公司 Target tracking method and system of micro-lens array
CN114913202B (en) * 2022-04-07 2022-11-29 北京拙河科技有限公司 Target tracking method and system of micro-lens array
CN114941710A (en) * 2022-05-12 2022-08-26 上海伯镭智能科技有限公司 Gear switching control method for unmanned mine car
CN114941710B (en) * 2022-05-12 2024-03-01 上海伯镭智能科技有限公司 Unmanned mining vehicle gear switching control method

Also Published As

Publication number Publication date
CN113673403B (en) 2022-10-11

Similar Documents

Publication Publication Date Title
US10691962B2 (en) Systems and methods for rear signal identification using machine learning
CN111291676B (en) Lane line detection method and device based on laser radar point cloud and camera image fusion and chip
CN108230731B (en) Parking lot navigation system and method
US8699754B2 (en) Clear path detection through road modeling
JP4624594B2 (en) Object recognition method and object recognition apparatus
US8751154B2 (en) Enhanced clear path detection in the presence of traffic infrastructure indicator
US8611585B2 (en) Clear path detection using patch approach
US8634593B2 (en) Pixel-based texture-less clear path detection
US11727799B2 (en) Automatically perceiving travel signals
CN113673403B (en) Driving environment detection method, system, device, computer equipment, computer readable storage medium and automobile
US10650256B2 (en) Automatically perceiving travel signals
US20090268948A1 (en) Pixel-based texture-rich clear path detection
US20100097458A1 (en) Clear path detection using an example-based approach
US20100097455A1 (en) Clear path detection using a vanishing point
CN111094095B (en) Method and device for automatically sensing driving signal and vehicle
US20180299893A1 (en) Automatically perceiving travel signals
KR20170124299A (en) A method and apparatus of assisting parking by creating virtual parking lines
CN111354222A (en) Driving assisting method and system
CN110901638B (en) Driving assistance method and system
US20180300566A1 (en) Automatically perceiving travel signals
Cheng et al. Modeling weather and illuminations in driving views based on big-video mining
CN114881241A (en) Deep learning-based lane line detection method and device and automatic driving method
WO2023173699A1 (en) Machine learning-based assisted driving method and apparatus, and computer-readable medium
CN113752940B (en) Control method, equipment, storage medium and device for tunnel entrance and exit lamp
CN115829387A (en) Driving capability assessment method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant