CN112487986A - Driving assistance recognition method based on high-precision map - Google Patents
Driving assistance recognition method based on high-precision map
- Publication number
- CN112487986A (application CN202011379666.4A)
- Authority
- CN
- China
- Prior art keywords
- information
- driving
- image
- road
- face
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/29—Geographical information databases
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4053—Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/59—Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
- G06V20/597—Recognising the driver's state or behaviour, e.g. attention or drowsiness
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/166—Detection; Localisation; Normalisation using acquisition arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Multimedia (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Data Mining & Analysis (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Molecular Biology (AREA)
- Evolutionary Computation (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Computational Linguistics (AREA)
- Computing Systems (AREA)
- Databases & Information Systems (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Remote Sensing (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
- Traffic Control Systems (AREA)
Abstract
The invention proposes a driving assistance recognition method based on a high-precision map. In step 1, before the car starts, an in-car camera captures a face image of the driver and transmits it to the MCU for identity confirmation, and whether the driver has been drinking is judged at the same time; during driving, photos are taken at random moments and the driver's facial state is checked to judge whether the driver is driving in fatigue. In step 2, vehicle driving state information, surrounding-vehicle state information, and road information on the trunk road are acquired through a convolutional neural network, pixel-level end-to-end semantic segmentation is realized, the segmentation is optimized with a conditional random field, and the color and depth images of vehicles and roads are converted into point cloud data. The trunk-road vehicle state information comprises speed, acceleration, and position information; the surrounding-vehicle state information likewise comprises speed, acceleration, and position information. Experiments show that the method has high recognition precision and good effect.
Description
Technical Field
The invention relates to the technical field of GIS application, in particular to a driving assistance recognition method based on a high-precision map.
Background
The vehicle assistance system uses various sensors installed on the vehicle (millimeter-wave radar, lidar, monocular/binocular cameras, and satellite navigation) to sense the surrounding environment at all times while the vehicle is driving, collect data, identify, detect, and track static and dynamic objects, and combine navigation map data to perform systematic computation and analysis, so that the driver can perceive possible dangers in advance, effectively improving the comfort and safety of driving. However, domestic automobile manufacturers are limited by capital and research-and-development strength, and investment in the research and development of advanced driver-assistance systems has been small, so the technology still needs further improvement.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a driving assistance recognition method based on a high-precision map.
The technical scheme adopted by the invention is as follows: the driving assistance recognition method based on the high-precision map comprises the following steps:
Step 1: before the automobile is started, a camera inside the automobile captures a face image of the driver and transmits the image to an MCU (microcontroller unit) for identity confirmation, and whether the driver is driving after drinking is judged at the same time; during driving, pictures are taken at random, the facial state of the driver is checked, and whether the driver is driving in fatigue is judged;
Step 2: a GIS ensemble learning module acquires the vehicle driving state information, the surrounding-vehicle state information, and the road information on the trunk road through a convolutional neural network, realizes pixel-level end-to-end semantic segmentation, optimizes the semantic segmentation with a conditional random field, and converts the color images and depth images of vehicles and roads into point cloud data (see the sketch below); the trunk-road vehicle state information comprises speed, acceleration, and position information; the surrounding-vehicle state information comprises speed, acceleration, and position information; the road information comprises the road traffic network, the current road speed limit, road warning boards, and road sign information.
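As a rough illustration of the segmentation stage of step 2, the following sketch runs a pretrained fully convolutional network to obtain per-pixel category labels. The patent specifies only a convolutional neural network realizing pixel-level segmentation; the choice of fcn_resnet50 from torchvision is an illustrative assumption, not the patent's architecture.

```python
import torch
from torchvision.models.segmentation import fcn_resnet50

# Illustrative backbone: the source names an FCN but no concrete architecture.
model = fcn_resnet50(weights="DEFAULT").eval()

def segment(batch: torch.Tensor) -> torch.Tensor:
    """Per-pixel category labels for a normalized RGB batch of shape (B, 3, H, W)."""
    with torch.no_grad():
        logits = model(batch)["out"]  # (B, num_classes, H, W)
    return logits.argmax(dim=1)       # pixel-level end-to-end semantic labels
```

In practice the raw labels would then be refined by the conditional random field described later in the text.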
In step 1, during the acquisition of the face image, the driver's face is scanned, the features of the face image are extracted, and the feature information is sent to the main control module; the vehicle-mounted control module generates a vehicle control instruction according to the judgment information sent by the main control module; the driving account information comprises the driver identity, a borrower account number, a timestamp, and a bound account password.
The specific process of randomly taking pictures during driving, checking the driver's facial state, and judging whether the driver is driving in fatigue is as follows:
Suppose feature j in the car belongs to a face, where o denotes an occupant, z denotes a latent topic, and t denotes a position in the vector corresponding to each seat's information; the indices {1, 2, …} are randomly permuted and ordered. For the face image acquired by the camera, the following steps are performed:
(1) remove every in-car face feature from its currently assigned category (equation (9));
(2) compute the topic distribution (equation (10));
(3) select, through learning, the new topic to which each face in the car belongs (equation (11));
(4) re-add the feature to its new topic (equation (12));
(5) fix the assignment, update the feature vector, and construct the constraint (equation (13)).
The geometric constraint satisfies a Gaussian distribution, parameterized by its mean and variance. Steps (1) to (5) are repeated, and after multiple iterative cycles an in-car face detection model is obtained in which faces and non-faces correspond to different distributions; if the facial expression, in particular the eyes, shows prolonged blinking or eye closure, fatigue driving is judged.
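Equations (9) to (13) appear only as images in the source and are not reproduced here. As a hedged sketch of the remove, resample, and re-add loop described above, the following assumes a standard collapsed Gibbs sampler for an LDA-style topic model over in-car face features; the count arrays and the hyperparameters alpha and beta are illustrative assumptions rather than the patent's own formulas.

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_sweep(z, topic_feature, topic_total, features, alpha, beta):
    """One sweep over in-car face features: remove each feature from its
    current topic (step 1), form the topic distribution (step 2), sample a
    new topic (step 3), and re-add the feature to it (step 4)."""
    K, V = topic_feature.shape
    for i, f in enumerate(features):
        old = z[i]
        topic_feature[old, f] -= 1   # remove from the current category
        topic_total[old] -= 1
        # unnormalized posterior over topics (illustrative LDA-style form)
        p = (topic_total + alpha) * (topic_feature[:, f] + beta) / (topic_total + V * beta)
        p /= p.sum()
        new = rng.choice(K, p=p)     # select the new topic by sampling
        topic_feature[new, f] += 1   # re-add to the sampled topic
        topic_total[new] += 1
        z[i] = new
    return z
```

Repeating such sweeps until the assignments stabilize corresponds to the multiple iterative cycles after which the in-car face detection model is obtained.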
Multi-frame super-resolution reconstruction is performed on the target face image. The placement of the camera is adjusted so that, even in a complex scene, video containing as frontal a face image as possible is obtained. When evaluating the quality of the target face image, the frontality, sharpness, illumination intensity, size, and motion-change intensity of the face image are weighted and normalized as the basis of a comprehensive evaluation, and several frames of higher quality are selected for multi-frame super-resolution reconstruction. During reconstruction, the registration parameters, blur parameters, and the super-resolution image are solved jointly, which improves reconstruction accuracy; the required estimation parameters are obtained by minimizing a loss function between the reconstructed image and the corresponding high-resolution image, finally realizing super-resolution reconstruction of the face image in the surveillance video.
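A minimal sketch of the frame-selection step, assuming each candidate frame has already been scored on the five factors named above; the equal default weights are an illustrative assumption, since the text states only that weighting and normalization are used.

```python
import numpy as np

def frame_quality(frontality, sharpness, illumination, size, motion, w=None):
    """Weighted, normalized quality score; motion enters inverted because
    strong motion change degrades a frame for super-resolution."""
    scores = np.array([frontality, sharpness, illumination, size, 1.0 - motion])
    w = np.ones(5) / 5 if w is None else np.asarray(w, float) / np.sum(w)
    return float(w @ scores)

def select_frames(frames, metrics, k=5, w=None):
    """Pick the k highest-quality frames for multi-frame super-resolution."""
    order = np.argsort([frame_quality(*m, w=w) for m in metrics])[::-1]
    return [frames[i] for i in order[:k]]
```

The joint solution of registration parameters, blur parameters, and the super-resolution image would then run on the selected frames only.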
Further, the specific process of acquiring the vehicle driving state information, the surrounding-vehicle state information, and the road information on the trunk road through the convolutional neural network is as follows:
A fully convolutional network (FCN) is adopted to realize pixel-level end-to-end semantic segmentation, and the segmentation result is then optimized with a conditional random field, distinguishing the road from roadside scenery, sky, and other irrelevant features in the color image. Each pixel carries a category label; taking each pixel as a node and the connections between pixels as edges forms a conditional random field, in which the category label of a pixel is inferred from the observation variables by maximizing the conditional probability function,
where the conditional probability involves the observation variable of a single pixel, its category label, the category labels of its four neighboring pixels, and a normalization factor. After the pixel-level segmentation of the color image is obtained, the color image is traversed and only the information of vehicles, pedestrians, roads, and warning boards is retained; the depth image corresponding to the color image is then traversed and only the depth information of vehicles, pedestrians, roads, and warning boards is retained. The color information and distance information of these objects are then read, the coordinates of the pixels in the camera coordinate system are calculated, and the point cloud corresponding to the color and depth image pair is computed from the camera parameters.
The beneficial effects of the invention are as follows: it solves the problems that traditional image methods must build a separate model for each pose and that recognition rates are low due to factors such as pose, lighting, and environment, and it can effectively improve the image recognition accuracy for multi-pose pedestrians, roads, and warning boards.
Detailed Description
The following further illustrates the practice of the invention.
1. The driving assistance recognition method based on the high-precision map is characterized by comprising the following steps. Step 1: before the automobile is started, a camera inside the automobile captures a face image of the driver and transmits the image to an MCU (microcontroller unit) for identity confirmation, and whether the driver is driving after drinking is judged at the same time; during driving, pictures are taken at random, the facial state of the driver is checked, and whether the driver is driving in fatigue is judged;
step 1, in the process of acquiring a face image by a driver, scanning the face, acquiring the features of the face image and sending the feature information to a main control module; the vehicle-mounted control module generates a vehicle control instruction according to the judgment information sent by the main control module; the information such as the driving account number comprises a driving identity, a borrower account number, a timestamp and a binding account password.
The specific process of randomly taking pictures during driving, checking the driver's facial state, and judging whether the driver is driving in fatigue is as follows:
Suppose feature j in the car belongs to a face, where o denotes an occupant, z denotes a latent topic, and t denotes a position in the vector corresponding to each seat's information; the indices {1, 2, …} are randomly permuted and ordered. For the face image acquired by the camera, the following steps are performed:
(1) remove every in-car face feature from its currently assigned category (equation (9));
(2) compute the topic distribution (equation (10));
(3) select, through learning, the new topic to which each face in the car belongs (equation (11));
(4) re-add the feature to its new topic (equation (12));
(5) fix the assignment, update the feature vector, and construct the constraint (equation (13)).
The geometric constraint satisfies a Gaussian distribution, parameterized by its mean and variance. Steps (1) to (5) are repeated, and after multiple iterative cycles an in-car face detection model is obtained in which faces and non-faces correspond to different distributions; if the facial expression, in particular the eyes, shows prolonged blinking or eye closure, fatigue driving is judged.
The method further comprises adjusting the placement of the camera so that, even in a complex scene, video containing as frontal a face image as possible is obtained; when evaluating the quality of the target face image, the frontality, sharpness, illumination intensity, size, and motion-change intensity of the face image are weighted and normalized as the basis of a comprehensive evaluation, several frames of higher quality are selected for multi-frame super-resolution reconstruction, and the registration parameters, blur parameters, and the super-resolution image are solved jointly during reconstruction to improve reconstruction accuracy; the required estimation parameters are obtained by minimizing a loss function between the reconstructed image and the corresponding high-resolution image, finally realizing super-resolution reconstruction of the face image in the surveillance video.
Step 2: a GIS ensemble learning module acquires the vehicle driving state information, the surrounding-vehicle state information, and the road information on the trunk road through a convolutional neural network, realizes pixel-level end-to-end semantic segmentation, optimizes the semantic segmentation with a conditional random field, and converts the color images and depth images of vehicles and roads into point cloud data; the trunk-road vehicle state information comprises speed, acceleration, and position information; the surrounding-vehicle state information comprises speed, acceleration, and position information; the road information comprises the road traffic network, the current road speed limit, road warning boards, and road sign information.
The specific process of acquiring the vehicle driving state information, the surrounding-vehicle state information, and the road information on the trunk road through the convolutional neural network is as follows:
A fully convolutional network (FCN) is adopted to realize pixel-level end-to-end semantic segmentation, and the segmentation result is then optimized with a conditional random field, distinguishing the road from roadside scenery, sky, and other irrelevant features in the color image. Each pixel carries a category label; taking each pixel as a node and the connections between pixels as edges forms a conditional random field, in which the category label of a pixel is inferred from the observation variables by maximizing the conditional probability function,
where the conditional probability involves the observation variable of a single pixel, its category label, the category labels of its four neighboring pixels, and a normalization factor. After the pixel-level segmentation of the color image is obtained, the color image is traversed and only the information of vehicles, pedestrians, roads, and warning boards is retained; the depth image corresponding to the color image is then traversed and only the depth information of vehicles, pedestrians, roads, and warning boards is retained. The color information and distance information of these objects are then read, the coordinates of the pixels in the camera coordinate system are calculated, and the point cloud corresponding to the color and depth image pair is computed from the camera parameters.
Those skilled in the art will appreciate that the invention may be practiced without these specific details.
In the description herein, references to the description of the terms "one embodiment," "some embodiments," "an exemplary embodiment," "an example," "a specific example," or "some examples" or the like mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. While embodiments of the invention have been shown and described, it will be understood by those skilled in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.
Claims (5)
1. The driving assistance recognition method based on the high-precision map is characterized by comprising the following steps of:
Step 1: before the automobile is started, a camera inside the automobile captures a face image of the driver and transmits the image to an MCU (microcontroller unit) for identity confirmation, and whether the driver is driving after drinking is judged at the same time; during driving, pictures are taken at random, the facial state of the driver is checked, and whether the driver is driving in fatigue is judged;
Step 2: a GIS ensemble learning module acquires the vehicle driving state information, the surrounding-vehicle state information, and the road information on the trunk road through a convolutional neural network, realizes pixel-level end-to-end semantic segmentation, optimizes the semantic segmentation with a conditional random field, and converts the color images and depth images of vehicles and roads into point cloud data; the trunk-road vehicle state information comprises speed, acceleration, and position information; the surrounding-vehicle state information comprises speed, acceleration, and position information; the road information comprises the road traffic network, the current road speed limit, road warning boards, and road sign information.
2. The driving assistance recognition method based on the high-precision map according to claim 1, wherein in the face image acquisition process of step 1, the driver's face is scanned, the features of the face image are extracted, and the feature information is sent to the main control module; the vehicle-mounted control module generates a vehicle control instruction according to the judgment information sent by the main control module; and the driving account information comprises the driver identity, a borrower account number, a timestamp, and a bound account password.
3. The driving assistance recognition method based on the high-precision map according to claim 1, wherein the specific process of randomly taking pictures during driving, checking the driver's facial state, and judging whether the driver is driving in fatigue is as follows:
Suppose feature j in the car belongs to a face, where o denotes an occupant, z denotes a latent topic, and t denotes a position in the vector corresponding to each seat's information; the indices {1, 2, …} are randomly permuted and ordered. For the face image acquired by the camera, the following steps are performed:
(1) remove every in-car face feature from its currently assigned category (equation (9));
(2) compute the topic distribution (equation (10));
(3) select, through learning, the new topic to which each face in the car belongs (equation (11));
(4) re-add the feature to its new topic (equation (12));
(5) fix the assignment, update the feature vector, and construct the constraint (equation (13)).
The geometric constraint satisfies a Gaussian distribution, parameterized by its mean and variance. Steps (1) to (5) are repeated, and after multiple iterative cycles an in-car face detection model is obtained in which faces and non-faces correspond to different distributions; if the facial expression, in particular the eyes, shows prolonged blinking or eye closure, fatigue driving is judged.
4. The driving assistance recognition method based on the high-precision map according to claim 3, further comprising adjusting the placement of the camera so that, even in a complex scene, video containing as frontal a face image as possible is acquired; when evaluating the quality of the target face image, the frontality, sharpness, illumination intensity, size, and motion-change intensity of the face image are weighted and normalized as the basis of a comprehensive evaluation, several frames of higher quality are selected for multi-frame super-resolution reconstruction, and the registration parameters, blur parameters, and the super-resolution image are solved jointly during reconstruction to improve reconstruction accuracy; the required estimation parameters are obtained by minimizing a loss function between the reconstructed image and the corresponding high-resolution image, finally realizing super-resolution reconstruction of the face image in the surveillance video.
5. The driving assistance recognition method based on the high-precision map according to claim 1, wherein the specific process of acquiring the vehicle driving state information, the surrounding-vehicle state information, and the road information on the trunk road through the convolutional neural network is as follows:
A fully convolutional network (FCN) is adopted to realize pixel-level end-to-end semantic segmentation, and the segmentation result is then optimized with a conditional random field, distinguishing irrelevant features such as the scenery on both sides of the road and the sky in the color image. Each pixel carries a category label; taking each pixel as a node and the connections between pixels as edges forms a conditional random field, in which the category label of a pixel is inferred from the observation variables by maximizing the conditional probability function,
where the conditional probability involves the observation variable of a single pixel, its category label, the category labels of its four neighboring pixels, and a normalization factor; after the pixel-level segmentation of the color image is obtained, the color image is traversed and only the information of vehicles, pedestrians, roads, and warning boards is retained; the depth image corresponding to the color image is then traversed and only the depth information of vehicles, pedestrians, roads, and warning boards is retained; the color information and distance information of these objects are then read, the coordinates of the pixels in the camera coordinate system are calculated, and the point cloud corresponding to the color and depth image pair is computed from the camera parameters.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011379666.4A CN112487986A (en) | 2020-11-30 | 2020-11-30 | Driving assistance recognition method based on high-precision map |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011379666.4A CN112487986A (en) | 2020-11-30 | 2020-11-30 | Driving assistance recognition method based on high-precision map |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112487986A (en) | 2021-03-12 |
Family
ID=74937866
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011379666.4A | CN112487986A (en), withdrawn | 2020-11-30 | 2020-11-30 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112487986A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112989978A (en) * | 2021-03-04 | 2021-06-18 | 扬州微地图地理信息科技有限公司 | Driving assistance recognition method based on high-precision map |
CN118694919A (en) * | 2024-08-22 | 2024-09-24 | 深圳市百盛兴业科技有限公司 | AI camera image transmission optimization method for vehicle |
Similar Documents
Publication | Title
---|---
DE102018116108B4 | Calibration test method for the operation of autonomous vehicles and vehicle with a controller for executing the method
DE102018121595B4 | Unsupervised training of agents for autonomous driving applications
US9384401B2 | Method for fog detection
DE102018116107A1 | Calibration procedure for the operation of autonomous vehicles
DE102018121597A1 | Ground reference for the operation of autonomous vehicles
CN111898523A | Remote sensing image special vehicle target detection method based on transfer learning
CN110910453B | Vehicle pose estimation method and system based on a non-overlapping-field-of-view multi-camera system
DE102019112002A1 | Systems and methods for the automatic detection of pending features
DE102018109371A1 | Calibration validation for the operation of autonomous vehicles
DE102017122170A1 | System and method for placing subjective messages on a vehicle
DE112018000335T5 | Systems and methods for a computation framework for visual driver warning using a fully convolutional architecture
CN104590130A | Rearview mirror self-adaptive adjustment method based on image identification
CN111141311B | Evaluation method and system for a high-precision map positioning module
CN111860274A | Traffic police command gesture recognition method based on head orientation and upper-body skeleton characteristics
CN112487986A | Driving assistance recognition method based on high-precision map
WO2022128014A1 | Correction of images from a panoramic-view camera system in the case of rain, incident light and contamination
CN110097055A | Vehicle attitude detection method and system based on grid convolutional neural networks
WO2022128013A1 | Correction of images from a camera in case of rain, incident light and contamination
CN116152768A | Intelligent driving early-warning system and method based on road condition identification
CN115810179A | Human-vehicle visual perception information fusion method and system
DE102019220335A1 | Semantic segmentation using driver attention information
CN110472508A | Lane line distance measuring method based on deep learning and binocular vision
CN112989978A | Driving assistance recognition method based on high-precision map
CN115588188A | Locomotive, vehicle-mounted terminal and driver behavior identification method
CN112836619A | Embedded vehicle-mounted far-infrared pedestrian detection method, system, equipment and storage medium
Legal Events
Code | Title | Description
---|---|---
PB01 | Publication |
WW01 | Invention patent application withdrawn after publication | Application publication date: 2021-03-12