CN112487986A - Driving assistance recognition method based on high-precision map

Driving assistance recognition method based on high-precision map

Info

Publication number
CN112487986A
Authority
CN
China
Prior art keywords
information
driving
image
road
face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202011379666.4A
Other languages
Chinese (zh)
Inventor
闫浩文 (Yan Haowen)
张黎明 (Zhang Liming)
王帅 (Wang Shuai)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yangzhou Micro Map Geography Information Technology Co ltd
Original Assignee
Yangzhou Micro Map Geography Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yangzhou Micro Map Geography Information Technology Co ltd
Priority to CN202011379666.4A
Publication of CN112487986A
Legal status: Withdrawn

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29 Geographical information databases
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053 Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/59 Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597 Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/166 Detection; Localisation; Normalisation using acquisition arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Data Mining & Analysis (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention proposes a driving assistance recognition method based on a high-precision map. In step 1, before the car starts, an in-car camera captures a face image of the driver and transmits it to an MCU for identity confirmation, while also judging whether the driver has been drinking; during driving, photos are taken at random moments, the driver's facial state is checked, and fatigue driving is judged. In step 2, vehicle driving state information, surrounding vehicle state information, and road information on the trunk road are acquired through a convolutional neural network, pixel-level end-to-end semantic segmentation is realized, the semantic segmentation is optimized with a conditional random field, and the color and depth images of vehicles and roads are converted into point-cloud data. The trunk-road vehicle state information comprises speed, acceleration, and position information; the surrounding vehicle state information likewise comprises speed, acceleration, and position information. Experiments show that the method has high recognition precision and good effect.

Description

Driving assistance recognition method based on high-precision map
Technical Field
The invention relates to the technical field of GIS applications, in particular to a driving assistance recognition method based on a high-precision map.
Background
A driving assistance system uses various sensors installed on the vehicle (millimeter-wave radar, lidar, monocular/binocular cameras, and satellite navigation) to sense the surrounding environment at all times while the vehicle is driving, collect data, identify, detect, and track static and dynamic objects, and combine the results with navigation map data for systematic computation and analysis, so that the driver can perceive possible dangers in advance, effectively improving the comfort and safety of driving. However, domestic automobile manufacturers, limited by capital and R&D strength, have invested relatively little in the research and development of advanced driver assistance systems, so the technology still needs further development.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a driving assistance recognition method based on a high-precision map.
The technical scheme adopted by the invention is as follows. The driving assistance recognition method based on the high-precision map comprises the following steps:
step 1, before the automobile starts, a camera inside the car captures a face image of the driver and transmits it to an MCU (microcontroller unit) for identity confirmation, and whether the driver has been drinking is judged at the same time; during driving, pictures are taken at random, the driver's facial state is checked, and whether the driver is driving while fatigued is judged;
step 2, the GIS ensemble-learning module acquires the vehicle's driving state information, surrounding vehicle state information, and road information on the trunk road through a convolutional neural network, realizes pixel-level end-to-end semantic segmentation, optimizes the semantic segmentation with a conditional random field, and converts the color and depth images of vehicles and roads into point-cloud data. The trunk-road vehicle state information comprises speed, acceleration, and position information; the surrounding vehicle state information comprises speed, acceleration, and position information; the road information comprises the road traffic network, the current road speed limit, road warning boards, and road sign information.
During the acquisition of the driver's face image in step 1, the face is scanned, features of the face image are acquired, and the feature information is sent to the main control module; the vehicle-mounted control module generates a vehicle control instruction according to the judgment information sent by the main control module. The driving account information comprises the driver identity, the borrower account, a timestamp, and the bound account password.
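A minimal sketch of this step-1 capture-and-verify flow is given below. The OpenCV capture and Haar-cascade detection calls are real APIs; the cosine-similarity identity check and the enrolled template are illustrative stand-ins for the patent's unspecified recognition model and account store.

```python
# Sketch of the step-1 flow: grab a frame from the in-car camera, crop the
# largest detected face, and run a toy identity check against an enrolled
# template. The similarity check is a placeholder for a real face-embedding
# model.
from typing import Optional

import cv2
import numpy as np

CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def capture_face(camera_index: int = 0) -> Optional[np.ndarray]:
    """Capture one frame and return the largest face crop, or None."""
    cap = cv2.VideoCapture(camera_index)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        return None
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])   # largest box
    return frame[y:y + h, x:x + w]

def confirm_identity(face: np.ndarray, enrolled: np.ndarray,
                     threshold: float = 0.6) -> bool:
    """Toy check: cosine similarity of resized crops; a real system would
    compare embeddings from a face-recognition network instead."""
    a = cv2.resize(face, (64, 64)).astype(np.float32).ravel()
    b = cv2.resize(enrolled, (64, 64)).astype(np.float32).ravel()
    sim = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    return sim >= threshold
```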
The specific process of randomly taking pictures during driving, checking the driver's facial state, and judging whether the driver is driving while fatigued is as follows.
Suppose feature j in the car belongs to a face, where o denotes an occupant, z denotes a latent topic, and t denotes the position in the vector corresponding to each seat's information; the indices {1, 2, ...} are put in random order by a random permutation. For the face image captured by the camera:
(1) remove every in-car face feature from the topic category to which it is currently assigned (equation (9));
(2) compute the topic distribution (equation (10));
(3) select, through learning, the new topic to which each face in the car belongs (equation (11));
(4) re-add the feature to its newly assigned topic (equation (12));
(5) with the assignments fixed, update the feature vector and construct the constraint (equation (13)).
The geometric constraint satisfies a Gaussian distribution with a given mean and variance. Steps (1) to (5) are repeated for many iterations to obtain an in-car face detection model in which faces and non-faces correspond to different distributions. If the facial expression shows the eyes blinking or closed for a prolonged period, fatigue driving is judged.
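As a concrete illustration of steps (1) to (5), the sketch below performs one sweep of collapsed Gibbs sampling over the topic assignments of in-car face features. Since equations (9) to (13) are not reproduced in the text, a standard LDA-style conditional is assumed here, and alpha and beta are hypothetical hyperparameters.

```python
# One sweep of collapsed Gibbs sampling over in-car face-feature topic
# assignments. The LDA-style conditional below is an assumption standing in
# for the patent's equations (9)-(13).
import numpy as np

def gibbs_sweep(features, topics, n_topics, counts_tf, counts_t,
                alpha=0.1, beta=0.01, rng=None):
    """features: list of feature ids; topics: current topic of each feature;
    counts_tf[t, f]: count of feature f under topic t; counts_t[t]: total
    count of topic t."""
    rng = rng or np.random.default_rng()
    n_features = counts_tf.shape[1]
    order = rng.permutation(len(features))        # random sampling order
    for i in order:
        f, t_old = features[i], topics[i]
        counts_tf[t_old, f] -= 1                  # step (1): remove from topic
        counts_t[t_old] -= 1
        # step (2): topic distribution, assumed LDA-style conditional
        p = (counts_tf[:, f] + beta) * (counts_t + alpha)
        p /= (counts_t + n_features * beta)
        p /= p.sum()
        t_new = rng.choice(n_topics, p=p)         # step (3): select new topic
        topics[i] = t_new
        counts_tf[t_new, f] += 1                  # step (4): re-add feature
        counts_t[t_new] += 1
    # Step (5), updating the feature vectors under the Gaussian geometric
    # constraint, would follow here once the assignments are fixed.
    return topics
```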
Multi-frame super-resolution reconstruction is performed on the target face image. The placement of the camera is adjusted so that, even in complex scenes, video with as frontal a face image as possible is obtained. When evaluating the quality of the target face image, the frontality, sharpness, illumination intensity, size, and motion-change intensity of the face image are weighted and normalized to form a comprehensive evaluation, and several frames of higher quality are selected for multi-frame super-resolution reconstruction. During reconstruction, the registration parameters, blur parameters, and the super-resolution image are solved jointly, which improves reconstruction accuracy. This is achieved by minimizing a loss function between the reconstructed image and the corresponding high-resolution image to obtain the required estimated parameters, finally realizing super-resolution reconstruction of the face image in the surveillance video.
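A sketch of the weighted, normalized quality score used to pick the frames for reconstruction follows. The five criteria are those named above, while the weights, the [0, 1] normalization, and the inversion of the motion term are illustrative assumptions.

```python
# Weighted, normalized frame-quality score over the five criteria named in
# the text (frontality, sharpness, illumination, face size, motion
# intensity). The weights and per-criterion scores are illustrative.
import numpy as np

WEIGHTS = {"frontality": 0.3, "sharpness": 0.25, "illumination": 0.15,
           "size": 0.15, "motion": 0.15}

def frame_quality(scores: dict) -> float:
    """scores: per-criterion values already normalized to [0, 1];
    for 'motion', larger means more motion, so it is inverted."""
    s = dict(scores, motion=1.0 - scores["motion"])
    return sum(WEIGHTS[k] * s[k] for k in WEIGHTS)

def select_frames(all_scores: list, k: int = 5) -> list:
    """Return indices of the k highest-quality frames to feed the
    multi-frame super-resolution reconstruction."""
    q = [frame_quality(s) for s in all_scores]
    return list(np.argsort(q)[::-1][:k])
```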
Further, the specific process of acquiring the vehicle driving state information, surrounding vehicle state information, and road information on the trunk road through the convolutional neural network is as follows:
a fully convolutional network (FCN) is adopted to realize pixel-level end-to-end semantic segmentation, and the semantic segmentation result is then optimized with a conditional random field to distinguish irrelevant features in the color image, such as the scenery on both sides of the road and the sky. Each pixel carries a category label; with each pixel taken as a node and the connections between neighboring pixels as edges, a conditional random field is formed. The category label of each pixel is inferred from the observation variables by maximizing the conditional probability function:
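The formula itself is not reproduced in the text; a standard pairwise form consistent with the description, writing y_i for the observation at pixel i, x_i for its category label, N(i) for its four-pixel neighborhood, and Z for the normalization factor, would be:

$$P(\mathbf{x} \mid \mathbf{y}) = \frac{1}{Z}\exp\!\Big(\sum_i \phi(x_i, y_i) + \sum_i \sum_{j \in N(i)} \psi(x_i, x_j)\Big)$$

where the unary term phi scores how well a label fits the observed pixel and the pairwise term psi rewards consistent labels between neighboring pixels; this form is an assumption, not the patent's own equation.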
Here y_i is the observation variable of a single pixel, x_i is its category label, x_j are the category labels of its four adjacent pixels, and Z is a normalization factor. After the pixel-level segmentation of the color image is obtained, the color image is traversed and only the information of vehicles, pedestrians, roads, and warning boards is retained; the corresponding depth image is then traversed and only the depth information of the vehicles, pedestrians, roads, and warning boards is retained. The color information and distance information of the vehicles, pedestrians, roads, and warning boards are then read, the coordinates of the pixels in the camera coordinate system are calculated, and the pair of point clouds corresponding to the color image and the depth image is computed from the camera parameters.
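The conversion from a registered color/depth image pair to a point cloud can be sketched with the standard pinhole back-projection below; the intrinsics fx, fy, cx, cy stand in for the unspecified camera parameters, and mask is assumed to mark the pixels kept after segmentation.

```python
# Back-project a registered RGB-D pair into a colored point cloud using the
# pinhole model. Intrinsics (fx, fy, cx, cy) stand in for the patent's
# unspecified camera parameters; `mask` marks the retained classes.
import numpy as np

def rgbd_to_pointcloud(color, depth, mask, fx, fy, cx, cy):
    """color: HxWx3 uint8; depth: HxW in meters; mask: HxW bool for the
    pixels kept after segmentation (vehicles, pedestrians, roads, boards)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    keep = mask & (depth > 0)                     # drop invalid depth
    z = depth[keep]
    x = (u[keep] - cx) * z / fx                   # camera-frame X
    y = (v[keep] - cy) * z / fy                   # camera-frame Y
    points = np.stack([x, y, z], axis=1)          # N x 3 coordinates
    colors = color[keep]                          # N x 3 RGB values
    return points, colors
```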
The invention has the following beneficial effects: it solves the problems that traditional image-based methods must build a separate model for each pose and that recognition rates are lowered by pose, lighting, environment, and other factors, and it effectively improves the recognition accuracy for multi-pose pedestrians, roads, and warning boards.
Detailed Description
The following further illustrates the practice of the invention.
The driving assistance recognition method based on the high-precision map comprises the following steps. Step 1: before the automobile starts, a camera inside the car captures a face image of the driver and transmits it to an MCU (microcontroller unit) for identity confirmation, and whether the driver has been drinking is judged at the same time; during driving, pictures are taken at random, the driver's facial state is checked, and whether the driver is driving while fatigued is judged.
step 1, in the process of acquiring a face image by a driver, scanning the face, acquiring the features of the face image and sending the feature information to a main control module; the vehicle-mounted control module generates a vehicle control instruction according to the judgment information sent by the main control module; the information such as the driving account number comprises a driving identity, a borrower account number, a timestamp and a binding account password.
The specific process of randomly taking pictures during driving, checking the driver's facial state, and judging whether the driver is driving while fatigued is as follows.
Suppose feature j in the car belongs to a face, where o denotes an occupant, z denotes a latent topic, and t denotes the position in the vector corresponding to each seat's information; the indices {1, 2, ...} are put in random order by a random permutation. For the face image captured by the camera:
(1) remove every in-car face feature from the topic category to which it is currently assigned (equation (9));
(2) compute the topic distribution (equation (10));
(3) select, through learning, the new topic to which each face in the car belongs (equation (11));
(4) re-add the feature to its newly assigned topic (equation (12));
(5) with the assignments fixed, update the feature vector and construct the constraint (equation (13)).
The geometric constraint satisfies a Gaussian distribution with a given mean and variance. Steps (1) to (5) are repeated for many iterations to obtain an in-car face detection model in which faces and non-faces correspond to different distributions. If the facial expression shows the eyes blinking or closed for a prolonged period, fatigue driving is judged.
The method also comprises adjusting the placement of the camera so that, even in complex scenes, video with as frontal a face image as possible is obtained; when evaluating the quality of the target face image, the frontality, sharpness, illumination intensity, size, and motion-change intensity of the face image are weighted and normalized as the basis of a comprehensive evaluation, and several frames of higher quality are selected for multi-frame super-resolution reconstruction; during multi-frame super-resolution reconstruction, the registration parameters, blur parameters, and the super-resolution image are solved jointly to improve reconstruction accuracy. This is achieved by minimizing a loss function between the reconstructed image and the corresponding high-resolution image to obtain the required estimated parameters, finally realizing super-resolution reconstruction of the face image in the surveillance video.
Step 2: the GIS ensemble-learning module acquires the vehicle's driving state information, surrounding vehicle state information, and road information on the trunk road through a convolutional neural network, realizes pixel-level end-to-end semantic segmentation, optimizes the semantic segmentation with a conditional random field, and converts the color and depth images of vehicles and roads into point-cloud data. The trunk-road vehicle state information comprises speed, acceleration, and position information; the surrounding vehicle state information comprises speed, acceleration, and position information; the road information comprises the road traffic network, the current road speed limit, road warning boards, and road sign information.
The specific process of acquiring the vehicle driving state information, surrounding vehicle state information, and road information on the trunk road through the convolutional neural network is as follows:
a fully convolutional network (FCN) is adopted to realize pixel-level end-to-end semantic segmentation, and the semantic segmentation result is then optimized with a conditional random field to distinguish irrelevant features in the color image, such as the scenery on both sides of the road and the sky. Each pixel carries a category label; with each pixel taken as a node and the connections between neighboring pixels as edges, a conditional random field is formed. The category label of each pixel is inferred from the observation variables by maximizing the conditional probability function:
Here y_i is the observation variable of a single pixel, x_i is its category label, x_j are the category labels of its four adjacent pixels, and Z is a normalization factor. After the pixel-level segmentation of the color image is obtained, the color image is traversed and only the information of vehicles, pedestrians, roads, and warning boards is retained; the corresponding depth image is then traversed and only the depth information of the vehicles, pedestrians, roads, and warning boards is retained. The color information and distance information of the vehicles, pedestrians, roads, and warning boards are then read, the coordinates of the pixels in the camera coordinate system are calculated, and the pair of point clouds corresponding to the color image and the depth image is computed from the camera parameters.
Those skilled in the art will appreciate that the invention may be practiced without these specific details.
In the description herein, references to the description of the terms "one embodiment," "some embodiments," "an exemplary embodiment," "an example," "a specific example," or "some examples" or the like mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. While embodiments of the invention have been shown and described, it will be understood by those skilled in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.

Claims (5)

1. A driving assistance recognition method based on a high-precision map, characterized by comprising the following steps:
step 1, before the automobile starts, a camera inside the car captures a face image of the driver and transmits it to an MCU (microcontroller unit) for identity confirmation, and whether the driver has been drinking is judged at the same time; during driving, pictures are taken at random, the driver's facial state is checked, and whether the driver is driving while fatigued is judged;
step 2, the GIS ensemble-learning module acquires the vehicle's driving state information, surrounding vehicle state information, and road information on the trunk road through a convolutional neural network, realizes pixel-level end-to-end semantic segmentation, optimizes the semantic segmentation with a conditional random field, and converts the color and depth images of vehicles and roads into point-cloud data; the trunk-road vehicle state information comprises speed, acceleration, and position information; the surrounding vehicle state information comprises speed, acceleration, and position information; and the road information comprises the road traffic network, the current road speed limit, road warning boards, and road sign information.
2. The driving assistance recognition method based on the high-precision map as claimed in claim 1, wherein, during the acquisition of the driver's face image in step 1, the face is scanned, features of the face image are acquired, and the feature information is sent to the main control module; the vehicle-mounted control module generates a vehicle control instruction according to the judgment information sent by the main control module; and the driving account information comprises the driver identity, the borrower account, a timestamp, and the bound account password.
3. The driving assistance recognition method based on the high-precision map as claimed in claim 1, wherein the specific process of randomly taking pictures during driving, checking the driver's facial state, and judging whether the driver is driving while fatigued is as follows:
suppose feature j in the car belongs to a face, where o denotes an occupant, z denotes a latent topic, and t denotes the position in the vector corresponding to each seat's information; the indices {1, 2, ...} are put in random order by a random permutation, and for the face image captured by the camera:
(1) remove every in-car face feature from the topic category to which it is currently assigned (equation (9));
(2) compute the topic distribution (equation (10));
(3) select, through learning, the new topic to which each face in the car belongs (equation (11));
(4) re-add the feature to its newly assigned topic (equation (12));
(5) with the assignments fixed, update the feature vector and construct the constraint (equation (13));
the geometric constraint satisfies a Gaussian distribution with a given mean and variance; steps (1) to (5) are repeated for many iterations to obtain an in-car face detection model in which faces and non-faces correspond to different distributions; and if the facial expression shows the eyes blinking or closed for a prolonged period, fatigue driving is judged.
4. The driving assistance recognition method based on the high-precision map according to claim 3, characterized by further comprising: adjusting the placement of the camera so that it acquires, as far as possible, video of a frontal face image even in complex scenes; when evaluating the quality of the target face image, using the weighted and normalized frontality, sharpness, illumination intensity, size, and motion-change intensity of the face image as the basis of a comprehensive evaluation; selecting several frames of higher quality for multi-frame super-resolution reconstruction; and, during multi-frame super-resolution reconstruction, jointly solving the registration parameters, the blur parameters, and the super-resolution image to improve reconstruction accuracy; this is achieved by minimizing a loss function between the reconstructed image and the corresponding high-resolution image to obtain the required estimated parameters, finally realizing super-resolution reconstruction of the face image in the surveillance video.
5. The driving assistance recognition method based on the high-precision map according to claim 1, wherein the specific process of obtaining the vehicle driving state information, the surrounding vehicle state information, and the road information on the trunk road through the convolutional neural network is as follows:
a fully convolutional network (FCN) is adopted to realize pixel-level end-to-end semantic segmentation, and the semantic segmentation result is then optimized with a conditional random field, distinguishing irrelevant features in the color image such as the scenery on both sides of the road and the sky; each pixel carries a category label, and with each pixel taken as a node and the connections between pixels as edges, a conditional random field is formed; the category label of each pixel is inferred from the observation variables by maximizing the conditional probability function,
where y_i is the observation variable of a single pixel, x_i is its category label, x_j are the category labels of its four adjacent pixels, and Z is a normalization factor; after the pixel-level segmentation of the color image is obtained, the color image is traversed and only the information of vehicles, pedestrians, roads, and warning boards is retained; the corresponding depth image is then traversed and only the depth information of the vehicles, pedestrians, roads, and warning boards is retained; the color information and distance information of the vehicles, pedestrians, roads, and warning boards are then read, the coordinates of the pixels in the camera coordinate system are calculated, and the pair of point clouds corresponding to the color image and the depth image is computed from the camera parameters.
CN202011379666.4A (filed 2020-11-30; priority date 2020-11-30): Driving assistance recognition method based on high-precision map; published as CN112487986A; status: Withdrawn.

Priority Applications (1)

Application Number: CN202011379666.4A; Priority Date: 2020-11-30; Filing Date: 2020-11-30; Title: Driving assistance recognition method based on high-precision map

Applications Claiming Priority (1)

Application Number: CN202011379666.4A; Priority Date: 2020-11-30; Filing Date: 2020-11-30; Title: Driving assistance recognition method based on high-precision map

Publications (1)

Publication Number: CN112487986A; Publication Date: 2021-03-12

Family

Family ID: 74937866

Family Applications (1)

Application Number: CN202011379666.4A (published as CN112487986A); Priority Date: 2020-11-30; Filing Date: 2020-11-30; Title: Driving assistance recognition method based on high-precision map

Country Status (1)

Country: CN; Publication: CN112487986A

Cited By (2)

* Cited by examiner, † Cited by third party
CN112989978A * (priority date 2021-03-04; published 2021-06-18; assignee: 扬州微地图地理信息科技有限公司, Yangzhou Micro Map Geography Information Technology Co ltd): Driving assistance recognition method based on high-precision map
CN118694919A * (priority date 2024-08-22; published 2024-09-24; assignee: 深圳市百盛兴业科技有限公司, Shenzhen Baisheng Xingye Technology Co ltd): AI camera image transmission optimization method for vehicle


Similar Documents

DE102018116108B4 Calibration test method for the operation of autonomous vehicles, and vehicle with a controller for executing the method
DE102018121595B4 Unsupervised training of agents for autonomous driving applications
US9384401B2 Method for fog detection
DE102018116107A1 Calibration method for the operation of autonomous vehicles
DE102018121597A1 Ground reference for the operation of autonomous vehicles
CN111898523A Remote sensing image special vehicle target detection method based on transfer learning
CN110910453B Vehicle pose estimation method and system based on non-overlapping view field multi-camera system
DE102019112002A1 Systems and methods for the automatic detection of pending features
DE102018109371A1 Calibration validation for the operation of autonomous vehicles
DE102017122170A1 System and method for placing subjective messages on a vehicle
DE112018000335T5 Systems and methods for a computational framework for visual warning of the driver using a fully convolutional architecture
CN104590130A Rearview mirror self-adaptive adjustment method based on image identification
CN111141311B Evaluation method and system of high-precision map positioning module
CN111860274A Traffic police command gesture recognition method based on head orientation and upper half body skeleton characteristics
CN112487986A Driving assistance recognition method based on high-precision map
WO2022128014A1 Correction of images from a panoramic-view camera system in the case of rain, incident light and contamination
CN110097055A Vehicle attitude detection method and system based on grid convolutional neural networks
WO2022128013A1 Correction of images from a camera in the case of rain, incident light and contamination
CN116152768A Intelligent driving early warning system and method based on road condition identification
CN115810179A Human-vehicle visual perception information fusion method and system
DE102019220335A1 Semantic segmentation using driver attention information
CN110472508A Lane line distance measuring method based on deep learning and binocular vision
CN112989978A Driving assistance recognition method based on high-precision map
CN115588188A Locomotive, vehicle-mounted terminal and driver behavior identification method
CN112836619A Embedded vehicle-mounted far infrared pedestrian detection method, system, equipment and storage medium

Legal Events

PB01: Publication
WW01: Invention patent application withdrawn after publication (application publication date: 2021-03-12)