CN114782916B - ADAS rear-car recognition system based on multi-sensor fusion and carried on rearview mirror

ADAS rear-car recognition system based on multi-sensor fusion and carried on rearview mirror

Info

Publication number
CN114782916B
Authority
CN
China
Prior art keywords
image
determining
module
vehicle
limb
Prior art date
Legal status
Active
Application number
CN202210374465.8A
Other languages
Chinese (zh)
Other versions
CN114782916A (en)
Inventor
苏泳
谭小球
刘柏林
Current Assignee
ULTRONIX PRODUCTS Ltd
Original Assignee
ULTRONIX PRODUCTS Ltd
Priority date
Filing date
Publication date
Application filed by ULTRONIX PRODUCTS Ltd
Priority to CN202210374465.8A
Publication of CN114782916A
Application granted
Publication of CN114782916B
Legal status: Active
Anticipated expiration


Classifications

    • G06F18/23 — Physics; Computing; Electric digital data processing; Pattern recognition; Analysing; Clustering techniques
    • G06F18/25 — Physics; Computing; Electric digital data processing; Pattern recognition; Analysing; Fusion techniques
    • G06F18/251 — Physics; Computing; Electric digital data processing; Pattern recognition; Analysing; Fusion techniques of input or preprocessed data

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an ADAS rear-vehicle recognition system based on multi-sensor fusion and mounted on a rearview mirror, comprising: a shooting module, arranged on the rearview mirror of a target vehicle, for shooting the scene behind the target vehicle to acquire a video data stream; a frame-decoding processing module for performing frame-decoding processing on the video data stream to obtain a plurality of frames of video images; a determining module for performing image segmentation on the video images and determining a vehicle area in each frame of video image; and a display module for determining driving data of the rear vehicle according to pixel information of the vehicle area and displaying the driving data. The environment behind the vehicle is comprehensively perceived, the driving data of the rear vehicle are acquired in time, and the driving state of the rear vehicle is discovered, so that corresponding countermeasures can conveniently be taken according to that driving state and potential safety hazards are eliminated.

Description

ADAS rear-car recognition system based on multi-sensor fusion and carried on rearview mirror
Technical Field
The invention relates to the technical field of environment perception, and in particular to an ADAS rear-vehicle recognition system based on multi-sensor fusion and mounted on a rearview mirror.
Background
An ADAS system uses the various sensors installed on a vehicle to sense the surrounding environment at all times while the vehicle is running, collects data, identifies, detects and tracks static and dynamic objects, and combines this with navigator map data for systematic computation and analysis, enabling the driver to perceive possible danger in advance and effectively increasing driving comfort and safety. In the prior art, the sensing data for the environment behind the vehicle are sparse and incomplete, and the driving data of the vehicle behind are not effectively sensed while the vehicle is running, so the driving state of the rear vehicle cannot be discovered in time and corresponding countermeasures cannot be taken accordingly, which poses a potential safety hazard to the vehicle and its driver.
Disclosure of Invention
The present invention aims to solve, at least to some extent, one of the technical problems in the above-described technology. The invention therefore aims to provide an ADAS rear-vehicle recognition system based on multi-sensor fusion and mounted on a rearview mirror, which comprehensively senses the environment behind the vehicle, acquires the driving data of the rear vehicle in time, and discovers the driving state of the rear vehicle, so that corresponding countermeasures can conveniently be taken according to that driving state and potential safety hazards are eliminated.
To achieve the above objective, an embodiment of the present invention provides an ADAS rear-vehicle recognition system based on multi-sensor fusion and mounted on a rearview mirror, which includes: a shooting module, arranged on the rearview mirror of a target vehicle, for shooting the scene behind the target vehicle to acquire a video data stream; a frame-decoding processing module for performing frame-decoding processing on the video data stream to obtain a plurality of frames of video images; a determining module for performing image segmentation on the video images and determining a vehicle area in each frame of video image; and a display module for determining driving data of the rear vehicle according to pixel information of the vehicle area and displaying the driving data.
According to some embodiments of the invention, the shooting module is an ADAS smart camera.
According to some embodiments of the invention, further comprising:
the first recognition module is used for carrying out binarization processing on the video image to obtain a binary image, and carrying out lane line recognition according to the binary image to obtain first recognition data;
the second recognition module is used for performing graying processing on the video image to obtain a grayscale image, and performing obstacle recognition according to the grayscale image to obtain second recognition data;
the display module is further configured to display the first recognition data and the second recognition data.
According to some embodiments of the invention, the driving data include the high beam and low beam states, vehicle speed, steering state, braking state, and historical driving trajectory.
According to some embodiments of the invention, further comprising:
the prediction module is used for predicting the driving path of the rear vehicle according to the driving data;
and the display module is used for displaying the driving path.
According to some embodiments of the invention, further comprising:
the driving behavior determining module is used for:
determining a driving image in the partial image corresponding to the vehicle region;
inputting the driving image into a pre-trained human body recognition model, and determining head key points and limb key points;
determining a first head image from the head keypoints;
determining a first limb image according to the limb key points;
acquiring an infrared image of a driver on a rear vehicle;
dividing the infrared image into a second head image and a second limb image based on the preset rule that the infrared information of different parts of the human body differs;
matching the first head image with the second head image, and after the matching is successful, carrying out pixel weighting on the first head image and the second head image to realize image fusion processing to obtain a first fusion image;
normalizing the first fusion image and determining an estimated face;
overlapping the estimated face with the real face on the first fused image, and determining overlapping information;
calculating SIFT features of the pixel points on the estimated face, and clustering the pixel points according to the SIFT features to obtain a plurality of clustering sets;
determining the corresponding relation between the estimated face and the real face according to the overlapping information and the plurality of clustering sets, and determining the face characteristics according to the corresponding relation;
matching the first limb image with the second limb image, and after the matching is successful, carrying out pixel weighting on the first limb image and the second limb image to realize image fusion processing to obtain a second fusion image;
inputting the second fusion image into a limb feature extraction model to determine limb features;
constructing a motion trail of the limb based on a plurality of limb characteristics;
determining driving behaviors according to the facial features and the motion trail of the limbs;
and the alarm module is used for matching the driving behavior against the abnormal driving behaviors in a preset abnormal driving behavior database, and sending an alarm prompt when the matching is determined to be successful.
According to some embodiments of the invention, further comprising:
a positioning module for:
acquiring Beidou positioning information and GPS positioning information of the target vehicle;
determining positioning information of a target vehicle based on a Kalman filtering algorithm according to the Beidou positioning information and the GPS positioning information;
a matching module for:
calling a planning map according to the positioning information, and determining a preset road scene image;
performing image segmentation on the video image to determine a scene area of each frame of video image;
carrying out space-time alignment on the preset road scene image and the road scene image corresponding to the scene area;
and matching the preset road scene image and the road scene image in the same time and space, and correcting the preset road scene image and the road scene image according to the matching result.
According to some embodiments of the invention, further comprising: the preprocessing module is used for preprocessing the video image before the determining module performs image segmentation on the video image, and the preprocessing comprises denoising processing and white balance adjustment processing.
According to some embodiments of the invention, further comprising:
the distance sensor is used for sensing the distance information between the target vehicle and the rear vehicle;
and the warning module is used for sending out warning information when the distance information is determined to be smaller than the preset distance information.
According to some embodiments of the invention, the facial features include the emotional state of the driver and the opening degree of the eyes.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention may be realized and attained by the structure particularly pointed out in the written description and drawings.
The technical scheme of the invention is further described in detail through the drawings and the embodiments.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention. In the drawings:
fig. 1 is a block diagram of an ADAS rear-vehicle recognition system based on multi-sensor fusion and mounted on a rearview mirror according to a first embodiment of the present invention;
fig. 2 is a block diagram of an ADAS rear-vehicle recognition system based on multi-sensor fusion and mounted on a rearview mirror according to a second embodiment of the present invention;
fig. 3 is a block diagram of an ADAS rear-vehicle recognition system based on multi-sensor fusion and mounted on a rearview mirror according to a third embodiment of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described below with reference to the accompanying drawings, it being understood that the preferred embodiments described herein are for illustration and explanation of the present invention only, and are not intended to limit the present invention.
As shown in fig. 1, an embodiment of the present invention provides an ADAS rear vehicle recognition system based on multi-sensor fusion and carried by a rear view mirror, including: the shooting module is arranged on a rearview mirror of the target vehicle and is used for shooting a scene behind the target vehicle to acquire a video data stream; the frame-decoding processing module is used for carrying out frame-decoding processing on the video data stream to obtain a plurality of frames of video images; the determining module is used for carrying out image segmentation on the video image and determining a vehicle area of each frame of video image; and the display module is used for determining driving data of the rear vehicle according to the pixel information of the vehicle area and displaying the driving data.
The working principle of the technical scheme is as follows: the shooting module is arranged on a rearview mirror of the target vehicle and is used for shooting a scene behind the target vehicle to acquire a video data stream; the frame-decoding processing module is used for carrying out frame-decoding processing on the video data stream to obtain a plurality of frames of video images; the determining module is used for carrying out image segmentation on the video image and determining a vehicle area of each frame of video image; and the display module is used for determining driving data of the rear vehicle according to the pixel information of the vehicle area and displaying the driving data.
The beneficial effects of the technical scheme are that: the environment behind the vehicle is comprehensively perceived, the driving data of the rear vehicle are acquired in time, and the driving state of the rear vehicle is discovered, so that corresponding countermeasures can conveniently be taken according to that driving state and potential safety hazards are eliminated.
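To make the module chain above concrete, the following minimal sketch shows one way the capture, frame-decoding, segmentation and display steps could be wired together. OpenCV (cv2), the stream source and the placeholder segment_vehicle_region() function are illustrative assumptions; the patent does not name a specific library or segmentation algorithm.

```python
import cv2

def segment_vehicle_region(frame):
    """Hypothetical stand-in for the determining module's image segmentation.
    Returns a bounding box (x, y, w, h) of the rear vehicle, or None."""
    ...  # a real system would run a trained detector/segmenter here

def run_rear_view_pipeline(stream_url):
    cap = cv2.VideoCapture(stream_url)           # shooting module: rear-facing video stream
    while cap.isOpened():
        ok, frame = cap.read()                   # frame-decoding module: one video image
        if not ok:
            break
        region = segment_vehicle_region(frame)   # determining module: vehicle area
        if region is not None:
            x, y, w, h = region
            # display module: driving data would be derived from the region's pixels
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("rear view", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
```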
According to some embodiments of the invention, the shooting module is an ADAS smart camera.
According to some embodiments of the invention, further comprising:
the first recognition module is used for carrying out binarization processing on the video image to obtain a binary image, and carrying out lane line recognition according to the binary image to obtain first recognition data;
the second recognition module is used for performing graying processing on the video image to obtain a grayscale image, and performing obstacle recognition according to the grayscale image to obtain second recognition data;
the display module is further configured to display the first recognition data and the second recognition data.
The working principle of the technical scheme is as follows: the first recognition module is used for carrying out binarization processing on the video image to obtain a binary image, and carrying out lane line recognition according to the binary image to obtain first recognition data; the second identification module is used for carrying out graying treatment on the video image to obtain a gray level image, and carrying out obstacle identification according to the gray level image to obtain second identification data; the display module is further configured to display the first identification data and the second identification data.
The beneficial effects of the technical scheme are that: information such as whether the rear vehicle is running within the lane lines or encountering an obstacle can be acquired in time, the state of the rear vehicle can be determined accurately and comprehensively, and comprehensive perception of the environment behind the target vehicle is achieved.
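The following sketch illustrates the binarization-plus-lane-line step and the graying step described above. Otsu thresholding, Canny edge detection and the probabilistic Hough transform are assumptions chosen for illustration; the patent does not prescribe particular binarization or line-detection algorithms.

```python
import cv2
import numpy as np

def recognize_lane_lines(frame_bgr):
    """First recognition module: binarize, then extract candidate lane-line segments."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    edges = cv2.Canny(binary, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=20)
    return [] if lines is None else lines[:, 0].tolist()   # first recognition data

def gray_image_for_obstacles(frame_bgr):
    """Second recognition module input: the grayscale image an obstacle detector would consume."""
    return cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
```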
According to some embodiments of the invention, the driving data include the high beam and low beam states, vehicle speed, steering state, braking state, and historical driving trajectory.
According to some embodiments of the invention, further comprising:
the prediction module is used for predicting the driving path of the rear vehicle according to the driving data;
and the display module is used for displaying the driving path.
The working principle of the technical scheme is as follows: the prediction module is used for predicting the driving path of the rear vehicle according to the driving data; and the display module is used for displaying the driving path.
The beneficial effects of the technical scheme are that: the driving path of the rear vehicle is predicted from the driving data, so that the driver of the target vehicle can clearly perceive the driving path of the rear vehicle; when the driving path is determined to be abnormal, the driving path of the target vehicle can conveniently be adjusted at any time, reducing traffic accidents.
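The patent does not specify the prediction model used by the prediction module. As a hedged example, the sketch below extrapolates the rear vehicle's historical track (a list of image- or ground-plane positions sampled at a fixed interval) under a constant-velocity assumption.

```python
def predict_path(history, steps=10):
    """history: chronological list of (x, y) positions of the rear vehicle."""
    if len(history) < 2:
        return list(history)                     # not enough data to extrapolate
    (x0, y0), (x1, y1) = history[-2], history[-1]
    vx, vy = x1 - x0, y1 - y0                    # displacement per sampling interval
    return [(x1 + vx * k, y1 + vy * k) for k in range(1, steps + 1)]


# Example: a rear vehicle drifting laterally while closing in
predicted = predict_path([(0.0, 0.0), (1.0, 0.2)], steps=3)
print(predicted)   # three extrapolated positions beyond the last observation
```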
As shown in fig. 2, according to some embodiments of the invention, further comprising:
the driving behavior determining module is used for:
determining a driving image in the partial image corresponding to the vehicle region;
inputting the driving image into a pre-trained human body recognition model, and determining head key points and limb key points;
determining a first head image from the head keypoints;
determining a first limb image according to the limb key points;
acquiring an infrared image of a driver on a rear vehicle;
dividing the infrared image into a second head image and a second limb image based on the preset rule that the infrared information of different parts of the human body differs;
matching the first head image with the second head image, and after the matching is successful, carrying out pixel weighting on the first head image and the second head image to realize image fusion processing to obtain a first fusion image;
normalizing the first fusion image and determining an estimated face;
overlapping the estimated face with the real face on the first fused image, and determining overlapping information;
calculating SIFT features of the pixel points on the estimated face, and clustering the pixel points according to the SIFT features to obtain a plurality of clustering sets;
determining the corresponding relation between the estimated face and the real face according to the overlapping information and the plurality of clustering sets, and determining the face characteristics according to the corresponding relation;
matching the first limb image with the second limb image, and after the matching is successful, carrying out pixel weighting on the first limb image and the second limb image to realize image fusion processing to obtain a second fusion image;
inputting the second fusion image into a limb feature extraction model to determine limb features;
constructing a motion trail of the limb based on a plurality of limb characteristics;
determining driving behaviors according to the facial features and the motion trail of the limbs;
and the alarm module is used for matching the driving behavior against the abnormal driving behaviors in a preset abnormal driving behavior database, and sending an alarm prompt when the matching is determined to be successful.
The working principle of the technical scheme is as follows: the driving behavior determining module is used for: determining a driving image in the partial image corresponding to the vehicle region; inputting the driving image into a pre-trained human body recognition model, and determining head key points and limb key points; determining a first head image from the head keypoints; determining a first limb image according to the limb key points; acquiring an infrared image of a driver on a rear vehicle; dividing the infrared image into a second head image and a second limb image based on different rules of infrared information of different parts of a preset human body; matching the first head image with the second head image, and after the matching is successful, carrying out pixel weighting on the first head image and the second head image to realize image fusion processing to obtain a first fusion image; normalizing the first fusion image and determining an estimated face; overlapping the estimated face with the real face on the first fused image, and determining overlapping information; calculating SIFT features of the pixel points on the estimated face, and clustering the pixel points according to the SIFT features to obtain a plurality of clustering sets; determining the corresponding relation between the estimated face and the real face according to the overlapping information and the plurality of clustering sets, and determining the face characteristics according to the corresponding relation; matching the first limb image with the second limb image, and after the matching is successful, carrying out pixel weighting on the first limb image and the second limb image to realize image fusion processing to obtain a second fusion image; inputting the second fusion image into a limb feature extraction model to determine limb features; constructing a motion trail of the limb based on a plurality of limb characteristics; determining driving behaviors according to the facial features and the motion trail of the limbs; the alarm module is used for matching the driving behavior with the abnormal driving behavior in the preset abnormal driving behavior database, and sending out an alarm prompt when the matching is determined to be successful. The plurality of cluster sets are used for different parts of the human face.
The beneficial effects of the technical scheme are that: the driving image is a visible-light image. Based on the pre-trained human body recognition model, the driving image is segmented and the first head image and the first limb image are determined. An infrared image of the driver of the rear vehicle is acquired by an infrared image acquisition device, and the infrared image is divided into a second head image and a second limb image according to the preset rule that the infrared information of different parts of the human body differs. The first head image is fused with the second head image, and the facial features are accurately extracted from the first fused image. This avoids the loss and inaccuracy of feature information that would result from extracting features from the first head image and the second head image separately; processing the first fused image allows the information in the two images to complement each other. When the facial features are determined, the correspondence between the estimated face and the real face is accurately determined from the overlapping information and the plurality of clustering sets, and the operating parameters of the algorithm are determined accordingly, so that the face in the first fused image is accurately located and the facial features are accurately acquired. The second fused image is input into the limb feature extraction model to determine the limb features; the motion trail of the limbs is constructed from a plurality of limb features; and the driving behavior is accurately determined from the facial features and the motion trail of the limbs. It is then judged whether the driving behavior is an abnormal (dangerous) driving behavior; when it is determined that the driver of the rear vehicle exhibits abnormal driving behavior, an alarm prompt is issued, prompting the driver of the target vehicle to pay closer attention to the situation, take corresponding measures, and eliminate the potential safety hazard.
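Two of the steps named above lend themselves to short illustrations: the pixel-weighted fusion of the visible-light and infrared head (or limb) images, and the clustering of SIFT features on the estimated face. The 0.5/0.5 weights, the SIFT settings and the cluster count below are assumptions; the patent specifies none of these values.

```python
import cv2
import numpy as np

def fuse_images(visible, infrared, w_visible=0.5):
    """Pixel-weighted fusion of an 8-bit BGR visible image with an infrared image."""
    infrared = cv2.resize(infrared, (visible.shape[1], visible.shape[0]))
    if infrared.ndim == 2:                          # expand single-channel IR to 3 channels
        infrared = cv2.cvtColor(infrared, cv2.COLOR_GRAY2BGR)
    return cv2.addWeighted(visible, w_visible, infrared, 1.0 - w_visible, 0)

def cluster_face_sift(face_gray, n_clusters=5):
    """Group SIFT keypoints on the estimated face into a plurality of clustering sets."""
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(face_gray, None)
    if descriptors is None or len(keypoints) < n_clusters:
        return []
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, _ = cv2.kmeans(descriptors.astype(np.float32), n_clusters, None,
                              criteria, 3, cv2.KMEANS_PP_CENTERS)
    clusters = [[] for _ in range(n_clusters)]
    for kp, label in zip(keypoints, labels.ravel()):
        clusters[label].append(kp.pt)               # pixel coordinates grouped by cluster
    return clusters
```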
As shown in fig. 3, according to some embodiments of the invention, further comprising:
a positioning module for:
acquiring Beidou positioning information and GPS positioning information of the target vehicle;
determining positioning information of a target vehicle based on a Kalman filtering algorithm according to the Beidou positioning information and the GPS positioning information;
a matching module for:
calling a planning map according to the positioning information, and determining a preset road scene image;
performing image segmentation on the video image to determine a scene area of each frame of video image;
carrying out space-time alignment on the preset road scene image and the road scene image corresponding to the scene area;
and matching the preset road scene image and the road scene image in the same time and space, and correcting the preset road scene image and the road scene image according to the matching result.
The working principle of the technical scheme is as follows: a positioning module for: acquiring Beidou positioning information and GPS positioning information of the target vehicle; determining positioning information of a target vehicle based on a Kalman filtering algorithm according to the Beidou positioning information and the GPS positioning information; a matching module for: calling a planning map according to the positioning information, and determining a preset road scene image; performing image segmentation on the video image to determine a scene area of each frame of video image; carrying out space-time alignment on the preset road scene image and the road scene image corresponding to the scene area; and matching the preset road scene image and the road scene image in the same time and space, and correcting the preset road scene image and the road scene image according to the matching result. Spatiotemporal alignment refers to alignment in time and space.
The beneficial effects of the technical scheme are that: the preset road scene image can be determined accurately from the positioning information of the target vehicle, so that the actual road scene is perceived accurately based on the preset road scene image; at the same time, the preset road scene image can be corrected according to the captured road scene image, which maintains the precision of the planning map. Accurate perception of the scene behind the target vehicle is thus achieved.
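The positioning module is described only as a Kalman-filter fusion of BeiDou and GPS fixes, without a state model. A minimal sketch under simple assumptions (per-axis constant-position state, illustrative noise variances) is given below; each epoch's BeiDou and GPS fixes are applied as two successive measurement updates.

```python
def kalman_fuse(track_beidou, track_gps, r_beidou=9.0, r_gps=16.0, q=1.0):
    """track_* are equal-length lists of (x, y) fixes; returns one fused (x, y) per epoch."""
    est, var = None, None                          # per-axis state estimate and variance
    fused = []
    for bd, gps in zip(track_beidou, track_gps):
        if est is None:
            est, var = list(bd), [r_beidou, r_beidou]   # initialise from the first BeiDou fix
        else:
            var = [v + q for v in var]             # predict: position roughly constant, add process noise
        for meas, r in ((bd, r_beidou), (gps, r_gps)):
            for axis in (0, 1):                    # scalar measurement update per axis
                k = var[axis] / (var[axis] + r)
                est[axis] += k * (meas[axis] - est[axis])
                var[axis] *= (1.0 - k)
        fused.append(tuple(est))
    return fused


# Example: two epochs of slightly disagreeing BeiDou and GPS fixes
print(kalman_fuse([(10.0, 20.0), (10.5, 20.4)], [(10.2, 20.1), (10.6, 20.5)]))
```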
In an embodiment, before overlapping the estimated face with the real face on the first fused image, further comprising:
determining a central region of the estimated face;
dividing the real face on the first fusion image into a plurality of standard areas;
calculating the matching degree between the central area and each of the plurality of standard areas (the formula itself is not reproduced in the available text; its variables are defined as follows):
wherein P_i is the matching degree between the central area and the i-th standard area; A is the length of the central area and B is the width of the central area; Q_(s,t) is the pixel value of the pixel at row s, column t of the central area, and Q̄ is the pixel mean of the central area; Qⁱ_(s,t) is the pixel value of the pixel at row s, column t of the i-th standard area, and Q̄ⁱ is the pixel mean of the i-th standard area; i = 1, 2, 3, …, M, where M is the number of standard areas;
screening out a standard area corresponding to the maximum matching degree as a target standard area;
and determining an overlapping mode according to the central area and the target standard area, and overlapping the estimated face with the real face on the first fusion image according to the overlapping mode.
The technical scheme has the working principle and beneficial effects that: when the first fusion image is identified, an estimated face is determined, the central area of the estimated face is further determined, the real face on the first fusion image is divided into a plurality of standard areas, and the matching degree between the central area and the plurality of standard areas is calculated respectively; screening out a standard area corresponding to the maximum matching degree as a target standard area; and determining an overlapping mode according to the central area and the target standard area, and overlapping the estimated face with the real face on the first fusion image according to the overlapping mode. Based on the estimated faces, accurate recognition of the true faces on the first fused image is facilitated. The central area is the same size as the standard area. The overlapping mode is that the central area is overlapped with the target standard area, so that accurate overlapping of the estimated face and the real face on the first fusion image is realized, and the image recognition rate is improved. Based on the formula, the matching degree between the central area and the standard areas is conveniently and accurately calculated, and further the target standard area is accurately determined.
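Because the expression for the matching degree P_i is not reproduced in the text, the sketch below assumes a zero-mean normalized correlation over the A x B window, which is consistent with the variables listed above (per-pixel values and the means of both regions) but is an assumption rather than the patented formula.

```python
import numpy as np

def matching_degree(central, standard):
    """central, standard: equal-sized 2-D grayscale arrays (A rows x B columns)."""
    c = central.astype(np.float64) - central.mean()     # Q_(s,t) minus the central-area mean
    d = standard.astype(np.float64) - standard.mean()   # Q^i_(s,t) minus the i-th standard-area mean
    denom = np.sqrt((c * c).sum() * (d * d).sum())
    return 0.0 if denom == 0 else float((c * d).sum() / denom)

def best_standard_region(central, standard_regions):
    # Screen out the standard area with the maximum matching degree as the target standard area
    scores = [matching_degree(central, region) for region in standard_regions]
    best = int(np.argmax(scores))
    return best, scores[best]
```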
According to some embodiments of the invention, further comprising: the preprocessing module is used for preprocessing the video image before the determining module performs image segmentation on the video image, and the preprocessing comprises denoising processing and white balance adjustment processing.
The beneficial effects of the technical scheme are that: the signal-to-noise ratio and clarity of the video image are improved, which further improves the accuracy of image segmentation.
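A hedged sketch of the preprocessing module follows. Non-local-means denoising and a gray-world white balance are illustrative choices; the patent states only that denoising and white balance adjustment are performed.

```python
import cv2
import numpy as np

def preprocess(frame_bgr):
    """Denoise and white-balance one 8-bit BGR video frame."""
    denoised = cv2.fastNlMeansDenoisingColored(frame_bgr, None, 5, 5, 7, 21)
    # Gray-world white balance: scale each channel so its mean matches the global mean
    b, g, r = cv2.split(denoised.astype(np.float32))
    mean_all = (b.mean() + g.mean() + r.mean()) / 3.0
    balanced = cv2.merge([ch * (mean_all / max(ch.mean(), 1e-6)) for ch in (b, g, r)])
    return np.clip(balanced, 0, 255).astype(np.uint8)
```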
According to some embodiments of the invention, further comprising:
the distance sensor is used for sensing the distance information between the target vehicle and the rear vehicle;
and the warning module is used for sending out warning information when the distance information is determined to be smaller than the preset distance information.
The beneficial effects of the technical scheme are that: the driver of the target vehicle can determine the distance to the rear vehicle in time and accurately perceive the situation behind.
According to some embodiments of the invention, the facial features include the emotional state of the driver and the opening degree of the eyes.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (8)

1. An ADAS rear-vehicle recognition system based on multi-sensor fusion and mounted on a rearview mirror, comprising: a shooting module, arranged on the rearview mirror of a target vehicle, for shooting the scene behind the target vehicle to acquire a video data stream; a frame-decoding processing module for performing frame-decoding processing on the video data stream to obtain a plurality of frames of video images; a determining module for performing image segmentation on the video images and determining a vehicle area in each frame of video image; and a display module for determining driving data of a rear vehicle according to pixel information of the vehicle area and displaying the driving data;
the driving data comprise high beam and low beam states, vehicle speed, steering state, braking state and historical driving tracks;
further comprises:
the driving behavior determining module is used for:
determining a driving image in the partial image corresponding to the vehicle region;
inputting the driving image into a pre-trained human body recognition model, and determining head key points and limb key points;
determining a first head image from the head keypoints;
determining a first limb image according to the limb key points;
acquiring an infrared image of a driver on a rear vehicle;
dividing the infrared image into a second head image and a second limb image based on the preset rule that the infrared information of different parts of the human body differs;
matching the first head image with the second head image, and after the matching is successful, carrying out pixel weighting on the first head image and the second head image to realize image fusion processing to obtain a first fusion image;
normalizing the first fusion image and determining an estimated face;
overlapping the estimated face with the real face on the first fused image, and determining overlapping information;
calculating SIFT features of the pixel points on the estimated face, and clustering the pixel points according to the SIFT features to obtain a plurality of clustering sets;
determining the corresponding relation between the estimated face and the real face according to the overlapping information and the plurality of clustering sets, and determining the face characteristics according to the corresponding relation;
matching the first limb image with the second limb image, and after the matching is successful, carrying out pixel weighting on the first limb image and the second limb image to realize image fusion processing to obtain a second fusion image;
inputting the second fusion image into a limb feature extraction model to determine limb features;
constructing a motion trail of the limb based on a plurality of limb characteristics;
determining driving behaviors according to the facial features and the motion trail of the limbs;
and the alarm module is used for matching the driving behavior against the abnormal driving behaviors in a preset abnormal driving behavior database, and sending an alarm prompt when the matching is determined to be successful.
2. The ADAS rear-vehicle recognition system based on multi-sensor fusion and mounted on a rearview mirror according to claim 1, wherein the shooting module is an ADAS smart camera.
3. The ADAS rear-vehicle recognition system based on multi-sensor fusion and mounted on a rearview mirror according to claim 1, further comprising:
the first recognition module is used for carrying out binarization processing on the video image to obtain a binary image, and carrying out lane line recognition according to the binary image to obtain first recognition data;
the second recognition module is used for performing graying processing on the video image to obtain a grayscale image, and performing obstacle recognition according to the grayscale image to obtain second recognition data;
the display module is further configured to display the first recognition data and the second recognition data.
4. The ADAS rear-vehicle recognition system based on multi-sensor fusion and mounted on a rearview mirror according to claim 1, further comprising:
the prediction module is used for predicting the driving path of the rear vehicle according to the driving data;
and the display module is used for displaying the driving path.
5. The ADAS rear-vehicle recognition system based on multi-sensor fusion and mounted on a rearview mirror according to claim 1, further comprising:
a positioning module for:
acquiring Beidou positioning information and GPS positioning information of the target vehicle;
determining positioning information of a target vehicle based on a Kalman filtering algorithm according to the Beidou positioning information and the GPS positioning information;
a matching module for:
calling a planning map according to the positioning information, and determining a preset road scene image;
performing image segmentation on the video image to determine a scene area of each frame of video image;
carrying out space-time alignment on the preset road scene image and the road scene image corresponding to the scene area;
and matching the preset road scene image and the road scene image in the same time and space, and correcting the preset road scene image and the road scene image according to the matching result.
6. The ADAS rear-vehicle recognition system based on multi-sensor fusion and mounted on a rearview mirror according to claim 1, further comprising: the preprocessing module is used for preprocessing the video image before the determining module performs image segmentation on the video image, and the preprocessing comprises denoising processing and white balance adjustment processing.
7. The ADAS rear-vehicle recognition system based on multi-sensor fusion and mounted on a rearview mirror according to claim 1, further comprising:
the distance sensor is used for sensing the distance information between the target vehicle and the rear vehicle;
and the warning module is used for sending out warning information when the distance information is determined to be smaller than the preset distance information.
8. The ADAS rear-vehicle recognition system based on multi-sensor fusion and mounted on a rearview mirror according to claim 1, wherein the facial features include the emotional state of the driver and the opening and closing degree of the eyes.
CN202210374465.8A 2022-04-11 2022-04-11 ADAS rear-car recognition system based on multi-sensor fusion and carried on rearview mirror Active CN114782916B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210374465.8A CN114782916B (en) 2022-04-11 2022-04-11 ADAS rear-car recognition system based on multi-sensor fusion and carried on rearview mirror

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210374465.8A CN114782916B (en) 2022-04-11 2022-04-11 ADAS rear-car recognition system based on multi-sensor fusion and carried on rearview mirror

Publications (2)

Publication Number Publication Date
CN114782916A CN114782916A (en) 2022-07-22
CN114782916B true CN114782916B (en) 2024-03-29

Family

ID=82429151

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210374465.8A Active CN114782916B (en) 2022-04-11 2022-04-11 ADAS rear-car recognition system based on multi-sensor fusion and carried on rearview mirror

Country Status (1)

Country Link
CN (1) CN114782916B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105676253A (en) * 2016-01-15 2016-06-15 武汉光庭科技有限公司 Longitudinal positioning system and method based on city road marking map in automatic driving
CN105912984A (en) * 2016-03-31 2016-08-31 大连楼兰科技股份有限公司 Auxiliary driving method capable of realizing multi-state information fusion
CN106599832A (en) * 2016-12-09 2017-04-26 重庆邮电大学 Method for detecting and recognizing various types of obstacles based on convolution neural network
CN108596064A (en) * 2018-04-13 2018-09-28 长安大学 Driver based on Multi-information acquisition bows operating handset behavioral value method
CN109325388A (en) * 2017-07-31 2019-02-12 比亚迪股份有限公司 Recognition methods, system and the automobile of lane line
CN109318799A (en) * 2017-07-31 2019-02-12 比亚迪股份有限公司 Automobile, automobile ADAS system and its control method
CN111178161A (en) * 2019-12-12 2020-05-19 重庆邮电大学 Vehicle tracking method and system based on FCOS
CN112130153A (en) * 2020-09-23 2020-12-25 的卢技术有限公司 Method for realizing edge detection of unmanned vehicle based on millimeter wave radar and camera
CN113205686A (en) * 2021-06-04 2021-08-03 华中科技大学 360-degree panoramic wireless safety auxiliary system for rear loading of motor vehicle

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8953044B2 (en) * 2011-10-05 2015-02-10 Xerox Corporation Multi-resolution video analysis and key feature preserving video reduction strategy for (real-time) vehicle tracking and speed enforcement systems
US10909845B2 (en) * 2013-07-01 2021-02-02 Conduent Business Services, Llc System and method for enhancing images and video frames

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105676253A (en) * 2016-01-15 2016-06-15 武汉光庭科技有限公司 Longitudinal positioning system and method based on city road marking map in automatic driving
CN105912984A (en) * 2016-03-31 2016-08-31 大连楼兰科技股份有限公司 Auxiliary driving method capable of realizing multi-state information fusion
CN106599832A (en) * 2016-12-09 2017-04-26 重庆邮电大学 Method for detecting and recognizing various types of obstacles based on convolution neural network
CN109325388A (en) * 2017-07-31 2019-02-12 比亚迪股份有限公司 Recognition methods, system and the automobile of lane line
CN109318799A (en) * 2017-07-31 2019-02-12 比亚迪股份有限公司 Automobile, automobile ADAS system and its control method
CN108596064A (en) * 2018-04-13 2018-09-28 长安大学 Driver based on Multi-information acquisition bows operating handset behavioral value method
CN111178161A (en) * 2019-12-12 2020-05-19 重庆邮电大学 Vehicle tracking method and system based on FCOS
CN112130153A (en) * 2020-09-23 2020-12-25 的卢技术有限公司 Method for realizing edge detection of unmanned vehicle based on millimeter wave radar and camera
CN113205686A (en) * 2021-06-04 2021-08-03 华中科技大学 360-degree panoramic wireless safety auxiliary system for rear loading of motor vehicle

Also Published As

Publication number Publication date
CN114782916A (en) 2022-07-22

Similar Documents

Publication Publication Date Title
CN108960183B (en) Curve target identification system and method based on multi-sensor fusion
Possatti et al. Traffic light recognition using deep learning and prior maps for autonomous cars
CN105711597B (en) Front locally travels context aware systems and method
US8634593B2 (en) Pixel-based texture-less clear path detection
EP1817761B1 (en) Apparatus and method for automatically detecting objects
US6819779B1 (en) Lane detection system and apparatus
US8332134B2 (en) Three-dimensional LIDAR-based clear path detection
WO2019169031A1 (en) Method for determining driving policy
KR101999993B1 (en) Automatic traffic enforcement system using radar and camera
US20030002713A1 (en) Vision-based highway overhead structure detection system
US20030083790A1 (en) Vehicle information providing apparatus
US20090268948A1 (en) Pixel-based texture-rich clear path detection
EP1686538A2 (en) Vehicle position recognizing device and vehicle position recognizing method
US9352746B2 (en) Lane relative position estimation method and system for driver assistance systems
CN102792314A (en) Cross traffic collision alert system
CN109686031A (en) Identification follower method based on security protection
KR102388806B1 (en) System for deciding driving situation of vehicle
CN114419098A (en) Moving target trajectory prediction method and device based on visual transformation
CN112622765B (en) Full-time vision auxiliary rearview mirror system
CN113071500A (en) Method and device for acquiring lane line, computer equipment and storage medium
JP4967758B2 (en) Object movement detection method and detection apparatus
DE102019109491A1 (en) DATA PROCESSING DEVICE, MONITORING SYSTEM, WECKSYSTEM, DATA PROCESSING METHOD AND DATA PROCESSING PROGRAM
CN114782916B (en) ADAS rear-car recognition system based on multi-sensor fusion and carried on rearview mirror
CN117292346A (en) Vehicle running risk early warning method for driver and vehicle state integrated sensing
CN116012822B (en) Fatigue driving identification method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant