CN112298039A - A-column imaging method - Google Patents


Info

Publication number
CN112298039A
CN112298039A (application CN202011031799.2A)
Authority
CN
China
Prior art keywords
driver
pillar
column
image
monitoring camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011031799.2A
Other languages
Chinese (zh)
Inventor
盛大宁
张祺
戴大力
李晨轩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Hozon New Energy Automobile Co Ltd
Original Assignee
Zhejiang Hozon New Energy Automobile Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Hozon New Energy Automobile Co Ltd filed Critical Zhejiang Hozon New Energy Automobile Co Ltd
Priority to CN202011031799.2A priority Critical patent/CN112298039A/en
Priority to PCT/CN2020/121744 priority patent/WO2022061999A1/en
Publication of CN112298039A publication Critical patent/CN112298039A/en
Pending legal-status Critical Current

Classifications

    • B — PERFORMING OPERATIONS; TRANSPORTING
    • B60 — VEHICLES IN GENERAL
    • B60R — VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00 — Optical viewing arrangements; real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R2300/00 — Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/202 — characterised by the type of display used: displaying a blind spot scene on the vehicle part responsible for the blind spot
    • B60R2300/301 — characterised by the type of image processing: combining image information with other obstacle sensor information, e.g. using RADAR/LIDAR/SONAR sensors for estimating risk of collision
    • B60R2300/303 — characterised by the type of image processing: using joined images, e.g. multiple camera images
    • B60R2300/802 — characterised by the intended use of the viewing arrangement: monitoring and displaying vehicle exterior blind spot views

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Mechanical Engineering (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

An A-pillar imaging method belongs to the technical field of driving imagery. The method is implemented on an A-pillar imaging system and comprises the following steps. Step S01: calculate the coordinates of the driver's head position in the automobile coordinate system from in-vehicle images captured by the eyebrow-center monitoring camera, which is mounted on the steering column of the steering wheel. Step S02: track the driver's eyes with the eyebrow-center monitoring camera, calculate the three-dimensional coordinates of the driver's eyebrow center in the world coordinate system, and obtain the driver's line-of-sight trajectory. Step S03: construct three-dimensional image information containing the obstacles in the A-pillar blind area, using a three-dimensional reconstruction algorithm applied to the forward images captured by the two A-pillar cameras. Step S04: project the three-dimensional image information from step S03 onto the plane normal to the driver's line of sight from step S02, obtain an image identical to what the driver's eyes would see, and display it on the A-pillar flexible display screen. The invention achieves a "transparent" A-pillar effect, dynamically adjusting the displayed picture according to the driver's line of sight so that the display matches human vision.

Description

A-pillar imaging method
Technical Field
The invention belongs to the technical field of driving images, and particularly relates to an A-pillar imaging method.
Background
The A-pillar is the pillar between the front windshield and the front door of the car body. It gives the body greater stability and rigidity and plays an important role in protecting the safety of the driver and passengers. At the same time, the A-pillar creates a visual blind area. A common remedy is to mount cameras on the left and right rearview mirrors and display the A-pillar blind-area picture in real time on a screen on the inner side of the A-pillar, thereby eliminating the blind area.
Typically the camera picture is cropped to some extent and then shown on the screen, but owing to factors such as the driver's height and sitting posture and the distance of obstacles, the on-screen picture may deviate considerably in shape and size from what the driver's eyes actually see. To achieve a truly "transparent" effect, the trajectory of the driver's line of sight and the distance of obstacles in the A-pillar blind area must be monitored on top of the existing software and hardware, the screen's display dynamically adjusted according to the line of sight, and the influence of the driver's sitting posture, obstacle distance, and similar factors reduced.
Invention patent application CN201910440232.1 discloses an A-pillar blind-area auxiliary vision system based on eyeball-tracking technology. Its method comprises: S1, an eyeball-tracking unit locates the driver's eyes and sends the positioning information to an ECU; S2, the ECU receives the positioning information and, according to it, controls an external camera unit to move and collect road-condition information of the A-pillar blind area; S3, the external camera unit, controlled by the ECU, moves with the driver's gaze to collect that information; S4, an in-cabin display unit displays the collected road-condition information. Although that invention uses eyeball tracking to acquire blind-area images, the color, distortion, and brightness of the acquired images cannot be matched to what the driver's eyes see.
Disclosure of Invention
To address these problems in the prior art, the invention provides an A-pillar imaging method in which the color, distortion, and brightness of the picture displayed on the A-pillar flexible display screen match what the driver's eyes see, achieving a "transparent" A-pillar effect.
The invention is realized by the following technical scheme:
An A-pillar imaging method, implemented on an A-pillar imaging system comprising two A-pillar cameras, an eyebrow-center monitoring camera, an A-pillar flexible display screen, and a control device, the eyebrow-center monitoring camera being mounted on the steering column of the steering wheel. The method runs on the control device and comprises the following steps:
Step S01: calculate the coordinates of the driver's head position in the automobile coordinate system from the in-vehicle images captured by the eyebrow-center monitoring camera;
Step S02: track the driver's eyes with the eyebrow-center monitoring camera, calculate the three-dimensional coordinates of the driver's eyebrow center in the world coordinate system, and obtain the driver's line-of-sight trajectory;
Step S03: construct three-dimensional image information containing the A-pillar blind-area obstacles, using a three-dimensional reconstruction algorithm applied to the forward images captured by the two A-pillar cameras;
Step S04: project the three-dimensional image information from step S03 onto the plane normal to the driver's line of sight from step S02, obtain an image identical to what the driver's eyes would see, and display it on the A-pillar flexible display screen.
In the invention, A-pillar cameras mounted beside the two A-pillars collect images of the scene ahead of the vehicle, while the eyebrow-center monitoring camera inside the vehicle collects in-vehicle images and tracks the driver's eyes. On this basis, the trajectory of the driver's line of sight and the distance of obstacles in the A-pillar blind area are monitored, the on-screen picture is dynamically adjusted according to the line of sight, and the influence of factors such as the driver's sitting posture and obstacle distance is reduced. The result is a displayed picture whose color, distortion, and brightness match human vision, achieving the "transparent" A-pillar effect.
Preferably, step S01 specifically includes:
Step S11: from the in-vehicle image captured by the eyebrow-center monitoring camera, calculate the position, in that camera's picture, of a reference point on the driver-side B-pillar;
Step S12: calculate the coordinates of the eyebrow-center monitoring camera in the automobile coordinate system from the reference point's position in the automobile coordinate system and its position in the camera's picture;
Step S13: calculate the coordinates of the driver's head position in the automobile coordinate system from the coordinates of the eyebrow-center monitoring camera in the automobile coordinate system.
Preferably, the reference point is a point on the driver-side B-pillar that can be captured by the eyebrow-center monitoring camera and is not blocked by the driver.
Preferably, the eyebrow-center monitoring camera is a binocular camera, and step S02 specifically includes:
Step S21: track the driver's eyes with the binocular camera and calculate the three-dimensional coordinates of the driver's eyebrow center in the world coordinate system;
Step S22: derive the driver's line of sight from the three-dimensional coordinates of the eyebrow center, obtaining the driver's line-of-sight trajectory.
Preferably, step S03 specifically includes: constructing three-dimensional image information containing the A-pillar blind-area obstacles, using a three-dimensional reconstruction algorithm applied to the forward images and depth data acquired by the two A-pillar cameras; the depth data includes the imaging size on the A-pillar flexible display screen, computed from the focal length of the human eye, the focal length of the A-pillar camera, and the obstacle distance.
Preferably, the obstacle distance is obtained as follows: from the forward images collected by the two A-pillar cameras, compute the obstacle distance using a single-view depth-estimation algorithm.
Preferably, the A-pillar imaging system further comprises a ranging sensor for acquiring the distance of obstacles outside the A-pillar.
Preferably, step S04 specifically includes:
Step S41: take the plane normal to the driver's line of sight, acquired in real time in step S02, as the projection plane;
Step S42: project the three-dimensional image information from step S03 onto the projection plane, obtain an image identical to what the driver's eyes would see, and display it on the A-pillar flexible display screen.
Preferably, step S04 further includes cropping part of the eye-matched image before displaying it on the A-pillar flexible display screen.
Preferably, cropping part of the eye-matched image comprises:
when the real-time vehicle speed is detected to be no greater than a speed threshold, cropping from the eye-matched image the maximum image within the range blocked from the driver's eyes;
when the real-time vehicle speed is detected to be greater than the speed threshold, cropping from the eye-matched image a partial image within that range, the partial image being smaller than the maximum image.
The A-pillar imaging method of the invention has the following beneficial effects:
(1) the color, distortion, and brightness of the picture displayed on the screen match what the driver's eyes see, achieving the "transparent" A-pillar effect;
(2) the driver's line-of-sight trajectory is obtained by binocular eyebrow-center tracking, and the A-pillar screen's imaging is adjusted in real time according to that trajectory, ensuring that the imaging angle and depth match the real world as seen by the driver, without distortion or offset, so that the driver feels as if looking out through a wide "window";
(3) the scene seen by the driver's eyes is obtained through a three-dimensional reconstruction algorithm and a viewing-angle transformation algorithm based on the eyebrow-center position, achieving a true "transparent" effect.
Drawings
FIG. 1 is a flow chart of an A-pillar imaging method of the present invention;
FIG. 2 is a schematic view of an eyebrow monitoring camera disposed in a vehicle; wherein the X point is a reference point;
FIG. 3 is a schematic view of different focal length optics imaging;
FIG. 4 is a diagram showing a state simulation of an obstacle at a viewing angle of an A-pillar camera;
FIG. 5 is a simulation diagram of the state of an obstacle at the angle of view of human eyes;
FIG. 6 is a schematic of a three-dimensional reconstruction;
FIG. 7 is a schematic view of the A-pillar camera and the field of view of the human eye blocked by the A-pillar;
X — reference point; 5 — camera position; 6 — human-eye position; 7 — camera optical-axis position.
Detailed Description
The following are specific embodiments of the present invention and are further described with reference to the drawings, but the present invention is not limited to these embodiments.
The A-pillar imaging method is implemented on an A-pillar imaging system comprising two A-pillar cameras, an eyebrow-center monitoring camera, an A-pillar flexible display screen, and a control device. The two A-pillar cameras are mounted beside the two A-pillars, for example at the tops of the A-pillars or on the left and right rearview mirrors. The eyebrow-center monitoring camera is mounted on the steering column of the steering wheel. Images collected by the two A-pillar cameras and the eyebrow-center monitoring camera are sent to the control device, which derives images matching human vision and displays them on the A-pillar flexible display screen. The A-pillar flexible display screen is an OLED display.
As shown in FIG. 1, the A-pillar imaging method of the invention runs on the control device and comprises:
Step S01: calculate the coordinates of the driver's head position in the automobile coordinate system from the in-vehicle images captured by the eyebrow-center monitoring camera;
Step S02: track the driver's eyes with the eyebrow-center monitoring camera, calculate the three-dimensional coordinates of the driver's eyebrow center in the world coordinate system, and obtain the driver's line-of-sight trajectory;
Step S03: construct three-dimensional image information containing the A-pillar blind-area obstacles, using a three-dimensional reconstruction algorithm applied to the forward images captured by the two A-pillar cameras;
Step S04: project the three-dimensional image information from step S03 onto the plane normal to the driver's line of sight from step S02, obtain an image identical to what the driver's eyes would see, and display it on the A-pillar flexible display screen.
The order of steps S02 and S03 is not fixed: they may be executed simultaneously, or step S03 may be executed before step S02.
Step S01 specifically includes:
Step S11: from the in-vehicle image captured by the eyebrow-center monitoring camera, calculate the position, in that camera's picture, of a reference point on the driver-side B-pillar;
Step S12: calculate the coordinates of the eyebrow-center monitoring camera in the automobile coordinate system from the reference point's position in the automobile coordinate system and its position in the camera's picture;
Step S13: calculate the coordinates of the driver's head position in the automobile coordinate system from the coordinates of the eyebrow-center monitoring camera in the automobile coordinate system.
The driver's eyebrow-center coordinates detected by the eyebrow-center monitoring camera are expressed in the camera coordinate system; converting them into the automobile coordinate system for other applications requires knowing the camera's own coordinates in the automobile coordinate system. Because the steering column of the steering wheel generally provides four mechanical adjustments (up, down, forward, backward), the coordinates of the eyebrow-center monitoring camera on the steering column cannot be fixed in advance. Step S01 therefore infers the camera's position backwards from the position of a reference point, and from that obtains the driver's head position.
The camera on the steering column can see a reference point X on the driver-side B-pillar that is never blocked by the driver. An image algorithm detects the exact position of this fixed marker in the camera picture; from the known position of X in the automobile coordinate system, the camera's coordinates in that system are inferred backwards, and finally the driver's head position in the automobile coordinate system is obtained.
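The back-inference of step S12 can be sketched as follows. This is a minimal illustration under strong assumptions not stated in the patent: a pinhole camera whose rotation relative to the automobile frame is known and here taken as the identity, a known lateral offset of the steering column, and illustrative intrinsics and reference-point coordinates. The patent's actual image algorithm is not disclosed.

```python
import numpy as np

# Illustrative intrinsics for the eyebrow-center monitoring camera
# (assumed values, not taken from the patent).
fx, fy = 800.0, 800.0     # focal lengths in pixels
u0, v0 = 640.0, 360.0     # principal point

# Reference point X on the driver-side B-pillar, known in the
# automobile coordinate system (metres; assumed values).
X_car = np.array([1.2, -0.7, 1.1])

# The steering column only adjusts up/down and fore/aft, so the camera
# rotation is treated as fixed and, for this sketch, as the identity;
# the column's lateral coordinate is treated as known.
R = np.eye(3)
LATERAL = -0.4  # known lateral coordinate of the camera (assumed)

def camera_position_from_reference(u, v):
    """Sketch of step S12: recover the camera centre C in the automobile
    frame from the pixel (u, v) at which reference point X is observed.
    C lies on the ray C = X_car - s * d; the known lateral coordinate of
    the column fixes the scale s, and the remaining coordinates follow."""
    d = R.T @ np.array([(u - u0) / fx, (v - v0) / fy, 1.0])  # viewing ray
    s = (X_car[1] - LATERAL) / d[1]
    return X_car - s * d
```

With one reference point, only two unknown translation components can be recovered, which matches the up/down and fore/aft freedom of the column; a real implementation with unknown rotation would need more reference points (a PnP-style solve).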
During driving, because of factors such as the driver's height and sitting posture, the driver's viewing angle is not fixed, so the scene beyond the A-pillar differs each time the driver looks toward it. The picture shown on the screen inside the A-pillar must change accordingly; otherwise the on-screen display and the scene seen by the driver's eyes will be severely distorted or offset. Step S02 solves this problem. The eyebrow-center monitoring camera is a binocular camera, and step S02 specifically includes:
Step S21: track the driver's eyes with the binocular camera and calculate the three-dimensional coordinates of the driver's eyebrow center in the world coordinate system;
Step S22: derive the driver's line of sight from the three-dimensional coordinates of the eyebrow center, obtaining the driver's line-of-sight trajectory.
The binocular camera thus tracks the driver's eyes, calculates the three-dimensional coordinates of the eyebrow center in the world coordinate system, and obtains the driver's line-of-sight trajectory. The A-pillar screen's imaging is adjusted in real time according to that trajectory, so the imaging angle and depth match the real world as seen by the driver, without distortion or offset, and the driver feels as if looking out through a wide "window".
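The triangulation in step S21 can be sketched with the standard rectified-stereo relations. The intrinsics and baseline below are illustrative assumptions; the patent does not specify the binocular camera's parameters.

```python
import numpy as np

# Rectified stereo pair: focal length f (pixels) and baseline b (metres)
# are illustrative values, not taken from the patent.
f, b = 700.0, 0.06
cx, cy = 320.0, 240.0  # principal point

def triangulate_glabella(uL, vL, uR):
    """Sketch of step S21: 3D position of the eyebrow center (glabella)
    from its pixel position in the left image (uL, vL) and its horizontal
    position uR in the right image of a rectified binocular camera."""
    disparity = uL - uR
    Z = f * b / disparity          # depth along the optical axis
    X = (uL - cx) * Z / f          # lateral offset
    Y = (vL - cy) * Z / f          # vertical offset
    return np.array([X, Y, Z])
```

The resulting camera-frame point would still be transformed into the world (or automobile) coordinate system using the camera pose recovered in step S01.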
As can be seen from FIG. 4 and FIG. 5, the camera's line of sight and the driver's line of sight do not coincide, so in the world coordinate system the normal vector of the driver's visual plane and the normal vector of the camera's imaging plane differ, and the "transparent" A-pillar effect cannot be achieved directly.
The usual remedy is to adjust the camera's mounting position and angle, but since drivers differ in height and sitting posture, a fixed mounting cannot satisfy every driver, and a motorized adjustment similar to a power seat would compromise driving safety. We therefore propose the three-dimensional reconstruction of step S03, which comprises: constructing three-dimensional image information containing the A-pillar blind-area obstacles, using a three-dimensional reconstruction algorithm applied to the forward images and depth data acquired by the two A-pillar cameras. The depth data includes the imaging size on the A-pillar flexible display screen, computed from the focal length of the human eye, the focal length of the A-pillar camera, and the obstacle distance. Specifically, the three-dimensional information of the blind-area obstacles is recovered through preprocessing of the two A-pillar camera images and depth data, point-cloud calculation, feature extraction, point-cloud registration, data fusion, and surface generation; the three-dimensional world information is then perspective-transformed onto the plane of human vision according to the camera's focal length, mounting position and angle, and the direction of the driver's line of sight, yielding an image identical to what the eyes would see, which is displayed on the A-pillar screen (see FIG. 6).
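The point-cloud-calculation step above can be sketched by back-projecting a disparity map from the two A-pillar cameras, assuming a rectified stereo pair. The intrinsics are illustrative; the patent names the pipeline stages but not their implementation.

```python
import numpy as np

def disparity_to_point_cloud(disparity, f, baseline, cx, cy):
    """Sketch of the point-cloud-calculation stage of step S03:
    back-project a dense disparity map (pixels) from the rectified
    A-pillar stereo pair into a 3D point cloud in the camera frame."""
    h, w = disparity.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))  # pixel grid
    valid = disparity > 0                             # skip unmatched pixels
    Z = np.where(valid, f * baseline / np.maximum(disparity, 1e-6), 0.0)
    X = (us - cx) * Z / f
    Y = (vs - cy) * Z / f
    return np.stack([X, Y, Z], axis=-1)[valid]        # (N, 3) points
```

The later stages (feature extraction, registration, fusion, surface generation) would operate on clouds like this one from both cameras.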
FIG. 3 is a schematic of two optical devices imaging at different focal lengths. When the focal lengths differ greatly, so do the imaging sizes. The human eye can be treated as an optical device with its own focal length; when the camera's focal length differs from the eye's, objects at different distances produce different imaging-size ratios. As shown in FIG. 3, the imaging ratio of the same object on the upper and lower devices is 27:59 at object distance 1 and 55:195 at object distance 2. The camera images of objects at different distances must therefore be scaled by different factors on the A-pillar display screen to match the human visual system, so that object shapes and sizes remain consistent.
Accordingly, the obstacle distance is estimated by a ranging method, and the size of the A-pillar screen image is then converted according to the eye's focal length, the camera's focal length, and the obstacle distance. The obstacle distance is obtained either from the forward images collected by the two A-pillar cameras using a single-view depth-estimation algorithm, or from a ranging sensor included in the A-pillar imaging system for measuring obstacles outside the A-pillar; the ranging sensor may be, without limitation, a millimeter-wave radar, lidar, ultrasonic radar, or depth camera.
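The size conversion can be sketched with the pinhole approximation, under which on-sensor image size scales as focal length over distance. The function and its parameters are illustrative; the patent states only that the conversion uses the eye's focal length, the camera's focal length, and the obstacle distance, and the eye and the camera generally view the obstacle from different distances.

```python
def image_size(focal_length, object_size, distance):
    """Pinhole approximation: image size of an object of the given
    physical size at the given distance (all in consistent units)."""
    return focal_length * object_size / distance

def rescale_factor(f_eye, d_eye, f_cam, d_cam):
    """Sketch of the per-object scale applied to the camera picture so
    the on-screen size matches what the eye would see: the eye images
    the object at f_eye * S / d_eye, the camera at f_cam * S / d_cam."""
    return image_size(f_eye, 1.0, d_eye) / image_size(f_cam, 1.0, d_cam)
```

For example, with an assumed eye focal length of 17 mm, an assumed camera focal length of 4 mm, and eye/camera distances of 5.0 m and 4.8 m to the obstacle, the camera image of that obstacle would be enlarged about 4x.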
Step S04 specifically includes:
Step S41: take the plane normal to the driver's line of sight, acquired in real time in step S02, as the projection plane;
Step S42: project the three-dimensional image information from step S03 onto the projection plane, obtain an image identical to what the driver's eyes would see, and display it on the A-pillar flexible display screen.
Because drivers differ in height and sitting posture, their viewing angles toward the same obstacle differ. During the perspective transformation, the algorithm must project the three-dimensional world information onto the plane on which the driver's eyes actually image; otherwise the transparent effect cannot be achieved. Since drivers frequently turn their heads or adjust their posture while driving, that imaging plane changes constantly, and the plane used for the perspective transformation must be adjusted in real time. Step S41 therefore takes the driver's line of sight detected in real time in step S02 and updates the projection plane accordingly; the three-dimensional image information from step S03 is then projected onto that plane to obtain the scene as seen by the driver's eyes, achieving a true "transparent" effect.
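The projection of steps S41–S42 can be sketched as a perspective projection onto the plane normal to the gaze. Everything here is an illustrative assumption (the eye focal length, the choice of the car's z-axis as the up hint, and a non-vertical gaze); the patent does not disclose the transformation's implementation.

```python
import numpy as np

def project_to_view_plane(points, eye, gaze_dir, f_eye):
    """Sketch of steps S41-S42: perspective-project 3D points (automobile
    frame) onto the plane normal to the driver's line of sight. Returns
    2D coordinates in an orthonormal (right, up) basis built around the
    gaze direction. Assumes the gaze is not vertical."""
    g = gaze_dir / np.linalg.norm(gaze_dir)
    up_hint = np.array([0.0, 0.0, 1.0])   # car's up axis (assumption)
    right = np.cross(g, up_hint)
    right /= np.linalg.norm(right)
    up = np.cross(right, g)
    rel = points - eye
    depth = rel @ g                        # distance along the gaze
    u = f_eye * (rel @ right) / depth      # perspective divide
    v = f_eye * (rel @ up) / depth
    return np.stack([u, v], axis=1)
```

Re-evaluating this with the eye position and gaze from step S02 on every frame is what keeps the display aligned as the driver moves.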
Step S04 further includes cropping part of the eye-matched image before displaying it on the A-pillar flexible display screen.
Cropping part of the eye-matched image comprises:
when the real-time vehicle speed is detected to be no greater than a speed threshold, cropping from the eye-matched image the maximum image within the range blocked from the driver's eyes;
when the real-time vehicle speed is detected to be greater than the speed threshold, cropping from the eye-matched image a partial image within that range, the partial image being smaller than the maximum image.
Referring to FIG. 7, when the vehicle speed is low, the camera's field of view (1-4) is larger than the range of the driver's view blocked by the A-pillar, so the 2-3 portion is cropped and displayed on the screen as the maximum image. When the vehicle speed exceeds the threshold, the delay introduced by algorithm processing and software execution becomes significant, so for the sake of the transparent effect a portion A-B toward the front of the camera picture is cropped as the partial image and displayed on the screen.
This step improves the fidelity of the image cropped from the camera picture and shown on the screen inside the A-pillar, improving the user experience.
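The speed-dependent cropping can be sketched as follows. The latency compensation shown (shifting the crop forward by speed times processing delay) is one plausible reading of the A-B crop in FIG. 7, and every numeric parameter is an illustrative assumption, not a value from the patent.

```python
def crop_window(speed_mps, latency_s, local_span_m, full_span_m,
                speed_threshold_mps=16.7):
    """Sketch of the crop choice after step S04: returns the (start, end)
    horizontal slice, in metres at the obstacle plane, of the eye-matched
    image to display. At or below the threshold, show the full span
    blocked by the A-pillar (the 2-3 region of FIG. 7); above it, show a
    narrower slice shifted forward by speed * latency (the A-B region)
    to compensate the processing delay."""
    if speed_mps <= speed_threshold_mps:
        return (0.0, full_span_m)
    shift = speed_mps * latency_s  # distance the scene advances during the delay
    return (shift, shift + local_span_m)
```

At 10 m/s with a 100 ms pipeline delay the full 4 m span is shown; at 20 m/s the 2 m slice starts 2 m ahead.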
It will be appreciated by persons skilled in the art that the embodiments of the invention described above and shown in the drawings are given by way of example only and are not limiting of the invention. The objects of the present invention have been fully and effectively accomplished. The functional and structural principles of the present invention have been shown and described in the examples, and any variations or modifications of the embodiments of the present invention may be made without departing from the principles.

Claims (10)

1. An A-pillar imaging method, characterized in that the method is implemented on an A-pillar imaging system comprising two A-pillar cameras, an eyebrow-center monitoring camera, an A-pillar flexible display screen, and a control device, the eyebrow-center monitoring camera being mounted on the steering column of the steering wheel; the method runs on the control device and comprises:
Step S01: calculating the coordinates of the driver's head position in the automobile coordinate system from the in-vehicle images captured by the eyebrow-center monitoring camera;
Step S02: tracking the driver's eyes with the eyebrow-center monitoring camera, calculating the three-dimensional coordinates of the driver's eyebrow center in the world coordinate system, and obtaining the driver's line-of-sight trajectory;
Step S03: constructing three-dimensional image information containing the A-pillar blind-area obstacles, using a three-dimensional reconstruction algorithm applied to the forward images captured by the two A-pillar cameras;
Step S04: projecting the three-dimensional image information from step S03 onto the plane normal to the driver's line of sight from step S02, obtaining an image identical to what the driver's eyes would see, and displaying it on the A-pillar flexible display screen.
2. The A-pillar imaging method according to claim 1, wherein step S01 specifically comprises:
step S11, calculating, on the basis of the in-vehicle image collected by the eyebrow-center monitoring camera, the position in the camera picture of a reference point on the driving-side B-pillar;
step S12, calculating the coordinates of the eyebrow-center monitoring camera in the automobile coordinate system on the basis of the position of the reference point in the automobile coordinate system and its position in the camera picture; and
step S13, calculating the coordinates of the driver's head position in the automobile coordinate system on the basis of the coordinates of the eyebrow-center monitoring camera in the automobile coordinate system.
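A single reference point does not by itself determine a full six-degree-of-freedom camera pose, so steps S11 to S12 presumably also draw on calibration data not recited in the claim. Under the simplifying, assumed conditions that the camera's orientation in the car frame and its distance to the B-pillar reference point are known, the camera origin can be recovered by pulling the reference point back along the viewing ray; all names below are hypothetical:

```python
import numpy as np

def camera_position_from_reference(ref_car, ref_dir_cam, ref_range, R_cam_to_car):
    """Recover the camera origin in the car frame from one B-pillar
    reference point, assuming known orientation R and known range
    (a sketch, not the patented computation)."""
    d = np.asarray(ref_dir_cam, dtype=float)
    d = d / np.linalg.norm(d)            # unit viewing ray in camera coordinates
    ray_car = R_cam_to_car @ d           # the same ray expressed in car coordinates
    return np.asarray(ref_car, dtype=float) - ref_range * ray_car
```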
3. The A-pillar imaging method according to claim 2, wherein the reference point is a point on the driving-side B-pillar that can be captured by the eyebrow-center monitoring camera and is not blocked by the driver.
4. The A-pillar imaging method according to claim 1, wherein the eyebrow-center monitoring camera is a binocular camera, and step S02 specifically comprises:
step S21, tracking the driver's eyes through the binocular camera, and calculating the three-dimensional coordinates of the driver's eyebrow center in a world coordinate system; and
step S22, deriving the driver's line of sight from the three-dimensional coordinates of the driver's eyebrow center to obtain the driver's line-of-sight trajectory.
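For a rectified binocular camera, the brow-center 3D coordinates of step S21 can be triangulated from the horizontal disparity between the two views. A textbook sketch under an assumed setup (shared focal length `f` in pixels, shared principal point `(cx, cy)`, baseline in metres) rather than the patent's exact algorithm:

```python
def triangulate_brow_center(uL, vL, uR, f, cx, cy, baseline):
    """Rectified-stereo triangulation sketch: the brow center is detected
    at pixel (uL, vL) in the left image and (uR, vL) in the right image."""
    disparity = uL - uR                  # horizontal pixel shift between the views
    Z = f * baseline / disparity         # depth, from similar triangles
    X = (uL - cx) * Z / f                # lateral offset in camera coordinates
    Y = (vL - cy) * Z / f                # vertical offset in camera coordinates
    return X, Y, Z
```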
5. The A-pillar imaging method according to claim 1, wherein step S03 specifically comprises: constructing three-dimensional image information containing A-pillar blind-zone obstacles by using a three-dimensional reconstruction algorithm, on the basis of the vehicle-front images and the depth data collected by the two A-pillar cameras; wherein the depth data comprise the imaging size on the A-pillar flexible display screen, calculated from the focal length of the human eye, the A-pillar camera and the obstacle distance.
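The "imaging size" in claim 5 follows from similar triangles: the on-screen image should subtend the same visual angle at the eye as the real obstacle does. A sketch of that relationship only; the parameter names are assumptions and the patent's exact formula is not recited:

```python
def display_size_on_pillar(object_size, obstacle_dist, eye_to_pillar):
    """Height the obstacle's image should have on the A-pillar screen so
    that it subtends the same visual angle as the real obstacle at
    `obstacle_dist` (similar-triangles sketch)."""
    return object_size * eye_to_pillar / obstacle_dist
```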
6. The A-pillar imaging method according to claim 5, wherein the obstacle distance is obtained by calculating it with a single-view depth-estimation algorithm on the basis of the vehicle-front images collected by the two A-pillar cameras.
7. The A-pillar imaging method according to claim 5, wherein the A-pillar imaging system further comprises a ranging sensor for acquiring the distance of an obstacle outside the A-pillar.
8. The A-pillar imaging method according to claim 1, wherein step S04 specifically comprises:
step S41, acquiring the plane normal to the driver's line of sight obtained in real time in step S02, and taking this plane as the projection plane; and
step S42, projecting the three-dimensional image information of step S03 onto the projection plane to obtain an image identical to that formed by the human eye, and displaying the image on the A-pillar flexible display screen.
9. The A-pillar imaging method according to claim 1, wherein step S04 further comprises cropping a part of the image identical to that formed by the human eye before displaying the image on the A-pillar flexible display screen.
10. The A-pillar imaging method according to claim 9, wherein the step of cropping a part of the image identical to that formed by the human eye comprises:
when the real-time vehicle speed is detected to be not greater than a vehicle-speed threshold, cropping, from the obtained image identical to that formed by the human eye, the maximum image within the range of the human eye's view; and
when the real-time vehicle speed is detected to be greater than the vehicle-speed threshold, cropping, from the obtained image identical to that formed by the human eye, a local image within the range of the human eye's view, the local image being smaller than the maximum image.
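The speed-dependent cropping of claim 10 can be sketched as follows. The 0.5 shrink factor is an assumption; the claim only requires the above-threshold crop to be smaller than the below-threshold one:

```python
def crop_for_speed(image_w, image_h, speed, speed_threshold, shrink=0.5):
    """Return crop dimensions, centred on the driver's gaze: the full
    in-view image at or below the speed threshold, a smaller local
    window above it (illustrative sketch only)."""
    if speed <= speed_threshold:
        return image_w, image_h                          # maximum image within view
    return int(image_w * shrink), int(image_h * shrink)  # smaller local image
```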
CN202011031799.2A 2020-09-27 2020-09-27 A-column imaging method Pending CN112298039A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011031799.2A CN112298039A (en) 2020-09-27 2020-09-27 A-column imaging method
PCT/CN2020/121744 WO2022061999A1 (en) 2020-09-27 2020-10-19 A-pillar imaging method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011031799.2A CN112298039A (en) 2020-09-27 2020-09-27 A-column imaging method

Publications (1)

Publication Number Publication Date
CN112298039A 2021-02-02

Family

ID=74489851

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011031799.2A Pending CN112298039A (en) 2020-09-27 2020-09-27 A-column imaging method

Country Status (2)

Country Link
CN (1) CN112298039A (en)
WO (1) WO2022061999A1 (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103358996A (en) * 2013-08-13 2013-10-23 吉林大学 Automobile A pillar perspective vehicle-mounted display device
CN107776488A (en) * 2016-08-24 2018-03-09 京东方科技集团股份有限公司 Automobile using auxiliary display system, display methods and automobile
CN109859270A (en) * 2018-11-28 2019-06-07 浙江合众新能源汽车有限公司 A kind of human eye three-dimensional coordinate localization method and separate type binocular camera shooting device
CN109941277A (en) * 2019-04-08 2019-06-28 宝能汽车有限公司 The method, apparatus and vehicle of display automobile pillar A blind image
CN110509924A (en) * 2019-08-13 2019-11-29 浙江合众新能源汽车有限公司 A kind of method and structure of camera in car locating human face position
CN110901534A (en) * 2019-11-14 2020-03-24 浙江合众新能源汽车有限公司 A-pillar perspective implementation method and system
CN111016785A (en) * 2019-11-26 2020-04-17 惠州市德赛西威智能交通技术研究院有限公司 Head-up display system adjusting method based on human eye position
US20200148112A1 (en) * 2018-11-13 2020-05-14 Toyota Jidosha Kabushiki Kaisha Driver-assistance device, driver-assistance system, method of assisting driver, and computer readable recording medium

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
CN206465860U (en) * 2017-02-13 2017-09-05 北京惠泽智业科技有限公司 One kind eliminates automobile A-column blind area equipment
JP6967801B2 (en) * 2017-05-19 2021-11-17 株式会社ユピテル Drive recorders, display devices and programs for drive recorders, etc.
JP7160301B2 (en) * 2018-01-17 2022-10-25 株式会社ジャパンディスプレイ MONITOR DISPLAY SYSTEM AND ITS DISPLAY METHOD
CN210852234U (en) * 2019-06-27 2020-06-26 中国第一汽车股份有限公司 In-vehicle display device and automobile
CN110614952A (en) * 2019-10-28 2019-12-27 崔成哲 Automobile blind area eliminating system
CN211468310U (en) * 2019-12-17 2020-09-11 上汽通用汽车有限公司 Vehicle display system and vehicle
CN111572452A (en) * 2020-06-12 2020-08-25 胡海峰 Anti-shielding automobile A column blind area monitoring device and method

Cited By (13)

Publication number Priority date Publication date Assignee Title
CN113111402A (en) * 2021-03-24 2021-07-13 浙江合众新能源汽车有限公司 A column barrier angle parameterization design method based on CATIA knowledge
CN113064279B (en) * 2021-03-26 2022-09-16 芜湖汽车前瞻技术研究院有限公司 Virtual image position adjusting method, device and storage medium of AR-HUD system
CN113064279A (en) * 2021-03-26 2021-07-02 芜湖汽车前瞻技术研究院有限公司 Virtual image position adjusting method, device and storage medium of AR-HUD system
CN113239735A (en) * 2021-04-15 2021-08-10 重庆利龙科技产业(集团)有限公司 Automobile transparent A column system based on binocular camera and implementation method
CN113239735B (en) * 2021-04-15 2024-04-12 重庆利龙中宝智能技术有限公司 Automobile transparent A column system based on binocular camera and implementation method
CN113335184A (en) * 2021-07-08 2021-09-03 合众新能源汽车有限公司 Image generation method and device for automobile A column blind area
CN113306492A (en) * 2021-07-14 2021-08-27 合众新能源汽车有限公司 Method and device for generating automobile A column blind area image
CN113343935A (en) * 2021-07-14 2021-09-03 合众新能源汽车有限公司 Method and device for generating automobile A column blind area image
CN113676618A (en) * 2021-08-20 2021-11-19 东北大学 Intelligent display system and method of transparent A column
CN113610053A (en) * 2021-08-27 2021-11-05 合众新能源汽车有限公司 Eyebrow center positioning method for transparent A pillar
CN113665485B (en) * 2021-08-30 2023-12-26 东风汽车集团股份有限公司 Anti-glare system for automobile front windshield and control method
CN113665485A (en) * 2021-08-30 2021-11-19 东风汽车集团股份有限公司 Anti-glare system for front windshield of automobile and control method
CN113815534A (en) * 2021-11-05 2021-12-21 吉林大学重庆研究院 Method for dynamically processing graphics based on response to changes in the position of human eyes

Also Published As

Publication number Publication date
WO2022061999A1 (en) 2022-03-31

Similar Documents

Publication Publication Date Title
CN112298039A (en) A-column imaging method
US9418556B2 (en) Apparatus and method for displaying a blind spot
JP5874920B2 (en) Monitoring device for vehicle surroundings
KR101544524B1 (en) Display system for augmented reality in vehicle, and method for the same
US8179435B2 (en) Vehicle surroundings image providing system and method
EP2544449B1 (en) Vehicle perimeter monitoring device
US20100054580A1 (en) Image generation device, image generation method, and image generation program
JP3228086B2 (en) Driving operation assist device
US10274726B2 (en) Dynamic eyebox correction for automotive head-up display
JP2018058544A (en) On-vehicle display control device
EP3716143A1 (en) Facial feature detecting apparatus and facial feature detecting method
JP2020080485A (en) Driving support device, driving support system, driving support method, and program
CN111267616A (en) Vehicle-mounted head-up display module and method and vehicle
KR102223852B1 (en) Image display system and method thereof
CN111277796A (en) Image processing method, vehicle-mounted vision auxiliary system and storage device
US20230191994A1 (en) Image processing apparatus, image processing method, and image processing system
US20220314886A1 (en) Display control device, display control method, moving body, and storage medium
JP2017056909A (en) Vehicular image display device
JP2008037118A (en) Display for vehicle
KR20160034681A (en) Environment monitoring apparatus and method for vehicle
US20190137770A1 (en) Display system and method thereof
US10896017B2 (en) Multi-panel display system and method for jointly displaying a scene
CN111016786B (en) Automobile A column shielding area display method based on 3D sight estimation
CN115018942A (en) Method and apparatus for image display of vehicle
CN114132259A (en) Automobile exterior rearview mirror adjusting method and device and automobile

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210202)