CN118037855A - Origin regression method of driver monitoring system - Google Patents


Info

Publication number
CN118037855A
Authority
CN
China
Prior art keywords
monitoring system
correction
sensor
driver monitoring
virtual camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410171422.9A
Other languages
Chinese (zh)
Inventor
林冠宇
刘智远
廖致霖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Interface Optoelectronics Shenzhen Co Ltd
Interface Technology Chengdu Co Ltd
General Interface Solution Ltd
Original Assignee
Interface Optoelectronics Shenzhen Co Ltd
Interface Technology Chengdu Co Ltd
General Interface Solution Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Interface Optoelectronics Shenzhen Co Ltd, Interface Technology Chengdu Co Ltd, General Interface Solution Ltd
Priority to CN202410171422.9A
Publication of CN118037855A
Legal status: Pending


Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/0093Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 with means for monitoring data relating to the user, e.g. head-tracking, eye-tracking
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60KARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K35/00Instruments specially adapted for vehicles; Arrangement of instruments in or on vehicles
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R11/00Arrangements for holding or mounting articles, not otherwise provided for
    • B60R11/02Arrangements for holding or mounting articles, not otherwise provided for for radio sets, television sets, telephones, or the like; Arrangement of controls thereof
    • B60R11/0229Arrangements for holding or mounting articles, not otherwise provided for for radio sets, television sets, telephones, or the like; Arrangement of controls thereof for displays, e.g. cathodic tubes
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Mechanical Engineering (AREA)
  • Optics & Photonics (AREA)
  • Chemical & Material Sciences (AREA)
  • Combustion & Propulsion (AREA)
  • Transportation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The present application relates to an origin regression method for a driver monitoring system, applicable to an augmented reality head-up display for a vehicle, comprising the following steps: establishing a pinhole camera model; executing a first correction step to perform extrinsic parameter correction between the driver monitoring system and a virtual camera; executing a second correction step to perform similarity transformation correction between a 3D sensor and the driver monitoring system; and executing a conversion step to perform conversion operations between different point locations of the driver monitoring system.

Description

Origin regression method of driver monitoring system
Technical Field
The present application relates to a driver monitoring system, and more particularly, to a method for origin regression of a driver monitoring system in an augmented reality head-up display for vehicles.
Background
Generally, a Head Up Display (HUD) in the related art mainly displays information such as driving speed, engine speed, fuel consumption of a vehicle, or even battery voltage and water temperature of a water tank, and can be regarded as an extended screen.
However, with the advent of the Augmented Reality Head-Up Display (AR HUD) and its application in the automotive industry, the AR HUD enables a driver to obtain the displayed key vehicle information within the line of sight of the actual road without diverting the line of sight.
In addition, the vehicle Driver Monitoring System (DMS) is composed of a hardware module and detection algorithms for monitoring and detecting driver behavior and vehicle dynamics, and it sends out a warning signal, or even activates the driving-assistance system, when abnormal data are detected, so as to reduce the accident rate.
However, although the related-art AR HUD is equipped with a DMS to track the driver's line of sight, the DMS and the 3D camera are fixed at the same position only before shipment, while the DMS installation position after the vehicle leaves the factory can be arbitrary, so that driver behavior and vehicle dynamics cannot be effectively and correctly monitored and detected. Therefore, how to provide a driver monitoring system and a monitoring method that can solve the above-mentioned problems is an important issue for the industry to consider.
Disclosure of Invention
Accordingly, the present application aims to solve the problem that the related-art AR HUD cannot effectively and correctly monitor and detect driver behavior and vehicle dynamics. To this end, the present application adopts technical means that include, but are not limited to, returning DMS units at different positions to an origin, for example using the 3D sensor as the origin and designing an origin-return procedure, so that the detection algorithm can use the pre-shipment viewing angle regardless of where the DMS is installed. It is particularly noted that the DMS of the present application is mainly used for recording the head position in the driving space.
According to an aspect of the present application, there is provided an origin regression method for a driver monitoring system, adapted for an augmented reality head-up display for a vehicle, the method comprising: establishing a pinhole camera model; executing a first correction step to perform extrinsic parameter correction between the driver monitoring system and a virtual camera; executing a second correction step to perform similarity transformation correction between a 3D sensor and the driver monitoring system; and executing a conversion step to perform conversion operations between different point locations of the driver monitoring system.
According to one or more embodiments of the present application, the pinhole camera model is shown in the following (formula 1) and (formula 2):

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \frac{1}{Z_c} K \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix}, \qquad K = \begin{bmatrix} f_x & \alpha & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \tag{formula 1}$$

$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} = \begin{bmatrix} R & t \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \tag{formula 2}$$

wherein $[u\ v\ 1]^T$ represents the AR HUD pixel coordinates; $K$ represents the AR HUD intrinsic matrix, in which $f_x$ represents the focal length in the x direction, $f_y$ represents the focal length in the y direction, $c_x$ represents the displacement in the x direction, $c_y$ represents the displacement in the y direction, and $\alpha$ represents the skew parameter, here 0; $[X_c\ Y_c\ Z_c]^T$ represents the world coordinates expressed in camera coordinates; $[R \mid t]$ represents the extrinsic parameters from the 3D sensor to the virtual camera, $R$ being a $3 \times 3$ rotation matrix and $t$ a $3 \times 1$ displacement matrix; and $[X_w\ Y_w\ Z_w]^T$ represents the world coordinates.
According to one or more embodiments of the present application, the similarity transformation relationship between the driver monitoring system and the 3D sensor is confirmed by placing the virtual camera at a plurality of different sets of point locations.
According to one or more embodiments of the present application, the similarity transformation correction of the second correction step finds the 3D spatial similarity transformation relationship using the following (formula 3):

$$P_i^{\text{sensor}} = s\,R\,P_i^{\text{DMS}} + T \tag{formula 3}$$

wherein $P_i^{\text{sensor}}$ represents the world coordinates of the virtual camera at point $i$ with respect to the 3D sensor; $P_i^{\text{DMS}}$ represents the 3D coordinates of the virtual camera recorded by the DMS at point $i$; $s$ represents the scale factor (the spacing of objects seen by the sensor differs at different distances), describing the scaling relationship between the two coordinate systems; $R$ represents the rotation matrix, describing the rotational relationship between the two coordinate systems; and $T$ represents the translation vector, describing the displacement relationship between the two coordinate systems.
According to one or more embodiments of the present application, the extrinsic parameter correction of the first correction step obtains the extrinsic parameters from the 3D sensor to the virtual camera when the position of any point location is corrected using the SolvePnP algorithm.
According to another aspect of the present application, there is provided an origin regression method for a driver monitoring system, adapted for an augmented reality head-up display for a vehicle, the method comprising:
establishing a pinhole camera model;
executing a first correction step to perform extrinsic parameter correction between the driver monitoring system and a virtual camera;
executing a second correction step to perform similarity transformation correction between a 3D sensor and the driver monitoring system; and
executing a conversion step to perform conversion operations between different point locations of the driver monitoring system;
wherein the virtual camera is placed at a plurality of different sets of point locations to confirm the similarity transformation relationship between the driver monitoring system and the 3D sensor;
and wherein the similarity transformation correction of the second correction step finds the 3D spatial similarity transformation relationship using the following (formula 3):

$$P_i^{\text{sensor}} = s\,R\,P_i^{\text{DMS}} + T \tag{formula 3}$$

wherein $P_i^{\text{sensor}}$ represents the world coordinates of the virtual camera at point $i$ with respect to the 3D sensor; $P_i^{\text{DMS}}$ represents the 3D coordinates of the virtual camera recorded by the DMS at point $i$; $s$ represents the scale factor (the spacing of objects seen by the sensor differs at different distances), describing the scaling relationship between the two coordinate systems; $R$ represents the rotation matrix, describing the rotational relationship between the two coordinate systems; and $T$ represents the translation vector, describing the displacement relationship between the two coordinate systems.
According to one or more embodiments of the present application, the extrinsic parameter correction of the first correction step obtains the extrinsic parameters from the 3D sensor to the virtual camera when the position of any point location is corrected using the SolvePnP algorithm.
According to still another aspect of the present application, there is provided an origin regression method for a driver monitoring system, adapted for an augmented reality head-up display for a vehicle, the method comprising:
establishing a pinhole camera model;
executing a first correction step to perform extrinsic parameter correction between the driver monitoring system and a virtual camera;
executing a second correction step to perform similarity transformation correction between a 3D sensor and the driver monitoring system; and
executing a conversion step to perform conversion operations between different point locations of the driver monitoring system;
wherein the virtual camera is placed at a plurality of different sets of point locations to confirm the similarity transformation relationship between the driver monitoring system and the 3D sensor;
and wherein the extrinsic parameter correction of the first correction step obtains the extrinsic parameters from the 3D sensor to the virtual camera when the position of any point location is corrected using the SolvePnP algorithm.
According to one or more embodiments of the present application, the pinhole camera model is shown in the following (formula 1) and (formula 2):

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \frac{1}{Z_c} K \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix}, \qquad K = \begin{bmatrix} f_x & \alpha & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \tag{formula 1}$$

$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} = \begin{bmatrix} R & t \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \tag{formula 2}$$

wherein $[u\ v\ 1]^T$ represents the AR HUD pixel coordinates; $K$ represents the AR HUD intrinsic matrix, in which $f_x$ represents the focal length in the x direction, $f_y$ represents the focal length in the y direction, $c_x$ represents the displacement in the x direction, $c_y$ represents the displacement in the y direction, and $\alpha$ represents the skew parameter, here 0; $[X_c\ Y_c\ Z_c]^T$ represents the world coordinates expressed in camera coordinates; $[R \mid t]$ represents the extrinsic parameters from the 3D sensor to the virtual camera, $R$ being a $3 \times 3$ rotation matrix and $t$ a $3 \times 1$ displacement matrix; and $[X_w\ Y_w\ Z_w]^T$ represents the world coordinates.
According to one or more embodiments of the present application, the similarity transformation correction of the second correction step finds the 3D spatial similarity transformation relationship using the following (formula 3):

$$P_i^{\text{sensor}} = s\,R\,P_i^{\text{DMS}} + T \tag{formula 3}$$

wherein $P_i^{\text{sensor}}$ represents the world coordinates of the virtual camera at point $i$ with respect to the 3D sensor; $P_i^{\text{DMS}}$ represents the 3D coordinates of the virtual camera recorded by the DMS at point $i$; $s$ represents the scale factor (the spacing of objects seen by the sensor differs at different distances), describing the scaling relationship between the two coordinate systems; $R$ represents the rotation matrix, describing the rotational relationship between the two coordinate systems; and $T$ represents the translation vector, describing the displacement relationship between the two coordinate systems.
Drawings
The foregoing and other objects, features, advantages and embodiments of the application will be apparent from the following description taken in conjunction with the accompanying drawings in which:
FIG. 1 is a schematic diagram of a driver monitoring system according to a comparative example of the present application.
Fig. 2 is a schematic diagram of a driver monitoring system according to an embodiment of the application.
FIG. 3 is a schematic diagram of a driver monitoring system according to an embodiment of the application.
Fig. 4 is a flowchart of the extrinsic parameter correction of the DMS and the virtual camera according to an embodiment of the present application.
Fig. 5 is a flowchart of the similarity transformation correction between the 3D sensor and the DMS according to an embodiment of the present application.
Fig. 6 is a schematic diagram of point location conversion between different DMS positions in an embodiment of the present application.
Fig. 7 is a flowchart of a method for point location conversion between different DMS positions in an embodiment of the present application.
Fig. 8 is a flowchart of the extrinsic parameter correction of the DMS and the virtual camera according to an embodiment of the present application.
In accordance with conventional practice, the various features and elements in the drawings are not drawn to scale but are drawn in a manner that best illustrates the specific features and elements pertinent to the present application. In addition, the same or similar reference numerals refer to like elements and components in the different drawings.
Reference numerals:
Driver monitoring system: 100;
AR HUD: 50;
Virtual camera: 10;
3D sensor: 30;
Head tracker: 40;
Windshield: 20;
Target object: 60;
Virtual image: 70;
Driver monitoring system: 200;
AR HUD: 250;
Virtual camera: 2101;
3D sensor: 230;
Tracker: 240;
Windshield: 220;
Virtual image: 270;
Plane: 212;
Positions: 2102, 2103, 2104, 2105, 2106, 2107, 2108, 2109;
Driver monitoring system: 300;
AR HUD: 350;
Virtual camera: 3102;
3D sensor: 330;
Tracker: 340;
Windshield: 320;
Virtual image: 370;
Plane: 312;
Positions: 3101, 3103, 3104, 3105, 3106, 3107, 3108, 3109;
Steps: S1, S2, S3, S4, S5, S6;
Steps: S11, S21, S31, S41, S51, S61, S71;
Driver monitoring system: 600;
3D sensor: 630;
Original parameters: (s1, R1, t1);
New correction parameters: (s2, R2, t2);
Origin point: 640a;
New point location: 640b;
External parameters: (s3, R3, t3);
Steps: S12, S22, S32, S42, S52;
Steps: S13, S23, S33, S43, S53, S63.
Detailed Description
For a further understanding and appreciation of the objects, shapes, structural features of the application, and their efficacy, the application can be best understood by reference to the drawings, followed by a detailed description of the embodiments.
The following disclosure provides different embodiments or examples to create different features of the provided objects. Specific examples of components and arrangements are described below for the purpose of simplifying the present disclosure and are not intended to be limiting; nor is the size and shape of the elements limited by the disclosed ranges or values, but may depend on the manufacturing process conditions or desired characteristics of the elements. For example, the technical features of the present application are described using cross-sectional views, which are schematic illustrations of idealized embodiments. Thus, variations in the shapes of the illustrations as a result of manufacturing processes and/or tolerances are to be expected and should not be construed as limiting.
Moreover, spatially relative terms such as "below," "beneath," "under," "above," and "over" are used for ease of describing the relationship between elements or features depicted in the drawings; further, spatially relative terms are intended to encompass different orientations of the element in use or operation in addition to the orientation depicted in the figures.
First, the technical contents of the comparative example of the present application and the problems faced by the technique of the comparative example of the present application will be described.
Referring to fig. 1, fig. 1 is a schematic diagram of a driver monitoring system according to a comparative example of the present application. As shown in fig. 1, in the comparative example, the driver monitoring system 100 mainly includes an AR HUD 50 loaded with a DMS. In addition, the driver monitoring system 100 further includes a virtual camera 10, a 3D sensor 30, and a head tracker 40, wherein the 3D sensor 30 is located on the windshield 20. As shown in fig. 1, in the comparative example, the target object 60 and the virtual image 70 behind it are within the recording range of the virtual camera 10, and the head tracker 40 can track the motion of the virtual camera 10.
However, the driver monitoring system 100 cannot effectively and correctly monitor and detect the driver behavior and the vehicle dynamics, so the following embodiments of the present application solve the above-mentioned problems by homing DMS at different locations to the origin, for example, but not limited to, a 3D sensor as the origin and designing the technical means composed of the origin homing procedure. Next, the following description is given in detail with reference to the drawings and embodiments.
Referring to fig. 2, fig. 2 is a schematic diagram of a driver monitoring system according to an embodiment of the application. As shown in fig. 2, the driver monitoring system 200 includes, but is not limited to, an AR HUD 250 loaded with a DMS, a virtual camera (or eye camera) 2101, a 3D sensor 230, and a tracker 240, wherein the 3D sensor 230 is located on the windshield 220 and is responsible for gathering world coordinates (x, y, z), and the AR HUD 250 is composed of a picture generation unit (PGU) and is responsible for projecting virtual images; the virtual camera 2101 is, for example, a general industrial camera. As shown in fig. 2, in the embodiment of the present application, the target object (not shown, but similar to the target object 60 of fig. 1) and the virtual image 270 behind it are within the recording range of the virtual camera 2101, and the tracker 240 can track the motion of the virtual camera 2101; the target object not shown in fig. 2 is a calibration pattern, typically a checkerboard pattern. In an embodiment of the present application, the tracker 240 is, for example, a head tracker. Additionally, in other embodiments of the present application, the DMS loaded within the AR HUD 250 may also be used as a head tracker, either alone or in combination with the tracker 240. It should be noted that, in the embodiment of the present application, the virtual camera 2101 may be disposed, besides the position shown in fig. 2, at eight other positions 2102, 2103, 2104, 2105, 2106, 2107, 2108, 2109 on the plane 212; these nine positions in total are the 3D correction points used in the embodiment of the present application.
Here, in an embodiment of the present application, a pinhole camera model (Pinhole Camera Model) is first established, as shown in the following (formula 1) and (formula 2):

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \frac{1}{Z_c} K \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix}, \qquad K = \begin{bmatrix} f_x & \alpha & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \tag{formula 1}$$

$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} = \begin{bmatrix} R & t \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \tag{formula 2}$$

wherein $[u\ v\ 1]^T$ represents the AR HUD pixel coordinates; $K$ represents the AR HUD intrinsic matrix, in which $f_x$ represents the focal length in the x direction, $f_y$ represents the focal length in the y direction, $c_x$ represents the displacement in the x direction, $c_y$ represents the displacement in the y direction, and $\alpha$ represents the skew parameter, here 0; $[X_c\ Y_c\ Z_c]^T$ represents the world coordinates expressed in camera coordinates; $[R \mid t]$ represents the extrinsic parameters from the 3D sensor to the virtual camera, $R$ being a $3 \times 3$ rotation matrix and $t$ a $3 \times 1$ displacement matrix; and $[X_w\ Y_w\ Z_w]^T$ represents the world coordinates.
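To make the roles of K and [R|t] concrete, the following is a minimal numerical sketch of (formula 1) and (formula 2); the intrinsic values, extrinsic values, and sample point used here are hypothetical placeholders and are not taken from the present application:

```python
import numpy as np

# Hypothetical AR HUD intrinsic matrix K (fx, fy, cx, cy are placeholders; skew alpha = 0).
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])

# Hypothetical extrinsics from the 3D sensor to the virtual camera:
# R is a 3x3 rotation matrix, t is a 3x1 displacement matrix.
R = np.eye(3)
t = np.array([[0.05], [0.00], [0.10]])

def world_to_pixel(X_w):
    """Apply (formula 2), then (formula 1): world -> camera -> pixel coordinates."""
    X_c = R @ X_w.reshape(3, 1) + t      # [Xc, Yc, Zc]^T in camera coordinates
    uvw = K @ X_c                        # homogeneous pixel coordinates
    return uvw[:2, 0] / uvw[2, 0]        # divide by Zc to obtain (u, v)

u, v = world_to_pixel(np.array([0.2, -0.1, 2.0]))   # a sample world point in meters
print(f"u = {u:.1f}, v = {v:.1f}")
```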
Referring to fig. 2 and 3, if the extrinsic parameters from the 3D sensor to the virtual camera are available in the head-up display device, the relative position (i.e., the spatial coordinates) of the virtual camera with respect to the 3D sensor can be obtained. In addition, the DMS can directly observe the spatial coordinates of the virtual images 270 and 370 relative to the virtual camera. The intrinsic matrix K can be obtained from the computer-aided design (CAD) file of the AR HUD, or can be obtained by self-calibration.
Referring next to fig. 3, fig. 3 is a schematic diagram of a driver monitoring system according to an embodiment of the application. As shown in fig. 3, the driver monitoring system 300 includes, but is not limited to, an AR HUD 350 loaded with a DMS, a virtual camera 3102, a 3D sensor 330, and a tracker 340, wherein the 3D sensor 330 is located on the windshield 320. As shown in fig. 3, in the embodiment of the present application, the target object (not shown) and the virtual image 370 behind it are within the recording range of the virtual camera 3102, and the tracker 340 can track the motion of the virtual camera 3102. In an embodiment of the present application, the tracker 340 is, for example, a head tracker. It should be noted that, in the embodiment of the present application, the virtual camera 3102 may be disposed, besides the position shown in fig. 3, at eight other positions 3101, 3103, 3104, 3105, 3106, 3107, 3108, 3109 on the plane 312; these nine positions are the 3D correction points used in the embodiment of the present application.
It should be noted that, in the embodiment of the present application, the planes 212 and 312 shown in fig. 2 and 3 are used to simulate different eye heights, and nine sets of different point locations are used to confirm the similarity transformation relationship between the DMS and the 3D sensors 230, 330; the more points are used, the more accurate the corrected pose (Pose) of the head and the like becomes.
In addition, referring to fig. 4, fig. 4 is a flowchart of the extrinsic parameter correction of the DMS and the virtual camera according to an embodiment of the application. Referring to fig. 2 to 4 together, in the embodiment of the present application, the extrinsic parameter correction process of the DMS and the virtual cameras 2101 and 3102 includes steps S1 to S6.
In step S1, the camera is moved to the position of the i-th point (e.g., the position of the virtual camera 2101), and the virtual image is superimposed with the calibration pattern.
In step S2, the checkerboard is superimposed with the virtual image seen by the virtual camera 2101, 3102. The 3D sensor 230, 330 captures the checkerboard positions (x, y, z) of the virtual image 270, 370. The virtual camera 2101, 3102 acquires the checkerboard positions and converts them to the picture generation unit (PGU), which provides the image and light source in the overall head-up display system, so as to obtain the virtual-real superimposed inner corner points (u, v).
In step S3, the SolvePnP algorithm is used to correct the position of the i-th point, yielding the extrinsic parameters from the 3D sensor 230, 330 to the virtual camera 2101, 3102.
In step S4, it is confirmed whether the correction procedure has been completed for all nine point locations of the virtual images 270, 370; for example, in fig. 2 these are the position of the virtual camera 2101 and the eight other positions 2102, 2103, 2104, 2105, 2106, 2107, 2108, 2109 on the plane 212, and in fig. 3 the position of the virtual camera 3102 and the eight other positions 3101, 3103, 3104, 3105, 3106, 3107, 3108, 3109 on the plane 312. When the correction procedure has been completed for all nine point locations, the process proceeds to step S5; otherwise, the process returns to step S1.
In step S5, the nine-point extrinsic parameters are output.
In step S6, the entire extrinsic parameter correction process of the DMS and the virtual camera is ended.
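For illustration only, step S3 can be sketched with the OpenCV implementation of the SolvePnP algorithm; the intrinsic matrix, checkerboard layout, and ground-truth pose below are hypothetical placeholders used to make the example self-contained, not values from the present application:

```python
import numpy as np
import cv2

K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])          # assumed AR HUD intrinsic matrix
dist = np.zeros(5)                               # assume no lens distortion

# Checkerboard inner corners (x, y, z) as they would be reported by the 3D sensor
# (a flat 6x4 grid with 5 cm spacing, placeholder values).
xs, ys = np.meshgrid(np.arange(6), np.arange(4))
object_points = np.stack([xs.ravel() * 0.05, ys.ravel() * 0.05,
                          np.zeros(24)], axis=1).astype(np.float32)

# Simulate the matching virtual-real superimposed PGU corners (u, v) with a known pose.
rvec_true = np.array([0.02, -0.01, 0.005])
tvec_true = np.array([0.10, -0.05, 1.50])
image_points, _ = cv2.projectPoints(object_points, rvec_true, tvec_true, K, dist)

# Inputs: 3D sensor positions (x, y, z) and PGU positions (u, v); outputs: rotation R and displacement t.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
R, _ = cv2.Rodrigues(rvec)                       # rotation vector -> 3x3 rotation matrix
print("R =\n", R, "\nt =", tvec.ravel())
```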
Next, please refer to fig. 5; fig. 5 is a flowchart of the similarity transformation correction between the 3D sensor and the DMS according to an embodiment of the present application. Referring to fig. 2 to 5 together, in the embodiment of the application, the similarity transformation correction process of the 3D sensors 230, 330 and the DMS includes steps S11 to S71.
In step S11, the virtual camera 2101, 3102 (i.e., the "eye") is moved to a fixed point, and the DMS records the 3D coordinates of the virtual camera 2101, 3102; that is, in an embodiment of the application, the position of the virtual camera 2101, 3102 is recorded as DMS spatial coordinates.
In step S21, the world coordinates of the virtual camera 2101, 3102 with respect to the 3D sensor 230, 330 are back-inferred from the [R|t] of the fixed point; that is, in an embodiment of the application, the spatial coordinates of the virtual camera 2101, 3102 are extrapolated using the R and t of the fixed point.
In step S31, it is confirmed whether the correction procedure has been completed for all nine point locations of the virtual images 270, 370; for example, in fig. 2 these are the position of the virtual camera 2101 and the eight other positions 2102, 2103, 2104, 2105, 2106, 2107, 2108, 2109 on the plane 212, and in fig. 3 the position of the virtual camera 3102 and the eight other positions 3101, 3103, 3104, 3105, 3106, 3107, 3108, 3109 on the plane 312. When the correction procedure has been completed for all nine point locations, the process proceeds to step S41; otherwise, the process returns to step S11.
In step S41, nine sets of corresponding coordinate pairs are obtained, i = 1, 2, ..., 9.
In step S51, the extrinsic parameters for converting the DMS to the 3D sensor 230, 330 are corrected. Specifically, step S51 obtains the extrinsic parameters from the DMS to the 3D sensor 230, 330 by using a 3D-to-3D method. It should be noted that, because the viewing angles differ, the spatial relationship requires a scale factor in addition to the rotation and translation. In particular, the 3D-to-3D method obtains the 3D spatial similarity transformation using the following (formula 3):

$$P_i^{\text{sensor}} = s\,R\,P_i^{\text{DMS}} + T \tag{formula 3}$$

wherein $P_i^{\text{sensor}}$ represents the world coordinates of the virtual camera at point $i$ with respect to the 3D sensor; $P_i^{\text{DMS}}$ represents the 3D coordinates of the virtual camera recorded by the DMS at point $i$; $s$ represents the scale factor (the spacing of objects seen by the sensor differs at different distances), describing the scaling relationship between the two coordinate systems; $R$ represents the rotation matrix, describing the rotational relationship between the two coordinate systems; and $T$ represents the translation vector, describing the displacement relationship between the two coordinate systems.
In step S61, the similarity transformation parameters (s, R, T) are stored, recording the scale factor (s), the rotation (R), and the displacement (T).
In step S71, the entire similarity transformation correction process between the 3D sensor and the DMS is ended.
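One common closed-form way to solve (formula 3) for (s, R, T) from the nine corresponding point pairs is the Umeyama method; the following is a minimal sketch under that assumption, with synthetic placeholder points rather than data from the present application:

```python
import numpy as np

def estimate_similarity(P_dms, P_sensor):
    """Estimate (s, R, T) such that P_sensor ≈ s * R @ P_dms + T (formula 3),
    using the Umeyama closed-form solution; P_dms and P_sensor have shape (N, 3)."""
    mu_d, mu_s = P_dms.mean(axis=0), P_sensor.mean(axis=0)
    Xd, Xs = P_dms - mu_d, P_sensor - mu_s
    cov = Xs.T @ Xd / len(P_dms)                  # 3x3 cross-covariance
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:  # keep R a proper rotation
        S[2, 2] = -1
    R = U @ S @ Vt
    var_d = (Xd ** 2).sum() / len(P_dms)
    s = np.trace(np.diag(D) @ S) / var_d          # scale factor
    T = mu_s - s * R @ mu_d                       # translation vector
    return s, R, T

# Hypothetical nine correction points as recorded by the DMS and by the 3D sensor.
P_dms = np.random.rand(9, 3)
P_sensor = 1.02 * P_dms + np.array([0.3, -0.1, 0.8])   # identity rotation in this toy case
s, R, T = estimate_similarity(P_dms, P_sensor)
print("s =", s, "\nR =\n", R, "\nT =", T)
```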
Referring to fig. 6, fig. 6 is a schematic diagram illustrating point location conversion between different DMS positions in an embodiment of the present application. As shown in fig. 6, in an embodiment of the present application, the driver monitoring system 600 includes, but is not limited to, a 3D sensor 630 and a DMS. According to the embodiment of the application, a conversion operation is performed between the origin 640a of the DMS and the new point 640b of the DMS based on the original parameters (s1, R1, t1) and the new correction parameters (s2, R2, t2), so as to obtain the extrinsic parameters (s3, R3, t3) between the new point 640b of the DMS and the origin 640a of the DMS.
Referring again to fig. 6, in one embodiment of the present application, the conversion between different DMS point locations is represented by the derivations of (formula 4) and (formula 5), wherein the quantities involved represent the world coordinates observed from the different point-location viewpoints.
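As a sketch of this conversion operation, assuming each similarity transform maps DMS coordinates into the 3D sensor's world frame (an assumed convention; the exact formulas 4 and 5 are not reproduced here), the original parameters (s1, R1, t1) and the new correction parameters (s2, R2, t2) can be composed as follows to obtain the extrinsic parameters (s3, R3, t3) that return the new point to the origin viewpoint:

```python
import numpy as np

def compose_to_origin(s1, R1, t1, s2, R2, t2):
    """Assumed convention: X_world = s1 * R1 @ X_origin + t1 and X_world = s2 * R2 @ X_new + t2.
    Returns (s3, R3, t3) such that X_origin = s3 * R3 @ X_new + t3."""
    s3 = s2 / s1
    R3 = R1.T @ R2
    t3 = R1.T @ (t2 - t1) / s1
    return s3, R3, t3

# Toy usage with placeholder parameters.
s1, R1, t1 = 1.00, np.eye(3), np.array([0.0, 0.0, 0.0])
s2, R2, t2 = 1.05, np.eye(3), np.array([0.2, -0.1, 0.0])
s3, R3, t3 = compose_to_origin(s1, R1, t1, s2, R2, t2)
print(s3, R3, t3)
```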
In addition, referring to fig. 7, fig. 7 is a flowchart illustrating a method for point location conversion of different DMS in an embodiment of the present application. As shown in fig. 7, in the embodiment of the present application, the method flow of different DMS point location conversion includes steps S12-S52.
In step S12, a similarity transformation correction flow is performed.
In step S22, the new correction parameters (s2, R2, t2) are provided. In addition, in step S42, the original parameters (s1, R1, t1) are provided.
In step S32, a conversion operation procedure is executed.
In step S52, the extrinsic parameters (s3, R3, t3) between the new point 640b of the DMS and the origin 640a of the DMS are generated.
In addition, referring to fig. 8, fig. 8 is a flowchart of the extrinsic parameter correction of the DMS and the virtual camera according to an embodiment of the present application. Referring to fig. 2, 3, and 8 together, in the embodiment of the present application, the extrinsic parameter correction process of the DMS and the virtual cameras 2101, 3102 includes steps S13 to S63.
In step S13, the camera is moved to the position of the i-th point (e.g., the position of the virtual camera 2101), and the virtual image is superimposed with the calibration pattern. That is, the virtual camera 2101, 3102 is moved to the i-th point, and the virtual image projected by the software-controlled, DMS-loaded AR HUD 250, 350 is aligned with the checkerboard on the target.
In step S23, the checkerboard is superimposed with the virtual image seen by the virtual camera 2101, 3102, establishing the inner corner points of the picture generation unit (PGU) and the spatial coordinates of the 3D sensor 230, 330. The 3D sensor 230, 330 captures the checkerboard positions (x, y, z) of the virtual image 270, 370. The virtual camera 2101, 3102 acquires the checkerboard positions and converts them to the PGU, which provides the image and light source in the overall head-up display system, so as to obtain the virtual-real superimposed inner corner points (u, v).
In step S33, the SolvePnP algorithm is used to correct the position of the i-th point, yielding the extrinsic parameters from the 3D sensor 230, 330 to the virtual camera 2101, 3102. That is, the two inputs to solvePnP are the positions (x, y, z) from the 3D sensor 230, 330 and the positions (u, v) from the PGU, and the output is the rotation R and the displacement t.
In step S43, it is confirmed whether the correction procedure has been completed for all nine point locations of the virtual images 270, 370; for example, in fig. 2 these are the position of the virtual camera 2101 and the eight other positions 2102, 2103, 2104, 2105, 2106, 2107, 2108, 2109 on the plane 212, and in fig. 3 the position of the virtual camera 3102 and the eight other positions 3101, 3103, 3104, 3105, 3106, 3107, 3108, 3109 on the plane 312. When the correction procedure has been completed for all nine point locations, the process proceeds to step S53; otherwise, the process returns to step S13.
In step S53, the nine-point extrinsic parameters are output.
In step S63, the entire extrinsic parameter correction process of the DMS and the virtual camera is ended.
In summary, the embodiments of the present application effectively solve the problem that the related-art AR HUD cannot effectively and correctly monitor and detect driver behavior and vehicle dynamics. In the embodiments of the present application, this problem is solved by returning DMS units at different positions to the origin, for example, but not limited to, by using the 3D sensor as the origin and designing an origin-return procedure, so that the detection algorithm can use the pre-shipment viewing angle regardless of where the DMS is located. In other words, a DMS installed after leaving the factory can be aligned to the origin from any position, and once aligned to the origin it can provide the correct spatial coordinate information required by the subsequent virtual-real superposition algorithm. In this way, the present disclosure solves the problem encountered in the related art, namely that when the spatial position of the virtual camera relative to the 3D sensor is back-inferred from the extrinsic parameters R and t, the DMS placement position after shipment differs from that before shipment.
The above embodiments are only for illustrating the technical solution of the present application and not for limiting the same, and although the present application has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications and equivalents may be made thereto without departing from the spirit and scope of the technical solution of the present application.

Claims (10)

1. A method of origin regression of a driver monitoring system, the method being adapted for an in-vehicle augmented reality head-up display, the method comprising:
establishing a pinhole camera model;
executing a first correction step to perform extrinsic parameter correction between the driver monitoring system and a virtual camera;
executing a second correction step to perform similarity transformation correction between a 3D sensor and the driver monitoring system; and
executing a conversion step to perform conversion operations between different point locations of the driver monitoring system.
2. The method of origin regression of a driver monitoring system according to claim 1, wherein the pinhole camera model is shown in the following (formula 1) and (formula 2):

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \frac{1}{Z_c} K \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix}, \qquad K = \begin{bmatrix} f_x & \alpha & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \tag{formula 1}$$

$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} = \begin{bmatrix} R & t \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \tag{formula 2}$$

wherein $[u\ v\ 1]^T$ represents the AR HUD pixel coordinates; $K$ represents the AR HUD intrinsic matrix, in which $f_x$ represents the focal length in the x direction, $f_y$ represents the focal length in the y direction, $c_x$ represents the displacement in the x direction, $c_y$ represents the displacement in the y direction, and $\alpha$ represents the skew parameter, here 0; $[X_c\ Y_c\ Z_c]^T$ represents the world coordinates expressed in camera coordinates; $[R \mid t]$ represents the extrinsic parameters from the 3D sensor to the virtual camera, $R$ being a $3 \times 3$ rotation matrix and $t$ a $3 \times 1$ displacement matrix; and $[X_w\ Y_w\ Z_w]^T$ represents the world coordinates.
3. The method of origin regression of a driver monitoring system according to claim 1, wherein the similarity transformation relationship between the driver monitoring system and the 3D sensor is confirmed by placing the virtual camera at a plurality of different sets of point locations.
4. The method of origin regression of a driver monitoring system according to claim 1, wherein the similarity transformation correction of the second correction step finds the 3D spatial similarity transformation relationship using the following (formula 3):

$$P_i^{\text{sensor}} = s\,R\,P_i^{\text{DMS}} + T \tag{formula 3}$$

wherein $P_i^{\text{sensor}}$ represents the world coordinates of the virtual camera at point $i$ with respect to the 3D sensor; $P_i^{\text{DMS}}$ represents the 3D coordinates of the virtual camera recorded by the DMS at point $i$; $s$ represents the scale factor (the spacing of objects seen by the sensor differs at different distances), describing the scaling relationship between the two coordinate systems; $R$ represents the rotation matrix, describing the rotational relationship between the two coordinate systems; and $T$ represents the translation vector, describing the displacement relationship between the two coordinate systems.
5. The method of origin regression of a driver monitoring system according to claim 1, wherein the extrinsic parameter correction of the first correction step obtains the extrinsic parameters from the 3D sensor to the virtual camera when the position of any point location is corrected using the SolvePnP algorithm.
6. A method of origin regression of a driver monitoring system, the method being adapted for an in-vehicle augmented reality head-up display, the method comprising:
establishing a pinhole camera model;
executing a first correction step to perform extrinsic parameter correction between the driver monitoring system and a virtual camera;
executing a second correction step to perform similarity transformation correction between a 3D sensor and the driver monitoring system; and
executing a conversion step to perform conversion operations between different point locations of the driver monitoring system;
wherein the virtual camera is placed at a plurality of different sets of point locations to confirm the similarity transformation relationship between the driver monitoring system and the 3D sensor;
and wherein the similarity transformation correction of the second correction step finds the 3D spatial similarity transformation relationship using the following (formula 3):

$$P_i^{\text{sensor}} = s\,R\,P_i^{\text{DMS}} + T \tag{formula 3}$$

wherein $P_i^{\text{sensor}}$ represents the world coordinates of the virtual camera at point $i$ with respect to the 3D sensor; $P_i^{\text{DMS}}$ represents the 3D coordinates of the virtual camera recorded by the DMS at point $i$; $s$ represents the scale factor (the spacing of objects seen by the sensor differs at different distances), describing the scaling relationship between the two coordinate systems; $R$ represents the rotation matrix, describing the rotational relationship between the two coordinate systems; and $T$ represents the translation vector, describing the displacement relationship between the two coordinate systems.
7. The method of origin regression of a driver monitoring system according to claim 6, wherein the extrinsic parameter correction of the first correction step obtains the extrinsic parameters from the 3D sensor to the virtual camera when the position of any point location is corrected using the SolvePnP algorithm.
8. A method of origin regression of a driver monitoring system, the method being adapted for an in-vehicle augmented reality head-up display, the method comprising:
establishing a pinhole camera model;
executing a first correction step to perform extrinsic parameter correction between the driver monitoring system and a virtual camera;
executing a second correction step to perform similarity transformation correction between a 3D sensor and the driver monitoring system; and
executing a conversion step to perform conversion operations between different point locations of the driver monitoring system;
wherein the virtual camera is placed at a plurality of different sets of point locations to confirm the similarity transformation relationship between the driver monitoring system and the 3D sensor;
and wherein the extrinsic parameter correction of the first correction step obtains the extrinsic parameters from the 3D sensor to the virtual camera when the position of any point location is corrected using the SolvePnP algorithm.
9. The method of origin regression of a driver monitoring system according to claim 8, wherein the pinhole camera model is shown in the following (formula 1) and (formula 2):

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \frac{1}{Z_c} K \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix}, \qquad K = \begin{bmatrix} f_x & \alpha & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \tag{formula 1}$$

$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} = \begin{bmatrix} R & t \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \tag{formula 2}$$

wherein $[u\ v\ 1]^T$ represents the AR HUD pixel coordinates; $K$ represents the AR HUD intrinsic matrix, in which $f_x$ represents the focal length in the x direction, $f_y$ represents the focal length in the y direction, $c_x$ represents the displacement in the x direction, $c_y$ represents the displacement in the y direction, and $\alpha$ represents the skew parameter, here 0; $[X_c\ Y_c\ Z_c]^T$ represents the world coordinates expressed in camera coordinates; $[R \mid t]$ represents the extrinsic parameters from the 3D sensor to the virtual camera, $R$ being a $3 \times 3$ rotation matrix and $t$ a $3 \times 1$ displacement matrix; and $[X_w\ Y_w\ Z_w]^T$ represents the world coordinates.
10. The method of origin regression of a driver monitoring system according to claim 9, wherein the similarity transformation correction of the second correction step finds the 3D spatial similarity transformation relationship using the following (formula 3):

$$P_i^{\text{sensor}} = s\,R\,P_i^{\text{DMS}} + T \tag{formula 3}$$

wherein $P_i^{\text{sensor}}$ represents the world coordinates of the virtual camera at point $i$ with respect to the 3D sensor; $P_i^{\text{DMS}}$ represents the 3D coordinates of the virtual camera recorded by the DMS at point $i$; $s$ represents the scale factor (the spacing of objects seen by the sensor differs at different distances), describing the scaling relationship between the two coordinate systems; $R$ represents the rotation matrix, describing the rotational relationship between the two coordinate systems; and $T$ represents the translation vector, describing the displacement relationship between the two coordinate systems.

Priority Applications (1)

Application Number: CN202410171422.9A; Priority Date: 2024-02-05; Filing Date: 2024-02-05; Title: Origin regression method of driver monitoring system

Applications Claiming Priority (1)

Application Number: CN202410171422.9A; Priority Date: 2024-02-05; Filing Date: 2024-02-05; Title: Origin regression method of driver monitoring system

Publications (1)

Publication Number: CN118037855A; Publication Date: 2024-05-14

Family

ID=90987121

Family Applications (1)

Application Number: CN202410171422.9A; Title: Origin regression method of driver monitoring system; Priority Date: 2024-02-05; Filing Date: 2024-02-05

Country Status (1)

Country: CN; Link: CN118037855A (en)


Legal Events

Code: PB01; Description: Publication
Code: SE01; Description: Entry into force of request for substantive examination