CN116118628A - CMS visual field control method - Google Patents

CMS visual field control method

Info

Publication number
CN116118628A
CN116118628A CN202310088659.6A
Authority
CN
China
Prior art keywords
cms
preset
full
picture
clipping
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310088659.6A
Other languages
Chinese (zh)
Inventor
杨青春
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huizhou Foryou General Electronics Co Ltd
Original Assignee
Huizhou Foryou General Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huizhou Foryou General Electronics Co Ltd filed Critical Huizhou Foryou General Electronics Co Ltd
Priority to CN202310088659.6A priority Critical patent/CN116118628A/en
Publication of CN116118628A publication Critical patent/CN116118628A/en
Pending legal-status Critical Current

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00 Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/20 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/22 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle
    • B60R1/28 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with an adjustable field of view
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00 Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/20 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/22 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle
    • B60R1/23 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view
    • B60R1/25 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view to the sides of the vehicle
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/10 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of camera system used
    • B60R2300/105 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of camera system used using multiple cameras
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/30 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of image processing
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/80 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement
    • B60R2300/802 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement for monitoring and displaying vehicle exterior blind spot views
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a CMS field-of-view control method, comprising: step 1, initializing the CMS system, acquiring the full-frame pictures captured by the left and right CMS cameras, and displaying them in standard mode; step 2, when a vehicle turn is detected, detecting whether a preset obstacle is present in the full-frame picture, and if so, proceeding to the next step, otherwise repeating this step; step 3, marking the preset obstacle; step 4, determining the direction and number of pixels by which the clipping center point of the full-frame picture is offset, according to the turning direction and turning angle of the vehicle; step 5, adjusting the picture displayed on the CMS display screen according to the offset direction and number of pixels of the clipping center point; and step 6, calculating the position of the preset obstacle relative to the host vehicle, and marking the preset obstacle with a preset color if that position lies within the early-warning area. The invention makes the field-of-view control of the CMS system intelligent and improves the usability of CMS products.

Description

CMS visual field control method
Technical Field
The invention relates to the technical field of rear-view mirrors, and in particular to a CMS visual field control method.
Background
The exterior rear-view mirrors on the left and right sides of a vehicle are the driver's main tools for acquiring information about the area behind and beside the vehicle. With the development of automotive electronics, the CMS (Camera-Monitor System, i.e., electronic exterior rear-view mirror) is gradually replacing the conventional optical exterior rear-view mirror because of its unique advantages.
A CMS typically consists of CMS cameras, CMS displays, and the associated connection cables. During normal driving, regulations impose lower limits on the magnification and resolution of the CMS default view (or adjustable default view); the FOV of the view shown on the CMS display screen is therefore limited and, combined with the vehicle body geometry, the CMS has blind areas just like a conventional optical exterior mirror. In special driving situations such as turning or reversing, however, regulations allow the temporary view of the CMS to be unconstrained by magnification and resolution. If this temporary view is not handled well, the CMS is little better than a physical mirror, and accidents can occur because of blind areas when the driver changes lanes or turns. Handling temporary views for turning, reversing, high-speed driving, and the like is therefore important.
Disclosure of Invention
The invention provides a CMS visual field control method that overcomes the defects of the prior art, makes the field-of-view control of the CMS system intelligent, and improves the usability of CMS products.
To achieve the above purpose, the invention adopts the following technical scheme:
the invention provides a CMS visual field control method, which comprises the following steps:
step 1, initializing the CMS system, acquiring the full-frame pictures captured by the left and right CMS cameras, and displaying them in standard mode;
step 2, when a vehicle turn is detected, detecting whether a preset obstacle is present in the full-frame picture; if so, proceeding to the next step, otherwise repeating this step;
step 3, marking the preset obstacle;
step 4, determining the direction and number of pixels by which the clipping center point of the full-frame picture is offset, according to the turning direction and turning angle of the vehicle;
step 5, adjusting the picture displayed on the CMS display screen according to the offset direction and number of pixels of the clipping center point;
and step 6, calculating the position of the preset obstacle relative to the host vehicle, and marking the preset obstacle with a preset color if that position lies within the early-warning area.
Specifically, detecting whether a preset obstacle is present in the full-frame picture comprises:
step 201, determining whether it is currently day or night from the full-frame picture;
step 202, if it is daytime, directly calling a pruned YOLO network to detect preset obstacles in the full-frame picture; if it is nighttime, first applying Gamma-transform enhancement to the full-frame picture to generate an enhanced full-frame picture, then calling the pruned YOLO network to detect preset obstacles in the enhanced full-frame picture.
Specifically, the step 201 comprises:
step 2011, converting the full-frame picture into the HSV color space to generate an HSV full-frame image;
step 2012, obtaining the brightness average of the HSV full-frame image;
in this embodiment, the gray values of the pixels in the V channel of the HSV full-frame image are accumulated and divided by the total number of pixels to obtain the brightness average;
and step 2013, judging it to be daytime if the brightness average is greater than a preset brightness threshold, and nighttime otherwise.
Specifically, generating the pruned YOLO network comprises:
step 2021, optimizing the trained standard YOLOv3 network with a preset total loss function;
and step 2022, tuning the parameters of the optimized YOLOv3 network until the accuracy gap between it and the standard YOLOv3 network falls within a preset range, yielding the pruned YOLO network.
Specifically, the total loss function is:
L = Σ_(x,y) l(f(x, W), y) + λ · Σ_(γ∈Γ) g(γ)
where L is the total loss, x the training input, y the training target, W the trainable parameters, γ the scaling factors of the BN layers, g(γ) = |γ| their L1 regularization function, and λ the penalty weight balancing the two terms.
Specifically, calculating the position of the preset obstacle relative to the host vehicle comprises:
step 601, establishing a world coordinate system with the projection of the optical center of the CMS camera onto the road as the origin, the upward direction perpendicular to the road as the positive Z axis, the direction opposite to vehicle travel as the positive Y axis, and the positive X axis determined by the left-hand rule;
step 602, acquiring the height h of the CMS camera above the ground and the downward tilt angle α of the CMS camera relative to the horizontal;
step 603, obtaining the pixel coordinates (u_q, v_q) of the preset obstacle in the pixel coordinate system;
step 604, calculating the lateral distance x_w and the longitudinal distance y_w between the preset obstacle and the host vehicle according to a preset distance formula.
Specifically, the preset distance formula is:
y_w = h / tan(α + arctan((v_q - v_0)/f_y))
x_w = (u_q - u_0) * h / (f_x * sin(α + arctan((v_q - v_0)/f_y)))
where (u_0, v_0) are the offset coordinates of the optical axis of the CMS camera in the image coordinate system, in pixels; d_x and d_y are the unit pixel sizes of the CMS camera sensor; and f_x, f_y are the camera focal lengths in pixels (f_x = f/d_x, f_y = f/d_y, with f the physical focal length).
Further, after the step 6, the method further comprises:
step 7, when it is detected that the vehicle is not turning, obtaining the current vehicle speed and adjusting the size of the clipping picture according to the current speed;
and step 8, displaying the adjusted clipping picture on the CMS display screen.
Specifically, the step 7 includes:
when the current vehicle speed exceeds a first speed threshold V 1 Then, the relation between the size of the clipping picture and the current vehicle speed is determined through a first preset relation:
number of horizontal pixels W of clipping picture v1 =min{W f ,W 0 *(1+e (v-v 1 )/v 0 )},
Number of vertical pixel lengths H of clipping picture v1 =k*W v1
When the current vehicle speed is lower than a second speed threshold V 2 Then, the relation between the size of the clipping picture and the current vehicle speed is determined by a second preset relation:
number of horizontal pixels W of clipping picture v2 =W 0 /(1+e- (v-v 2 )/v 0 ),
Number of vertical pixel lengths H of clipping picture v2 =k*W v2
Wherein W is f Representing the horizontal resolution of the full frame, W 0 Represents the horizontal resolution of the CMS display screen, e represents a natural number, k represents the ratio of the vertical pixels to the horizontal pixels of the CMS display screen, min { } represents the smaller, V represents the current vehicle speed, V 0 Indicating a reference vehicle speed.
Specifically, the step 8 comprises:
if the horizontal pixel count of the clipping picture is greater than that of the CMS display screen, displaying it by decimation (pixel extraction);
and if the horizontal pixel count of the clipping picture is smaller than that of the CMS display screen, displaying it by interpolation (pixel insertion).
The invention has the following beneficial effects: when the vehicle turns and a preset obstacle is detected in the full-frame picture, the obstacle is marked; the direction and number of pixels by which the clipping center point of the full-frame picture is offset are determined from the turning direction and angle of the vehicle; the picture shown on the CMS display screen is adjusted accordingly; the position of the preset obstacle relative to the host vehicle is calculated; and if that position lies within the early-warning area, the obstacle is marked with a preset color. The field-of-view control of the CMS system is thus made intelligent, and the usability of CMS products is improved.
Drawings
FIG. 1 is a flow chart of the CMS field of view control method of the present invention;
FIG. 2 is a schematic illustration of clipping center point pixel offset according to the present invention.
Fig. 3 is a schematic diagram of a camera coordinate system of the present invention.
Detailed Description
Embodiments of the present invention will now be described in detail with reference to the accompanying drawings, which are for reference and illustration only, and are not intended to limit the scope of the invention.
The serial numbers of the steps in the flows described in the description, claims, or drawings (e.g., step 10, step 20) are used only to distinguish the steps and do not by themselves imply any execution order. Likewise, designations such as "first" and "second" herein only distinguish the objects described; they indicate neither order nor that the objects are of different types.
As shown in fig. 1, the present embodiment provides a CMS field of view control method, including:
and step 1, initializing a CMS system, and acquiring the whole image shot by the left and right CMS cameras to display in a standard mode.
In an embodiment, the left and right CMS cameras are fixed to the left and right doors of the vehicle, respectively, and are wide cameras with fixed angles, that is, the resolution of the picture shot by the CMS camera is greater than that of the CMS display screen, and when the picture shot by the CMS camera is displayed on the CMS display screen, a part of the picture is cut or displayed, or zoomed and then displayed.
In this embodiment, the standard mode refers to that a part of the whole image captured by the CMS camera is cut out and displayed, and the cut-out image is required to meet the magnification and the field of view required by the relevant regulations (for example, GB 15084).
Typically, the full frame is in RGB format.
Step 2, when a vehicle turn is detected, detecting whether a preset obstacle is present in the full-frame picture; if so, proceeding to the next step, otherwise repeating this step.
The preset obstacles in this embodiment include moving obstacles such as pedestrians and vehicles, and may also include static obstacles such as curbs and road bollards, depending on the actual situation.
In this embodiment, detecting whether a preset obstacle is present in the full-frame picture comprises:
step 201, determining whether it is currently day or night from the full-frame picture.
In this embodiment, the step 201 comprises:
step 2011, converting the full-frame picture into the HSV color space to generate an HSV full-frame image;
step 2012, obtaining the brightness average of the HSV full-frame image;
in this embodiment, the gray values of the pixels in the V channel of the HSV full-frame image are accumulated and divided by the total number of pixels to obtain the brightness average;
and step 2013, judging it to be daytime if the brightness average is greater than a preset brightness threshold, and nighttime otherwise.
In this embodiment, the preset brightness threshold is 70. The threshold may of course be calibrated according to the actual performance of the CMS system.
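The day/night decision of steps 2011 to 2013 can be sketched as follows. For 8-bit RGB input, the HSV V channel is simply the per-pixel maximum of R, G, and B, so a full color-space conversion is unnecessary; the threshold of 70 is the embodiment's value and would normally be calibrated per system.

```python
import numpy as np

DAY_NIGHT_THRESHOLD = 70  # the embodiment's value; calibrate per CMS system


def is_daytime(frame_rgb: np.ndarray, threshold: int = DAY_NIGHT_THRESHOLD) -> bool:
    """Classify day vs. night from the mean of the HSV V channel.

    For 8-bit RGB input the HSV V channel equals max(R, G, B) per pixel,
    so the brightness average can be computed directly.
    """
    v_channel = frame_rgb.max(axis=2)   # V channel: per-pixel max of R, G, B
    brightness_mean = v_channel.mean()  # accumulated V values / total pixels
    return bool(brightness_mean > threshold)
```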
Step 202, if it is daytime, directly calling the pruned YOLO network to detect preset obstacles in the full-frame picture; if it is nighttime, first applying Gamma-transform enhancement to the full-frame picture to generate an enhanced full-frame picture, then calling the pruned YOLO network to detect preset obstacles in the enhanced full-frame picture.
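The night-time Gamma-transform enhancement of step 202 can be sketched as below. The patent does not specify the exponent; the value 0.5 here is an assumed default (gamma < 1 lifts dark regions).

```python
import numpy as np


def gamma_enhance(frame_rgb: np.ndarray, gamma: float = 0.5) -> np.ndarray:
    """Brighten a night-time frame with a Gamma transform.

    The exponent 0.5 is an illustrative assumption, not the patent's value.
    """
    normalized = frame_rgb.astype(np.float32) / 255.0  # map to [0, 1]
    enhanced = np.power(normalized, gamma)             # gamma < 1 brightens
    return (enhanced * 255.0).astype(np.uint8)
```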
In this embodiment, generating the pruned YOLO network comprises:
step 2021, optimizing the trained standard YOLOv3 network with a preset total loss function;
and step 2022, tuning the parameters of the optimized YOLOv3 network until the accuracy gap between it and the standard YOLOv3 network falls within a preset range, yielding the pruned YOLO network.
In this embodiment, the total loss function is:
L = Σ_(x,y) l(f(x, W), y) + λ · Σ_(γ∈Γ) g(γ)
where L is the total loss, x the training input, y the training target, W the trainable parameters, γ the scaling factors of the BN layers, g(γ) = |γ| their L1 regularization function, and λ the penalty weight balancing the two terms.
In this embodiment, the penalty weight λ = 10^(-4).
Through this step, unimportant channels are automatically identified and removed during training, making the pruned YOLO network more compact.
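The structure of the total loss can be illustrated with a minimal NumPy sketch: the task loss plus an L1 penalty on the BN scaling factors, whose near-zero channels are pruned afterwards. The function name and split into two terms are illustrative, not the patent's code.

```python
import numpy as np


def slimming_total_loss(data_loss: float, bn_scales: list, lam: float = 1e-4) -> float:
    """Total loss = task loss + lam * sum(|gamma|) over all BN scaling factors.

    Channels whose gamma is driven toward zero by the L1 term can be
    pruned afterwards, yielding a compact network.
    """
    l1_penalty = sum(float(np.abs(g).sum()) for g in bn_scales)
    return data_loss + lam * l1_penalty
```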
Step 3, marking the preset obstacle.
In a specific implementation, the preset obstacle is marked with a circumscribed rectangular frame (bounding box).
Step 4, determining the direction and number of pixels by which the clipping center point of the full-frame picture is offset, according to the turning direction and turning angle of the vehicle.
As shown in FIG. 2, C_s is the clipping center point of the full-frame picture in standard mode, and C_t is the clipping center point of the full-frame picture when the vehicle turns right.
In practice, the relationship between the steering-wheel angle and the number of pixels by which the clipping center point is offset may be calibrated in advance; for example, a steering-wheel rotation of 1 degree gives an offset of 10 pixels, a rotation of 10 degrees gives 50 pixels, and so on.
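The calibrated angle-to-offset relationship can be held in a lookup table and interpolated at runtime. The table below is hypothetical: the patent only gives two sample points (1 degree to 10 pixels, 10 degrees to 50 pixels); the other entries and the linear interpolation are assumptions.

```python
import numpy as np

# Hypothetical calibration table: steering-wheel angle (deg) -> pixel offset.
# Only the 1-deg and 10-deg points come from the patent's example.
CAL_ANGLES = np.array([0.0, 1.0, 10.0, 30.0])
CAL_OFFSETS = np.array([0.0, 10.0, 50.0, 120.0])


def clipping_center_offset(angle_deg: float) -> tuple:
    """Return (direction, offset_px) for the clipping center point."""
    direction = "right" if angle_deg >= 0 else "left"
    offset = float(np.interp(abs(angle_deg), CAL_ANGLES, CAL_OFFSETS))
    return direction, round(offset)
```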
Step 5, adjusting the picture displayed on the CMS display screen according to the offset direction and number of pixels of the clipping center point.
In a specific implementation, an appropriate image is cropped from the full-frame picture centered on the clipping center point C_t (e.g., taking the magnification requirements into account) to obtain a steering image, and the cropped steering image is then displayed centered on the CMS display screen.
Step 6, calculating the position of the preset obstacle relative to the host vehicle, and marking the preset obstacle with a preset color if that position lies within the early-warning area.
For example, when the preset obstacle is within the early-warning area, its target frame is rendered red, with the red deepening as the distance decreases; the target frame may also flash once the distance falls below a preset distance.
The early-warning area is a previously calibrated area requiring the driver's attention.
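The distance-to-color rendering rule above can be sketched as below. The warning and flash radii are hypothetical calibration values, not figures from the patent.

```python
def warning_color(distance_m: float, warn_radius_m: float = 5.0,
                  flash_radius_m: float = 2.0) -> dict:
    """Map obstacle distance to a target-frame rendering style.

    Inside the warning area the frame is red, deeper red as the obstacle
    gets closer; inside flash_radius_m the frame also flashes.
    Both radii are assumed calibration values.
    """
    if distance_m >= warn_radius_m:
        return {"color": None, "flash": False}       # outside the warning area
    closeness = 1.0 - distance_m / warn_radius_m     # 0 (far) .. 1 (near)
    red = int(128 + 127 * closeness)                 # deeper red when closer
    return {"color": (red, 0, 0), "flash": distance_m < flash_radius_m}
```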
In this embodiment, calculating the position of the preset obstacle relative to the host vehicle comprises:
step 601, establishing a world coordinate system with the projection of the optical center of the CMS camera onto the road as the origin, the upward direction perpendicular to the road as the positive Z axis, the direction opposite to vehicle travel as the positive Y axis, and the positive X axis determined by the left-hand rule.
As shown in FIG. 3, O_c is the optical center of the CMS camera and O_w its projection point on the road; O_w-X_wY_wZ_w is the world coordinate system, O_p-uv the pixel coordinate system, and O_i-xy the image coordinate system. Q is the position of the vehicle in the world coordinate system, Q' is the projection of Q onto the X_w axis, and q, q' are the image points of Q and Q' in the pixel coordinate system, respectively.
Step 602, acquiring the height h of the CMS camera above the ground and the downward tilt angle α of the CMS camera relative to the horizontal.
Step 603, obtaining the pixel coordinates (u_q, v_q) of the preset obstacle in the pixel coordinate system.
Step 604, calculating the lateral distance x_w and the longitudinal distance y_w between the preset obstacle and the host vehicle according to a preset distance formula.
In this embodiment, the preset distance formula is:
y_w = h / tan(α + arctan((v_q - v_0)/f_y))
x_w = (u_q - u_0) * h / (f_x * sin(α + arctan((v_q - v_0)/f_y)))
where (u_0, v_0) are the offset coordinates of the optical axis of the CMS camera in the image coordinate system, in pixels; d_x and d_y are the unit pixel sizes of the CMS camera sensor; and f_x, f_y are the camera focal lengths in pixels (f_x = f/d_x, f_y = f/d_y, with f the physical focal length).
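The flat-road ranging of step 604 can be sketched as follows. The patent gives its distance formulas only as images, so this is a reconstruction of the standard flat-ground pinhole-model expressions using the listed symbols; treat it as an assumption.

```python
import math


def obstacle_ground_position(u_q, v_q, u0, v0, fx, fy, h, alpha):
    """Flat-road position of an image point under the pinhole model.

    h     : camera height above the ground (m)
    alpha : downward tilt of the optical axis from horizontal (rad)
    fx,fy : focal lengths in pixels; (u0, v0) optical-axis offset in pixels.
    Reconstruction of the standard inverse-perspective formulas; the
    patent's exact expressions are not reproduced here.
    """
    beta = math.atan((v_q - v0) / fy)                # angle below the optical axis
    depress = alpha + beta                           # total depression angle of the ray
    y_w = h / math.tan(depress)                      # longitudinal ground distance
    x_w = (u_q - u0) * h / (fx * math.sin(depress))  # lateral offset
    return x_w, y_w
```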
In another embodiment of the present invention, after the step 6, the method further includes:
and 7, when the fact that the vehicle does not turn is detected, acquiring the current speed of the vehicle, and adjusting the size of the cutting picture according to the current speed.
In this embodiment, the step 7 includes:
when the current vehicle speed exceeds a first speed threshold V 1 Then, the relation between the size of the clipping picture and the current vehicle speed is determined through a first preset relation:
number of horizontal pixels W of clipping picture v1 =min{W f ,W 0 *(1+e (v-v 1 )/v 0 )},
Number of vertical pixel lengths H of clipping picture v1 =k*W v1
When the current vehicle speed is lower than a second speed threshold V 2 Then, the relation between the size of the clipping picture and the current vehicle speed is determined by a second preset relation:
number of horizontal pixels W of clipping picture v2 =W 0 /(1+e- (v-v 2 )/v 0 ),
Number of vertical pixel lengths H of clipping picture v2 =k*W v2
Wherein W is f Representing the horizontal resolution of the full frame, W 0 Representing the horizontal resolution of the CMS display screen, e representing a natural number, k representing the ratio of vertical pixels to horizontal pixels of the CMS display screen, min { } representing the smaller of themV represents the current vehicle speed, V 0 Indicating a reference vehicle speed.
The reference vehicle speed V_0 can be obtained by calibration against the display effect.
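The two speed-dependent sizing relations of step 7 can be sketched as below. All numeric values (thresholds V_1, V_2, reference speed V_0, and the resolutions) are hypothetical examples; the patent leaves them to calibration.

```python
import math


def crop_size(v, W_f=1920, W_0=800, k=0.75, V1=80.0, V2=30.0, V0=20.0):
    """Crop-picture size (width, height) as a function of vehicle speed.

    Above V1 the crop widens (wider rear view at high speed), capped at the
    full-frame width W_f; below V2 it narrows (magnified view at low speed).
    Between the thresholds the standard-mode width W_0 is assumed.
    """
    if v > V1:
        W = min(W_f, W_0 * (1 + math.exp((v - V1) / V0)))
    elif v < V2:
        W = W_0 / (1 + math.exp(-(v - V2) / V0))
    else:
        W = W_0
    W = int(round(W))
    return W, int(round(k * W))
```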
Step 8, displaying the adjusted clipping picture on the CMS display screen.
In this embodiment, the step 8 includes:
if the horizontal pixel count of the clipping picture is greater than that of the CMS display screen, displaying it by decimation (pixel extraction);
and if the horizontal pixel count of the clipping picture is smaller than that of the CMS display screen, displaying it by interpolation (pixel insertion).
For example, if the CMS display screen is 800 pixels wide and the clipping picture is 1600 pixels wide, every other pixel of each row of the clipping picture is extracted before display on the CMS display screen; if the clipping picture is 400 pixels wide, its pixels are interpolated up to 800 before display on the CMS display screen.
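The decimation/interpolation example can be sketched as below for a single-channel (grayscale) crop; whether the patent decimates rows or columns is ambiguous in translation, so this sketch assumes horizontal (per-row) resampling, matching the horizontal pixel counts discussed above.

```python
import numpy as np


def fit_to_display(crop: np.ndarray, display_width: int = 800) -> np.ndarray:
    """Match the crop width to the display by column decimation or interpolation.

    Assumes a 2-D (grayscale) array. A 1600-px-wide crop is decimated by
    taking every other column down to 800 px; a 400-px crop is linearly
    interpolated up to 800 px, mirroring the patent's example.
    """
    h, w = crop.shape[:2]
    if w == display_width:
        return crop
    if w > display_width:
        step = w // display_width
        return crop[:, ::step][:, :display_width]  # drop columns (decimation)
    # upsample each row by linear interpolation
    x_new = np.linspace(0, w - 1, display_width)
    x_old = np.arange(w)
    out = np.stack([np.interp(x_new, x_old, row) for row in crop.astype(float)])
    return out.astype(crop.dtype)
```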
The above disclosure is illustrative of the preferred embodiments of the present invention and should not be construed as limiting the scope of the invention, which is defined by the appended claims.

Claims (10)

1. A CMS visual field control method, comprising:
step 1, initializing the CMS system, acquiring the full-frame pictures captured by the left and right CMS cameras, and displaying them in standard mode;
step 2, when a vehicle turn is detected, detecting whether a preset obstacle is present in the full-frame picture; if so, proceeding to the next step, otherwise repeating this step;
step 3, marking the preset obstacle;
step 4, determining the direction and number of pixels by which the clipping center point of the full-frame picture is offset, according to the turning direction and turning angle of the vehicle;
step 5, adjusting the picture displayed on the CMS display screen according to the offset direction and number of pixels of the clipping center point;
and step 6, calculating the position of the preset obstacle relative to the host vehicle, and marking the preset obstacle with a preset color if that position lies within the early-warning area.
2. The CMS visual field control method of claim 1, wherein detecting whether a preset obstacle is present in the full-frame picture comprises:
step 201, determining whether it is currently day or night from the full-frame picture;
step 202, if it is daytime, directly calling a pruned YOLO network to detect preset obstacles in the full-frame picture; if it is nighttime, first applying Gamma-transform enhancement to the full-frame picture to generate an enhanced full-frame picture, then calling the pruned YOLO network to detect preset obstacles in the enhanced full-frame picture.
3. The CMS visual field control method of claim 2, wherein the step 201 comprises:
step 2011, converting the full-frame picture into the HSV color space to generate an HSV full-frame image;
step 2012, obtaining the brightness average of the HSV full-frame image;
wherein the gray values of the pixels in the V channel of the HSV full-frame image are accumulated and divided by the total number of pixels to obtain the brightness average;
and step 2013, judging it to be daytime if the brightness average is greater than a preset brightness threshold, and nighttime otherwise.
4. The CMS visual field control method of claim 2, wherein generating the pruned YOLO network comprises:
step 2021, optimizing the trained standard YOLOv3 network with a preset total loss function;
and step 2022, tuning the parameters of the optimized YOLOv3 network until the accuracy gap between it and the standard YOLOv3 network falls within a preset range, yielding the pruned YOLO network.
5. The CMS visual field control method of claim 4, wherein the total loss function is:
L = Σ_(x,y) l(f(x, W), y) + λ · Σ_(γ∈Γ) g(γ)
where L is the total loss, x the training input, y the training target, W the trainable parameters, γ the scaling factors of the BN layers, g(γ) = |γ| their L1 regularization function, and λ the penalty weight balancing the two terms.
6. The CMS vision control method of claim 1, wherein the calculating the relative position of the preset obstacle and the host vehicle comprises:
step 601, taking the projection of the optical center of the CMS camera on the road as the origin, the direction perpendicular to the road and pointing upward as the positive Z-axis, the direction opposite to vehicle travel as the positive Y-axis, determining the positive X-axis according to the left-hand rule, and establishing a world coordinate system;
step 602, acquiring the height h from the CMS camera to the ground, and the overlooking angle alpha of the CMS camera relative to the horizontal direction;
step 603, obtaining the pixel coordinates (u_q, v_q) of the preset obstacle in the pixel coordinate system;
step 604, calculating the lateral distance x_w and the longitudinal distance y_w between the preset obstacle and the host vehicle according to a preset distance formula.
7. The CMS vision control method of claim 6, wherein the predetermined distance formula is:
y_w = h / tan(α + arctan((v_q − v_0) / f_y))
x_w = y_w · (u_q − u_0) / f_x
wherein (u_0, v_0) represents the offset coordinates of the optical axis of the CMS camera in the image coordinate system, in pixels; d_x, d_y represent the size of a unit pixel on the CMS camera sensor, and f_x, f_y are the focal lengths of the camera in pixels.
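Under the standard flat-ground pinhole model, the camera height h and downward pitch α fix where each pixel row intersects the road plane. The following is a sketch consistent with the symbols above, not necessarily the patent's exact formula (the original equations are image attachments not reproduced in the text):

```python
import math

def obstacle_position(u_q: float, v_q: float, u0: float, v0: float,
                      fx: float, fy: float, h: float, alpha: float):
    """Flat-ground distance estimate from one pixel observation.

    h: camera height above the road (m); alpha: downward pitch (rad).
    """
    theta = math.atan((v_q - v0) / fy)   # ray angle below the optical axis
    y_w = h / math.tan(alpha + theta)    # longitudinal ground distance
    x_w = y_w * (u_q - u0) / fx          # lateral offset via similar triangles
    return x_w, y_w
```

For example, a pixel on the principal point of a camera 1 m above the road pitched down 45° maps to a ground point 1 m ahead and 0 m to the side.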
8. The CMS vision control method of claim 1, further comprising, after the step 6:
step 7, when it is detected that the host vehicle is not turning, obtaining the current vehicle speed of the host vehicle, and adjusting the size of the clipping picture according to the current vehicle speed;
and 8, displaying the adjusted clipping picture on a CMS display screen.
9. The CMS vision control method of claim 8, wherein the step 7 comprises:
when the current vehicle speed v exceeds a first speed threshold v_1, the relation between the size of the clipping picture and the current vehicle speed is determined by a first preset relation:
horizontal pixel count of the clipping picture: W_v1 = min{W_f, W_0 · (1 + e^((v − v_1)/v_0))},
vertical pixel count of the clipping picture: H_v1 = k · W_v1;
when the current vehicle speed v is lower than a second speed threshold v_2, the relation between the size of the clipping picture and the current vehicle speed is determined by a second preset relation:
horizontal pixel count of the clipping picture: W_v2 = W_0 / (1 + e^(−(v − v_2)/v_0)),
vertical pixel count of the clipping picture: H_v2 = k · W_v2,
wherein W_f represents the horizontal resolution of the full-frame picture, W_0 represents the horizontal resolution of the CMS display screen, e is the base of the natural logarithm, k represents the ratio of vertical pixels to horizontal pixels of the CMS display screen, min{} takes the smaller of its arguments, v represents the current vehicle speed, and v_0 represents a reference vehicle speed.
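The two relations above can be sketched as a single speed-to-width function (the claim does not specify the behavior between the two thresholds; keeping the display's native width there is an assumption of this sketch):

```python
import math

def crop_width(v: float, w_full: int, w0: int,
               v1: float, v2: float, v0: float) -> int:
    """Horizontal pixel count of the clipping picture as a function of speed v."""
    if v > v1:   # high speed: widen the crop, capped at the full frame
        return int(min(w_full, w0 * (1 + math.exp((v - v1) / v0))))
    if v < v2:   # low speed: shrink the crop below the display width
        return int(w0 / (1 + math.exp(-(v - v2) / v0)))
    return w0    # between thresholds: assumed unchanged

# The vertical pixel count follows as k * crop_width(...), with k the
# display's vertical-to-horizontal pixel ratio.
```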
10. The CMS vision control method of claim 9, wherein the step 8 comprises:
if the horizontal pixel count of the clipping picture is greater than that of the CMS display screen, displaying in an extraction (down-sampling) mode;
and if the horizontal pixel count of the clipping picture is smaller than that of the CMS display screen, displaying in an insertion (interpolation) mode.
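One simple way to realize both display modes is nearest-neighbor resampling of each pixel row to the display width (an illustrative sketch; the patent does not specify the resampling method):

```python
import numpy as np

def fit_row_to_display(row: np.ndarray, w_display: int) -> np.ndarray:
    """Resample one pixel row to the display width by nearest-neighbor indexing.

    Drops pixels when the row is wider than the display (extraction) and
    repeats pixels when it is narrower (insertion).
    """
    w = len(row)
    idx = np.arange(w_display) * w // w_display  # source index per output pixel
    return row[idx]
```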
CN202310088659.6A 2023-01-18 2023-01-18 CMS visual field control method Pending CN116118628A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310088659.6A CN116118628A (en) 2023-01-18 2023-01-18 CMS visual field control method


Publications (1)

Publication Number Publication Date
CN116118628A true CN116118628A (en) 2023-05-16

Family

ID=86309759



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination