KR20170031982A - Monitoring system for vehicle - Google Patents

Monitoring system for vehicle

Info

Publication number
KR20170031982A
KR20170031982A
Authority
KR
South Korea
Prior art keywords
image
backlight
area
region
processing unit
Prior art date
Application number
KR1020150129627A
Other languages
Korean (ko)
Inventor
백승현
안수진
Original Assignee
에스엘 주식회사
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 에스엘 주식회사 filed Critical 에스엘 주식회사
Priority to KR1020150129627A priority Critical patent/KR20170031982A/en
Publication of KR20170031982A publication Critical patent/KR20170031982A/en

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08Interaction between the driver and the control system
    • B60W50/14Means for informing the driver, warning the driver or prompting a driver intervention
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/02Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to ambient conditions
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B3/00Simple or compound lenses
    • G02B3/12Fluid-filled or evacuated lenses
    • H04N5/2258
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08Interaction between the driver and the control system
    • B60W50/14Means for informing the driver, warning the driver or prompting a driver intervention
    • B60W2050/146Display means

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Mathematical Physics (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

According to an embodiment of the present invention, a monitoring system for a vehicle comprises: a camera having a fluid lens of variable curvature, which captures the outside of the vehicle; an image processing unit that detects a backlight area in the image captured by the camera; a lens control unit that controls the curvature of the fluid lens so that the image is shifted in a direction that avoids at least a portion of the backlight area, maintaining the focal distance of the fluid lens while changing its focal position; and a display unit that visually presents the image.

Description

Monitoring system for vehicle

BACKGROUND OF THE INVENTION 1. Field of the Invention [0002] The present invention relates to a monitoring system for a vehicle, and more particularly, to a monitoring system for a vehicle that provides an image outside the vehicle.

Generally, a vehicle is equipped with side mirrors that allow the driver to grasp road conditions to the left, right, or rear of the vehicle.

The side mirrors are installed near the front doors on both sides of the vehicle and allow the driver to recognize surrounding vehicles located to the side or rear, so that the driver can safely change lanes and maintain a safe following distance.

However, since the side mirrors protrude outward from both sides of the vehicle to a considerable extent, they increase air resistance while the vehicle is running, which lowers fuel economy and increases noise.

Without side mirrors, air resistance can be reduced by up to 7%, and fuel consumption can accordingly be reduced by about 3%. For this reason, research is under way to replace conventional side mirrors with cameras in order to design high-efficiency, high-performance cars.

However, when backlight appears in an image photographed by a camera, it becomes difficult to secure the driver's field of view.

An object of the present invention is to provide a vehicle monitoring system capable of avoiding backlight in the displayed image.

The problems of the present invention are not limited to the above-mentioned problems, and other problems not mentioned can be clearly understood by those skilled in the art from the following description.

According to another aspect of the present invention, there is provided a vehicle monitoring system including: a camera for capturing the outside of a vehicle, the camera including a fluid lens having a variable curvature; an image processing unit for detecting a backlight region in an image captured by the camera; a lens control unit for controlling the curvature of the fluid lens so that the image is shifted in a direction in which at least a part of the backlight region is avoided, while the focal distance of the fluid lens is maintained and its focal position is changed; and a display unit for visually presenting the image.

The image processing unit may divide the image into a plurality of regions and calculate a luminance average value for each region; calculate a cumulative luminance average value for each region over the images photographed at past points in time within a predetermined time before the photographing time of the backlight detection subject image; and determine that a backlight region exists in any region of the backlight detection subject image whose luminance average value exceeds the cumulative luminance average value of that region by a first reference value or more.
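The region-level detection described above can be sketched as follows. This is an illustrative reading of the text, not the patent's implementation: the 2x2 split and the value of `ref1` (the "first reference value") are assumptions.

```python
def detect_backlight_regions(frame, cumulative_means, ref1=40):
    """Flag 2x2 regions of an 8-bit grayscale frame (a list of row lists)
    whose mean luminance exceeds that region's cumulative mean by at
    least ref1, the 'first reference value' (40 is an assumed value)."""
    h, w = len(frame), len(frame[0])
    regions = [(0, h // 2, 0, w // 2),   # Q1: top-left
               (0, h // 2, w // 2, w),   # Q2: top-right
               (h // 2, h, 0, w // 2),   # Q3: bottom-left
               (h // 2, h, w // 2, w)]   # Q4: bottom-right
    flagged, means = [], []
    for i, (r0, r1, c0, c1) in enumerate(regions):
        pixels = [frame[r][c] for r in range(r0, r1) for c in range(c0, c1)]
        m = sum(pixels) / len(pixels)
        means.append(m)
        if m >= cumulative_means[i] + ref1:  # backlight suspected here
            flagged.append(i)
    return flagged, means
```

A bright top-right quadrant against a stable cumulative mean of 50 would, for example, be the only region flagged.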

Among the plurality of regions of the backlight detection subject image, the image processing unit may determine the region having the highest luminance, out of the regions determined to contain a backlight region, as the backlight region.

The image processing unit may sequentially calculate, according to a predetermined rule, the luminance values of the plurality of pixels constituting a region determined to contain a backlight region, and determine as the backlight region those pixels whose luminance value exceeds the cumulative luminance average value of that region by a second reference value or more.

The image processing unit may further determine as the backlight region those pixels, among the pixels surrounding a pixel already determined to belong to the backlight region, whose luminance value exceeds the per-region cumulative luminance average value by the second reference value or more.

The image processing unit may determine a pixel as belonging to the backlight region when, among the pixels surrounding it, the number of pixels whose luminance value exceeds the cumulative luminance average value by the second reference value or more is equal to or greater than a reference number.
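The pixel-level expansion in the paragraphs above amounts to region growing from a bright seed pixel. A minimal sketch, assuming a value for `ref2` (the "second reference value") and omitting the reference-count gate on neighbours that the text also describes:

```python
from collections import deque

def grow_backlight_region(frame, seed, cum_mean, ref2=60):
    """Starting from a seed pixel already judged to be backlight, add
    neighbouring pixels whose luminance exceeds the region's cumulative
    mean by at least ref2, the 'second reference value' (60 is assumed).
    Sketch only: the patent additionally requires a minimum count of
    bright neighbours before a pixel joins the region."""
    h, w = len(frame), len(frame[0])
    region, queue = {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (0 <= nr < h and 0 <= nc < w and (nr, nc) not in region
                        and frame[nr][nc] >= cum_mean + ref2):
                    region.add((nr, nc))
                    queue.append((nr, nc))
    return region
```

Run on a frame containing one bright 2x2 patch, the grown region is exactly that patch.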

The image processing unit may calculate the center point of the backlight region and the center point of the image, and the lens control unit may control the curvature of the fluid lens such that the image is moved along a virtual extension line connecting the center point of the backlight region and the center point of the image.

The image processing unit may calculate the center point of the backlight region as the average of the coordinate values of the pixels constituting the outermost boundary of the backlight region.

The moving distance of the image may be inversely proportional to the distance between the center point of the backlight region and the center point of the image.
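The centre-point geometry above can be sketched as follows. The proportionality constant `k` is an assumed illustration value, and for simplicity the centre here averages all backlight pixels rather than only the outermost ones as the text specifies:

```python
import math

def image_shift(backlight_pixels, height, width, k=1000.0):
    """Direction and magnitude for shifting the image: along the line from
    the backlight centre toward the image centre, with a magnitude that is
    inversely proportional to the centre-to-centre distance, as stated in
    the text. k is an assumed proportionality constant."""
    n = len(backlight_pixels)
    br = sum(r for r, _ in backlight_pixels) / n   # backlight centre row
    bc = sum(c for _, c in backlight_pixels) / n   # backlight centre col
    ir, ic = height / 2.0, width / 2.0             # image centre
    dr, dc = ir - br, ic - bc                      # away from the backlight
    dist = math.hypot(dr, dc)
    if dist == 0:
        return (0.0, 0.0)
    scale = k / (dist * dist)                      # unit vector * (k / dist)
    return (dr * scale, dc * scale)
```

A backlight near the image centre thus produces a larger shift than one at a corner, matching the inverse-proportionality rule.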

The image processing unit may set a virtual center area at the center of the image and determine whether the portion of the backlight region included in the center area is less than or equal to a preset threshold ratio, and the lens control unit may control the fluid lens so that at least a part of the backlight region is avoided in the image when the portion of the backlight region included in the center area is determined to be equal to or less than the threshold ratio.

When the portion of the backlight region included in the center area is determined to be equal to or less than the threshold ratio, the image processing unit may calculate the center point of the part of the backlight region not included in the center area and the center point of the image, and the lens control unit may control the curvature of the fluid lens such that the image is moved along a virtual extension line connecting these two center points.
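The centre-area gate can be sketched as a simple ratio test. The size of the virtual centre area (`center_frac`) and the `threshold_ratio` are assumed values, not taken from the patent:

```python
def can_avoid_backlight(backlight_pixels, height, width,
                        center_frac=0.5, threshold_ratio=0.5):
    """Decide whether shifting the image is worthwhile: define a virtual
    centre area and check whether the share of backlight pixels inside it
    is at or below a threshold ratio. center_frac and threshold_ratio are
    assumed illustration values."""
    r0, r1 = height * (1 - center_frac) / 2, height * (1 + center_frac) / 2
    c0, c1 = width * (1 - center_frac) / 2, width * (1 + center_frac) / 2
    inside = sum(1 for r, c in backlight_pixels
                 if r0 <= r < r1 and c0 <= c < c1)
    return inside / len(backlight_pixels) <= threshold_ratio
```

A backlight hugging the image edge passes the test (shifting can avoid it); one sitting entirely in the centre fails it.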

The camera may include a first camera for photographing the left rear side of the vehicle and a second camera for photographing the right rear side of the vehicle.

Other specific details of the invention are included in the detailed description and drawings.

The embodiments of the present invention have at least the following effects.

Backlight can be avoided in an image when backlight appears in an image taken by a camera.

The effects according to the present invention are not limited by the contents exemplified above, and more various effects are included in the specification.

FIG. 1 is a block diagram showing a monitoring system for a vehicle according to an embodiment of the present invention.
FIG. 2 is a cross-sectional view schematically illustrating a fluid lens according to an embodiment of the present invention.
FIG. 3 is a view showing an example of a vehicle equipped with a vehicle monitoring system according to an embodiment of the present invention.
FIG. 4 is a view showing another example of a vehicle equipped with a vehicle monitoring system according to an embodiment of the present invention.
FIGS. 5 to 7 are flowcharts illustrating a backlight avoidance control method using a vehicle monitoring system according to an embodiment of the present invention.
FIG. 8 is a diagram for explaining steps S10 and S11.
FIG. 9 is a view for explaining a method of moving the focal position of the fluid lens using the lens operation switch in the manual control mode.
FIGS. 10 to 13 are diagrams for explaining a method of moving the focal position of the fluid lens when a backlight region exists in one of the regions divided in the automatic control mode.
FIG. 14 is a diagram for explaining a method of moving the focal position of the fluid lens when a backlight region exists in three of the regions divided in the automatic control mode.
FIG. 15 is a diagram for explaining a method of moving the focal position of the fluid lens when a backlight region spans two adjacent regions among the regions divided in the automatic control mode.

BRIEF DESCRIPTION OF THE DRAWINGS The advantages and features of the present invention, and the manner of achieving them, will become apparent with reference to the embodiments described hereinafter in conjunction with the accompanying drawings. The present invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the scope of the invention to those skilled in the art; the invention is defined only by the scope of the claims. Like reference numerals refer to like elements throughout the specification.

Further, the embodiments described herein will be described with reference to cross-sectional views and/or schematic drawings that are idealized illustrations of the present invention. Accordingly, the illustrated shapes may be modified by manufacturing techniques and/or tolerances, and in the drawings each component may be somewhat enlarged or reduced for convenience of explanation.

Hereinafter, the present invention will be described with reference to the drawings for explaining a vehicle monitoring system and a backlight avoidance control method using the same according to an embodiment of the present invention.

FIG. 1 is a block diagram showing a monitoring system for a vehicle according to an embodiment of the present invention, and FIG. 2 is a cross-sectional view schematically showing a fluid lens according to an embodiment of the present invention.

Referring to FIG. 1, a vehicle monitoring system 1 according to an embodiment of the present invention includes a first camera 10, a second camera 20, a lens control unit 30, an image processing unit 40, a display unit 50, a mode selection switch 60, and a lens operation switch 70.

The first camera 10 includes a first fluid lens 11 and a first image sensor 12, and the second camera 20 includes a second fluid lens 21 and a second image sensor 22.

The first fluid lens 11 and the second fluid lens 21 may be lenses whose curvature is controlled by applying a voltage to a conductive fluid using the electrowetting phenomenon.
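As background not stated in the patent itself, the voltage dependence of the contact angle in electrowetting-on-dielectric lenses of this kind is commonly modeled by the Young–Lippmann relation:

```latex
\cos\theta(V) = \cos\theta_0 + \frac{\varepsilon_0 \varepsilon_r}{2\,\gamma\, d}\, V^2
```

where $\theta_0$ is the contact angle at zero voltage, $\varepsilon_r$ and $d$ are the relative permittivity and thickness of the insulating layer (the insulator 11g here), $\gamma$ is the interfacial tension between the two fluids, and $V$ is the applied voltage. Changing $V$ thus reshapes the fluid interface and, with it, the lens curvature.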

Alternatively, the first fluid lens 11 and the second fluid lens 21 may be lenses whose curvature is controlled by changing the volume of fluid flowing into and out of the lens.

The curvatures of the first fluid lens 11 and the second fluid lens 21 are individually controlled by the lens control unit 30 described later.

The first image sensor 12 may be a component that converts the light received through the first fluid lens 11 into an electrical signal, and the second image sensor 22 may be a component that converts the light received through the second fluid lens 21 into an electrical signal. Examples of the first image sensor 12 and the second image sensor 22 include a CCD (Charge-Coupled Device) and a CMOS (Complementary Metal-Oxide-Semiconductor) sensor.

The lens control unit 30 controls the curvatures of the first fluid lens 11 and the second fluid lens 21 to change the focal positions of the first fluid lens 11 and the second fluid lens 21. At this time, the focal lengths of the first fluid lens 11 and the second fluid lens 21 can be kept the same.

When the first fluid lens 11 and the second fluid lens 21 are fluid lenses whose curvature is changed by the electrowetting phenomenon, the lens control unit 30 can change the curvature and focal position of the first fluid lens 11 and the second fluid lens 21 by controlling the voltage applied to the fluid lenses 11 and 21.

Alternatively, when the first fluid lens 11 and the second fluid lens 21 are fluid lenses whose curvature is controlled by the volume of fluid entering and leaving the lens, the lens control unit 30 can change the curvature and focal position of the first fluid lens 11 and the second fluid lens 21 by controlling the fluid supplied to the fluid lenses 11 and 21.

FIG. 2 schematically shows the structure of a fluid lens using the electrowetting phenomenon as an example of the fluid lenses 11 and 21.

As shown in FIG. 2, the first fluid lens 11 includes two glass plates 11a and 11b arranged to face each other at a regular interval.

The two glass plates 11a and 11b can be sealed by the first electrode 11e and the second electrode 11f. The first electrode 11e is coated with an insulator 11g.

The sealed space between the two glass plates 11a and 11b is filled with two fluids F1 and F2 that do not mix with each other. The first fluid F1 is a conductive fluid and the second fluid F2 is a nonconductive fluid. For example, water may be used as the first fluid F1, and oil as the second fluid F2.

When FIG. 2 is taken as a horizontal cross-sectional view of the fluid lens 11, the first electrode 11e is located on the left and right. The first electrode 11e on the left side and the first electrode 11e on the right side are connected to different voltage sources 11c and 11d, respectively, and are electrically independent, so that different voltages can be applied to them.

The surface tension of the first fluid F1 changes according to the voltages applied to the first electrode 11e on the left side and the first electrode 11e on the right side, so that the shape of the interface between the first fluid F1 and the second fluid F2 changes and the refraction angle of the light changes. As a result, the focus of the fluid lens 11 changes.

Specifically, when the voltages applied to the left and right first electrodes 11e are the same, the interface takes the dotted-line shape in FIG. 2. When the voltages applied to the left and right first electrodes 11e differ, the surface tension of the first fluid F1 adjacent to the left first electrode 11e differs from that of the first fluid F1 adjacent to the right first electrode 11e, so the interface changes to the dash-dotted or solid line shape in FIG. 2.

As a result, as shown in FIG. 2, the focus position changes and the image entering the first image sensor 12 changes.

For example, when the shape of the interface changes from the dotted line to the dash-dotted line, the image photographed through the fluid lens 11 is shifted to the right, as shown in FIG. 2. When the shape of the interface changes from the dotted line to the solid line, the image photographed through the fluid lens 11 is shifted to the left.

On the other hand, when FIG. 2 is taken as a vertical cross-sectional view of the fluid lens 11, the first electrode 11e is located at the upper and lower sides. The upper and lower first electrodes 11e are connected to different voltage sources 11c and 11d, respectively, and are electrically independent, so that different voltages can be applied to them.

The surface tension of the first fluid F1 changes according to the voltages applied to the upper and lower first electrodes 11e, so that the shape of the interface between the first fluid F1 and the second fluid F2 changes and the refraction angle of the light changes. As a result, the focus of the fluid lens 11 changes.

For example, when the shape of the interface changes from the dotted line to the dash-dotted line, the image photographed through the fluid lens 11 is shifted upward; when the shape of the interface changes from the dotted line to the solid line, the image is shifted downward. Since the second fluid lens 21 has the same or a similar structure as the first fluid lens 11, a detailed description of the second fluid lens 21 is omitted.

The focal positions of the first fluid lens 11 and the second fluid lens 21 are adjusted in three dimensions through the structure of the fluid lenses 11 and 21 described above.

The image processing unit 40 receives the image information converted into an electrical signal from the first image sensor 12 and the second image sensor 22, and reconstructs it as an image.

When the automatic control mode is selected with the mode selection switch 60, the image processing unit 40 detects backlight in the reconstructed image. The specific process for detecting the backlight will be described later. The image processing unit 40 may transmit a control command to the lens control unit 30 so that the backlight is avoided in the image, based on the position of the backlight detected in the image. The lens control unit 30 controls the curvature of the first fluid lens 11 and/or the second fluid lens 21 according to the control command transmitted from the image processing unit 40, changing the focal position of the first fluid lens 11 and/or the second fluid lens 21 so that the backlight is avoided in the image.

In addition, the image processing unit 40 includes a storage unit (not shown), and can store the electrical signals received from the first image sensor 12 and the second image sensor 22, or the image frames reconstructed from them, for a predetermined time.

The display unit 50 visually presents the image reconstructed by the image processing unit 40. The driver of the vehicle can visually confirm the images taken by the first camera 10 and the second camera 20 through the display unit 50.

The display unit 50 may be configured to divide one screen into two and display the image captured by the first camera 10 and the image captured by the second camera 20 simultaneously, or may display the image captured by the first camera 10 and the image captured by the second camera 20 alternately.

On the other hand, the mode selection switch 60 is a switch for selecting between the automatic control mode and the manual control mode, and is operated by the driver.

When the driver selects the automatic control mode using the mode selection switch 60, the lens control unit 30 is controlled by the image processing unit 40. That is, in the automatic control mode, the image processing unit 40 senses backlight in the image and transmits a control command to the lens control unit 30 so that the backlight is avoided in the image according to its position. The lens control unit 30 controls the curvature of the first fluid lens 11 and/or the second fluid lens 21 based on the control command received from the image processing unit 40.

When the driver selects the manual control mode using the mode selection switch 60, the lens control unit 30 controls the curvature of the first fluid lens 11 and/or the second fluid lens 21 according to the operation of the lens operation switch 70.

In the manual control mode, the driver can view the displayed image through the display unit 50 and manipulate the lens operation switch 70 in the up, down, left, and right directions to shift the image in the corresponding direction. The lens control unit 30 controls the curvature of the first fluid lens 11 and/or the second fluid lens 21 in accordance with the operation direction of the lens operation switch 70, so that the direction of the image captured by the first camera 10 and/or the second camera 20 is adjusted in the up, down, left, and right directions.

Although not shown, the lens operation switch 70 may include a button for selecting either the first camera 10 or the second camera 20. When the driver selects one of the first camera 10 and the second camera 20 using this button and operates the lens operation switch 70 in the up, down, left, or right direction, the photographing direction of the selected camera is adjusted accordingly.

Even in the automatic control mode, when the driver operates the lens operation switch 70, the lens control unit 30 can control the curvature of the first fluid lens 11 and/or the second fluid lens 21 according to the operation.

FIG. 3 is a view showing an example of a vehicle equipped with a vehicle monitoring system according to an embodiment of the present invention.

As shown in FIG. 3, the first camera 10 is mounted adjacent to the left side mirror 2 and the second camera 20 is mounted adjacent to the right side mirror 3, so that they photograph the left rear portion and the right rear portion of the vehicle, respectively.

As shown in FIG. 3, the display unit 50 may be provided in a cluster located behind the steering wheel 4. Alternatively, a monitor 5 mounted on the center fascia of the vehicle may be used as the display unit 50.

As shown in FIG. 3, the mode selection switch 60 and the lens operation switch 70 may be provided on the steering wheel 4. The lens operation switch 70 may include four direction buttons so as to be operable in the up, down, left, and right directions.

In the vehicle according to the embodiment of FIG. 3, the driver can secure the left and right rear views of the vehicle either through the side mirrors 2 and 3 or through the images captured by the first camera 10 and the second camera 20. In this case, the screen of the display unit 50 is divided into two so that the image captured by the first camera 10 and the image captured by the second camera 20 can be displayed simultaneously.

FIG. 4 is a view showing another example of a vehicle equipped with a vehicle monitoring system according to an embodiment of the present invention.

As shown in Fig. 4, the vehicle according to the present embodiment does not have left and right side mirrors, and there are a first camera 10 and a second camera 20 in place of the side mirrors.

The first camera 10 is installed on the outside of the driver's seat door and photographs the left rear of the vehicle, and the second camera 20 is installed on the outside of the passenger-side door and photographs the right rear of the vehicle. The first camera 10 and the second camera 20 may be provided at any position from which they can photograph the left rear side and the right rear side of the vehicle.

Similar to the case of FIG. 3, the mode selection switch 60 and the lens operation switch 70 may be provided on the steering wheel 4. The lens operation switch 70 may include four direction buttons so as to be operable in the up, down, left, and right directions.

As shown in FIG. 4, the display unit 50 includes a first display unit 51 installed inside the vehicle adjacent to the driver's seat door and a second display unit 52 installed inside the vehicle adjacent to the passenger-side door.

An image captured by the first camera 10 is displayed on the first display unit 51, and an image captured by the second camera 20 is displayed on the second display unit 52. Accordingly, the driver can secure the view of the left rear side and the right rear side of the vehicle through the first display unit 51 and the second display unit 52.

Alternatively, as in the case of FIG. 3, a monitor 5 mounted on the center fascia of the vehicle, or the cluster located behind the steering wheel 4, may be used as the display unit 50.

Hereinafter, a backlight avoidance control method using the vehicle monitoring system 1 according to an embodiment of the present invention will be described.

FIGS. 5 to 7 are flowcharts illustrating a backlight avoidance control method using a vehicle monitoring system according to an embodiment of the present invention.

As shown in FIG. 5, the backlight avoidance control method according to an embodiment of the present invention includes a step S1 of determining whether or not the mode is an automatic control mode.

In step S1, it is determined whether the mode selection switch 60 is in the automatic control mode or in the manual control mode.

When the automatic control mode is selected with the mode selection switch 60, the operation proceeds to the image reception step S4 (see FIG. 6) described later. When the manual control mode is selected with the mode selection switch 60, the control proceeds to step S2, in which the focus of the fluid lens is controlled manually.

In the manual control mode, the lens control unit 30 is controlled by the lens operation switch 70 without being controlled by the image processing unit 40, unlike the automatic control mode.

The lens operation switch 70 may include a directional control switch that is adjustable in the up, down, left, and right directions, and a camera selection switch that can select either the first camera 10 or the second camera 20.

The driver can select the camera to adjust the shooting direction by using the camera selection switch, and adjust the shooting direction of the selected camera by using the direction control switch.

In step S2, the lens control unit 30 controls the curvature of the first fluid lens 11 or the second fluid lens 21 in accordance with the operation of the camera selection switch and the direction control switch, so that the direction of the image photographed by the selected camera is adjusted in the up, down, left, and right directions.

FIG. 9 is a view for explaining a method of moving the focal position of the fluid lens using the lens operation switch in the manual control mode, and shows an image photographed by the first camera 10, which photographs the left rear of the vehicle.

As shown in FIG. 9, when there is strong backlight C in the image, a trailing vehicle may not be properly displayed in the image due to light blur. In such a case, the driver may be unable to change lanes to the left because the view to the left rear is not sufficiently secured, or an accident may be caused by a lane change.

When the backlight C is caused by sunlight, the driver may be forced to drive while viewing an image containing the backlight C for a considerable time during straight-ahead driving.

Accordingly, the driver can select the first camera 10 on the left side of the vehicle using the camera selection switch, and shift the photographing direction of the first camera 10 toward the lower right using the right and down direction switches.

The lens control unit 30 controls the curvature of the first fluid lens 11 of the first camera 10 in accordance with the driver's operation of the lens operation switch 70, moving the focus of the first fluid lens 11 in the rightward and downward directions.

As the focus of the first fluid lens 11 is shifted rightward and downward, the photographing direction of the first camera 10 is shifted to the lower right. Thus, the image corresponding to the dotted-line box in FIG. 9 is displayed on the display unit 50.

Since the backlight C is avoided in the image of the dotted-line box, the light blur due to the backlight C is relieved and the trailing vehicle on the left side is represented more clearly in the image.

Hereinafter, the backlight avoidance control method according to the automatic control mode will be described.

If it is determined in step S1 that the automatic control mode has been selected with the mode selection switch 60, the image processing unit 40 receives the image information converted into electrical signals by the first image sensor 12 and the second image sensor 22 (S4). The received image information becomes the backlight detection subject image, and the image processing unit 40 determines whether or not a backlight area exists in the backlight detection subject image through the steps described below.

The image processing unit 40 divides the received image information into a plurality of areas (S5).

As shown in FIGS. 10 to 15, the backlight avoidance control method according to the present embodiment will be described on the basis of dividing an image frame into four regions (Q1, Q2, Q3, and Q4) in a 2×2 array. The method of dividing the image frame may vary according to the embodiment.

For convenience of explanation, the Q1 region is referred to as a first quadrant, the Q2 region as a second quadrant, the Q3 region as a third quadrant, and the Q4 region as a fourth quadrant.

Thereafter, the image processing unit 40 calculates a luminance average value for each divided area (S6).

In step S6, the image processing unit 40 calculates the luminance average value of the pixels belonging to the first quadrant Q1 of the received image frame, the luminance average value of the pixels belonging to the second quadrant Q2, the luminance average value of the pixels belonging to the third quadrant Q3, and the luminance average value of the pixels belonging to the fourth quadrant Q4.

The image processing unit 40 can represent the brightness value of a pixel as 8-bit information and manage it as one of 256 levels (0 to 255). In this case, the luminance average value is a value between 0 and 255 inclusive.
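The per-region averaging of steps S5 and S6 can be sketched as follows. This is an illustrative outline only; the function name, the quadrant-to-array mapping (first quadrant taken as upper right, following the usual convention), and the use of a NumPy grayscale array are assumptions, not part of the patent:

```python
import numpy as np

def region_luminance_averages(frame):
    """Split an 8-bit grayscale frame into a 2x2 array of regions
    (Q1..Q4) and return the mean luminance (0-255) of each region."""
    h, w = frame.shape
    hy, hx = h // 2, w // 2
    quadrants = {
        "Q1": frame[:hy, hx:],  # upper right (first quadrant, assumed)
        "Q2": frame[:hy, :hx],  # upper left  (second quadrant)
        "Q3": frame[hy:, :hx],  # lower left  (third quadrant)
        "Q4": frame[hy:, hx:],  # lower right (fourth quadrant)
    }
    return {name: float(q.mean()) for name, q in quadrants.items()}
```

With a frame whose upper-right quarter is bright and the rest dark, only Q1's average is high, which is the situation FIG. 8 depicts.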

Thereafter, the image processing unit 40 calculates a cumulative luminance average value for each divided area (S7).

In step S7, the image processing unit 40 calculates, for each divided area, a cumulative luminance average value over the image frames photographed within a predetermined time before the photographing time of the backlight detection subject image. For this, the image processing unit 40 may include a storage unit capable of storing the image frames obtained during that time.

For example, the image processing unit 40 divides each of the image frames photographed within t seconds before the current time T into the four regions Q1, Q2, Q3, and Q4, and calculates the cumulative luminance average value of the pixels belonging to the first quadrant Q1 of those frames, the cumulative luminance average value of the pixels belonging to the second quadrant Q2, the cumulative luminance average value of the pixels belonging to the third quadrant Q3, and the cumulative luminance average value of the pixels belonging to the fourth quadrant Q4.

Alternatively, the image processing unit 40 may extract the N most recent image frames preceding the frame obtained at the current time T, divide each extracted frame into the four regions Q1, Q2, Q3, and Q4, and calculate the cumulative luminance average value of the pixels belonging to each of the first quadrant Q1, the second quadrant Q2, the third quadrant Q3, and the fourth quadrant Q4.

As time elapses, new image frames are added to the calculation and the oldest frames are excluded, so the cumulative luminance average value of each of the regions Q1, Q2, Q3, and Q4 may change over time.
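The sliding-window accumulation of step S7 can be sketched with a bounded queue; the class name and the choice of a fixed frame count N (rather than a time window of t seconds) are assumptions for illustration:

```python
from collections import deque

class CumulativeLuminance:
    """Keeps the per-region averages of the last N frames and exposes a
    cumulative (running) average per region, as in step S7. As new
    frames arrive, the oldest automatically drop out of the window."""

    def __init__(self, window_size):
        self.history = deque(maxlen=window_size)

    def add_frame(self, region_averages):
        # region_averages: dict like {"Q1": 100.0, ...} from step S6
        self.history.append(region_averages)

    def cumulative_average(self):
        keys = self.history[0].keys()
        return {k: sum(f[k] for f in self.history) / len(self.history)
                for k in keys}
```

The `deque(maxlen=...)` discards the oldest entry on overflow, matching the text's description of past frames leaving the calculation as time passes.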

Thereafter, the image processing unit 40 compares the luminance average value of each area calculated in step S6 with the cumulative luminance average value of the same area calculated in step S7, and determines whether the difference between them is equal to or greater than a preset first reference value (S8).

For example, when the first reference value is set to 30 and the cumulative luminance average value of the first quadrant Q1 is 100, the image processing unit 40 determines whether the luminance average value of the first quadrant Q1 of the backlight detection subject image is 130 or more.

In a similar manner, the image processing unit 40 compares the cumulative luminance average value of the second quadrant Q2 with the luminance average value of the second quadrant Q2 of the backlight detection subject image, the cumulative luminance average value of the third quadrant Q3 with the luminance average value of the third quadrant Q3, and the cumulative luminance average value of the fourth quadrant Q4 with the luminance average value of the fourth quadrant Q4, and determines for each pair whether the difference is equal to or greater than the first reference value.

If, in step S8, there is no quadrant (Q1, Q2, Q3, Q4) whose luminance average value in the backlight detection subject image exceeds its cumulative luminance average value by the first reference value or more, the process returns to step S1.

On the other hand, if in step S8 there are quadrants whose luminance average value in the backlight detection subject image exceeds the cumulative luminance average value by the first reference value or more, the image processing unit 40 determines each such quadrant as a high luminance region in which a backlight region exists or is likely to exist, and determines the number of such quadrants (S9, S14).

When the luminance average value of a specific quadrant exceeds its cumulative average by the first reference value or more, strong light appears in that quadrant, and therefore there is a high possibility that backlight exists there.
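The step S8 comparison can be sketched as a simple per-quadrant threshold test; the function name is an assumption, and the default of 30 follows the example given above:

```python
def high_luminance_regions(current_avgs, cumulative_avgs, first_ref=30):
    """Step S8: a quadrant is a high luminance region when its current
    luminance average exceeds its cumulative average by the first
    reference value or more. first_ref=30 follows the example above."""
    return [q for q in current_avgs
            if current_avgs[q] - cumulative_avgs[q] >= first_ref]
```

The length of the returned list corresponds to the branch taken in steps S9 and S14 (one region, two regions, or more).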

In step S9, it is determined whether the number of high brightness areas is one or not.

In step S9, if the number of high brightness areas is 1, the process goes to step S10. If the number of high brightness areas exceeds one, the process goes to step S14.

Since the control method of the first fluid lens 11 and / or the second fluid lens 21 is changed according to the number of high brightness areas, the number of high brightness areas is determined in steps S9 and S14.

First, the case where the number of high luminance regions is one will be described.

If it is determined in step S9 that the number of high luminance areas is one, the image processing unit 40 compares the luminance values of the individual pixels in the high luminance area selected in step S8 with the cumulative luminance average value of that area calculated in step S7, and determines whether the difference is equal to or greater than a preset second reference value (S10).

For convenience of explanation, steps S10 and S11 will be described with reference to FIG.

As shown in FIG. 8, when the backlight C exists only in the first quadrant Q1 among the quadrants Q1, Q2, Q3, and Q4 of the backlight detection subject image I1, the first quadrant Q1 is determined as the high luminance region in step S8.

In step S10, the image processing unit 40 determines whether the luminance value of a specific pixel P1 among the plurality of pixels constituting the first quadrant Q1 exceeds the cumulative luminance average value of the first quadrant Q1 calculated in step S7 by the second reference value or more. The second reference value may be a predetermined value.

When the deviation between the luminance value of the specific pixel P1 and the accumulated luminance average value of the first quadrant Q1 is lower than the second reference value, the luminance value of the next specific pixel is compared with the accumulated luminance average value of the first quadrant Q1.

The criterion for selecting the specific pixel P1 and the next specific pixel among the plurality of pixels constituting the first quadrant Q1 may be determined by various rules. For example, a rule may be adopted in which the next pixel is selected in a zigzag manner starting from the pixel at the upper right corner of the first quadrant Q1, or a rule may be adopted in which pixels are selected in descending order of luminance value, starting from the pixel with the highest luminance in the first quadrant Q1.

If, among the individual pixels constituting the first quadrant Q1, no pixel is found according to the predetermined rule whose luminance value exceeds the cumulative luminance average value of the first quadrant Q1 by the second reference value or more, the process returns to step S1.

On the other hand, if a specific pixel P1 is found whose luminance value exceeds the cumulative luminance average value of the first quadrant Q1 by the second reference value or more, the luminance values of the surrounding pixels of the specific pixel P1 are compared with the cumulative luminance average value of the first quadrant Q1, and it is determined whether the difference is equal to or greater than the second reference value (S11).

If pixels P2, P3, and P4 are found among the surrounding pixels whose luminance values exceed the cumulative luminance average value of the first quadrant Q1 by the second reference value or more, step S11 is repeated for the pixels surrounding each of them. At this time, the pixel P1, whose luminance value has already been compared with the cumulative luminance average value of the first quadrant Q1, can be excluded from the calculation.

The pixels P1, P2, P3, and P4 whose luminance values exceed the cumulative luminance average value of the first quadrant Q1 by the second reference value or more in steps S10 and S11 are determined as the backlight region.

Steps S10 and S11 can be repeatedly performed until the luminance value of all the pixels of the first quadrant Q1 is compared with the accumulated luminance average value of the first quadrant Q1.
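Steps S10 and S11 together amount to a seeded region growing over the quadrant's pixels. A minimal sketch follows; the 8-neighbour connectivity, the breadth-first traversal, and the data layout are assumptions made for illustration — the patent only specifies testing surrounding pixels and excluding already-tested ones:

```python
import numpy as np

def grow_backlight_region(frame, seed, cum_avg, second_ref):
    """Steps S10-S11 as seeded region growing: starting from a seed
    pixel whose luminance exceeds the cumulative average by the second
    reference value or more, repeatedly test the 8 surrounding pixels
    and collect those that also pass; already-tested pixels are skipped,
    mirroring the exclusion of pixel P1 described above."""
    h, w = frame.shape
    region, frontier, seen = set(), [seed], {seed}
    while frontier:
        y, x = frontier.pop()
        if frame[y, x] - cum_avg >= second_ref:
            region.add((y, x))
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in seen:
                        seen.add((ny, nx))
                        frontier.append((ny, nx))
    return region
```

The returned pixel set corresponds to the backlight region (P1, P2, P3, P4, ...) determined through steps S10 and S11.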

If pixels are found whose luminance values exceed the cumulative luminance average value of the first quadrant Q1 by the second reference value or more, the flow proceeds to step S13, in which the curvature of the fluid lens is controlled to avoid the backlight.

When a plurality of backlight regions, each constituted by a group of adjacent pixels, are determined in the first quadrant Q1 through steps S10 and S11, the image processing unit 40 may treat as the backlight region to be avoided only those regions whose constituent pixel count is equal to or greater than a preset reference number.

In FIG. 8, it is assumed that the backlight region exists in the first quadrant Q1; however, even when one of the other quadrants Q2, Q3, or Q4 is selected as the high brightness region, the process proceeds in a similar manner.

FIGS. 10 to 13 are diagrams for explaining a method of moving the focus position of the fluid lens when a backlight region exists in one of the regions divided in the automatic control mode.

As shown in FIG. 10, when a part of the pixels in the first quadrant Q1 of the image I1 is determined to be the backlight region C, the image processing unit 40 calculates the center point P2 of the backlight region C.

The image processing unit 40 can calculate the center point P2 of the backlight region C as the average of the coordinates of the pixels constituting the outermost boundary of the backlight region C. For example, when the coordinate values of the N pixels constituting the outermost boundary of the backlight region C are (X1, Y1), (X2, Y2), (X3, Y3), ..., (Xn, Yn), the coordinates of the center point P2 can be expressed as follows.

P2 = ((X1 + X2 + ... + Xn) / N, (Y1 + Y2 + ... + Yn) / N)

Further, the image processing unit 40 calculates the center point P1 of the image I1. The center point P1 can be taken as the center of the four corner coordinates of the image I1.
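The two center points can be computed as follows; the function names and the (x, y) coordinate convention are assumptions for illustration:

```python
def backlight_center(outermost_pixels):
    """Center point P2 of the backlight region, taken as the coordinate
    average of the pixels on its outermost boundary (the formula above).
    outermost_pixels: list of (x, y) tuples."""
    n = len(outermost_pixels)
    return (sum(x for x, _ in outermost_pixels) / n,
            sum(y for _, y in outermost_pixels) / n)

def image_center(width, height):
    """Center point P1 of the image, taken as the center of its four
    corner coordinates (0,0), (w,0), (0,h), (w,h)."""
    return (width / 2.0, height / 2.0)
```

These two points define the virtual extension line along which the focus position is moved in the next step.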

The image processing unit 40 then transmits a control command to the lens control unit 30 so that the focus position of the fluid lens moves from the center point P1 of the image I1 in the direction opposite to the backlight region C, along a virtual extension line connecting the center point P1 of the image I1 and the center point P2 of the backlight region C.

The lens control unit 30 changes the curvature of the corresponding fluid lens in response to the control command transmitted from the image processing unit 40. As a result, the image captured by the camera to which that fluid lens belongs is shifted to the dotted-line image I2. Since the backlight C is avoided in the shifted image I2, the light blur due to the backlight C is relaxed.

That is, the image is shifted along the virtual extension line connecting the center point P1 of the image I1 and the center point P2 of the backlight region C, so that the center point of the shifted image I2, the center point P1 of the image I1 in which the backlight region C exists, and the center point P2 of the backlight region C lie on the same straight line.

The moving distance of the image may be inversely proportional to the distance r1 between the center point P1 of the image I1 and the center point P2 of the backlight region C. When the center point P2 is located far from the center point P1, the backlight region C lies close to the outer edge of the image I1, so the backlight region C is avoided in the shifted image I2 even with a short moving distance. When the center point P2 is located close to the center point P1, a longer moving distance is required before the backlight region C is avoided in the shifted image I2.
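The shift direction and its inverse-proportional length can be sketched as below. The proportionality constant k is an assumption — the patent states only the inverse relation, not its scale:

```python
import math

def shift_vector(p1, p2, k):
    """Focus shift for step S13: from the image center P1 away from the
    backlight center P2, along the P1-P2 line; its length is k / r1,
    i.e. inversely proportional to r1 = |P1 P2| (k is an assumed
    proportionality constant)."""
    dx, dy = p1[0] - p2[0], p1[1] - p2[1]  # points away from backlight
    r1 = math.hypot(dx, dy)
    if r1 == 0.0:
        return (0.0, 0.0)  # backlight dead-center: central-region case
    scale = k / r1  # inverse proportionality to r1
    return (dx / r1 * scale, dy / r1 * scale)
```

A backlight twice as far from the center thus yields half the shift, matching the observation that edge-adjacent backlight needs only a small shift.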

However, when the backlight region C is located at the center of the image I1, the moving distance required to avoid it becomes excessively long, so that an area unnecessary for driving would be photographed.

To prevent this, the image processing unit 40 according to this embodiment can set a virtual central region Z at the center of the image I1, as shown in FIG. 11. Although FIG. 11 shows the virtual central region Z as circular, it may also be set as a square or rectangular polygon.

The image processing unit 40 may determine whether the backlight region C is included in the central region Z. As shown in FIG. 11, when the backlight region C is included in the central region Z, the image processing unit 40 does not transmit a separate control command to the lens control unit 30, and the curvature of the first fluid lens 11 and/or the second fluid lens 21 is maintained.

As shown in FIGS. 12 and 13, when only a part of the backlight region C is included in the central region Z, the image processing unit 40 can determine whether to shift the image according to the ratio of the area of the backlight region C included in the central region Z to the total area of the backlight region C.

The image processing unit 40 determines the area of the backlight region C from the number of pixels belonging to it, and determines the area of the backlight region C included in the central region Z from the number of those pixels that lie within the central region Z.

A threshold ratio may be set in the image processing unit 40 in advance; for example, the threshold ratio may be 50%.

FIG. 12 shows an example in which the ratio of the area of the backlight region C included in the central region Z exceeds the threshold ratio, and FIG. 13 shows an example in which that ratio is equal to or less than the threshold ratio.
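Measuring both areas in pixel counts, the decision can be sketched as follows. Treating the ratio exactly equal to the threshold as the "shift" case follows the FIG. 13 description ("equal to or less than"); the function name and set-based representation are assumptions:

```python
def should_shift(backlight_pixels, central_pixels, threshold_ratio=0.5):
    """Decide whether to shift the image: the ratio of the backlight
    region's pixels inside the central region Z to all of its pixels is
    compared with the threshold ratio (50% in the example above).
    Shift only when the ratio is at or below the threshold."""
    overlap = len(backlight_pixels & central_pixels)
    ratio = overlap / len(backlight_pixels)
    return ratio <= threshold_ratio
```

When the backlight sits mostly inside Z (FIG. 12), the lens curvature is left unchanged; when it sits mostly outside (FIG. 13), a shift command is issued.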

As shown in FIG. 12, when the ratio of the area of the backlight region C included in the central region Z to the total area of the backlight region C exceeds the threshold ratio, the image processing unit 40 does not transmit a separate control command to the lens control unit 30, and the curvature of the first fluid lens 11 and/or the second fluid lens 21 is maintained.

As shown in FIG. 13, when the ratio of the area of the backlight region C included in the central region Z to the total area of the backlight region C is equal to or less than the threshold ratio, the image processing unit 40 calculates the center point P2 of the portion of the backlight region C not included in the central region Z, and transmits a control command to the lens control unit 30 so that the focus position of the fluid lens moves from the center point P1 of the image I1 in the direction opposite to the backlight region C, along a virtual extension line connecting the center point P1 and the center point P2.

The lens control unit 30 changes the curvature of the corresponding fluid lens in accordance with the control command transmitted from the image processing unit 40. As a result, the image captured by the camera to which that fluid lens belongs is shifted along the extension line connecting the center point P1 and the center point P2 of the image I1, to the dotted-line image I2.

On the other hand, if it is determined through steps S10 and S11 that there are a plurality of backlight regions, each constituted by a group of adjacent pixels, in one quadrant Q1, the image processing unit 40 may transmit a control command to the lens control unit 30 so that only the backlight regions whose constituent pixel count is equal to or greater than the preset reference number are avoided in the image.

After step S13, the process proceeds to step S1 again.

Hereinafter, the case where the number of quadrants (Q1, Q2, Q3, Q4) determined as high luminance regions exceeds one will be described.

As shown in FIG. 6, if it is determined in step S9 that the number of high brightness areas exceeds one, the process proceeds to step S14.

As shown in Fig. 7, in step S14, it is determined whether the number of high brightness areas is two or not.

If it is determined in step S14 that the number of high brightness areas is two, the image processing unit 40 compares the individual pixel luminance values in each of the two quadrants selected as high brightness areas with the cumulative luminance average value of the corresponding area calculated in step S7, and determines whether the difference is equal to or greater than the preset second reference value (S17).

If, in at least one of the two quadrants, a specific pixel is found whose luminance value exceeds the cumulative luminance average value of that quadrant by the second reference value or more, the image processing unit 40 compares the luminance values of the pixels surrounding that specific pixel with the cumulative luminance average value of the quadrant and determines whether the difference is equal to or greater than the second reference value (S18).

If pixels are found among the surrounding pixels whose luminance values exceed the cumulative luminance average value of the quadrant by the second reference value or more, step S18 is repeated for the pixels surrounding each found pixel.

The detailed contents of steps S17 and S18 are similar to those of steps S10 and S11 described above, and a description thereof will be omitted.

If it is determined in step S19 that pixels have been found whose luminance values exceed the cumulative luminance average value of the quadrant by the second reference value or more, control proceeds to step S20, in which the curvature of the fluid lens is controlled to avoid the backlight.

After step S20, the process goes back to step S1. A detailed description of step S20 will be described later.

Hereinafter, the case where the number of high luminance regions exceeds two will be described.

As shown in FIG. 7, if it is determined in step S14 that the number of high luminance areas exceeds two, the process proceeds to step S16.

If it is determined in step S14 that the number of high luminance areas exceeds two, the image processing unit 40 can select the quadrant having the highest luminance average value among the high luminance areas (S16).

The description of step S16 will be made with reference to FIG.

14 is a diagram for explaining a method of moving a focus position of a fluid lens in a case where a backlight region exists in three regions out of the regions divided in the automatic control mode.

As shown in FIG. 14, when the backlight regions C1, C2, and C3 exist in the first quadrant Q1, the second quadrant Q2, and the third quadrant Q3 among the four quadrants Q1, Q2, Q3, and Q4, the image processing unit 40 selects the quadrant having the highest luminance average value among the three quadrants Q1, Q2, and Q3 as the high luminance region, and proceeds to step S10.

For example, when the third quadrant Q3 has the highest luminance average value among the three quadrants Q1, Q2, and Q3, the third quadrant Q3 is selected as the high luminance region; steps S10 and S11 are performed on the third quadrant Q3 to determine the backlight region C3, and the curvature of the first fluid lens 11 and/or the second fluid lens 21 is controlled through the lens control unit 30 (S13). Since step S13 has already been described with reference to FIGS. 10 to 13, a further description thereof will be omitted.

The lens control unit 30 changes the curvature of the corresponding fluid lens in response to the control command transmitted from the image processing unit 40. As a result, the image captured by the camera to which that fluid lens belongs is shifted to the dotted-line image I2.

Since the backlight C3 having the highest luminance value among the high luminance areas C1, C2, and C3 is avoided in the shifted image I2, the light blur due to the backlight C3 is relaxed.

After step S13, the process proceeds to step S1 again.

Alternatively, in step S16, the image processing unit 40 may perform step S13 with respect to only the backlight region having the highest average luminance value among the plurality of backlight regions C1, C2, and C3.

On the other hand, FIG. 15 is a diagram for explaining a method of moving the focus position of the fluid lens in the case where there are continuous backlight regions in two regions of the regions divided in the automatic control mode.

As shown in FIG. 14, when the plurality of backlight regions C1, C2, and C3 are spaced apart from one another, the fluid lens can be controlled in the manner described above. However, as shown in FIG. 15, when a single backlight region spans two quadrants, the portion formed in the first quadrant Q1 and the portion formed in the fourth quadrant Q4 cannot be considered separately.

As shown in FIG. 15, when a continuous backlight region exists across the two quadrants Q1 and Q4 of the divided regions, the image processing unit 40 calculates the center point P2 of the entire backlight region C formed continuously over the two quadrants Q1 and Q4.

The image processing unit 40 then transmits a control command to the lens control unit 30 so that the focus position of the fluid lens moves from the center point P1 of the image I1 in the direction opposite to the backlight region C, along a virtual extension line connecting the center point P1 of the image I1 and the center point P2 of the backlight region C.

The lens control unit 30 changes the curvature of the corresponding fluid lens in accordance with the control command transmitted from the image processing unit 40. As a result, the image captured by the camera to which that fluid lens belongs is shifted along the extension line connecting the center point P1 of the image I1 and the center point P2 of the backlight region C, and the light blur due to the backlight C is relaxed.

As described above, the vehicle monitoring system 1 according to an embodiment of the present invention and the backlight avoidance control method using the same photograph the left and right rear views of the vehicle using the cameras 10 and 20 provided with the fluid lenses 11 and 21, provide the driver with those views, and avoid backlight in the image by changing the focus positions of the fluid lenses 11 and 21.

In addition, the curvature of the fluid lenses 11 and 21 can be controlled manually by the driver, or the backlight can be avoided in the image automatically using the luminance values of the image.

It will be understood by those skilled in the art that the present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The above-described embodiments are therefore to be understood as illustrative in all aspects and not restrictive. The scope of the present invention is defined by the appended claims rather than the detailed description, and all changes or modifications derived from the meaning and scope of the claims and their equivalents are to be construed as being included within the scope of the present invention.

1: vehicle monitoring system 2, 3: side mirror
4: Steering wheel 5: Monitor
10: first camera 20: second camera
50: display unit 51: first display unit
52: second display portion

Claims (12)

A vehicle monitoring system comprising:
a camera including a fluid lens having a variable curvature and photographing the outside of a vehicle;
an image processing unit for detecting a backlight region in an image taken by the camera;
a lens control unit for controlling a curvature of the fluid lens such that the image is shifted in a direction in which at least a part of the backlight region is avoided, while the focal distance of the fluid lens is maintained and the focal position of the fluid lens is changed; and
a display unit for visually expressing the image.
The vehicle monitoring system according to claim 1,
wherein the image processing unit:
divides the image into a plurality of regions and calculates a luminance average value for each region,
calculates a cumulative luminance average value for each region over the images photographed within a predetermined time before the photographing time of the backlight detection subject image, and
determines that the backlight region exists in a region of the backlight detection subject image whose luminance average value exceeds the cumulative luminance average value of that region by a first reference value or more.
The vehicle monitoring system according to claim 2,
wherein the image processing unit determines, as the backlight region, the region having the highest luminance within the region in which the backlight region is determined to exist, among the plurality of regions of the backlight detection subject image.
The vehicle monitoring system according to claim 2,
wherein the image processing unit:
compares, according to a predetermined rule, the luminance values of the plurality of pixels constituting the region determined as having the backlight region, among the plurality of regions of the backlight detection subject image, with the cumulative luminance average value of that region, and
determines pixels whose luminance values exceed the cumulative luminance average value of the region by a second reference value or more as the backlight region.
The vehicle monitoring system according to claim 4,
wherein the image processing unit determines, among the pixels surrounding a pixel determined as the backlight region, those pixels whose luminance values exceed the cumulative luminance average value of the region by the second reference value or more as the backlight region.
The vehicle monitoring system according to claim 4,
wherein the image processing unit determines the pixels as the backlight region when the number of pixels, among those surrounding the pixel determined as the backlight region, whose luminance values exceed the cumulative luminance average value of the region by the second reference value or more is equal to or greater than a reference number.
The vehicle monitoring system according to claim 1,
wherein the image processing unit calculates a center point of the backlight region and a center point of the image, and
wherein the lens control unit controls the curvature of the fluid lens such that the image is moved in the direction of a virtual extension line connecting the center point of the backlight region and the center point of the image.
The vehicle monitoring system according to claim 7,
wherein the image processing unit calculates the center point of the backlight region as the average of the coordinate values of the pixels constituting the outermost boundary of the backlight region.
The vehicle monitoring system according to claim 7,
wherein the moving distance of the image is inversely proportional to the distance between the center point of the backlight region and the center point of the image.
The vehicle monitoring system according to claim 1,
wherein the image processing unit sets a virtual central region at the center of the image and determines whether the area of the backlight region included in the central region is less than a preset threshold ratio, and
wherein the lens control unit controls the fluid lens such that at least a part of the backlight region is avoided in the image when the image processing unit determines that the area of the backlight region included in the central region is less than the threshold ratio.
The vehicle monitoring system according to claim 10,
wherein, when the image processing unit determines that the area of the backlight region included in the central region is equal to or less than the threshold ratio, the image processing unit calculates a center point of the portion of the backlight region not included in the central region and a center point of the image, and
wherein the lens control unit controls the curvature of the fluid lens such that the image is moved in the direction of a virtual extension line connecting the center point of the portion not included in the central region and the center point of the image.
The vehicle monitoring system according to claim 1,
wherein the camera comprises a first camera for photographing the left rear of the vehicle and a second camera for photographing the right rear of the vehicle.
KR1020150129627A 2015-09-14 2015-09-14 Monitoring system for vehicle KR20170031982A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150129627A KR20170031982A (en) 2015-09-14 2015-09-14 Monitoring system for vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020150129627A KR20170031982A (en) 2015-09-14 2015-09-14 Monitoring system for vehicle

Publications (1)

Publication Number Publication Date
KR20170031982A true KR20170031982A (en) 2017-03-22

Family

ID=58497427

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150129627A KR20170031982A (en) 2015-09-14 2015-09-14 Monitoring system for vehicle

Country Status (1)

Country Link
KR (1) KR20170031982A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20230042996A (en) * 2021-09-23 2023-03-30 한국자동차연구원 Camera control system for responding to backlight based on camera angle adjustment
EP4239588A1 (en) * 2022-03-02 2023-09-06 Hyundai Mobis Co., Ltd. Method and apparatus for detecting backlight of image

Similar Documents

Publication Publication Date Title
JP6084434B2 (en) Image processing system and image processing method
US8279280B2 (en) Lane departure warning method and system using virtual lane-dividing line
US7646889B2 (en) Rain sensor
EP2919197A1 (en) Object detection device and object detection method
CN111447358B (en) Image pickup apparatus
US20120056902A1 (en) Multi-display apparatus
JP2014116756A (en) Periphery monitoring system
JP5171723B2 (en) Obstacle detection device and vehicle equipped with the device
JP6750519B2 (en) Imaging device, imaging display method, and imaging display program
US20150227799A1 (en) Travel lane boundary line detection apparatus
EP2770478B1 (en) Image processing unit, imaging device, and vehicle control system and program
US20140055572A1 (en) Image processing apparatus for a vehicle
EP2698292A1 (en) Wiper control apparatus
US20170136962A1 (en) In-vehicle camera control device
US20200086792A1 (en) Rearview display device, rearview display method, and program
KR20170002330A (en) Driver-customized display variable apparatus and method for controlling the same
WO2015129280A1 (en) Image processing device and image processing method
KR20170031982A (en) Monitoring system for vehicle
US20130222376A1 (en) Stereo image display device
JP6668975B2 (en) Electronics and vehicles
US10089731B2 (en) Image processing device to reduce an influence of reflected light for capturing and processing images
KR20150079004A (en) Dispay apparatus of vehicle and contolling method for the same
JP2022047522A (en) Camera monitoring system for motor vehicles
KR101278237B1 (en) Method and apparatus for recognizing vehicles
JP2009025050A (en) Quality discrimination device of visual field, quality discrimination method of visual field and computer program