WO2018048167A1 - Method for displaying an image in a multiple view mode - Google Patents

Method for displaying an image in a multiple view mode

Info

Publication number
WO2018048167A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
camera
target image
mode
view
Prior art date
Application number
PCT/KR2017/009674
Other languages
English (en)
Korean (ko)
Inventor
임성현
김동균
Original Assignee
엘지이노텍(주)
Priority date
Filing date
Publication date
Priority claimed from KR1020160165385A external-priority patent/KR20180028354A/ko
Priority claimed from KR1020160165383A external-priority patent/KR20180028353A/ko
Application filed by 엘지이노텍(주) filed Critical 엘지이노텍(주)
Priority to CN201780055360.3A priority Critical patent/CN109691091A/zh
Priority to US16/331,413 priority patent/US11477372B2/en
Priority to EP17849051.2A priority patent/EP3512195A1/fr
Publication of WO2018048167A1 publication Critical patent/WO2018048167A1/fr

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • H04N23/84Camera processing pipelines; Components thereof for processing colour signals
    • H04N23/88Camera processing pipelines; Components thereof for processing colour signals for colour balance, e.g. white-balance circuits or colour temperature control
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • H04N23/84Camera processing pipelines; Components thereof for processing colour signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects

Definitions

  • the present invention relates to an image display method in a multi-view mode and, more particularly, to a method of displaying, in a multi-view mode, an image obtained by at least one image acquisition unit for around view monitoring (AVM) and an around view image generated using the obtained image.
  • a driver assistance system is a system that assists the driver while driving or parking, for the driver's safety.
  • the driver assistance system essentially includes a device that provides image information so that the driver can grasp the situation outside the vehicle while sitting in the driver's seat.
  • the apparatus for providing image information includes a camera.
  • a plurality of cameras facing in various directions may be installed outside the vehicle.
  • the acquired images may be differently converted according to various view modes and provided to the driver.
  • the provided images need to be corrected with reference to the image that is actually displayed on the screen in the current view mode, which the driver can visually check. The same applies to images for around view monitoring (AVM) systems.
  • around view monitoring systems, which provide the driver with video of the surroundings of a vehicle equipped with multiple cameras, have recently been actively researched.
  • Several automotive companies in Germany and Japan have already developed and released products.
  • systems that have multiple cameras to give the driver a bird's eye view like the view from the sky have become mainstream.
  • the around view monitoring system may generate an image representing a 360-degree field of view around an object, for example a vehicle, by using images acquired from a limited number of cameras through a device for providing image information.
  • the lens mounted on the camera may be a fisheye lens or a similar type of wide-angle lens to obtain a wide viewing angle.
  • the images acquired through such a lens differ from what human vision perceives, and the images to be finally output are top-view images whose viewpoint differs from the direction in which the camera lenses are installed on the vehicle; therefore, the images acquired from the cameras must go through various stages of image signal processing.
  • FIGS. 1 and 2 show images according to a conventional technology.
  • the image of FIG. 1 represents an image acquired by a camera installed on a right side of a vehicle
  • the image of FIG. 2 represents an around view image that includes the image of FIG. 1.
  • in the image of FIG. 1, the ratio between the dark area caused by the vehicle's shadow and the bright area caused by the lighting differs from that in the around view image, so that the brightness of the bright area due to the illumination is saturated in the around view image of FIG. 2. This is a result of adjusting the light exposure or the white balance based on the image of FIG. 1.
  • An object of the present invention, made to solve the above problems, is to provide a method of displaying, in a given view mode, an image that has been properly corrected for that view mode.
  • an image processing method supporting multiple modes may include: determining whether to convert or match at least one received image in response to a user input, and outputting a target image by converting or matching the at least one received image accordingly; extracting brightness information from the target image; and applying an adjustment value corresponding to the brightness information to the conversion or matching of the at least one image.
  • the image processing method supporting multiple modes may further include adjusting an exposure time of at least one camera device based on the target image, wherein the adjustment value may vary according to the brightness information and the exposure time.
  • the image processing method supporting the multi mode may further include receiving the at least one image data obtained from each of the at least one camera device.
  • when the at least one image data is received as a Bayer pattern, the image processing method supporting multiple modes may further include outputting an image by performing color interpolation and first image processing on the Bayer pattern.
  • the first image processing may include performing at least one of calibration, lens distortion correction, color correction, gamma correction, color space conversion, and edge enhancement.
  • the outputting of the target image may include: selecting at least one of the at least one image according to whether to convert or match; generating a transformed image by removing perspective from the selected image; extracting, from the transformed image, data corresponding to an area to be inserted into the target image; placing the extracted data in the target image; and transferring the target image to a display device.
  • the transform image may be obtained by performing an inverse perspective mapping transform.
  • the inverse perspective mapping transform and the placement on the target image may be performed together through a lookup table.
  • the applying of the adjustment value to the conversion or matching of the at least one image may include batch-converting at least some of the data arranged in the lookup table.
  • the at least one image may be transmitted from at least one camera device mounted in a vehicle, and the user input may be input through an interface mounted in the vehicle.
  • the at least one image may be image information of at least one of the front, rear, left, and right sides of the vehicle, and the user input may be a selection of at least one of a top view, a front view, a rear view, a left side view, a right side view, and a combination thereof.
  • the apparatus may include at least one lookup table that is distinguished in correspondence with each of the image information, and the at least one lookup table may include a weight corresponding to the image information.
  • a computer-readable recording medium may store an application program which, when executed by a processor, implements the image processing method supporting the multiple modes described above.
  • An image conversion or matching device includes a processing system having at least one processor and at least one memory device in which a computer program is stored, wherein the processing system causes the image conversion or matching device to perform: determining whether to convert or match at least one image in response to a user input, and outputting a target image by converting or matching the at least one image accordingly; extracting brightness information from the target image; and applying an adjustment value corresponding to the brightness information to the conversion or matching of the at least one image.
  • the processing system may further cause the image conversion or matching device to adjust the exposure time of the at least one camera device based on the target image, wherein the adjustment value may vary according to the brightness information and the exposure time.
  • the processing system may cause the image conversion or matching device to perform the step of receiving the at least one image data obtained from each of the at least one camera device.
  • when the at least one image data is received as a Bayer pattern, the processing system may further cause the image conversion or matching device to output an image by performing color interpolation and first image processing on the Bayer pattern.
  • the first image processing may include performing at least one of calibration, lens distortion correction, color correction, gamma correction, color space conversion, and edge enhancement.
  • the outputting of the target image may include: selecting at least one of the at least one image according to whether to convert or match; generating a transformed image by removing perspective from the selected image; extracting, from the transformed image, data corresponding to an area to be inserted into the target image; placing the extracted data in the target image; and transferring the target image to a display device.
  • the transform image may be obtained by performing an inverse perspective mapping transform.
  • the inverse perspective mapping transform and the placement on the target image may be performed together through a lookup table.
  • the applying of the adjustment value to the conversion or matching of the at least one image may include batch-converting at least some of the data arranged in the lookup table.
  • the at least one image may be transmitted from at least one camera device mounted in a vehicle, and the user input may be input through an interface mounted in the vehicle.
  • the at least one image may be image information of at least one of the front, rear, left, and right sides of the vehicle, and the user input may be a selection of at least one of a top view, a front view, a rear view, a left side view, a right side view, and a combination thereof.
  • the apparatus may include at least one lookup table that is distinguished in correspondence with each of the image information, and the at least one lookup table may include a weight corresponding to the image information.
  • an image processing apparatus includes: a conversion or matching unit configured to output a target image by converting or matching at least one image in response to a user input; a brightness controller configured to receive brightness information of the target image and output an adjustment value for updating a lookup table in the conversion or matching unit; and an adjusting unit configured to receive the brightness information and output a control signal for adjusting an exposure time of at least one camera device.
  • the image processing apparatus may further include an image processor configured to selectively output the at least one image by performing operations such as color interpolation (demosaicing), calibration, lens distortion correction, color correction, gamma correction, color space conversion, and edge enhancement on a Bayer pattern transmitted from the at least one camera device.
  • the adjusting unit may transmit, to the brightness controller, a change amount or change rate generated in adjusting the exposure time of the at least one camera device, and the brightness controller may determine the adjustment value based on the change rate and the brightness information.
  • the at least one image may be image information of at least one of the front, rear, left, and right sides of the vehicle, and the user input may be a selection of at least one of a top view, a front view, a rear view, a left side view, a right side view, and a combination thereof.
  • a method of displaying an image in a multiple view mode, for displaying target images acquired by at least one image acquisition unit in a plurality of output modes, may include: adjusting the brightness of the target image converted to be output in a desired one of the plurality of output modes; and displaying the target image whose brightness has been adjusted.
  • a method of displaying an image in a multiple view mode, for displaying target images acquired by at least one image acquisition unit in a plurality of output modes, may include: adjusting the light exposure time of the image acquisition unit by using the target image converted to be output in a desired one of the plurality of output modes; and displaying the target images acquired according to the adjustment.
  • a method of displaying an image in a multiple view mode, for displaying target images acquired by at least one image acquisition unit in a plurality of output modes, may include: adjusting the white balance of the target image converted to be output in a desired one of the plurality of output modes; and displaying the target image whose white balance has been adjusted.
  • an image corrected to be suitable for a specific view mode can be provided to a user.
  • the present invention can reduce the computation and time required to convert and process the images transmitted from the at least one camera device in response to the user input, and can reduce degradation of the output image.
  • FIGS. 1 and 2 show images according to a conventional technology.
  • FIG. 3 illustrates an image processing method in a multiple view mode.
  • FIG. 4 illustrates a first example of an image processing system in a multiple view mode.
  • FIG. 5 illustrates a second example of an image processing system in a multiple view mode.
  • FIG. 6 is a schematic block diagram of an image display apparatus 100 that executes an image display method in a multiple view mode according to an embodiment of the present invention.
  • FIG. 7 is a front view (a), a right side view (b), a rear view (c), and a left side view (d) in the actual view mode as images displayed by the image display apparatus 100.
  • FIG. 8 shows a front view (a) converted from an image acquired by the image acquisition unit 110 into an image to be displayed in the top view mode.
  • FIG. 9 is an around view image generated by using the four images of FIG. 8.
  • FIG. 10 is a front view of a vehicle for explaining the difference between the actual view mode and the top view mode.
  • FIG. 11 is a flowchart illustrating an image display method in a multiple view mode according to another exemplary embodiment of the present invention.
  • FIG. 12 is a flowchart illustrating an image display method in a multiple view mode according to another exemplary embodiment of the present invention.
  • FIG. 13 is a flowchart illustrating a method of displaying an image in a multiple view mode according to another embodiment of the present invention.
  • FIG. 14 is a flowchart illustrating a method of displaying an image in a multiple view mode according to another embodiment of the present invention.
  • first and second may be used to describe various components, but the components should not be limited by the terms. The terms are used only for the purpose of distinguishing one component from another.
  • the first component may be referred to as the second component, and similarly, the second component may also be referred to as the first component.
  • FIG. 3 illustrates an image processing method in a multiple view mode.
  • the image processing method in the multi-view mode may include determining whether to convert or match at least one image in response to a user input, collecting brightness information in an extraction region of the at least one image and determining a correction criterion (14), and converting or matching the at least one image according to the correction criterion (15).
  • the image processing method in the multi-view mode may further include adjusting the exposure time of the at least one camera device based on the target image (16), and applying an adjustment value corresponding to the adjusted exposure time and the extracted brightness information to the conversion or matching of the images (18).
  • the correction criteria may change in correspondence with the brightness information and the exposure time.
  • the at least one camera whose exposure time is adjusted may be the camera that captures an image placed in the target image. For example, when the target image is an image captured by a single first camera, the exposure time of the first camera may be adjusted. Alternatively, when the target image is composed of images captured by at least two cameras including a first camera and a second camera, the exposure times of the at least two cameras including the first camera and the second camera may be adjusted.
  • the exposure time of each camera, whether a single camera or one of several cameras, may be changed in consideration of the area of the image photographed by that camera that is actually output in the target image.
  • the part constituting the target image among the images photographed by each camera may be changed, and the exposure time may be adjusted in consideration of this.
  • the image processing method in the multi-view mode may further include the step 10 of receiving at least one image data obtained from each of the at least one camera device.
  • at least one camera device may include a camera mounted on a vehicle. Imaging equipment capable of capturing at least one of the front, rear, right, and left sides of the vehicle, respectively, may acquire surrounding information of the vehicle.
  • the at least one camera device may output image data composed of a Bayer pattern.
  • a camera apparatus includes an image sensor for converting an optical signal incident through a path formed through at least one lens into an electrical signal.
  • the image sensor is composed of pixels arranged in a pattern for each color.
  • R, G, and B color filters are arranged in a specific pattern on monochrome pixel cells arranged by the number of pixels.
  • the R, G, and B color patterns intersect and are arranged according to the visual characteristics of the user (ie, human), which is called a Bayer pattern.
  • Each pixel in the pattern is a monochrome pixel that only detects brightness, not color. When data having such a pattern is output, it can be changed into the form of a color image composed of several colors through a color interpolation or demosaicing process.
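  • As an illustration of the color interpolation (demosaicing) step described above, the minimal sketch below converts a Bayer-pattern frame into a three-channel image. It assumes an RGGB sensor layout and OpenCV's built-in demosaicing; it is not the camera's actual pipeline.

```python
# Minimal demosaicing sketch (assumptions: RGGB Bayer ordering, OpenCV available).
import cv2
import numpy as np

def demosaic_bayer(bayer: np.ndarray) -> np.ndarray:
    """Interpolate a single-channel Bayer frame into a 3-channel BGR image."""
    # COLOR_BayerRG2BGR corresponds to an RGGB layout; adjust to the actual sensor.
    return cv2.cvtColor(bayer, cv2.COLOR_BayerRG2BGR)

if __name__ == "__main__":
    # Synthetic single-channel frame standing in for the image sensor output.
    fake_bayer = (np.random.rand(480, 640) * 255).astype(np.uint8)
    print(demosaic_bayer(fake_bayer).shape)  # (480, 640, 3)
```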
  • the Bayer pattern has a much smaller amount of data than full image data. Therefore, the size of the data transmitted over the in-vehicle communication network can be reduced, which is advantageous even when the technique is applied to an autonomous vehicle in which the surrounding information obtained from the at least one camera device disposed in the vehicle must be analyzed.
  • the image processing method in the multi-view mode may further include outputting an image by performing color interpolation and first image processing on the Bayer pattern.
  • the first image processing may include performing at least one of color correction, gamma correction, color space conversion, and edge enhancement.
  • each camera device mounted on the vehicle may output data in the form of an image after performing color interpolation and first image processing on the Bayer pattern output by the image sensor.
  • the image processing apparatus does not need to perform operations for color interpolation and first image processing.
  • the step of outputting the target image by converting or matching at least one image (14) may include: selecting at least one of the at least one image according to whether the image is to be converted or matched; performing calibration and lens distortion correction, or generating a transformed image from which perspective has been removed, on the selected image; extracting, from the transformed image, data corresponding to the area to be inserted into the target image; placing the extracted data in the target image; and delivering the target image to a display device.
  • a converted image may be generated by removing perspective from an image obtained through a camera device mounted on a vehicle.
  • a transform image can be obtained by performing an inverse perspective mapping transform.
  • the process of transforming each image and the process of converting or matching at least two images can be performed together through a lookup table. If the height and angle at which the camera is installed on the vehicle and the horizontal and vertical angles of view of the camera are known, the relationship between the image plane acquired through the camera and the actual plane to be shown to the driver or user (the top-view target image plane) can be determined. In addition, since the camera devices mounted on the vehicle are fixed, the areas to be converted or matched can be set in advance when converting or matching the images acquired by the at least one camera device. Therefore, by arranging this information in the form of a lookup table, the calculation process and time required for conversion or matching can be shortened.
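  • The following sketch illustrates the inverse perspective mapping idea using a homography estimated from four ground-plane point correspondences; the pixel and top-view coordinates below are placeholders, not calibration values from this disclosure.

```python
# Inverse perspective mapping sketch: warp a camera image onto a top-view plane.
# The four point correspondences are hypothetical; in practice they follow from
# the camera's installation height/angle and horizontal/vertical angles of view.
import cv2
import numpy as np

# Pixel corners of a ground rectangle as seen by the (distortion-corrected) camera.
src_pts = np.float32([[300, 480], [980, 480], [1180, 700], [100, 700]])
# Where that rectangle should land in the top-view target image.
dst_pts = np.float32([[200, 0], [440, 0], [440, 400], [200, 400]])

H = cv2.getPerspectiveTransform(src_pts, dst_pts)  # 3x3 homography

def to_top_view(frame: np.ndarray, size=(640, 400)) -> np.ndarray:
    """Remove perspective from one camera frame onto a top-view plane of `size`."""
    return cv2.warpPerspective(frame, H, size)

if __name__ == "__main__":
    frame = np.zeros((720, 1280, 3), dtype=np.uint8)
    print(to_top_view(frame).shape)  # (400, 640, 3)
```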
  • the step (18) of applying the adjustment value to the conversion or matching of the at least one image may include batch-converting at least some of the data arranged in the lookup table.
  • each image to be converted or matched is obtained from a camera disposed at a different position and photographing objects in a different direction; therefore, the amount of light differs for each image.
  • the image information that the image processing apparatus receives may therefore inevitably vary, and the amount of computation required to correct the brightness may not decrease. By gathering brightness information from the converted or matched image, adjusting the exposure time of the camera devices based on it, and adjusting the data in the lookup table used when converting or matching the at least one image, a better-quality resulting image can be obtained.
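  • One possible organization of such a lookup table is sketched below: each target-image pixel stores precomputed source coordinates plus a gain, and the adjustment value is applied by batch-updating the gains. The table layout and field names are assumptions for illustration, not the structure defined in this disclosure.

```python
# Lookup-table sketch: per-target-pixel source coordinates plus a gain that can
# be batch-updated with the brightness adjustment value. Field names are assumed.
import numpy as np

H, W = 400, 640  # target (e.g. top-view) image size

# Precomputed at calibration time: where each target pixel samples its source image.
lut = np.zeros((H, W), dtype=[("src_y", np.int32), ("src_x", np.int32),
                              ("gain", np.float32)])
lut["gain"] = 1.0

def apply_adjustment(table: np.ndarray, adjustment: float, mask=None) -> None:
    """Batch-convert (scale) the gains stored in the lookup table."""
    if mask is None:
        table["gain"] *= adjustment          # whole table
    else:
        table["gain"][mask] *= adjustment    # only the selected region/camera

def remap(src: np.ndarray, table: np.ndarray) -> np.ndarray:
    """Build the target image by sampling the source image through the table."""
    out = src[table["src_y"], table["src_x"]].astype(np.float32)
    out *= table["gain"][..., None]
    return np.clip(out, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    src = np.full((720, 1280, 3), 100, dtype=np.uint8)
    apply_adjustment(lut, 1.2)        # e.g. brighten after an exposure change
    print(remap(src, lut).mean())     # ~120
```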
  • At least one image may be transmitted from at least one camera device mounted in a vehicle, and an input of a user or a driver may be input through an interface mounted in the vehicle.
  • a user or driver may select information on a desired direction or space through an interface that can be manipulated while driving, such as the head unit of the vehicle or a multimedia (audio-video-navigation) device.
  • the at least one image may be image information of at least one of the front, rear, left, and right sides of the vehicle, and the user input may be a selection of at least one of a top view, a front view, a rear view, a left side view, a right side view, and a combination thereof.
  • in order to output an image of the top view, the image processing apparatus may convert or match image information of at least one of the front, rear, left, and right sides, while for the front view it may output the information of the front camera.
  • the top view and the front view may have different brightness because the information shown to the user differs and the images constituting that information differ.
  • an area extracted for the target image output from the image information acquired by each camera in response to a user input may vary.
  • in the front view, for example, a large portion of the image information obtained from the camera collecting the front information can be shown to the user, whereas in the top view a relatively small portion of that image information may be shown. That is, depending on which image the user wants to check, both which cameras' images are used and which area of the image acquired by a specific camera is extracted for display differ. Therefore, when the image is converted or matched in consideration of the brightness of the region actually provided to the user, that is, the extraction region extracted for conversion or matching in each of the at least one image, a higher-quality image can be provided to the user.
  • when the exposure time of the at least one camera is controlled using brightness information of the converted or matched image, such as the top view image, the brightness difference between the multiple views can be reduced.
  • the image processing apparatus may include at least one lookup table distinguished in correspondence with each piece of image information, and the at least one lookup table may include a weight corresponding to that image information. This may be necessary because each camera apparatus photographs objects in a different direction, so the amount of light may differ for each image.
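  • As a sketch of metering brightness only over the regions each camera actually contributes to the displayed view, weighted per camera, the code below computes a weighted average brightness from per-camera extraction masks; the masks and weights are illustrative assumptions.

```python
# Sketch: meter brightness only over the region each camera contributes to the
# current view, weighted per camera. Masks and weights here are illustrative.
import numpy as np

def view_brightness(images, masks, weights):
    """Weighted mean luminance over each camera's extraction region."""
    total, norm = 0.0, 0.0
    for img, mask, w in zip(images, masks, weights):
        gray = img.mean(axis=2)          # rough luminance
        region = gray[mask]
        if region.size:
            total += w * region.mean()
            norm += w
    return total / norm if norm else 0.0

if __name__ == "__main__":
    h, w = 720, 1280
    front = np.full((h, w, 3), 180, np.uint8)   # bright front camera
    left = np.full((h, w, 3), 60, np.uint8)     # shaded left camera
    # Assume the top view only uses the lower half of each frame (nearest ground).
    mask = np.zeros((h, w), bool)
    mask[h // 2:, :] = True
    print(view_brightness([front, left], [mask, mask], weights=[0.5, 0.5]))  # 120.0
```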
  • FIG. 4 illustrates a first example of an image processing system in a multiple view mode.
  • the image processing system may include a camera device 30, an image processing device 40, and a vehicle multimedia device 20.
  • the camera device 30 may include a lens assembly 32 including at least one lens to collect an incoming optical signal, and an image sensor 34 that outputs a Bayer pattern BP by converting the optical signal collected through the lens assembly 32 into an electrical signal. The camera device 30 may transfer the Bayer pattern BP output from the image sensor 34 to the image processing device 40 without performing color interpolation, image correction, or the like.
  • the image processing apparatus 40 may include: an image processor 42, which may selectively perform operations such as color interpolation (demosaicing), color correction, gamma correction, color space conversion, and edge enhancement on the Bayer pattern BP transmitted from the camera apparatus 30; a conversion or matching unit 44, which outputs a target image OI by converting or matching the at least one image CI output from the image processor 42; a brightness controller 48, which receives image information (e.g., brightness information BI) of the target image OI output from the conversion or matching unit 44 and outputs an adjustment value LC0 for updating the lookup table in the conversion or matching unit 44; and an adjusting unit 46, which outputs a control signal ETC for adjusting the exposure time of the at least one camera device 30.
  • the adjusting unit 46 may transmit the change amount or change rate ETR generated in adjusting the exposure time of the at least one camera device 30 to the brightness controller 48.
  • the brightness controller 48 may determine the adjustment value LC0 based on the change rate ETR and the image information (e.g., the brightness information BI) of the target image OI.
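  • A minimal sketch of how an adjustment value LC0 could be derived from the measured brightness BI and the exposure change rate ETR, so that the displayed brightness stays near a target while the exposure converges; the specific formula is an assumption for illustration, not the one defined here.

```python
# Sketch: derive the LUT adjustment value from measured brightness (BI) and the
# exposure change rate (ETR). The compensation formula is an assumption.
def adjustment_value(bi: float, etr: float, target: float = 118.0) -> float:
    """Gain nudging displayed brightness toward `target`, discounted by the
    brightness change the new exposure time is already expected to deliver."""
    if bi <= 0:
        return 1.0
    desired_gain = target / bi          # gain needed to reach the target now
    expected_from_exposure = 1.0 + etr  # e.g. ETR = +0.10 -> ~10% brighter frames
    return desired_gain / expected_from_exposure

if __name__ == "__main__":
    # Scene measured dark (BI = 90) while exposure is already being raised by 10%:
    print(round(adjustment_value(bi=90.0, etr=0.10), 3))  # ~1.192
```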
  • the conversion or matching unit 44 may receive, through the user interface 22 of the in-vehicle multimedia device 20, a mode control signal VC for at least one of the desired view modes (e.g., top view, front view, rear view, left side view, right side view, and combinations thereof).
  • the conversion or matching unit 44 may select one or more images that need to be converted or matched according to the user's input and convert or match them.
  • the conversion or matching unit 44 may perform calibration, lens distortion correction, and at least one operation for generating a converted image from which perspective has been removed. For example, in order to convert an image obtained from a camera module mounted on the vehicle into a top-view image, it is necessary to remove the perspective effect on objects in the image. If the height and angle at which the camera is installed on the vehicle and the horizontal and vertical angles of view of the camera are known, the relationship between the image plane acquired through the camera and the actual plane to be shown to the driver or user (the top-view target image plane) can be determined. Using this relationship, the image plane obtained from the camera can be converted into the plane to be shown to the user.
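  • Because the cameras use wide-angle or fisheye lenses, a distortion-correction step typically precedes the perspective removal. The sketch below uses OpenCV's standard undistortion with placeholder intrinsics; a true fisheye lens would instead use the cv2.fisheye model, and all numbers here are assumptions.

```python
# Lens distortion correction sketch with placeholder intrinsics (assumptions).
import cv2
import numpy as np

# Hypothetical camera matrix and distortion coefficients from calibration.
K = np.array([[700.0, 0.0, 640.0],
              [0.0, 700.0, 360.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.30, 0.09, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3

def undistort(frame: np.ndarray) -> np.ndarray:
    """Correct lens distortion before inverse perspective mapping."""
    return cv2.undistort(frame, K, dist)

if __name__ == "__main__":
    frame = np.zeros((720, 1280, 3), dtype=np.uint8)
    print(undistort(frame).shape)  # (720, 1280, 3)
```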
  • the vehicle multimedia apparatus 20 may include a display device 24 capable of displaying a target image OI transmitted from the conversion or matching unit 44 to a user or a driver.
  • FIG. 5 illustrates a second example of an image processing system in a multiple view mode.
  • the image processing system may include a camera apparatus 50, an image processing apparatus 60, and a vehicle multimedia apparatus 20.
  • the image processing system described with reference to FIG. 5 may be similar to the image processing system described with reference to FIG. 4. The following description will focus on the differences.
  • the camera device 50 may include a lens assembly 32 including at least one lens to collect an incoming optical signal, and an image sensor 34 that converts the optical signal collected through the lens assembly 32 into an electrical signal and outputs a Bayer pattern BP; the camera device 50 may then output a video image II by performing color interpolation and image correction on the Bayer pattern BP output from the image sensor 34.
  • the image processing apparatus 60 that receives at least one video image II may include a conversion or matching unit 44, a brightness controller 48, and an adjusting unit 46.
  • the conversion or matching unit 44, the brightness controller 48, and the adjusting unit 46 may operate similarly to those described with reference to FIG. 4.
  • the in-vehicle multimedia device 20 may include a user interface 22 and a display device 24 and may operate similarly to that described with reference to FIG. 4.
  • FIG. 6 is a schematic block diagram of an image display apparatus 100 that executes an image display method in a multiple view mode according to an embodiment of the present invention.
  • the image display apparatus includes an image acquisition unit 110, an image conversion unit 120, a control unit 130, an image correction unit 140, and an output unit 150, and further includes a memory 160. It may further include.
  • the image acquisition unit 110 acquires a digital image using an image sensor.
  • the digital image obtained for display through the output unit 150 will be referred to as a target image.
  • the image acquisition unit 110 may be implemented in the form of a camera module that is installed in the form of a component in a camera or an arbitrary device that is installed independently to photograph the subject.
  • the image acquisition unit 110 is preferably installed, in the form of camera modules, on the outer surface of the vehicle so as to capture views including the ground on which the vehicle is located in the forward, right, rearward, and left directions of the vehicle.
  • the image converting unit 120 receives the target image acquired by the image acquisition unit 110 and converts the image into a form suitable for the output mode, that is, the view mode.
  • the view mode refers to a mode according to a view point (viewpoint) that changes according to the change of the output mode.
  • Conversion of the target image refers to conversion of the target image according to the change of the view mode.
  • for example, the image conversion means converting the target image acquired by the image acquisition unit 110 installed at the rear of the vehicle into an image as if acquired by a virtual image acquisition unit installed at a predetermined height above the vehicle, that is, an image in the top view mode. Therefore, when the target image acquired by the image acquisition unit 110 is output in the actual view mode, it is not necessary to convert the target image separately.
  • the actual view mode means that there is no change in the viewpoint. That is, the image acquisition unit 110 at a specific position outputs the acquired target image as it is without conversion.
  • the controller 130 may control the image acquisition unit 110 so as to adjust a parameter indicating an attribute of the image, based on the target image acquired by the image acquisition unit 110 or the target image converted by the image converter 120. For example, the controller 130 may adjust the degree of light exposure of the camera included in the image acquisition unit 110, using the average brightness AVG of the converted target image with respect to a preset reference average brightness TH.
  • the reference average brightness refers to the average brightness of the reference image compared with the average brightness of the target image.
  • the adjustment of the light exposure means adjusting the exposure time, that is, the shutter speed or the aperture of the camera of the image acquisition unit 110. As a result, the charge accumulation time and the gain of the photodiode can be adjusted.
  • the image corrector 140 may correct the image acquired by the image acquisition unit 110 or the target image converted by the image converter 120 according to the corresponding view mode. For example, the image corrector 140 may adjust the white balance of the target image using the average brightness AVG of the converted target image with respect to the preset reference average brightness TH.
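  • A sketch of the kind of correction the image corrector 140 might apply: scaling the converted target image so that its average brightness approaches the reference TH, followed by a simple gray-world white balance. Both rules are illustrative assumptions rather than the disclosed algorithm.

```python
# Sketch of target-image correction: brightness scaling toward a reference TH
# plus a simple gray-world white balance. Both rules are assumptions.
import numpy as np

def correct_target_image(img: np.ndarray, th: float = 118.0) -> np.ndarray:
    img = img.astype(np.float32)
    avg = img.mean()
    if avg > 0:
        img *= th / avg                            # pull AVG toward the reference TH
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gray = channel_means.mean()
    img *= gray / np.maximum(channel_means, 1e-6)  # gray-world white balance
    return np.clip(img, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    # Dark, blue-tinted converted target image (B, G, R planes).
    img = np.zeros((400, 640, 3), np.uint8)
    img[..., 0], img[..., 1], img[..., 2] = 90, 60, 50
    print(correct_target_image(img).reshape(-1, 3).mean(axis=0))  # channel means near TH
```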
  • the output unit 150 may output the target image corrected by the image corrector 140 in a corresponding view mode.
  • the output unit 150 may be implemented as an LCD display device. Prior to the output, the controller 130 may change the output mode, that is, the view mode.
  • the memory 160 may store the average brightness and the reference average brightness of the target image.
  • the image converter 120, the controller 130, and the image corrector 140 may execute a program command stored in the memory 160.
  • the image converter 120, the controller 130, and the image corrector 140 may be implemented as a central processing unit (CPU), a graphics processing unit (GPU), or a dedicated processor on which the methods according to the present invention are performed.
  • Memory 160 may also be comprised of volatile storage media and / or non-volatile storage media.
  • the memory 160 may be configured as read only memory (ROM) and / or random access memory (RAM).
  • FIG. 7 is a front view (a), a right side view (b), a rear view (c), and a left side view (d) in the actual view mode as images displayed by the image display apparatus 100.
  • images corresponding to the front view (a), the right side view (b), the rear view (c), and the left side view (d) displayed by the image display apparatus 100 are shown, respectively.
  • Camera modules corresponding to the image acquisition unit 110 may be installed at a predetermined height on the front, right side, rear side, and left side of the outside of the vehicle.
  • the locations where the camera modules are installed may be the radiator grille, the right and left indicator lights, and the trunk cover of the vehicle.
  • the camera module may be installed such that the lens surface faces toward the ground at an angle from its installation position. Therefore, the images shown in FIG. 7 may appear elongated in that direction compared with the actual scene.
  • FIG. 8 shows a front view (a) converted from an image acquired by the image acquisition unit 110 into an image to be displayed in the top view mode.
  • the image of the front view (a) converted by the image converter 120 is different from the image of the front view (a) before conversion, displayed in the actual view mode of FIG. 7.
  • the image in the top view mode, which appears as if viewed directly from a certain height above the vehicle, is a corrected image close to the actual appearance.
  • FIG. 9 is an around view image generated by using the four images of FIG. 8.
  • the around view image refers to an image that appears as if captured by a virtual camera installed above the vehicle.
  • the around view image is mainly used to display the surrounding ground, including the roof of the vehicle, marked with parking lines.
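  • As an illustration of how an around view image like that of FIG. 9 could be assembled, the sketch below pastes four already-converted top-view images around a vehicle region on a single canvas; the layout, canvas size, and vehicle placeholder are assumptions.

```python
# Around-view composition sketch: place four top-view-converted images around a
# vehicle region. Canvas layout and sizes are illustrative assumptions.
import numpy as np

def compose_around_view(front, rear, left, right, canvas_hw=(800, 600)):
    H, W = canvas_hw
    canvas = np.zeros((H, W, 3), dtype=np.uint8)
    band = 200  # thickness of each camera's strip on the canvas
    canvas[:band, :, :] = front[:band, :W, :]                    # top strip
    canvas[H - band:, :, :] = rear[:band, :W, :]                 # bottom strip
    canvas[:, :band, :] = np.rot90(left)[:H, :band, :]           # left strip
    canvas[:, W - band:, :] = np.rot90(right, 3)[:H, :band, :]   # right strip
    canvas[band:H - band, band:W - band] = (40, 40, 40)          # vehicle placeholder
    return canvas

if __name__ == "__main__":
    cam = lambda v: np.full((600, 800, 3), v, dtype=np.uint8)
    print(compose_around_view(cam(200), cam(150), cam(100), cam(50)).shape)  # (800, 600, 3)
```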
  • the four target images actually photographed and the around view image generated based on them are different.
  • in particular, the proportion of the entire area that a region of the actually captured image occupies may change in the around view mode, and the change may be large.
  • for example, the area indicated by A-2, which is the area closest to the camera, may appear to occupy a high proportion of the entire captured image, but in the around view image of FIG. 9 its proportion is reduced.
  • FIG. 10 is a front view of a vehicle for explaining the difference between the actual view mode and the top view mode.
  • a camera module installed in a vehicle as the image acquisition unit 110 is illustrated.
  • the actual camera modules L-1 and R-1 are camera modules installed in the left or right indicator light.
  • the camera modules L-2 and R-2 are virtual camera modules.
  • a difference may occur in the appearance of images acquired by the two camera modules due to the difference in the angle at which the actual camera modules and the virtual camera modules photograph the subject.
  • the target images acquired by the actual camera modules may be converted to match the view mode so that the output image according to the view mode is output close to the actual state.
  • FIG. 11 is a flowchart illustrating an image display method in a multiple view mode according to another exemplary embodiment of the present invention.
  • a reference value for adjustment is set, and this reference value may be stored in a memory or the like (S110).
  • the image acquisition unit 110 obtains a target image (S120).
  • the image converter 120 converts the obtained target image to fit the desired output mode, that is, the view mode (S130).
  • the target image is then corrected (S140). Correction of the target image may be performed in two ways, both of which result in adjusting the brightness of the target image. In the first, the controller 130 calculates the average brightness of the target image and adjusts the light exposure of the camera of the image acquisition unit 110 by comparing the calculated average brightness with the reference average brightness. In the second, the image corrector 140 adjusts the white balance of the target image by comparing the average brightness of the target image with the reference average brightness.
  • the output unit 150 displays the corrected target image (S150).
  • the controller 130 may convert the output mode of the output unit 150.
  • the view mode may be changed by changing the view point at which the target image is output according to the change of the output mode.
  • the adjustment of the camera light amount or of the white balance of the target image may be performed as many times as the number of different view modes in which the obtained target image is to be displayed.
  • FIG. 12 is a flowchart illustrating an image display method in a multiple view mode according to another exemplary embodiment of the present invention.
  • a reference value for adjustment is set, and this reference value may be stored in a memory or the like (S210).
  • the image acquisition unit 110 obtains a target image (S220).
  • the image converter 120 converts the obtained target image to fit the desired output mode, that is, the view mode, and in particular, converts the obtained target image into an around view image (S230).
  • FIG. 13 is a flowchart illustrating a method of displaying an image in a multiple view mode according to another embodiment of the present invention.
  • the controller 130 converts an output mode, that is, a view mode, for displaying a target image (S310).
  • the image corrector 140 corrects or reverse corrects the target image to fit the converted view mode (S320).
  • the reverse correction here means returning the image to the state before the correction was applied.
  • the image corrector 140 corrects the target image according to the converted view mode when the view mode is converted, and returns the corrected target image to the uncorrected target image when the view mode is not converted.
  • the output unit 150 displays the corrected or de-corrected target image.
  • FIG. 14 is a flowchart illustrating a method of displaying an image in a multiple view mode according to another embodiment of the present invention.
  • FIG. 14 illustrates a flowchart of adjusting the light exposure time of the image acquisition unit 110, which may be implemented as a camera module.
  • a threshold (TH) value may be set in advance, and the reference value may be stored in advance in a memory or the like (S410).
  • the target image acquired by the image acquisition unit 110 is input to the control unit 130 (S420).
  • the controller 130 calculates an average brightness (AVG) of the target image (S430).
  • if the average brightness is equal to the reference value, the current exposure time is maintained (S441).
  • if the average brightness and the reference value are different, they are compared (S445): if the average brightness is larger than the reference value, the current exposure time is reduced by the adjustment (S442), and if it is not larger than the reference value, the current exposure time is increased by the adjustment (S443).
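  • The flowchart of FIG. 14 translates into a simple control loop, sketched below under the assumption of a fixed adjustment step for the exposure time (the step size and limits are not specified here).

```python
# Exposure-time control sketch following FIG. 14: compare the target image's
# average brightness AVG with the reference TH and step the exposure time.
# The step size and limits are assumptions.
import numpy as np

TH = 118.0                        # S410: preset reference value
STEP_US = 50.0                    # assumed adjustment step (microseconds)
MIN_US, MAX_US = 100.0, 30000.0   # assumed exposure limits

def update_exposure(target_image: np.ndarray, exposure_us: float) -> float:
    avg = float(target_image.mean())   # S430: average brightness of target image
    if avg == TH:                      # S441: keep the current exposure time
        return exposure_us
    if avg > TH:                       # S442: too bright -> shorten the exposure
        exposure_us -= STEP_US
    else:                              # S443: too dark -> lengthen the exposure
        exposure_us += STEP_US
    return float(np.clip(exposure_us, MIN_US, MAX_US))

if __name__ == "__main__":
    dark_top_view = np.full((400, 640, 3), 80, dtype=np.uint8)
    print(update_exposure(dark_top_view, 1000.0))  # 1050.0 (exposure increased)
```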
  • an image display method in a multi-view mode may adjust brightness of each pixel constituting the target image by using a weight before converting the acquired target image into an image suitable for the corresponding view mode.
  • each pixel has a fixed weight according to its position.
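  • A sketch of such per-pixel weighting: a fixed, position-dependent weight map, here assumed to boost pixels far from the image center (a vignetting-style compensation), applied before the view-mode conversion. The weight profile is an illustrative assumption.

```python
# Per-pixel weight sketch: fixed position-dependent weights applied before the
# view-mode conversion. The radial (vignetting-style) profile is an assumption.
import numpy as np

def make_weight_map(h: int, w: int, max_boost: float = 0.4) -> np.ndarray:
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.hypot(yy - cy, xx - cx) / np.hypot(cy, cx)  # 0 at center, 1 at corners
    return 1.0 + max_boost * r                         # fixed weight per position

def apply_weights(img: np.ndarray, weights: np.ndarray) -> np.ndarray:
    out = img.astype(np.float32) * weights[..., None]
    return np.clip(out, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    img = np.full((480, 640, 3), 100, dtype=np.uint8)
    weighted = apply_weights(img, make_weight_map(480, 640))
    print(weighted[0, 0], weighted[240, 320])  # corners brightened, center unchanged
```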
  • each component is illustrated as a separate block and described as an example, but the components may also be configured as a single block.
  • each block may be configured in a controller or a processor to perform the above-described series of operations.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention relates to a method for displaying an image in a multiple view mode. A method according to the present embodiment comprises the steps of: adjusting a light exposure time of an image acquisition unit by using a target image converted to be output in a desired output mode among a plurality of output modes; and displaying target images acquired according to the adjustment. Consequently, according to the present invention, an image corrected so as to be suitable for a specific view mode can be provided to a user.
PCT/KR2017/009674 2016-09-08 2017-09-05 Procédé d'affichage d'image dans un mode de visualisation multiple WO2018048167A1 (fr)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201780055360.3A CN109691091A (zh) 2016-09-08 2017-09-05 用于在多视图模式下显示图像的方法
US16/331,413 US11477372B2 (en) 2016-09-08 2017-09-05 Image processing method and device supporting multiple modes and improved brightness uniformity, image conversion or stitching unit, and computer readable recording medium realizing the image processing method
EP17849051.2A EP3512195A1 (fr) 2016-09-08 2017-09-05 Procédé d'affichage d'image dans un mode de visualisation multiple

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
KR10-2016-0115912 2016-09-08
KR20160115913 2016-09-08
KR20160115912 2016-09-08
KR10-2016-0115913 2016-09-08
KR10-2016-0165383 2016-12-06
KR1020160165385A KR20180028354A (ko) 2016-09-08 2016-12-06 다중 뷰 모드에서의 영상 디스플레이 방법
KR1020160165383A KR20180028353A (ko) 2016-09-08 2016-12-06 다중 뷰 모드에서의 영상 디스플레이 방법
KR10-2016-0165385 2016-12-06

Publications (1)

Publication Number Publication Date
WO2018048167A1 true WO2018048167A1 (fr) 2018-03-15

Family

ID=61561417

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2017/009674 WO2018048167A1 (fr) 2016-09-08 2017-09-05 Procédé d'affichage d'image dans un mode de visualisation multiple

Country Status (1)

Country Link
WO (1) WO2018048167A1 (fr)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101558586B1 (ko) * 2009-06-15 2015-10-07 현대자동차일본기술연구소 차량 주위 영상 표시장치 및 방법
US20140152778A1 (en) * 2011-07-26 2014-06-05 Magna Electronics Inc. Imaging system for vehicle
US20130208140A1 (en) * 2012-02-15 2013-08-15 Harman Becker Automotive Systems Gmbh Brightness adjustment system
KR20130117564A (ko) * 2012-04-18 2013-10-28 현대모비스 주식회사 차량용 카메라로부터 획득한 영상을 보정하는 영상 처리 장치 및 상기 장치를 이용한 영상 보정 방법
KR20150143144A (ko) * 2014-06-13 2015-12-23 현대모비스 주식회사 Avm 장치 및 동작 방법

Similar Documents

Publication Publication Date Title
JP4869795B2 (ja) 撮像制御装置、撮像システム、および撮像制御方法
US7245325B2 (en) Photographing device with light quantity adjustment
US9497386B1 (en) Multi-imager video camera with automatic exposure control
JP6319340B2 (ja) 動画撮像装置
EP3512195A1 (fr) Procédé d'affichage d'image dans un mode de visualisation multiple
WO2017195965A1 (fr) Appareil et procédé de traitement d'image en fonction de la vitesse d'un véhicule
WO2015083971A1 (fr) Appareil électronique et son procédé de commande
JP4487342B2 (ja) デジタルカメラ
CN109493273A (zh) 一种色彩一致性调节方法
WO2015122604A1 (fr) Capteur d'images à semi-conducteurs, dispositif électronique, et procédé de focalisation automatique
EP3207696A1 (fr) Appareil imageur et procédé d'imagerie
JP2013029995A (ja) 撮像システム
WO2021137555A1 (fr) Dispositif électronique comprenant un capteur d'image et son procédé de fonctionnement
WO2018048167A1 (fr) Procédé d'affichage d'image dans un mode de visualisation multiple
WO2019117549A1 (fr) Appareil d'imagerie, procédé d'imagerie et produit-programme informatique
KR20180028354A (ko) 다중 뷰 모드에서의 영상 디스플레이 방법
JP5545596B2 (ja) 画像入力装置
JP3397397B2 (ja) 撮像装置
WO2018012925A1 (fr) Procédé et dispositif de production d'image
WO2011027994A2 (fr) Appareil de traitement d'image et procédé de traitement d'image pour générer une image grand angle
WO2018070799A1 (fr) Procédé et appareil de mise en correspondance d'images
WO2022114424A1 (fr) Système de boîte noire de véhicule
JP2010273209A (ja) 監視装置
EP3656121A1 (fr) Appareil d'imagerie, procédé d'imagerie et produit-programme informatique
KR20180028353A (ko) 다중 뷰 모드에서의 영상 디스플레이 방법

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 17849051
    Country of ref document: EP
    Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
ENP Entry into the national phase
    Ref document number: 2017849051
    Country of ref document: EP
    Effective date: 20190408