KR20170020666A - AVM system and method for compositing image with blind spot - Google Patents
- Publication number
- KR20170020666A (application KR1020150114844A)
- Authority
- KR
- South Korea
- Prior art keywords
- image data
- avm
- vehicle
- pixel
- image
- Prior art date
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W40/00—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
- B60W40/02—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to ambient conditions
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R1/00—Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
- B60R1/02—Rear-view mirror arrangements
- B60R1/08—Rear-view mirror arrangements involving special optical features, e.g. avoiding blind spots, e.g. convex mirrors; Side-by-side associations of rear-view and other mirrors
- B60R1/081—Rear-view mirror arrangements involving special optical features, e.g. avoiding blind spots, e.g. convex mirrors; Side-by-side associations of rear-view and other mirrors avoiding blind spots, e.g. by using a side-by-side association of mirrors
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W50/08—Interaction between the driver and the control system
- B60W50/14—Means for informing the driver, warning the driver or prompting a driver intervention
-
- H04N5/225—
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/265—Mixing
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R2300/00—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
- B60R2300/20—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of display used
- B60R2300/202—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of display used displaying a blind spot scene on the vehicle part responsible for the blind spot
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W50/08—Interaction between the driver and the control system
- B60W50/14—Means for informing the driver, warning the driver or prompting a driver intervention
- B60W2050/146—Display means
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60Y—INDEXING SCHEME RELATING TO ASPECTS CROSS-CUTTING VEHICLE TECHNOLOGY
- B60Y2400/00—Special features of vehicle units
- B60Y2400/92—Driver displays
Landscapes
- Engineering & Computer Science (AREA)
- Automation & Control Theory (AREA)
- Mechanical Engineering (AREA)
- Transportation (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Human Computer Interaction (AREA)
- Signal Processing (AREA)
- Closed-Circuit Television Systems (AREA)
- Image Processing (AREA)
Abstract
Description
The present invention relates to an AVM system and a blind zone image synthesis method.
Generally, the driver's view from inside the vehicle is directed mainly toward the front, while the left, right, and rear views are largely obscured by the vehicle body, leaving the driver with a very limited field of view.
To address this problem, vehicles are provided with view-assisting means such as side mirrors, and in recent years techniques that use cameras to capture images outside the vehicle and present them to the driver have also been applied to vehicles.
More recently, Around View Monitoring (AVM) systems, in which a plurality of cameras installed around a vehicle display a 360° omni-directional image of the surroundings, have been applied as well. An AVM system combines the images captured by the cameras photographing the surroundings of the vehicle to provide a top-view image (i.e., an AVM image), as if the driver were looking down at the vehicle from the sky, allowing obstacles around the vehicle to be seen on a screen.
As shown in FIG. 1, a general AVM image displays the surroundings of the vehicle, but a blind spot that no camera covers is typically masked in black.
However, the presence of this black masking area interrupts the surrounding view and makes the AVM image appear unnatural to the driver.
The present invention provides an AVM system and a blind zone image synthesis method that do not simply mask in black a blind spot generated by the structure of the vehicle, the angle of view of a camera, or the installation position or attitude of a camera, but instead synthesize into that area a virtual image generated in consideration of the traveling speed and traveling direction of the vehicle, thereby providing a more natural AVM image.
Other objects of the present invention will become readily apparent from the following description.
According to an aspect of the present invention, there is provided an AVM (Around View Monitoring) system comprising: an image input unit for storing, as respective camera image data, image signals input in real time from a plurality of cameras provided in a vehicle; an AVM image generating unit for generating AVM image data including a black masking area using each camera image data; a virtual image generation unit for recognizing a traveling speed and a moving direction of the vehicle and generating virtual image data in which each of a plurality of target pixels corresponding to the black masking area has pixel information obtained by applying predetermined weight values to the pixel information of one or more reference pixels included in each of a plurality of reference areas and summing the results; and an image synthesizer for replacing the black masking area included in the AVM image data with the virtual image data and outputting the resulting AVM image data through a display unit.
The virtual image generation unit may recognize the moving direction by receiving information from a sensor that senses a steering wheel rotation angle or a steering angle of the vehicle, or by using a motion vector extracted from the AVM image data generated by the AVM image generation unit.
The virtual image generation unit may divide the peripheral area of a target pixel in the AVM image data into a plurality of reference areas, and may designate a weight value for each reference area so as to correspond to the moving direction of the vehicle.
A relatively large weight value may be designated for the reference area positioned to match the direction of movement of the vehicle, relative to the other reference areas.
The virtual image generation unit may determine the number of reference pixels to be used in each reference area for calculating the pixel information of the target pixel using the formula D = (SxP) / F, where D is the range value of the reference pixels, S is the traveling distance per second of the vehicle, P is the size at which one pixel of the camera image data is displayed in the AVM image, and F is the frame rate.
Each of the plurality of target pixels may have position information that does not coincide with that of any other target pixel within the coordinate range of the black masking area, and may have reference areas, determined by that position information, that do not coincide with those of other target pixels.
According to another aspect of the present invention, there is provided a blind zone image synthesis method performed in an AVM system, the method comprising: (a) storing, as respective camera image data, image signals input in real time from a plurality of cameras provided in a vehicle; (b) generating AVM image data including a black masking area representing a blind spot using each camera image data; (c) recognizing a traveling speed and a moving direction of the vehicle, and generating virtual image data in which each of a plurality of target pixels corresponding to the black masking area has pixel information obtained by applying pre-designated weight values to the pixel information of one or more reference pixels included in each of a plurality of reference areas and summing the results; and (d) replacing the black masking area included in the AVM image data with the virtual image data and outputting the resulting AVM image data through a display unit.
The moving direction may be recognized by receiving information from a sensor that senses a steering wheel rotation angle or a steering angle of the vehicle, or by using a motion vector extracted from the AVM image data generated in step (b).
In step (c), the peripheral area of the target pixel in the AVM image data may be divided into a plurality of reference areas, and the weight value assigned to each reference area including reference pixels may be designated to correspond to the moving direction of the vehicle using prestored reference information.
A relatively large weight value may be designated for the reference area positioned to match the direction of movement of the vehicle, relative to the other reference areas.
In step (c), the number of reference pixels to be used in each reference area for calculating the pixel information of the target pixel may be determined using the equation D = (SxP) / F, where D is the range value of the reference pixels, S is the traveling distance per second of the vehicle, P is the size at which one pixel of the camera image data is displayed in the AVM image, and F is the frame rate.
Each of the plurality of target pixels may have position information that does not coincide with that of any other target pixel within the coordinate range of the black masking area, and may have reference areas, determined by that position information, that do not coincide with those of other target pixels.
Other aspects, features, and advantages will become apparent from the following drawings, claims, and detailed description of the invention.
According to embodiments of the present invention, a blind spot generated by the structure of the vehicle or by the angle of view, installation position, or attitude of a camera is not masked in black, but is instead filled with a virtual image generated in consideration of the traveling speed and traveling direction of the vehicle, thereby providing a more natural AVM image to the driver.
FIG. 1 is a view showing a general AVM (Around View Monitoring) image.
FIG. 2 is a block diagram schematically illustrating the configuration of an AVM system according to an embodiment of the present invention.
FIG. 3 is a diagram for explaining a virtual image generation technique according to an embodiment of the present invention.
FIG. 4 is a flowchart illustrating a blind zone image synthesis method according to an embodiment of the present invention.
While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described in detail. It is to be understood, however, that the invention is not to be limited to the specific embodiments, but includes all modifications, equivalents, and alternatives falling within the spirit and scope of the invention.
It is to be understood that when an element is referred to as being "connected" or "coupled" to another element, it may be directly connected or coupled to the other element, or intervening elements may be present. On the other hand, when an element is referred to as being "directly connected" or "directly coupled" to another element, it should be understood that there are no intervening elements.
The terms first, second, and the like may be used to describe various components, but the components should not be limited by these terms; the terms are used only to distinguish one component from another. For example, terms such as a first threshold value and a second threshold value, which will be described later, may designate threshold values that are substantially different from each other or partly the same; since there is such room, terms such as first and second are used merely for convenience of classification.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the invention. Singular expressions include plural expressions unless the context clearly dictates otherwise. In this specification, terms such as "comprises" or "having" indicate the presence of stated features, integers, steps, operations, elements, components, or combinations thereof, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, or combinations thereof.
The components of the embodiments described with reference to the drawings are not limited to those embodiments and may be embodied in other embodiments without departing from the spirit of the invention; even where such description is omitted, multiple embodiments may also be implemented as one integrated embodiment.
Hereinafter, exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings, wherein like reference numerals refer to like elements throughout. In the following description, well-known functions or constructions are not described in detail where doing so would obscure the invention in unnecessary detail.
FIG. 2 is a block diagram of an AVM system according to an embodiment of the present invention, and FIG. 3 is a diagram for explaining a virtual image generation technique according to an embodiment of the present invention.
Referring to FIG. 2, the AVM system includes an image input unit 210, an AVM image generation unit 220, a virtual image generation unit 230, an image synthesizer 240, a display unit 250, and a storage unit 260.
The image input unit 210 stores image signals input in real time from the plurality of cameras provided in the vehicle 100 as respective camera image data in the storage unit 260.
The AVM image generation unit 220 generates AVM image data including a black masking area, which masks the blind spot 120 in black, using each camera image data.
The synthesized AVM image data generated by the AVM image generation unit 220 thus still contains the black masking area corresponding to the blind spot 120.
Herein, the pixel position range information of the compositing target area corresponding to the blind spot 120 may be stored in advance in the storage unit 260.
The virtual image generation unit 230 recognizes the traveling speed and moving direction of the vehicle 100 and generates virtual image data to replace the black masking area.
FIG. 3(a) illustrates the black masking area of the AVM image that is to be replaced with the virtual image data.
Hereinafter, with reference to FIG. 3, a method of setting a reference for the number or range of reference pixels corresponding to the traveling speed or the moving direction of a vehicle and generating a virtual image will be briefly described.
First, a reference pixel for synthesizing pixel information of a target pixel can be determined by, for example, the following equation (1).
[Equation 1]
D = (SxP) / F [m]
where D is the range value of the reference pixels, S is the vehicle's traveling distance per second, P is the display size of one pixel, and F is the frame rate.
For example, assuming that the traveling speed of the vehicle is 5 km/h, the traveling distance S per second of the vehicle is calculated as about 1.38 m, and the frame rate F may be taken as 30, a typical camera frame rate. The display size P indicates how large one pixel of the camera image data is displayed in the AVM image.
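The relationship in Equation 1 can be checked with a short script (a sketch; the constants follow the example in the description, and treating P as the displayed size of one camera pixel in meters is an assumption):

```python
def reference_pixel_range(speed_kmh: float, pixel_size_m: float, frame_rate: float) -> float:
    """Range value D of the reference pixels, per Equation 1: D = (S x P) / F."""
    s = speed_kmh * 1000.0 / 3600.0  # S: traveling distance per second [m]
    return s * pixel_size_m / frame_rate

# Example values from the description: 5 km/h and a 30 fps camera.
s_per_sec = 5 * 1000.0 / 3600.0  # ~1.39 m traveled per second
d = reference_pixel_range(5.0, pixel_size_m=1.0, frame_rate=30.0)
```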
As described above, the range of the reference pixels used to synthesize the pixel information of a target pixel can be determined by the traveling speed of the vehicle 100.
Further, as described later, different weights are applied, depending on the moving direction of the vehicle, to the reference pixels in each reference area specified by the designated range value, in order to generate the pixel information of the target pixel.
Hereinafter, a method of determining a weight for the reference area in each direction according to the movement direction of the vehicle 100 will be described.
The moving direction of the vehicle can be detected by sensing the rotation angle of the steering wheel, by using the steering angle information provided from a steering angle sensor provided in the vehicle, or by using a motion vector extracted from the AVM image data generated by the AVM image generation unit 220.
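As a sketch of how a sensed steering angle might be reduced to a coarse movement direction (the thresholds, direction labels, and function name here are illustrative assumptions, not taken from the patent):

```python
def movement_direction(steering_angle_deg: float, speed_kmh: float) -> str:
    """Coarse movement direction from a sensed steering angle.

    A stationary vehicle has no movement direction; equal weights
    would then be applied to the reference areas, as described later.
    """
    if speed_kmh <= 0.0:
        return "stationary"
    if steering_angle_deg < -5.0:
        return "forward_left"
    if steering_angle_deg > 5.0:
        return "forward_right"
    return "forward"
```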
For example, when the steering angle of the vehicle 100 indicates that the vehicle is moving forward and to the left, reference areas are set on the front, outer diagonal, and left sides of the target pixel.
In this case, a relatively large weight value is designated in advance so as to be applied to the reference area positioned to coincide with the movement direction (that is, the steering angle) of the vehicle 100, relative to the other reference areas.
However, if the vehicle is stationary, or has not yet begun to move after the ignition has been turned on, an equal weight value may be pre-designated for the reference areas.
As shown in FIG. 3(b), when the vehicle 100 moves forward and to the left, relatively large weight values are applied to the reference areas located in that direction.
That is, the weight value to be assigned to each reference pixel for synthesizing the pixel information of the target pixel so as to match the steering angle specifying the movement direction of the vehicle 100 may be designated in advance and stored as reference information.
Referring to FIGS. 3(b) and 3(c), in order to synthesize the pixel information (for example, a color value) of the target pixel DEST, pixels located in three reference areas on the outer sides of the vehicle 100 (for example, the front A1 area, the outer diagonal A2 area, and the left A3 area) may be considered as reference pixels. Here, the outer reference areas are determined in real time based on the target pixel whose pixel information is to be synthesized.
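The prestored reference information mapping a movement direction to per-area weights could be organized as a simple lookup table (a hypothetical sketch; the 50/20/30 split mirrors the worked example in the description, and equal weights cover the stationary case):

```python
# Hypothetical prestored reference information: movement direction ->
# weight per reference area (A1 = front, A2 = outer diagonal, A3 = side).
WEIGHTS_BY_DIRECTION = {
    "forward_left": {"A1": 0.5, "A2": 0.2, "A3": 0.3},
    "stationary":   {"A1": 1 / 3, "A2": 1 / 3, "A3": 1 / 3},
}

def weights_for(direction: str) -> dict:
    """Look up per-area weights, falling back to equal weights."""
    return WEIGHTS_BY_DIRECTION.get(direction, WEIGHTS_BY_DIRECTION["stationary"])
```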
At this time, if the range value D of the reference pixels is set to 1 pixel as in the above example, the reference pixels for synthesizing the pixel information (for example, the color value) of the target pixel DEST are one pixel above (front), one pixel to the upper left, and one pixel to the left, namely a1, a2, and a3.
In this case, assuming the weights for the respective reference areas are designated as 50% for the upper side (A1 area), 30% for the left side (A3 area), and 20% for the upper left side (A2 area), and assuming the color value a1 is 80, a2 is 10, and a3 is 10, the color value constituting the pixel information of the target pixel DEST is calculated and applied as 80x0.5 + 10x0.3 + 10x0.2 = 45.
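This weighted sum can be reproduced directly (a minimal sketch using the exact values from the example):

```python
def synthesize_pixel(refs: dict, weights: dict) -> float:
    """Weighted sum of reference-pixel color values, one per reference area."""
    return sum(refs[area] * weights[area] for area in refs)

refs = {"A1": 80, "A2": 10, "A3": 10}        # color values a1, a2, a3
weights = {"A1": 0.5, "A2": 0.2, "A3": 0.3}  # 50% front, 20% diagonal, 30% left
dest = synthesize_pixel(refs, weights)       # 80*0.5 + 10*0.2 + 10*0.3 = 45.0
```

Swapping in a different prestored weight table changes only the `weights` argument.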
In order to calculate the color value of another target pixel NEW positioned to the right of the target pixel DEST, the pixel above NEW, the pixel to its left (that is, DEST), and the pixel to its upper left (that is, a1) are considered as reference pixels. That is, since each of the plurality of target pixels has position information that does not coincide with that of any other target pixel within the coordinate range of the synthesis target area (i.e., the black masking area), the reference areas determined by the position information of each target pixel may differ.
By repeating the above-described process, the pixel information of each target pixel within the coordinate range of the synthesis target area corresponding to the blind spot 120 can be generated, completing the virtual image data.
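The repetition over the whole masking area can be sketched as a raster scan in which each target pixel reads its front, upper-left, and left neighbors; because pixels are visited top-to-bottom and left-to-right, a value synthesized earlier (such as DEST) is already available when the next target pixel (such as NEW) is processed. The array layout, border handling, and function name are assumptions:

```python
import numpy as np

def fill_masking_area(img, mask, w):
    """Fill masked pixels in raster order from already-known neighbors.

    A1 = pixel above (front), A2 = upper-left diagonal, A3 = left.
    """
    out = np.asarray(img, dtype=float).copy()
    rows, cols = out.shape
    for y in range(1, rows):
        for x in range(1, cols):
            if mask[y, x]:
                out[y, x] = (w["A1"] * out[y - 1, x]
                             + w["A2"] * out[y - 1, x - 1]
                             + w["A3"] * out[y, x - 1])
    return out
```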
As described above, the moving direction of the vehicle 100 may be recognized using the steering angle information or using a motion vector.
That is, the virtual image generation unit 230 may extract a motion vector from successive frames of the AVM image data generated by the AVM image generation unit 220 and recognize the moving direction of the vehicle 100 therefrom.
If the moving direction of the vehicle 100 cannot be recognized (for example, when the vehicle is stationary), equal weight values may be applied to the reference areas, as described above.
Referring again to FIG. 2, the image synthesizer 240 replaces the black masking area included in the AVM image data with the virtual image data generated by the virtual image generation unit 230.
The AVM image data whose black masking area has been synthesized with the virtual image data by the image synthesizer 240 is output through the display unit 250.
For example, the AVM system operating program, the camera image data generated by the image input unit 210, the AVM image data, the virtual image data, and the prestored reference information may be stored in the storage unit 260.
The display unit 250 displays the AVM image data in which the black masking area has been replaced with the virtual image data.
FIG. 4 is a flowchart illustrating a blind zone image synthesis method according to an embodiment of the present invention.
Referring to FIG. 4, in step (a), the AVM system stores image signals input in real time from the plurality of cameras provided in the vehicle as respective camera image data in the storage unit.
In step (b), the AVM system generates AVM image data including a black masking area representing a blind spot, using each camera image data.
In step (c), the AVM system recognizes the traveling speed and moving direction of the vehicle, and generates virtual image data in which each target pixel corresponding to the black masking area has pixel information obtained by applying pre-designated weight values to the pixel information of the reference pixels and summing the results.
In step (d), the AVM system replaces the black masking area included in the AVM image data with the virtual image data and outputs the resulting AVM image through the display unit.
It is a matter of course that the AVM system and the blind zone image synthesis method described above can be performed by an automated procedure according to a time series sequence by a built-in or installed program in the digital processing apparatus. The codes and code segments that make up the program can be easily deduced by a computer programmer in the field. In addition, the program is stored in a computer readable medium readable by the digital processing apparatus, and is read and executed by the digital processing apparatus to implement the method. The information storage medium includes a magnetic recording medium, an optical recording medium, and a carrier wave medium.
It will be apparent to those skilled in the art that various modifications and variations can be made in the present invention without departing from the spirit or scope of the invention as defined in the appended claims, and it will be understood that such modifications fall within the scope of the present invention.
100: vehicle 110a: AVM image
120: blind spot 210: image input unit
220: AVM image generation unit 230: virtual image generation unit
240: image synthesizer 250: display unit
260: storage unit
Claims (13)
A video input unit for storing, as respective camera video data in a storage unit, video signals input in real time from a plurality of cameras provided in a vehicle;
An AVM image generating unit for generating AVM image data including a black masking area using each camera image data;
A virtual image generation unit for recognizing a traveling speed and a moving direction of the vehicle and generating virtual image data in which each of a plurality of target pixels corresponding to the black masking area has pixel information obtained by applying predetermined weight values to the pixel information of one or more reference pixels included in each of a plurality of reference areas and summing the results; And
And an image synthesizing unit for replacing the black masking area included in the AVM image data with the virtual image data and outputting the resulting AVM image data through a display unit.
Wherein the virtual image generation unit recognizes the moving direction by receiving information from a sensor that senses a steering wheel rotation angle or a steering angle of the vehicle, or by using a motion vector extracted from the AVM image data generated by the AVM image generation unit.
Wherein the virtual image generation unit divides the peripheral area of a target pixel in the AVM image data into a plurality of reference areas and designates a weight value for each reference area so as to correspond to the moving direction of the vehicle.
Wherein a relatively large weight value is designated in a reference area positioned to match the direction of movement of the vehicle relative to other reference areas.
Wherein the virtual image generation unit determines the number of reference pixels to be used in each reference area to calculate pixel information of the target pixel using the equation D = (SxP) / F,
Wherein D is a range value of a reference pixel, S is a traveling distance per second of the vehicle, P is a size at which one pixel of the camera image data is displayed in the AVM image, and F is a frame rate AVM system.
Wherein each of the plurality of target pixels has position information that does not coincide with each other within a coordinate range of the black masking area and reference areas that do not coincide with other target pixels according to the position information.
(a) storing a video signal input from each of a plurality of cameras provided in a vehicle in a storage unit as respective camera image data;
(b) generating AVM image data including a black masking area displaying a blind spot using each camera image data;
(c) recognizing a traveling speed and a moving direction of the vehicle, and generating virtual image data in which each of a plurality of target pixels corresponding to the black masking area has pixel information obtained by applying pre-designated weight values to the pixel information of one or more reference pixels included in each of a plurality of reference areas and summing the results; And
(d) replacing the black masking area included in the AVM image data with the virtual image data and outputting the resulting AVM image data through a display unit.
Wherein the direction of movement is recognized by receiving information from a sensor that senses a steering wheel rotation angle or a steering angle of the vehicle, or by using a motion vector extracted from the AVM image data generated in step (b).
Wherein, in step (c), the peripheral area of the target pixel in the AVM image data is divided into a plurality of reference areas, and the weight value assigned to each reference area including reference pixels is designated to correspond to the moving direction of the vehicle using prestored reference information.
Wherein a relatively large weight value is designated in a reference area positioned to match the direction of movement of the vehicle relative to other reference areas.
In the step (c), the number of reference pixels to be used in each reference area is calculated using the equation D = (SxP) / F to calculate the pixel information of the target pixel,
Wherein D is a range value of a reference pixel, S is a traveling distance per second of the vehicle, P is a size at which one pixel of the camera image data is displayed in the AVM image, and F is a frame rate Blind zone image synthesis method.
Wherein each of the plurality of target pixels has position information that does not coincide with each other within a coordinate range of the black masking area and has reference areas that do not coincide with other target pixels corresponding to the position information, Way.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020150114844A KR20170020666A (en) | 2015-08-13 | 2015-08-13 | AVM system and method for compositing image with blind spot |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020150114844A KR20170020666A (en) | 2015-08-13 | 2015-08-13 | AVM system and method for compositing image with blind spot |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020170045261A Division KR101764106B1 (en) | 2017-04-07 | 2017-04-07 | AVM system and method for compositing image with blind spot |
Publications (1)
Publication Number | Publication Date |
---|---|
KR20170020666A true KR20170020666A (en) | 2017-02-23 |
Family
ID=58315468
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020150114844A KR20170020666A (en) | 2015-08-13 | 2015-08-13 | AVM system and method for compositing image with blind spot |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR20170020666A (en) |
- 2015-08-13: KR application KR1020150114844A filed (KR20170020666A, status: active, Application Filing)
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20130124762A (en) | 2012-05-07 | 2013-11-15 | 현대모비스 주식회사 | Around view monitor system and monitoring method |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20200067506A (en) * | 2018-12-04 | 2020-06-12 | 현대자동차주식회사 | Apparatus and method for performing omnidirectional sensor-fusion and vehicle including the same |
US11789141B2 (en) | 2018-12-04 | 2023-10-17 | Hyundai Motor Company | Omnidirectional sensor fusion system and method and vehicle including the same |
CN115937421A (en) * | 2022-12-13 | 2023-04-07 | 昆易电子科技(上海)有限公司 | Method for generating simulation video data, image generating device and readable storage medium |
CN115937421B (en) * | 2022-12-13 | 2024-04-02 | 昆易电子科技(上海)有限公司 | Method for generating simulated video data, image generating device and readable storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP1179958B1 (en) | Image processing device and monitoring system | |
JP3300334B2 (en) | Image processing device and monitoring system | |
KR101764106B1 (en) | AVM system and method for compositing image with blind spot | |
CN103770706B (en) | Dynamic reversing mirror indicating characteristic | |
JP5072576B2 (en) | Image display method and image display apparatus | |
WO2015194501A1 (en) | Image synthesis system, image synthesis device therefor, and image synthesis method | |
JP7247173B2 (en) | Image processing method and apparatus | |
US20150042799A1 (en) | Object highlighting and sensing in vehicle image display systems | |
US9025819B2 (en) | Apparatus and method for tracking the position of a peripheral vehicle | |
JP2018531530A (en) | Method and apparatus for displaying surrounding scene of vehicle / towed vehicle combination | |
WO2005088970A1 (en) | Image generation device, image generation method, and image generation program | |
JP2008027138A (en) | Vehicle monitoring device | |
CN101487895B (en) | Reverse radar system capable of displaying aerial vehicle image | |
KR20190047027A (en) | How to provide a rearview mirror view of the vehicle's surroundings in the vehicle | |
JP5178454B2 (en) | Vehicle perimeter monitoring apparatus and vehicle perimeter monitoring method | |
JP6338930B2 (en) | Vehicle surrounding display device | |
KR20170118077A (en) | Method and device for the distortion-free display of an area surrounding a vehicle | |
KR20180020274A (en) | Panel conversion | |
KR20180021822A (en) | Rear Cross Traffic - QuickLux | |
Pan et al. | Rear-stitched view panorama: A low-power embedded implementation for smart rear-view mirrors on vehicles | |
KR20170020666A (en) | AVM system and method for compositing image with blind spot | |
JP2020052671A (en) | Display control device, vehicle, and display control method | |
KR20180094717A (en) | Driving assistance apparatus using avm | |
JP7029350B2 (en) | Image processing device and image processing method | |
KR102300652B1 (en) | vehicle and control method thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
A201 | Request for examination | ||
E902 | Notification of reason for refusal | ||
E601 | Decision to refuse application | ||
E801 | Decision on dismissal of amendment | ||
A107 | Divisional application of patent |