KR20170020666A - AVM system and method for compositing image with blind spot - Google Patents

AVM system and method for compositing image with blind spot

Info

Publication number
KR20170020666A
KR20170020666A KR1020150114844A KR20150114844A
Authority
KR
South Korea
Prior art keywords
image data
avm
vehicle
pixel
image
Prior art date
Application number
KR1020150114844A
Other languages
Korean (ko)
Inventor
이재민
전도영
Original Assignee
(주)캠시스
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by (주)캠시스 filed Critical (주)캠시스
Priority to KR1020150114844A priority Critical patent/KR20170020666A/en
Publication of KR20170020666A publication Critical patent/KR20170020666A/en


Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/02Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to ambient conditions
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/02Rear-view mirror arrangements
    • B60R1/08Rear-view mirror arrangements involving special optical features, e.g. avoiding blind spots, e.g. convex mirrors; Side-by-side associations of rear-view and other mirrors
    • B60R1/081Rear-view mirror arrangements involving special optical features, e.g. avoiding blind spots, e.g. convex mirrors; Side-by-side associations of rear-view and other mirrors avoiding blind spots, e.g. by using a side-by-side association of mirrors
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08Interaction between the driver and the control system
    • B60W50/14Means for informing the driver, warning the driver or prompting a driver intervention
    • H04N5/225
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/20Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of display used
    • B60R2300/202Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of display used displaying a blind spot scene on the vehicle part responsible for the blind spot
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08Interaction between the driver and the control system
    • B60W50/14Means for informing the driver, warning the driver or prompting a driver intervention
    • B60W2050/146Display means
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60YINDEXING SCHEME RELATING TO ASPECTS CROSS-CUTTING VEHICLE TECHNOLOGY
    • B60Y2400/00Special features of vehicle units
    • B60Y2400/92Driver displays

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Mechanical Engineering (AREA)
  • Transportation (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Human Computer Interaction (AREA)
  • Signal Processing (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Image Processing (AREA)

Abstract

Disclosed are an AVM system and a method for synthesizing a blind spot image. The AVM system comprises: an image input unit to store, in a storage unit, image signals input in real time from a plurality of cameras disposed in a vehicle as image data of each camera; an AVM image generation unit to generate, using the image data of each camera, AVM image data including a black masking area; a virtual image generation unit to recognize a driving speed and a moving direction of the vehicle and generate virtual image data in which each of a plurality of target pixels corresponding to the black masking area has pixel information summed by applying predesignated weight values to the pixel information of one or more reference pixels included in each of multiple reference areas; and an image synthesis unit to replace the black masking area included in the AVM image data with the virtual image data and output the result through a display unit.

Description

[0001] The present invention relates to an AVM system and a blind spot image synthesis method.

Generally, a driver's view from inside the vehicle is directed mainly toward the front; the driver's left, right and rear views are largely obscured by the vehicle body, leaving a very limited field of view.

To solve this problem, view-assisting means such as side mirrors are provided in a vehicle, and in recent years techniques including camera means for photographing images outside the vehicle and providing them to the driver have been applied to vehicles.

More recently, the Around View Monitoring (AVM) system, in which a plurality of cameras are installed around a vehicle to display a 360° omnidirectional view of the vehicle's surroundings, has also come into use. The AVM system combines the images captured by the plurality of cameras photographing the surroundings of the vehicle to provide a top view image (i.e., an AVM image) as if the driver were looking down at the vehicle from the sky, allowing obstacles around the vehicle to be seen on the screen.

As shown in FIG. 1, the AVM image 110 of the surroundings of the vehicle 100 has a blind spot 120, a region that is physically out of view due to the structure of the vehicle, the camera's angle of view, or the camera's installation position or attitude. Because the blind spot 120 contains no information to display, it is masked black on the display unit.

However, the presence of the blind spot 120, masked and displayed in black, makes the displayed AVM image look unnatural.

Korean Patent Laid-Open Publication No. 2013-0124762 (surrounding view monitor system and monitoring method)

The present invention seeks to provide an AVM system and a blind zone image synthesis method that, instead of masking with black the blind spot generated by the structure of the automobile or the camera's angle of view, installation position or attitude, synthesizes into that area a virtual image generated in consideration of the traveling speed and traveling direction of the vehicle, thereby providing a more natural AVM image.

Other objects of the present invention will become readily apparent from the following description.

According to an aspect of the present invention, there is provided an AVM (Around View Monitoring) system comprising: an image input unit for storing, in a storage unit, image signals input in real time from a plurality of cameras provided in a vehicle as respective camera image data; an AVM image generating unit for generating AVM image data including a black masking area using each camera image data; a virtual image generation unit for recognizing a running speed and a moving direction of the vehicle and generating virtual image data in which each of a plurality of target pixels corresponding to the black masking area has pixel information summed by applying predesignated weight values to the pixel information of one or more reference pixels included in each of a plurality of reference areas; and an image synthesizer for replacing the black masking area included in the AVM image data with the virtual image data and outputting the result through a display unit.

The virtual image generation unit may recognize the moving direction by receiving information from a sensor that senses the steering wheel rotation angle or the steering angle of the vehicle, or by using a motion vector extracted from the AVM image data generated by the AVM image generation unit.

The virtual image generation unit may partition the peripheral area of a target pixel in the AVM image data into a plurality of reference areas, and may designate, using pre-stored reference information, a weight value for each reference area corresponding to the moving direction of the vehicle.

A relatively larger weight value may be designated for the reference area aligned with the direction of movement of the vehicle than for the other reference areas.

The virtual image generation unit may determine the number of reference pixels to be used in each reference area for calculating the pixel information of the target pixel using the formula D = (S × P) / F, where D is the range value of a reference pixel, S is the traveling distance per second of the vehicle, P is the size at which one pixel of the camera image data is displayed in the AVM image, and F is the frame rate.

Each of the plurality of target pixels has position information unique within the coordinate range of the black masking area, and accordingly may have reference areas that differ from those of the other target pixels.

According to another aspect of the present invention, there is provided a blind zone image synthesis method performed in an AVM system, the method comprising: (a) storing, in a storage unit, image signals input in real time from a plurality of cameras provided in a vehicle as respective camera image data; (b) generating AVM image data including a black masking area representing a blind spot, using each camera image data; (c) recognizing a running speed and a moving direction of the vehicle, and generating virtual image data in which each of a plurality of target pixels corresponding to the black masking area has pixel information summed by applying predesignated weight values to the pixel information of one or more reference pixels included in each of a plurality of reference areas; and (d) replacing the black masking area included in the AVM image data with the virtual image data and outputting the result through a display unit.

The moving direction may be recognized by sensing the rotation angle of the steering wheel of the vehicle, by receiving information from a sensor that senses the steering angle, or by using a motion vector extracted from the AVM image data generated in step (b).

In step (c), the peripheral area of a target pixel in the AVM image data may be partitioned into a plurality of reference areas, and the weight value assigned to each reference area containing reference pixels may be designated, using pre-stored reference information, to correspond to the moving direction of the vehicle.

A relatively larger weight value may be designated for the reference area aligned with the direction of movement of the vehicle than for the other reference areas.

In step (c), the number of reference pixels to be used in each reference area for calculating the pixel information of the target pixel may be determined using the equation D = (S × P) / F, where D is the range value of a reference pixel, S is the traveling distance per second of the vehicle, P is the size at which one pixel of the camera image data is displayed in the AVM image, and F is the frame rate.

Each of the plurality of target pixels has position information unique within the coordinate range of the black masking area, and accordingly may have reference areas that differ from those of the other target pixels.

Other aspects, features, and advantages will become apparent from the following drawings, claims, and detailed description of the invention.

According to embodiments of the present invention, the blind spot generated by the structure of the automobile or the camera's angle of view, installation position or attitude is not masked with black; instead it is filled with a virtual image generated in consideration of the traveling speed and traveling direction of the vehicle, thereby providing the driver with a more natural AVM image.

FIG. 1 is a view showing a general AVM (Around View Monitoring) image.
FIG. 2 is a block diagram schematically illustrating a configuration of an AVM system according to an embodiment of the present invention.
FIG. 3 is a diagram for explaining a virtual image generation technique according to an embodiment of the present invention.
FIG. 4 is a flowchart illustrating a blind zone image synthesis method according to an embodiment of the present invention.

While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described in detail. It is to be understood, however, that the invention is not to be limited to the specific embodiments, but includes all modifications, equivalents, and alternatives falling within the spirit and scope of the invention.

When an element is referred to as being "connected" or "coupled" to another element, it may be directly connected or coupled to the other element, or intervening elements may be present. In contrast, when an element is referred to as being "directly connected" or "directly coupled" to another element, it should be understood that there are no intervening elements.

The terms first, second, etc. may be used to describe various components, but the components should not be limited by these terms; the terms are used only to distinguish one component from another. For example, terms such as a first threshold value and a second threshold value, described later, may in practice be designated with threshold values that are substantially different from, or partly the same as, each other; the terms first, second and so on are used merely for convenience of distinction.

The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the invention. Singular expressions include plural expressions unless the context clearly dictates otherwise. In this specification, terms such as "comprises" or "having" indicate the presence of stated features, integers, steps, operations, elements, components, or combinations thereof, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, or combinations thereof.

The components of the embodiments described with reference to the drawings are not limited to those embodiments and may be included in other embodiments without departing from the spirit of the invention; moreover, even where separate description is omitted, multiple embodiments may be re-implemented as one integrated embodiment.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS Hereinafter, exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings, in which like reference numerals refer to like elements throughout. In the following description, well-known functions or constructions are not described in detail, since they would obscure the invention in unnecessary detail.

FIG. 2 is a block diagram of an AVM system according to an embodiment of the present invention, and FIG. 3 is a diagram for explaining a virtual image generation technique according to an embodiment of the present invention.

Referring to FIG. 2, the AVM system includes an image input unit 210, an AVM image generation unit 220, a virtual image generation unit 230, an image synthesis unit 240, a display unit 250, and a storage unit 260. Although not shown, a controller for controlling the operation of one or more components included in the AVM system may further be included.

The image input unit 210 stores in the storage unit 260, as respective camera image data, the image signals captured by and input from cameras provided at each of a plurality of locations (for example, positions designated to photograph the front, rear, left, and right sides of the vehicle 100). Here, each camera may be implemented as a wide-angle camera having an angle of view of, for example, 180 degrees or more, so that the environment around the vehicle can be captured with a small number of cameras.

The AVM image generation unit 220 generates, using the camera image data stored in real time in the storage unit 260, AVM image data (a top view image) that looks as if the surroundings of the vehicle 100 were viewed from above. The process by which the AVM image generation unit 220 converts the images captured roughly parallel to the ground by the cameras installed on the front, rear, left and right sides of the vehicle into a vertically viewed top view image is separate from the technical gist of the present invention and is obvious to those skilled in the art, so its description is omitted.

The synthesized AVM image data generated by the AVM image generation unit 220 may include a blind spot 120, a region physically out of view due to the shape of the vehicle 100, the camera's angle of view, or the camera's installation position or attitude. Since there is no image data to display in that area, the AVM image generation unit 220 may mask the blind spot 120 included in the AVM image data with black, for example.

Here, the pixel position range information of the synthesis target area corresponding to the blind spot 120 included in the AVM image data may be stored in the storage unit 260; naturally, the synthesis target area corresponds to the area to be replaced with a virtual image.

The virtual image generation unit 230 recognizes the traveling speed and moving direction of the vehicle, and generates virtual image data in which each target pixel (see, e.g., DEST in FIG. 3(c)) of the synthesis target area (see 120 in FIG. 3(b)) has pixel information (e.g., RGB or YUV values) obtained by applying predesignated weight values to the pixel information of one or more reference pixels (e.g., a1, a2 and a3 in FIG. 3(c)) included in each of a plurality of reference areas (e.g., A, B, C and D in FIG. 3(a); A1, A2 and A3 in FIG. 3(b)) located around the synthesis target area, and stores the generated virtual image data in the storage unit 260.

The reference areas may be outer regions obtained by dividing the area surrounding the vehicle 100 (see FIG. 3(a) and FIG. 3(b)). Although FIG. 3 illustrates four vertically partitioned outer regions as reference areas, the shapes and the number of the outer divided regions serving as reference areas may of course be determined variously as needed.

Hereinafter, with reference to FIG. 3, a method of setting a reference for the number or range of reference pixels corresponding to the traveling speed or the moving direction of a vehicle and generating a virtual image will be briefly described.

First, a reference pixel for synthesizing pixel information of a target pixel can be determined by, for example, the following equation (1).

[Equation 1]

D = (S × P) / F [m]

Where D is the range value of the reference pixel, S is the vehicle's per-second travel distance, P is the display size of one pixel, and F is the frame rate.

For example, assuming the running speed of the vehicle is 5 km/h, the travel distance per second S of the vehicle is about 1.38 m, and the frame rate F can be taken as 30, the typical frame rate of a camera. The display size P indicates how large one pixel of the camera image data appears in the AVM image 110; here one pixel is assumed to be displayed at a size of 2 cm. Substituting these values into Equation 1, the range value D of the reference pixel works out to about 2 cm, which in this example corresponds to roughly one actual pixel, so the range value D of the reference pixel can be designated as one pixel.
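As a rough illustration, Equation 1 can be read as converting the vehicle's per-frame displacement into AVM-image pixels. The sketch below assumes that reading, with P interpreted as AVM pixels per metre of ground (the reciprocal of the 2 cm per-pixel display size); the function name and unit interpretation are illustrative assumptions, not taken verbatim from the patent.

```python
# Hypothetical sketch of Equation 1: estimating the reference-pixel range D
# from vehicle speed, AVM pixel scale and camera frame rate. The unit
# interpretation (P as AVM-image pixels per metre) is an assumption.

def reference_pixel_range(speed_kmh: float, pixel_size_m: float, fps: float) -> float:
    """Return D, the per-frame vehicle displacement expressed in AVM pixels."""
    s = speed_kmh * 1000.0 / 3600.0   # S: travel distance per second [m/s]
    p = 1.0 / pixel_size_m            # P: AVM-image pixels per metre
    return (s * p) / fps              # D = (S x P) / F

d = reference_pixel_range(5.0, 0.02, 30.0)  # 5 km/h, 2 cm per pixel, 30 fps
print(round(d, 1))                          # roughly a couple of pixels per frame
```

In practice D would be rounded to a small integer pixel count, matching the one-pixel designation used in the example above.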

As described above, the range of reference pixels used to synthesize the pixel information of a target pixel can be determined by the running speed of the vehicle 100.

Further, as described later, the reference pixels in each reference area specified by the designated range value are given different weights depending on the moving direction of the vehicle when the pixel information of the target pixel is generated.

Hereinafter, a method of determining a weight for a reference region in each direction according to the movement direction of the vehicle 100 will be described.

The moving direction of the vehicle can be determined by sensing the rotation angle of the steering wheel of the vehicle, by using steering angle information provided from a steering angle sensor provided in the vehicle, or by using a motion vector extracted from the AVM image data generated by the AVM image generation unit 220 and stored.

For example, weight values may be predesignated and stored in the storage unit 260 for the steering angles of the vehicle 100 that determine its moving direction, ranging from the 0° state in which the vehicle runs straight ahead to rotation at the maximum left or right steering angle, and the weight values may be specified to vary, for example, each time the steering angle changes by 1 degree.

In this case, a relatively large weight value is predesignated for the reference area aligned with the moving direction (i.e., the steering angle) of the vehicle 100, and relatively small weight values are predesignated for the other reference areas.

However, if the vehicle is stationary, or has not yet begun to move after the ignition has been turned on, equal weight values may be predesignated for the reference areas.

For example, referring to FIG. 3(b), when the vehicle 100 travels at a steering angle of 45 degrees for a gentle left turn (that is, in the diagonal direction across area A2), a relatively large weight value (e.g., 60%) may be assigned to area A2 and relatively small weight values (e.g., 20% each) to the other areas A1 and A3. Likewise, if the vehicle 100 goes straight ahead in the A1 direction, a relatively large weight value is assigned to the reference pixels of area A1, and if the vehicle turns left at the maximum steering angle of 90 degrees, a relatively large weight value is assigned to area A3.
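A minimal sketch of such a predesignated weight table, assuming three reference areas and the 60/20/20 split from the example above; the angle bands, function name and exact percentages are illustrative assumptions rather than values fixed by the patent.

```python
# Hypothetical weight table keyed by steering angle: the area matching the
# travel direction receives the largest share (60%), the rest split evenly.

def area_weights(steering_deg: float) -> dict:
    """Map a steering angle (0 = straight ahead, +90 = maximum left turn) to
    weights for the front (A1), diagonal (A2) and left (A3) reference areas."""
    if steering_deg < 22.5:          # roughly straight ahead
        return {"A1": 0.6, "A2": 0.2, "A3": 0.2}
    elif steering_deg < 67.5:        # gentle left turn (around 45 degrees)
        return {"A1": 0.2, "A2": 0.6, "A3": 0.2}
    else:                            # hard left turn (around 90 degrees)
        return {"A1": 0.2, "A2": 0.2, "A3": 0.6}

w = area_weights(45.0)
print(w["A2"])  # 0.6: the diagonal area dominates for a 45-degree turn
```

A real implementation would presumably store a finer-grained table (e.g., per degree, as the text suggests) in the storage unit rather than hard-coding three bands.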

That is, the weight values to be applied to the reference pixels for synthesizing the pixel information of a target pixel may be specified in advance so as to match the steering angle that determines the moving direction of the vehicle 100. Maximum and minimum weight values to be applied to each reference pixel may also be specified in advance.

Referring to FIGS. 3(b) and 3(c), to synthesize the pixel information (e.g., the color value) of the target pixel DEST, pixels located in three outer reference areas of the vehicle 100 (e.g., the front area A1, the outer diagonal area A2 and the left area A3) may be considered as reference pixels. Here, the outer divided regions serving as reference areas are determined in real time relative to the target pixel whose pixel information is to be synthesized.

At this time, if the range value D of the reference pixel is set to one pixel as in the above example, the reference pixels for synthesizing the pixel information (e.g., the color value) of the target pixel DEST are one pixel above (i.e., forward), one pixel to the left, and one pixel to the upper left, denoted a1, a2 and a3 respectively.

In this case, if the weights for the respective reference areas are calculated as 50% for the upper side (area A1), 30% for the left side (area A3) and 20% for the upper left (area A2), and the color value of a1 is 80, that of a2 is 10 and that of a3 is 10, then the color value constituting the pixel information of the target pixel DEST can be calculated and applied as 80 × 0.5 + 10 × 0.3 + 10 × 0.2 = 45.

To calculate the color value of another target pixel NEW located to the right of the target pixel DEST, the pixel above NEW, the pixel to its left (DEST) and the pixel to its upper left (a1) are considered as reference pixels. That is, since each of the plurality of target pixels has unique position information within the coordinate range corresponding to the synthesis target area (i.e., the black masking area), the reference areas determined by the position information of each target pixel may differ from pixel to pixel.

By repeating the above processes, the pixel information of every target pixel within the coordinate range of the synthesis target area corresponding to the blind spot 120 can be generated, and using these the virtual image generation unit 230 can create the corresponding virtual image.
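The per-pixel weighted sum can be sketched as follows, reproducing the DEST example above (weights 50/30/20 for the upper, left and upper-left reference pixels); the helper name and grid layout are illustrative assumptions.

```python
# Minimal sketch of the weighted synthesis for one target pixel,
# following the DEST worked example in the text.

def synthesize_pixel(refs, weights):
    """Weighted sum of reference-pixel values for one colour channel."""
    return sum(v * w for v, w in zip(refs, weights))

# a1 = pixel above (area A1, 50%), a2 = pixel to the left (area A3, 30%),
# a3 = pixel to the upper left (area A2, 20%), as in FIG. 3(c).
dest = synthesize_pixel([80, 10, 10], [0.5, 0.3, 0.2])
print(round(dest))  # 45

# Filling left-to-right lets the just-synthesized DEST serve as the left
# reference of the next target pixel NEW, as described in the text.
new = synthesize_pixel([80, dest, 80], [0.5, 0.3, 0.2])
```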

As described above, the direction of movement of the vehicle 100 may be determined using the motion vector analyzed in the AVM image data in addition to the steering angle.

That is, the virtual image generator 230 divides each of the temporally sequential AVM images into regions, calculates a motion vector in the AVM image using divided regions containing at least a predetermined number of feature points (e.g., corner points), and recognizes the moving direction of the vehicle 100 using the calculated motion vector.

The moving direction of the vehicle 100 may be determined using a motion vector calculated for the entire AVM image, using an average of the motion vectors calculated for the respective reference areas, or using a motion vector calculated only for a reference area that needs to be considered.
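A simplified sketch of this idea: average the displacement of matched feature points between two consecutive AVM frames and convert it to a heading angle. Feature detection and matching (e.g., of corner points) are omitted; the point lists are assumed inputs, and the sign convention (0 = straight ahead, positive = leftward) is an illustrative assumption.

```python
import math

# Estimate the vehicle's moving direction from the average motion vector of
# matched feature points across two consecutive top-view AVM frames.

def moving_direction(prev_pts, curr_pts):
    """Estimate vehicle heading in degrees from matched (x, y) point pairs."""
    dxs = [c[0] - p[0] for p, c in zip(prev_pts, curr_pts)]
    dys = [c[1] - p[1] for p, c in zip(prev_pts, curr_pts)]
    mean_dx = sum(dxs) / len(dxs)
    mean_dy = sum(dys) / len(dys)
    # In a top-view image the ground appears to move opposite to the vehicle:
    # a forward-moving vehicle shifts the scene straight down (+y), and a
    # forward-left heading shifts it down and to the right (+x).
    return math.degrees(math.atan2(mean_dx, mean_dy))

# Scene shifts straight down between frames -> vehicle is going straight:
angle = moving_direction([(10, 10), (20, 30)], [(10, 12), (20, 32)])
print(round(angle))  # 0
```

The resulting angle could then be fed into the same weight-lookup used for the steering-angle case.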

If the moving direction of the vehicle 100 is determined using the motion vector, the process of assigning a weight value to each reference area and calculating the pixel information of the target pixel can be performed in the same manner as the process described above using the steering angle.

Referring again to FIG. 2, the image synthesis unit 240 synthesizes the AVM image data generated by the AVM image generation unit 220 and stored in the storage unit 260 with the virtual image data generated by the virtual image generation unit 230 and stored in the storage unit 260, thereby generating AVM image data in which the black masking area (i.e., the blind spot 120) is removed.

The AVM image data synthesized by the image synthesis unit 240, with the black masking area removed, is output through the display unit 250.

The storage unit 260 may store, for example, the AVM system operating program, the camera image data generated by the image input unit 210, the AVM image data generated by the AVM image generation unit 220, reference-pixel selection criteria for each traveling speed to be used by the virtual image generation unit 230, reference information for determining the moving direction of the vehicle, reference information for specifying weight values, the virtual image data generated by the virtual image generation unit 230, and the synthesized AVM image data in which the black masking area is removed.

The storage unit 260 may comprise, for example, permanent storage memory for persistently storing data and temporary memory for temporarily holding the data required during operation.

FIG. 4 is a flowchart illustrating a blind zone image synthesis method according to an embodiment of the present invention.

Referring to FIG. 4, in step 410 the AVM image generation unit 220 synthesizes the camera image data, generated by the image input unit 210 and stored in the storage unit 260, into AVM image data, which is a top view image.

In step 420, the virtual image generation unit 230 determines the weight values of the reference areas corresponding to each target pixel included in the synthesis target area, together with the range of reference pixels within each reference area to be used for synthesizing the pixel information of the target pixel.

In step 430, the virtual image generation unit 230 applies, to the pixel information of each reference pixel determined above, the weight value assigned to the reference area to which that reference pixel belongs, thereby synthesizing the pixel information of each target pixel, and generates a virtual image corresponding to the synthesis target area using the pixel information of the target pixels.

In step 440, the image synthesis unit 240 replaces the black masking area included in the AVM image data generated by the AVM image generation unit 220 with the virtual image generated by the virtual image generation unit 230, and outputs the result through the display unit 250.
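Steps 410 through 440 can be sketched end-to-end on a toy single-channel image: every masked pixel is replaced, left to right and top to bottom, by a weighted sum of its above, left and upper-left neighbours. The grid values, mask layout and fixed 50/30/20 weights are illustrative assumptions.

```python
# End-to-end sketch of the blind spot fill: masked pixels are synthesized in
# raster order so that already-filled pixels can serve as references.

def fill_blind_spot(img, mask, weights=(0.5, 0.3, 0.2)):
    """img: 2-D list of pixel values; mask: 2-D list, True inside the blind spot."""
    out = [row[:] for row in img]
    for y in range(1, len(out)):
        for x in range(1, len(out[0])):
            if mask[y][x]:
                up, left, upleft = out[y - 1][x], out[y][x - 1], out[y - 1][x - 1]
                out[y][x] = weights[0] * up + weights[1] * left + weights[2] * upleft
    return out

img  = [[80, 80, 80],
        [80,  0,  0],    # zeros mark the black-masked blind spot
        [80,  0,  0]]
mask = [[False, False, False],
        [False, True,  True ],
        [False, True,  True ]]
filled = fill_blind_spot(img, mask)
print(round(filled[1][1]))  # 80: surrounded by uniform 80s, the fill blends in
```

In the real system the weights would come from the steering-angle or motion-vector lookup rather than being constant, and the result would replace the black masking area before display.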

It is a matter of course that the AVM system and the blind zone image synthesis method described above can be performed as an automated, time-series procedure by a program built into or installed in a digital processing apparatus. The codes and code segments constituting the program can be easily inferred by a computer programmer skilled in the art. The program may be stored in a computer-readable information storage medium, and read and executed by the digital processing apparatus to implement the method. Such information storage media include magnetic recording media, optical recording media and carrier wave media.

It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit and scope of the invention as defined in the appended claims.

100: vehicle 110a: AVM image
120: blind spot 210: image input unit
220: AVM image generation unit 230: virtual image generation unit
240: image synthesis unit 250: display unit
260: storage unit

Claims (13)

In an AVM (Around View Monitoring) system,
An image input unit for storing, as respective camera image data in a storage unit, the video signals input in real time from each of a plurality of cameras provided in a vehicle;
An AVM image generating unit for generating AVM image data including a black masking area using each camera image data;
A virtual image generation unit for recognizing a running speed and a moving direction of the vehicle, and for generating virtual image data in which each of a plurality of target pixels corresponding to the black masking area has pixel information obtained by applying a predetermined weight value to the pixel information of one or more reference pixels included in each of a plurality of reference areas and summing the results;
And an image synthesis unit for replacing the black masking area included in the AVM image data with the virtual image data and outputting the result through a display unit.
The AVM system according to claim 1,
Wherein the virtual image generation unit recognizes the moving direction either from information received from a sensor that senses a steering wheel rotation angle or a steering angle of the vehicle, or by using a motion vector extracted from the AVM image data generated by the AVM image generation unit.
The AVM system according to claim 1,
Wherein the virtual image generation unit divides the peripheral area of each target pixel in the AVM image data into a plurality of reference areas, and designates a weight value for each reference area in correspondence with the moving direction of the vehicle.
The AVM system according to claim 3,
Wherein a relatively larger weight value, compared with the other reference areas, is designated for the reference area positioned to match the moving direction of the vehicle.
The AVM system according to claim 3,
Wherein the virtual image generation unit determines the number of reference pixels to be used in each reference area for calculating the pixel information of the target pixel using the equation D = (S x P) / F,
Wherein D is the range value of the reference pixels, S is the travel distance of the vehicle per second, P is the size at which one pixel of the camera image data is displayed in the AVM image, and F is the frame rate of the camera image data.
The AVM system according to claim 1,
Wherein each of the plurality of target pixels has position information that is unique within the coordinate range of the black masking area and, according to that position information, reference areas that do not coincide with those of the other target pixels.
In a blind zone image synthesis method performed in an AVM system,
(a) storing a video signal input from each of a plurality of cameras provided in a vehicle in a storage unit as respective camera image data;
(b) generating AVM image data including a black masking area displaying a blind spot using each camera image data;
(c) recognizing a running speed and a moving direction of the vehicle, and generating virtual image data in which each of a plurality of target pixels corresponding to the black masking area has pixel information obtained by applying a pre-assigned weight value to the pixel information of one or more reference pixels included in each of a plurality of reference areas and summing the results; and
(d) replacing the black masking area included in the AVM image data with the virtual image data and outputting the result through a display unit.
8. The method of claim 7,
Wherein the moving direction is recognized from information sensed by a sensor that senses a steering wheel rotation angle or a steering angle of the vehicle, or by using a motion vector extracted from the AVM image data generated in step (b).
8. The method of claim 7,
Wherein, in step (c), the peripheral area of each target pixel in the AVM image data is divided into a plurality of reference areas, and the weight value assigned to each reference area containing reference pixels is designated, using the stored reference information, to correspond to the moving direction of the vehicle.
10. The method of claim 9,
Wherein a relatively larger weight value, compared with the other reference areas, is designated for the reference area positioned to match the moving direction of the vehicle.
8. The method of claim 7,
Wherein, in step (c), the number of reference pixels to be used in each reference area for calculating the pixel information of the target pixel is determined using the equation D = (S x P) / F,
Wherein D is the range value of the reference pixels, S is the travel distance of the vehicle per second, P is the size at which one pixel of the camera image data is displayed in the AVM image, and F is the frame rate of the camera image data.
8. The method of claim 7,
Wherein each of the plurality of target pixels has position information that is unique within the coordinate range of the black masking area and, corresponding to that position information, reference areas that do not coincide with those of the other target pixels.
A recording medium, readable by a digital processing apparatus, on which is recorded a program for performing the blind zone image synthesis method according to any one of claims 7 to 12.
KR1020150114844A 2015-08-13 2015-08-13 AVM system and method for compositing image with blind spot KR20170020666A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150114844A KR20170020666A (en) 2015-08-13 2015-08-13 AVM system and method for compositing image with blind spot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020150114844A KR20170020666A (en) 2015-08-13 2015-08-13 AVM system and method for compositing image with blind spot

Related Child Applications (1)

Application Number Title Priority Date Filing Date
KR1020170045261A Division KR101764106B1 (en) 2017-04-07 2017-04-07 AVM system and method for compositing image with blind spot

Publications (1)

Publication Number Publication Date
KR20170020666A true KR20170020666A (en) 2017-02-23

Family

ID=58315468

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150114844A KR20170020666A (en) 2015-08-13 2015-08-13 AVM system and method for compositing image with blind spot

Country Status (1)

Country Link
KR (1) KR20170020666A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20200067506A (en) * 2018-12-04 2020-06-12 현대자동차주식회사 Apparatus and method for performing omnidirectional sensor-fusion and vehicle including the same
CN115937421A (en) * 2022-12-13 2023-04-07 昆易电子科技(上海)有限公司 Method for generating simulation video data, image generating device and readable storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20130124762A (en) 2012-05-07 2013-11-15 현대모비스 주식회사 Around view monitor system and monitoring method

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20130124762A (en) 2012-05-07 2013-11-15 현대모비스 주식회사 Around view monitor system and monitoring method

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20200067506A (en) * 2018-12-04 2020-06-12 현대자동차주식회사 Apparatus and method for performing omnidirectional sensor-fusion and vehicle including the same
US11789141B2 (en) 2018-12-04 2023-10-17 Hyundai Motor Company Omnidirectional sensor fusion system and method and vehicle including the same
CN115937421A (en) * 2022-12-13 2023-04-07 昆易电子科技(上海)有限公司 Method for generating simulation video data, image generating device and readable storage medium
CN115937421B (en) * 2022-12-13 2024-04-02 昆易电子科技(上海)有限公司 Method for generating simulated video data, image generating device and readable storage medium

Similar Documents

Publication Publication Date Title
EP1179958B1 (en) Image processing device and monitoring system
JP3300334B2 (en) Image processing device and monitoring system
KR101764106B1 (en) AVM system and method for compositing image with blind spot
CN103770706B (en) Dynamic reversing mirror indicating characteristic
JP5072576B2 (en) Image display method and image display apparatus
WO2015194501A1 (en) Image synthesis system, image synthesis device therefor, and image synthesis method
JP7247173B2 (en) Image processing method and apparatus
US20150042799A1 (en) Object highlighting and sensing in vehicle image display systems
US9025819B2 (en) Apparatus and method for tracking the position of a peripheral vehicle
JP2018531530A (en) Method and apparatus for displaying surrounding scene of vehicle / towed vehicle combination
WO2005088970A1 (en) Image generation device, image generation method, and image generation program
JP2008027138A (en) Vehicle monitoring device
CN101487895B (en) Reverse radar system capable of displaying aerial vehicle image
KR20190047027A (en) How to provide a rearview mirror view of the vehicle's surroundings in the vehicle
JP5178454B2 (en) Vehicle perimeter monitoring apparatus and vehicle perimeter monitoring method
JP6338930B2 (en) Vehicle surrounding display device
KR20170118077A (en) Method and device for the distortion-free display of an area surrounding a vehicle
KR20180020274A (en) Panel conversion
KR20180021822A (en) Rear Cross Traffic - QuickLux
Pan et al. Rear-stitched view panorama: A low-power embedded implementation for smart rear-view mirrors on vehicles
KR20170020666A (en) AVM system and method for compositing image with blind spot
JP2020052671A (en) Display control device, vehicle, and display control method
KR20180094717A (en) Driving assistance apparatus using avm
JP7029350B2 (en) Image processing device and image processing method
KR102300652B1 (en) vehicle and control method thereof

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E601 Decision to refuse application
E801 Decision on dismissal of amendment
A107 Divisional application of patent