US10904638B2 - Device and method for inserting advertisement by using frame clustering - Google Patents
- Publication number: US10904638B2 (U.S. application Ser. No. 14/898,450)
- Authority: United States
- Prior art keywords: region, advertisement, virtual indirect, case, inpainting
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/81—Monomedia components thereof
- H04N21/812—Monomedia components thereof involving advertisement data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06K9/00744
- G06K9/4661
- G06K9/52
- G06K9/6215
- G06K9/6218
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/02—Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
- G11B27/031—Electronic editing of digitised analogue information signals, e.g. audio or video signals
- G11B27/036—Insert-editing
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/19—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier
- G11B27/28—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/23418—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/235—Processing of additional data, e.g. scrambling of additional data or processing content descriptors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/85—Assembly of content; Generation of multimedia applications
- H04N21/854—Content authoring
- G06K2009/4666
Definitions
- the present invention relates to a device and method for inserting an advertisement using frame clustering, and more particularly to a device and method for inserting an advertisement using frame clustering, which search for and cluster advertisement insertion target frames including advertisement insertion regions and then insert the advertisement for the clustered frames in a uniform manner.
- the present invention claims the benefit of the filing dates of Korean Patent Application No. 10-2014-0009193 filed on Jan. 24, 2014, Korean Patent Application No. 10-2014-0009194 filed on Jan. 24, 2014, Korean Patent Application No. 10-2014-0009195 filed on Jan. 24, 2014, Korean Patent Application No. 10-2014-0009196 filed on Jan. 24, 2014, and Korean Patent Application No. 10-2014-0013842 filed on Feb. 6, 2014, in the Korean Intellectual Property Office. This application is the PCT U.S. National Phase entry of International Application No. PCT/KR2014/012159 filed on Dec. 10, 2014, the contents of which are incorporated herein in their entirety.
- Conventional advertisement insertion technology merely replaces a specific pattern in a continuous image, and does not disclose or suggest a configuration for inserting an advertisement for all frames in a uniform manner.
- An object of the present invention is to search for and cluster advertisement insertion target frames including advertisement insertion regions and then insert an advertisement for the clustered frames in a uniform manner, thereby enabling the advertisement to be more naturally inserted.
- an object of the present invention is to insert the advertisement in a uniform manner using a clustering factor, i.e., the result of the comparison between an advertisement insertion region in a reference frame and advertisement insertion regions in other frames, thereby automatically inserting the advertisement into a plurality of frames only through the insertion of the advertisement into a single frame.
- an object of the present invention is to extract only portions including advertisement insertion regions from frames and cluster the extracted portions, thereby enabling more rapid clustering.
- an object of the present invention is to determine candidate regions into which a virtual indirect advertisement will be inserted, measure the exposure levels of the candidate regions, and provide the measured exposure levels to a user, thereby providing guide information so that the user can select a virtual indirect advertisement insertion region while more intuitively recognizing the advertising effects of the respective candidate regions.
- an object of the present invention is to provide corresponding candidate region-related information to a user only when the exposure level of a candidate region into which a virtual indirect advertisement will be inserted is equal to or higher than a predetermined value, thereby providing guide information so that a virtual indirect advertisement insertion region can be more rapidly selected.
- an object of the present invention is to provide a user with a total exposure level, calculated by assigning weights based on various measurement criteria, such as the size, duration, angle and location at which a virtual indirect advertisement is exposed, the speed at which the virtual indirect advertisement moves within the screen, the size of the region that is covered by another object, and the frequency with which the virtual indirect advertisement is covered by another object, thereby providing guide information so that a virtual indirect advertisement insertion region can be efficiently selected based on the advertising effects of the respective candidate regions.
- an object of the present invention is to insert a virtual indirect advertisement using markers inserted into video image content and perform inpainting on a marker region, thereby performing processing to achieve harmonization with a surrounding image.
- an object of the present invention is to perform inpainting on only the portion of a marker region that does not overlap an insertion region for a virtual indirect advertisement, thereby reducing the operation costs required for inpainting.
- an object of the present invention is to reduce the operation costs required for inpainting, thereby maximizing advertising profits from a virtual indirect advertisement.
- an object of the present invention is to insert a virtual indirect advertisement in place of an existing advertisement into video image content and perform inpainting on an existing advertisement region, thereby performing processing to achieve harmonization with a surrounding image.
- an object of the present invention is to automatically calculate the boundary region pixel value of a virtual indirect advertisement that is inserted into video image content and perform processing so that a boundary line within which the virtual indirect advertisement has been inserted can harmonize with a surrounding image.
- an object of the present invention is to calculate a boundary region pixel value after mixing a foreground pixel value and a background pixel value based on the distances by which the location of the boundary region pixel of a virtual indirect advertisement which will be inserted into video image content is spaced apart from a foreground pixel and a background pixel, thereby performing processing so that a boundary line within which a virtual indirect advertisement has been inserted can harmonize with a surrounding image.
- an object of the present invention is to automatically calculate a boundary region pixel value by referring to the pixel values of frames previous and subsequent to a current frame into which a virtual indirect advertisement has been inserted, thereby performing processing so that a boundary line within which a virtual indirect advertisement has been inserted can harmonize with a surrounding image during the playback of video.
- the present invention provides a device for inserting an advertisement using frame clustering, including: a frame search unit configured to, when an advertisement insertion region is set in at least one of frames of a video, search for advertisement insertion target frames including advertisement insertion regions; a clustering unit configured to group the advertisement insertion target frames, select any one of the advertisement insertion target frames as a reference frame, compare an advertisement insertion region in the reference frame with advertisement insertion regions in advertisement insertion target frames other than the reference frame, calculate a value of a difference with the reference frame as a clustering factor, and perform clustering; and an advertisement insertion unit configured to, when an advertisement is inserted into at least one of the clustered frames, insert the advertisement for all the clustered frames in a uniform manner.
- inserting in the uniform manner may include inserting the advertisement by applying the clustering factor to each of the clustered frames.
- the clustering unit may include: a scene structure analysis unit configured to analyze the scene structure of the advertisement insertion region; a camera motion analysis unit configured to analyze the camera motion of the advertisement insertion region; and a clustering factor calculation unit configured to calculate the clustering factor using the scene structure and the camera motion.
- the clustering factor calculation unit may select a frame in which the advertisement insertion region has been set as the reference frame, may compare an advertisement insertion region in the reference frame with advertisement insertion regions in advertisement insertion target frames other than the reference frame, and may calculate the value of a difference with the reference frame as the clustering factor.
- the clustering factor calculation unit may compare the scene structure and the camera motion in the reference frame with scene structures and camera motions in the advertisement insertion target frames other than the reference frame.
- the scene structure analysis unit may acquire any one or more of the location, size, rotation and perspective of the advertisement insertion region.
- the camera motion analysis unit may acquire any one or more of the focus and shaking of the advertisement insertion region.
- the clustering factor calculation unit may determine global movement between successive frames with respect to the clustered frames, and may calculate the clustering factor based on the result of the determination of the global movement.
- the clustering unit may extract only portions including the advertisement insertion regions from the respective advertisement insertion target frames, and may cluster the extracted portions.
- the clustering unit may cluster frames successive from the reference frame.
- the frame search unit may determine global movement of the successive frames, and may search for the advertisement insertion target frames based on the result of the determination of the global movement.
- the frame search unit may predict the location of the advertisement insertion region in any one or more of previous and subsequent frames using the result of the determination of the global movement, and may search for the advertisement insertion target frames.
- the advertisement insertion unit may perform inpainting on the advertisement insertion region.
- the reference frame may be the frame which is selected by the user when any one or more of advertisement insertion target frames are visually displayed to the user.
- the reference frame may be the frame of the advertisement insertion target frames which has the widest advertisement insertion region.
- the present invention provides a method of inserting an advertisement using frame clustering, including: when an advertisement insertion region is set in at least one of frames of a video, searching for advertisement insertion target frames including advertisement insertion regions; grouping the advertisement insertion target frames, selecting any one of the advertisement insertion target frames as a reference frame, comparing an advertisement insertion region in the reference frame with advertisement insertion regions in advertisement insertion target frames other than the reference frame, calculating a value of a difference with the reference frame as a clustering factor, and performing clustering; and, when an advertisement is inserted into at least one of the clustered frames, inserting the advertisement for all the clustered frames in a uniform manner.
- inserting in the uniform manner may include inserting the advertisement by applying the clustering factor to each of the clustered frames.
- performing the clustering may include: analyzing the scene structure of the advertisement insertion region; analyzing the camera motion of the advertisement insertion region; and calculating the clustering factor using the scene structure and the camera motion.
- calculating the clustering factor may include selecting a frame in which the advertisement insertion region has been set as the reference frame, comparing an advertisement insertion region in the reference frame with advertisement insertion regions in advertisement insertion target frames other than the reference frame, and calculating the value of a difference with the reference frame as the clustering factor.
- calculating the clustering factor may include comparing the scene structure and the camera motion in the reference frame with scene structures and camera motions in the advertisement insertion target frames other than the reference frame.
- the present invention provides a device for providing insertion region guide information for the insertion of a virtual indirect advertisement, including: a candidate region determination unit configured to determine a candidate region into which a virtual indirect advertisement will be inserted in video image content; an advertisement insertion unit configured to process the virtual indirect advertisement and insert the processed virtual indirect advertisement into the candidate region; an exposure measurement unit configured to measure the exposure level of the virtual indirect advertisement based on exposure characteristics in which the virtual indirect advertisement is exposed in the video image content; and an information provision unit configured to provide insertion region guide information including the exposure level of the virtual indirect advertisement to a user.
- the information provision unit may determine whether the exposure level of the virtual indirect advertisement exceeds a preset reference exposure level, and may provide the insertion region guide information including the exposure level of the virtual indirect advertisement to the user if the exposure level of the virtual indirect advertisement exceeds the preset reference exposure level.
- the information provision unit may display any one or more of a candidate region into which the virtual indirect advertisement has been inserted and a candidate region which is present before the insertion of the virtual indirect advertisement.
- the device for providing insertion region guide information for the insertion of a virtual indirect advertisement may further include an advertising expense calculation unit configured to calculate the advertising expenses of the virtual indirect advertisement based on the exposure level, and the information provision unit may display the advertising expenses to the user.
- the advertisement insertion unit may include: an advertisement selection unit configured to select the virtual indirect advertisement which will be inserted into the candidate region; an advertisement processing unit configured to process the virtual indirect advertisement based on a preliminary processing characteristic so that the virtual indirect advertisement can be inserted into the candidate region; and a processed advertisement insertion unit configured to insert the processed virtual indirect advertisement into the candidate region.
- the exposure characteristics may include a real-time characteristic which is measured in real time while the virtual indirect advertisement is being exposed in the video image content, and the preliminary processing characteristic.
- the exposure measurement unit may include a size measurement unit configured to measure the exposure level in proportion to the ratio of the size of the exposed region of the virtual indirect advertisement which is exposed in the video image content to the size of the overall screen.
- the exposure measurement unit may include a time measurement unit configured to measure the exposure level in proportion to the time for which the virtual indirect advertisement is exposed in the video image content.
- the exposure measurement unit may include a deformation measurement unit configured to measure the exposure level in inverse proportion to the difference between the angle at which the virtual indirect advertisement is exposed in the video image content and a preset reference angle.
- the exposure measurement unit may include a covered size measurement unit configured to measure the exposure level in inverse proportion to the size of the portion of the virtual indirect advertisement which is covered with another object while the virtual indirect advertisement is being exposed in the video image content.
- the exposure measurement unit may include a frequency measurement unit configured to measure the exposure level in inverse proportion to the frequency at which the virtual indirect advertisement is covered with another object while the virtual indirect advertisement is being exposed in the video image content.
- the exposure measurement unit may include a location measurement unit configured to measure the exposure level in proportion to the proximity of the virtual indirect advertisement to the center of the screen.
- the exposure measurement unit may include a speed measurement unit configured to measure the exposure level in proportion to the speed at which the virtual indirect advertisement moves within the screen while the virtual indirect advertisement is being exposed in the video image content.
- the exposure measurement unit may calculate a total exposure level by collecting the exposure levels measured based on the exposure characteristics, and the advertising expense calculation unit may calculate the advertising expenses in proportion to the total exposure level.
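- for illustration only, the total exposure level described above can be sketched as a weighted sum of the per-characteristic exposure levels; the characteristic names, weight values, and unit cost below are assumptions, since the patent does not fix them:

```python
def total_exposure_level(levels, weights):
    """Collect per-characteristic exposure levels into a total level.

    levels:  dict of exposure levels measured for each exposure
             characteristic, e.g. {"size": 0.4, "time": 0.7, ...}
    weights: dict with the same keys giving the relative importance
             of each measurement criterion (illustrative values).
    """
    return sum(weights[k] * levels[k] for k in levels)

# Advertising expenses calculated in proportion to the total exposure level.
levels = {"size": 0.4, "time": 0.7, "angle": 0.9, "speed": 0.5}
weights = {"size": 0.3, "time": 0.3, "angle": 0.2, "speed": 0.2}
expenses = 1000.0 * total_exposure_level(levels, weights)  # assumed unit cost
```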
- the present invention provides a method of providing insertion region guide information for the insertion of a virtual indirect advertisement, including: determining a candidate region into which a virtual indirect advertisement will be inserted in video image content; processing the virtual indirect advertisement, and inserting the processed virtual indirect advertisement into the candidate region; measuring the exposure level of the virtual indirect advertisement based on exposure characteristics in which the virtual indirect advertisement is exposed in the video image content; and providing insertion region guide information including the exposure level of the virtual indirect advertisement to a user.
- providing the insertion region guide information may include determining whether the exposure level of the virtual indirect advertisement exceeds a preset reference exposure level, and providing the insertion region guide information, including the exposure level of the virtual indirect advertisement, to the user if the exposure level of the virtual indirect advertisement exceeds the reference exposure level.
- providing the insertion region guide information may include displaying any one or more of a candidate region into which the virtual indirect advertisement has been inserted and a candidate region which is present before the insertion of the virtual indirect advertisement to the user.
- the method of providing insertion region guide information for the insertion of a virtual indirect advertisement may further include calculating the advertising expenses of the virtual indirect advertisement based on the exposure level, and providing the insertion region guide information may include displaying the advertising expenses to the user.
- processing and inserting the virtual indirect advertisement may include: selecting the virtual indirect advertisement which will be inserted into the candidate region; processing the virtual indirect advertisement based on a preliminary processing characteristic so that the virtual indirect advertisement can be inserted into the candidate region; and inserting the processed virtual indirect advertisement into the candidate region.
- the exposure characteristics may include a real-time characteristic which is measured in real time while the virtual indirect advertisement is being exposed in the video image content, and the preliminary processing characteristic.
- the present invention provides a device for inserting an advertisement, including: a marker recognition unit configured to recognize markers included in the frames of a video; a region determination unit configured to determine a marker region corresponding to the markers, an insertion region into which a virtual indirect advertisement will be inserted, and a marker elimination region using the marker region and the insertion region; a marker elimination unit configured to perform inpainting on only the marker elimination region; and an advertisement insertion unit configured to insert the virtual indirect advertisement into the insertion region after the inpainting of the marker elimination region.
- the region determination unit may divide the marker region into an inpainting region and a non-inpainting region, and may determine the inpainting region to be the marker elimination region.
- the region determination unit may determine whether each marker pixel within the marker region is present in the insertion region, and may determine marker pixels, not present in the insertion region, to be the inpainting region.
- the region determination unit may determine whether each marker pixel within the marker region is present in the insertion region, and may determine marker pixels, present in the insertion region, to be the non-inpainting region.
- the region determination unit may generate an insertion boundary region including pixels forming the insertion boundary, and may include part of the insertion boundary region in the inpainting region.
- the region determination unit may determine whether each of the pixels forming the boundary of the insertion region is present within a preset distance from the boundary of the insertion region, may determine a region composed of pixels present within the preset distance to be the insertion boundary region, and may determine the marker elimination region by considering the insertion boundary region.
- the region determination unit may determine whether each of the pixels within the insertion boundary region is present in the marker region and the insertion region, and may include pixels, present in the marker region and the insertion region, in the inpainting region.
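- as a sketch, assuming the marker region, the insertion region, and the insertion boundary region are available as boolean pixel masks (the names and the numpy representation are assumptions), the region determination described above reduces to a few set operations:

```python
import numpy as np

def inpainting_mask(marker, insertion, insertion_boundary):
    """Each argument is an (H, W) boolean mask; True marks the region.

    Marker pixels outside the insertion region are inpainted; marker
    pixels inside it will be covered by the virtual indirect
    advertisement and need no inpainting.
    """
    inpaint = marker & ~insertion
    # Insertion-boundary pixels lying in both the marker region and
    # the insertion region are also included in the inpainting region.
    inpaint |= insertion_boundary & marker & insertion
    return inpaint
```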
- the marker elimination unit may include: an image separation unit configured to divide an image corresponding to the inpainting region into a structure image and a texture image; a structure inpainting unit configured to generate a processed structure image by performing inpainting on the structure image in such a manner that a pixel proximate to the periphery of the inpainting region diffuses into the inpainting region along a line of an equal gray value; a texture synthesis unit configured to generate a processed texture image by performing texture synthesis on the texture image; and an image synthesis unit configured to synthesize the processed structure image with the processed texture image.
- the marker elimination unit may include: a patch segmentation unit configured to divide both the inpainting region and the region excluding the inpainting region into unit patches; a patch selection unit configured to select the most consistent patch from the region excluding the inpainting region with respect to each patch of the inpainting region; and a patch synthesis unit configured to synthesize each patch of the inpainting region with the selected patch.
- the marker elimination unit may perform inpainting from a unit region corresponding to the periphery of the inpainting region in an internal direction.
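- as a stand-in sketch for the diffusion-based approach above (OpenCV's built-in Navier-Stokes inpainting likewise propagates peripheral pixel values into the masked region along lines of equal gray value, from the periphery inward; the wrapper function and radius are assumptions):

```python
import cv2
import numpy as np

def erase_marker(frame_bgr, inpaint_mask):
    """Fill the marker elimination region from its surroundings."""
    mask_u8 = inpaint_mask.astype(np.uint8) * 255
    # Diffusion-style inpainting: surrounding values flow into the
    # masked region starting from its periphery.
    return cv2.inpaint(frame_bgr, mask_u8, inpaintRadius=3,
                       flags=cv2.INPAINT_NS)
```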
- the region determination unit may generate a marker boundary region including pixels forming the marker boundary, and may include part of the marker boundary region in the inpainting region.
- the region determination unit may determine whether each of pixels forming the boundary of the marker region is present within a preset distance from the boundary of the marker region, may determine a region composed of pixels, present within the preset distance, to be the marker boundary region, and may determine the marker elimination region by considering the marker boundary region.
- the region determination unit may determine whether each of pixels within the marker boundary region is present in the marker region and the insertion region, and may include pixels, not present in the marker region and the insertion region, in the inpainting region.
- the present invention provides a method of eliminating markers for the insertion of a virtual indirect advertisement, including: recognizing markers included in the frames of a video; determining a marker region corresponding to the markers, an insertion region into which a virtual indirect advertisement will be inserted, and a marker elimination region using the marker region and the insertion region; performing inpainting on only the marker elimination region; and inserting the virtual indirect advertisement into the insertion region after the inpainting of the marker elimination region.
- determining the marker elimination region may include dividing the marker region into an inpainting region and a non-inpainting region, and determining the inpainting region to be the marker elimination region.
- determining the marker elimination region may include determining whether each marker pixel within the marker region is present in the insertion region, and determining marker pixels, not present in the insertion region, to be the inpainting region.
- determining the marker elimination region may include determining whether each marker pixel within the marker region is present in the insertion region, and determining marker pixels, present in the insertion region, to be the non-inpainting region.
- determining the marker elimination region may include generating an insertion boundary region including pixels forming the insertion boundary, and including part of the insertion boundary region in the inpainting region.
- determining the marker elimination region may include determining whether each of the pixels forming the boundary of the insertion region is present within a preset distance from the boundary of the insertion region, determining a region composed of pixels present within the preset distance to be the insertion boundary region, and determining the marker elimination region by considering the insertion boundary region.
- determining the marker elimination region may include determining whether each of the pixels within the insertion boundary region is present in the marker region and the insertion region, and including pixels, present in the marker region and the insertion region, in the inpainting region.
- the present invention provides a device for calculating a boundary value for the insertion of a virtual indirect advertisement, including: a region determination unit configured to determine a boundary region including the boundary pixels of a virtual indirect advertisement inserted into a frame of a video; a reference pixel selection unit configured to, with respect to each boundary region pixel within the boundary region, select a foreground reference pixel and a background reference pixel from a remaining region other than the boundary region by considering a location of the boundary region pixel; and a boundary value calculation unit configured to calculate a value of the boundary region pixel using pixel values of the foreground reference pixel and the background reference pixel and weights of the foreground reference pixel and the background reference pixel.
- the region determination unit may divide a remaining region other than the boundary region into a foreground region and a background region corresponding to the virtual indirect advertisement; and the reference pixel selection unit may include: a foreground reference pixel selection unit configured to select one of the pixels of the foreground region, which is closest to the boundary region pixel, to be the foreground reference pixel; and a background reference pixel selection unit configured to select one of the pixels of the background region, which is closest to the boundary region pixel, to be the background reference pixel.
- the reference pixel selection unit may include: a previous frame reference pixel selection unit configured to, with respect to each boundary region pixel within the boundary region, select a previous frame reference pixel from a frame previous to the video frame by considering a location of the boundary region pixel; and a subsequent frame reference pixel selection unit configured to select a subsequent frame reference pixel from a frame subsequent to the video frame; and the boundary value calculation unit may calculate the boundary region pixel value further using the pixel values of the previous frame reference pixel and the subsequent frame reference pixel and the weights of the previous frame reference pixel and the subsequent frame reference pixel.
- the reference pixel selection unit may select a pixel, corresponding to the location of the boundary region pixel in the previous frame, as the previous frame reference pixel, and may select a pixel, corresponding to the location of the boundary region pixel in the subsequent frame, as the subsequent frame reference pixel.
- the boundary value calculation unit may set the weight of the foreground reference pixel based on the distance between the boundary region pixel and the foreground reference pixel, and may set the weight of the background reference pixel based on the distance between the boundary region pixel and the background reference pixel.
- the boundary value calculation unit may set the weights of the foreground reference pixel and the background reference pixel in inverse proportion to the distance to the boundary region pixel.
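- a minimal sketch of this distance-weighted mixing for one boundary region pixel, assuming the foreground/background reference pixel values and their distances to the boundary region pixel have already been obtained (all names are illustrative):

```python
def boundary_pixel_value(fg_value, bg_value, d_fg, d_bg, eps=1e-6):
    """Blend the reference pixel values with weights set in inverse
    proportion to each reference pixel's distance, so the nearer
    reference pixel dominates the blended result."""
    w_fg = 1.0 / (d_fg + eps)
    w_bg = 1.0 / (d_bg + eps)
    return (w_fg * fg_value + w_bg * bg_value) / (w_fg + w_bg)
```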
- the region determination unit may determine whether each of the pixels of the frame of the video is present within a preset distance from the boundary pixel, and may determine a region, composed of pixels present within the preset distance, to be a boundary region.
- the region determination unit may set the preset distance by considering the overall resolution of the video frame.
- the region determination unit may set a longer preset distance as the overall resolution of the frame of the video becomes higher.
- the reference pixel selection unit may further include a perpendicular line generation unit configured to generate a perpendicular line between the boundary region pixel and the boundary line of the remaining region; the foreground reference pixel selection unit may select one of the pixels of the foreground region, corresponding to the foot of the perpendicular line, which is closest to the boundary region pixel, as the foreground reference pixel; and the background reference pixel selection unit may select one of the pixels of the background region, corresponding to the foot of the perpendicular line, which is closest to the boundary region pixel, as the background reference pixel.
- the present invention provides a method of calculating a boundary value for the insertion of a virtual indirect advertisement, including: determining a boundary region including boundary pixels of a virtual indirect advertisement inserted into a frame of a video; with respect to each boundary region pixel within the boundary region, selecting a foreground reference pixel and a background reference pixel from a remaining region other than the boundary region by considering the location of the boundary region pixel; and calculating a value of the boundary region pixel using pixel values of the foreground reference pixel and the background reference pixel and weights of the foreground reference pixel and the background reference pixel.
- the method of calculating a boundary value for the insertion of a virtual indirect advertisement may further include dividing a remaining region other than the boundary region into a foreground region and a background region corresponding to the virtual indirect advertisement; and selecting the foreground reference pixel and the background reference pixel may include selecting one of the pixels of the foreground region, which is closest to the boundary region pixel, to be the foreground reference pixel, and selecting one of the pixels of the background region, which is closest to the boundary region pixel, to be the background reference pixel.
- the method of calculating a boundary value for the insertion of a virtual indirect advertisement may further include, with respect to each boundary region pixel within the boundary region, selecting a previous frame reference pixel from a frame previous to the video frame by considering a location of the boundary region pixel, and selecting a subsequent frame reference pixel from a frame subsequent to the video frame; and calculating the boundary region pixel value may include calculating the boundary region pixel value further using the pixel values of the previous frame reference pixel and the subsequent frame reference pixel and the weights of the previous frame reference pixel and the subsequent frame reference pixel.
- selecting the previous frame reference pixel and the subsequent frame reference pixel may include selecting a pixel, corresponding to the location of the boundary region pixel in the previous frame, as the previous frame reference pixel, and selecting a pixel, corresponding to the location of the boundary region pixel in the subsequent frame, as the subsequent frame reference pixel.
- calculating the boundary region pixel value may include setting the weight of the foreground reference pixel based on the distance between the boundary region pixel and the foreground reference pixel, and setting the weight of the background reference pixel based on the distance between the boundary region pixel and the background reference pixel.
- calculating the boundary region pixel value may include setting the weights of the foreground reference pixel and the background reference pixel in inverse proportion to the distance to the boundary region pixel.
- determining the boundary region may include determining whether each of the pixels of the frame of the video is present within a preset distance from the boundary pixel, and determining a region, composed of pixels present within the preset distance, to be a boundary region.
- determining the boundary region may include setting the preset distance by considering the overall resolution of the video frame.
- determining the boundary region may include setting a longer preset distance as the overall resolution of the frame of the video becomes higher.
- the method of calculating a boundary value for the insertion of a virtual indirect advertisement may further include generating a perpendicular line between the boundary region pixel and the boundary line of the remaining region; selecting the foreground reference pixel may include selecting one of the pixels of the foreground region, corresponding to the foot of the perpendicular line, which is closest to the boundary region pixel, as the foreground reference pixel; and selecting the background reference pixel may include selecting one of the pixels of the background region, corresponding to the foot of the perpendicular line, which is closest to the boundary region pixel, as the background reference pixel.
- advertisement insertion target frames including advertisement insertion regions are searched for and clustered and then an advertisement is inserted for the clustered frames in a uniform manner, thereby enabling the advertisement to be more naturally inserted.
- the present invention inserts the advertisement in a uniform manner using a clustering factor, i.e., the result of the comparison between an advertisement insertion region in a reference frame and advertisement insertion regions in other frames, thereby automatically inserting the advertisement into a plurality of frames only through the insertion of the advertisement into a single frame.
- the present invention extracts only portions including advertisement insertion regions from frames and clusters the extracted portions, thereby enabling more rapid clustering.
- the present invention determines candidate regions into which a virtual indirect advertisement will be inserted, measures the exposure levels of the candidate regions, and provides the measured exposure levels to a user, thereby providing guide information so that the user can select a virtual indirect advertisement insertion region while more intuitively recognizing the advertising effects of the respective candidate regions.
- the present invention provides corresponding candidate region-related information to a user only when the exposure level of a candidate region into which a virtual indirect advertisement will be inserted is equal to or higher than a predetermined value, thereby enabling a virtual indirect advertisement insertion region to be more rapidly selected.
- the present invention provides a user with a total exposure level, calculated by assigning weights based on various measurement criteria, such as the size, duration, angle and location at which a virtual indirect advertisement is exposed, the speed at which the virtual indirect advertisement moves within the screen, the size of the region that is covered by another object, and the frequency with which the virtual indirect advertisement is covered by another object, thereby enabling a virtual indirect advertisement insertion region to be efficiently selected based on the advertising effects of the respective candidate regions.
- the present invention inserts a virtual indirect advertisement using markers inserted into video image content and performs inpainting on a marker region, thereby performing processing to achieve harmonization with a surrounding image.
- the present invention performs inpainting on only the region of a marker region which does not overlap an insertion region for a virtual indirect advertisement, thereby reducing the operation costs required for inpainting.
- the present invention reduces the operation costs required for inpainting, thereby maximizing advertising profits from a virtual indirect advertisement.
- a virtual indirect advertisement is inserted into video image content in place of an existing advertisement, and inpainting is performed on an existing advertisement region, thereby performing processing to achieve harmonization with a surrounding image.
- the boundary region pixel value of a virtual indirect advertisement that is inserted into video image content can be automatically calculated, and processing can be performed so that a boundary line within which the virtual indirect advertisement has been inserted can harmonize with a surrounding image.
- the present invention calculates a boundary region pixel value after mixing a foreground pixel value and a background pixel value based on the distances by which the location of the boundary region pixel of a virtual indirect advertisement which will be inserted into video image content is spaced apart from a foreground pixel and a background pixel, thereby performing processing so that a boundary line within which a virtual indirect advertisement has been inserted can harmonize with a surrounding image.
- the present invention automatically calculates a boundary region pixel value by referring to the pixel values of frames previous and subsequent to a current frame into which a virtual indirect advertisement has been inserted, thereby performing processing so that a boundary line within which a virtual indirect advertisement has been inserted can harmonize with a surrounding image during the playback of video.
- FIG. 1 is a diagram showing an overall method of inserting an advertisement using frame clustering according to the present invention;
- FIG. 2 is a block diagram showing a device for inserting an advertisement using frame clustering according to an embodiment of the present invention
- FIG. 3 is a block diagram showing an example of the clustering unit shown in FIG. 2 ;
- FIG. 4 is an operation flowchart showing a method of inserting an advertisement using frame clustering according to an embodiment of the present invention
- FIG. 5 is an operation flowchart showing an example of the clustering step shown in FIG. 4 ;
- FIG. 6 is a diagram showing examples of a reference frame and frame clustering according to the present invention.
- FIG. 7 is a diagram showing an example of a method of inserting an advertisement using frame clustering according to the present invention.
- FIG. 8 is a diagram showing another example of a method of inserting an advertisement using frame clustering according to the present invention.
- FIG. 9 is a block diagram showing a device for providing insertion region guide information for the insertion of a virtual indirect advertisement according to an embodiment of the present invention.
- FIG. 10 is a block diagram showing an example of the advertisement insertion unit shown in FIG. 9 ;
- FIG. 11 is a block diagram showing an example of the exposure measurement unit shown in FIG. 9 ;
- FIG. 12 is a diagram showing an example of an equation computed by the exposure measurement unit shown in FIG. 9 ;
- FIG. 13 is an operation flowchart showing a method of providing insertion region guide information for the insertion of a virtual indirect advertisement according to an embodiment of the present invention
- FIG. 14 is an operation flowchart showing an example of the step of providing insertion region guide information, which is shown in FIG. 13 ;
- FIG. 15 is an operation flowchart showing an example of the step of processing and inserting the virtual indirect advertisement, which is shown in FIG. 13 ;
- FIG. 16 is a block diagram showing a device for eliminating markers for the insertion of a virtual indirect advertisement according to an embodiment of the present invention.
- FIG. 17 is a block diagram showing an example of the marker elimination unit shown in FIG. 16 ;
- FIG. 18 is a block diagram showing another example of the marker elimination unit shown in FIG. 16 ;
- FIG. 19 is a diagram showing an example of a video frame including markers for the insertion of a virtual indirect advertisement according to the present invention.
- FIG. 20 is a diagram showing an example of a video frame into which a virtual indirect advertisement has been inserted according to the present invention.
- FIG. 21 is a diagram showing an example of a video frame into which a virtual indirect advertisement has been inserted and from which markers have been eliminated according to the present invention
- FIG. 22 is a diagram showing examples of a marker region and an insertion region according to the present invention.
- FIG. 23 is a diagram showing an example of an inpainting region according to the present invention.
- FIG. 24 is a diagram showing another example of an inpainting region according to the present invention.
- FIG. 25 is an operation flowchart showing a method of eliminating markers for the insertion of a virtual indirect advertisement according to an embodiment of the present invention.
- FIG. 26 is a block diagram showing a virtual indirect advertisement service device according to an embodiment of the present invention.
- FIG. 17 is a block diagram showing an example of the inpainting unit shown in FIG. 26 ;
- FIG. 18 is a block diagram showing another example of the inpainting unit shown in FIG. 26 ;
- FIG. 27 is a diagram showing an example of an inpainting region according to an embodiment of the present invention.
- FIG. 28 is a diagram showing an example of video image content into which a virtual indirect advertisement has been inserted according to an embodiment of the present invention.
- FIG. 29 is an operation flowchart showing an example of a virtual indirect advertisement service method according to an embodiment of the present invention.
- FIG. 30 is an operation flowchart showing an example of the step of performing inpainting, which is shown in FIG. 29 ;
- FIG. 31 is an operation flowchart showing another example of the step of performing inpainting, which is shown in FIG. 29 ;
- FIG. 32 is a block diagram showing a device for calculating a boundary value for the insertion of a virtual indirect advertisement according to an embodiment of the present invention
- FIG. 33 is a block diagram showing an example of the reference pixel selection unit shown in FIG. 32 ;
- FIG. 34 is a diagram showing an example of a video frame into which a virtual indirect advertisement has been inserted according to the present invention.
- FIG. 35 is a diagram showing examples of a boundary region, a foreground region and a background region according to the present invention.
- FIG. 36 is a diagram showing an example of selecting a foreground reference pixel and a background reference pixel according to the present invention.
- FIG. 37 is a diagram showing an example of selecting a previous frame reference pixel and a subsequent frame reference pixel according to the present invention.
- FIG. 38 is an operation flowchart showing a method of calculating a boundary value for the insertion of a virtual indirect advertisement according to an embodiment of the present invention.
- FIG. 39 is an operation flowchart showing an example of the step of selecting a foreground reference pixel and a background reference pixel, which is shown in FIG. 38 .
- a device for inserting an advertisement using frame clustering is implemented using a processor, a memory unit, and a user interface.
- the processor receives a plurality of successive frames, stores them in the memory unit, and inserts an advertisement into the plurality of successive video frames stored in the memory unit.
- the memory unit stores various types of information including the processing program of the processor.
- the user interface receives various types of information from a user, and provides the information to the processor.
- FIG. 1 is a diagram showing an overall method of inserting an advertisement using frame clustering according to the present invention.
- the method of inserting an advertisement using frame clustering performs clustering by searching for advertisement insertion target frames 710 , 720 and 730 that include advertisement insertion regions 610 .
- as the result of clustering, the method displays to the user a stitching frame 700 , i.e., the result of stitching the advertisement insertion target frames 710 , 720 and 730 together.
- when the user inserts an advertisement into the stitching frame 700 , the method inserts the advertisement in a uniform manner into all of the clustered advertisement insertion target frames 710 , 720 and 730 , displaying the result 800 of inserting the advertisement into the stitching frame 700 and thereby acquiring frames 810 , 820 and 830 each containing a region 611 into which the advertisement has been inserted.
- advertisement insertion target frames including advertisement insertion regions are searched for, the advertisement insertion target frames are clustered, and an advertisement is inserted for all the clustered frames in a uniform manner, thereby enabling the more natural insertion of the advertisement.
- FIG. 2 is a block diagram showing a device for inserting an advertisement using frame clustering according to an embodiment of the present invention.
- the device for inserting an advertisement using frame clustering includes a frame search unit 210 , a clustering unit 220 , and an advertisement insertion unit 230 .
- the frame search unit 210 searches for advertisement insertion target frames including advertisement insertion regions when an advertisement insertion region is set in at least one of the frames of a video.
- the frame search unit 210 may extract a feature point from a reference frame, and may acquire advertisement insertion target frames using the feature point.
- the feature point may be extracted using the Kanade-Lucas-Tomasi Feature Tracker (KLT).
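- as a hedged illustration of KLT-based feature extraction and tracking between successive frames (the OpenCV calls exist as shown; the parameter values are illustrative assumptions, not taken from the patent):

```python
import cv2
import numpy as np

def track_klt_features(prev_gray, next_gray, max_corners=200):
    """Detect corners in prev_gray and track them into next_gray."""
    # Shi-Tomasi corners serve as the feature points to track.
    pts_prev = cv2.goodFeaturesToTrack(
        prev_gray, maxCorners=max_corners, qualityLevel=0.01, minDistance=7)
    # Pyramidal Lucas-Kanade (KLT) optical flow tracks each point forward.
    pts_next, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, pts_prev, None)
    ok = status.ravel() == 1
    return pts_prev[ok].reshape(-1, 2), pts_next[ok].reshape(-1, 2)
```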
- the reference frame may be the frame of advertisement insertion target frames in which the user has set an advertisement insertion region.
- the reference frame may be the frame which is selected by the user when any one or more of advertisement insertion target frames are visually displayed to the user.
- the reference frame may be the frame of the advertisement insertion target frames which has the widest advertisement insertion region.
- the frame search unit 210 may determine the global movement of successive frames, and may search for advertisement insertion target frames based on the result of the determination of the global movement.
- the frame search unit 210 may predict the locations of advertisement insertion regions in any one or more of previous and subsequent frames using the result of the determination of the global movement, and search for the advertisement insertion target frames.
- the clustering unit 220 groups advertisement insertion target frames, calculates a clustering factor for the advertisement insertion target frames, and clusters the advertisement insertion target frames.
- the clustering unit 220 may detect a homography matrix using the feature points of the advertisement insertion target frames, and may perform clustering using the homography matrix so that the advertisement insertion target frames are stitched together with respect to a single frame.
- the feature points may be extracted using the Kanade-Lucas-Tomasi Feature Tracker (KLT).
- the homography matrix may be detected using the RANSAC process.
- in the RANSAC process, four matched feature points are selected at random (step 1), and homography matrix H that satisfies the relationship of the following Equation 1 between the four feature points is calculated (step 2):
- P_2 = H·P_1 (Equation 1)
- here, H is the homography matrix; in the case of a two-dimensional (2D) image, H may be represented as a 3×3 matrix.
- next, d_k = √((P_2^k − H·P_1^k)²) is calculated for every matched pair (step 3), where P_1^k is the k-th feature point of the first frame, P_2^k is the corresponding feature point of the second frame, and k indexes the feature points.
- although P_2 = H·P_1 should hold in accordance with Equation 1, an error may occur in the calculation of H, and thus the same value is not obtained for P_2 and H·P_1; d_k denotes the error between P_2 and H·P_1.
- NI_i, the number of points where d_k is equal to or lower than a predetermined value, is then counted (step 4).
- steps 1 to 4 are performed a predetermined number of times.
- the final homography matrix H is obtained by recalculating H using the feature points of the iteration in which NI_i is maximum.
- that is, H is calculated by repeatedly selecting arbitrary feature points a predetermined number of times, and the H for which NI_i, i.e., the number of points where d_k is equal to or lower than a predetermined value, is maximum is determined to be the homography matrix of the present invention.
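- a minimal sketch of this RANSAC loop, assuming OpenCV and NumPy; the iteration count `n_iter` and inlier threshold `thresh` stand in for the "predetermined" values of the text.

```python
import cv2
import numpy as np

def ransac_homography(p1, p2, n_iter=1000, thresh=3.0):
    """p1, p2: matched (N, 2) float32 feature points of two frames."""
    best_inliers, best_count = None, -1
    rng = np.random.default_rng(0)
    for _ in range(n_iter):                    # steps 1-4, repeated (step 5)
        idx = rng.choice(len(p1), 4, replace=False)        # step 1: 4 points
        H = cv2.getPerspectiveTransform(p1[idx], p2[idx])  # step 2: H
        proj = cv2.perspectiveTransform(p1.reshape(-1, 1, 2), H)
        d = np.linalg.norm(proj.reshape(-1, 2) - p2, axis=1)   # step 3: d_k
        inliers = d <= thresh
        if inliers.sum() > best_count:                     # step 4: NI_i
            best_count, best_inliers = inliers.sum(), inliers
    # final step: refit H on the inlier set of the iteration maximizing NI_i
    H, _ = cv2.findHomography(p1[best_inliers], p2[best_inliers], 0)
    return H
```

- for reference, `cv2.findHomography(p1, p2, cv2.RANSAC, thresh)` performs an equivalent procedure in a single call.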
- the clustering unit 220 may analyze the scene structures of the advertisement insertion regions, may analyze the camera motions of the advertisement insertion regions, and may calculate a clustering factor using the scene structures and the camera motions.
- the clustering unit 220 may select any one of the advertisement insertion target frames as a reference frame, may compare the advertisement insertion region of the reference frame with the advertisement insertion regions of advertisement insertion target frames other than the reference frame, and may calculate the value of a difference with the reference frame as the clustering factor.
- the clustering unit 220 may compare the scene structure and the camera motion in the reference frame with the scene structures and the camera motions in the advertisement insertion target frames other than the reference frame.
- the clustering unit 220 may acquire any one or more of location, size, rotation and perspective of each of the advertisement insertion regions.
- the clustering unit 220 may acquire any one or more of the focus and shaking of each of the advertisement insertion regions.
- the clustering unit 220 may determine global movement between successive frames for the clustered frames, and may calculate the clustering factor based on the result of the determination of the global movement.
- the clustering unit 220 may extract only a portion including an advertisement insertion region from each of the advertisement insertion target frames, and then perform clustering.
- the clustering unit 220 may cluster frames successive from the reference frame.
- the advertisement insertion unit 230 inserts an advertisement for all the clustered frames in a uniform manner when the advertisement is inserted into at least one of the clustered frames.
- the uniform insertion may refer to inserting the advertisement by applying the clustering factor to each of the clustered frames.
- the advertisement insertion unit 230 may separate the advertisement using the inverse matrix of the homography matrix after the insertion of the advertisement when the advertisement insertion target frames are stitched together with respect to a single frame.
- the advertisement insertion unit 230 may perform inpainting on the advertisement insertion regions.
- the advertisement insertion unit 230 may insert a virtual image into the advertisement insertion regions.
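- a hedged sketch of the inverse-matrix separation described above: assuming each target frame's homography `H` into the stitching-frame coordinates is known, an advertisement composited into the stitched frame is carried back into every frame with the inverse matrix; `distribute_advertisement` and its arguments are illustrative names, not from the patent.

```python
import cv2
import numpy as np

def distribute_advertisement(frames, homographies, stitched_with_ad, ad_mask):
    """homographies[i] maps frame i into the stitching-frame coordinates;
    the inserted ad and its mask are warped back with the inverse matrix."""
    out = []
    for frame, H in zip(frames, homographies):
        H_inv = np.linalg.inv(H)
        h, w = frame.shape[:2]
        ad_back = cv2.warpPerspective(stitched_with_ad, H_inv, (w, h))
        mask_back = cv2.warpPerspective(ad_mask, H_inv, (w, h))
        frame_out = frame.copy()
        frame_out[mask_back > 0] = ad_back[mask_back > 0]
        out.append(frame_out)
    return out
```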
- FIG. 3 is a block diagram showing an example of the clustering unit 220 shown in FIG. 2 .
- the clustering unit 220 shown in FIG. 2 includes a scene structure analysis unit 310 , a camera motion analysis unit 320 , and a clustering factor calculation unit 330 .
- the scene structure analysis unit 310 analyzes the scene structures of the advertisement insertion regions.
- the scene structure analysis unit 310 may acquire any one or more of the location, size, rotation and perspective of each of the advertisement insertion regions.
- the camera motion analysis unit 320 analyzes the camera motions of the advertisement insertion regions.
- the camera motion analysis unit 320 may acquire any one or more of the focus and shaking of each of the advertisement insertion regions.
- the clustering factor calculation unit 330 may calculate a clustering factor using the scene structures and the camera motions.
- the clustering factor calculation unit 330 may select any one of the advertisement insertion target frames as a reference frame, may compare the advertisement insertion region in the reference frame with the advertisement insertion regions in advertisement insertion target frames other than the reference frame, and may calculate the value of a difference with the reference frame as the clustering factor.
- the clustering factor calculation unit 330 may compare the scene structure and the camera motion in the reference frame with the scene structures and the camera motions in the advertisement insertion target frames other than the reference frame.
- the clustering factor calculation unit 330 may determine global movement between successive frames with respect to the clustered frames, and may calculate the clustering factor based on the result of the determination of the global movement.
- FIG. 4 is an operation flowchart showing a method of inserting an advertisement using frame clustering according to an embodiment of the present invention.
- advertisement insertion target frames including advertisement insertion regions are searched for at step S 410 .
- advertisement insertion target frames may be searched for in frames successive from a reference frame.
- a feature point may be detected from the reference frame, and advertisement insertion target frames may be acquired using the feature point.
- the feature point may be detected using the Kanade-Lucas-Tomasi Feature Tracker (KLT).
- the reference frame may be the frame of advertisement insertion target frames in which a user has set an advertisement insertion region.
- the reference frame may be the frame which is selected by the user when any one or more of the advertisement insertion target frames are visually displayed to the user.
- the reference frame may be the frame of the advertisement insertion target frames which has the widest advertisement insertion region.
- step S 410 the global movement of successive frames may be determined, and advertisement insertion target frames may be searched for based on the result of the determination of the global movement.
- the locations of advertisement insertion regions may be predicted in any one or more of previous and subsequent frames using the result of the determination of the global movement, and advertisement insertion target frames may be searched for.
- the advertisement insertion target frames are grouped, a clustering factor is calculated for the advertisement insertion target frames, and the advertisement insertion target frames are clustered at step S 420 .
- a homography matrix may be detected using the feature points of the advertisement insertion target frames, and clustering may be performed using the homography matrix so that the advertisement insertion target frames are stitched together with respect to a single frame.
- the feature points may be extracted using the Kanade-Lucas-Tomasi Feature Tracker (KLT).
- the homography matrix may be detected using the RANSAC process.
- as before, four matched feature points are selected at random, and homography matrix H that satisfies the relationship of the above Equation 1 between the four feature points is calculated.
- d_k = √((P_2^k − H·P_1^k)²) is calculated, where P_1^k is the k-th feature point of the first frame and P_2^k is the corresponding feature point of the second frame.
- although P_2 = H·P_1 in accordance with Equation 1, an error may occur in the calculation of H, and thus the same value is not obtained for P_2 and H·P_1; d_k denotes the error between P_2 and H·P_1.
- NI_i, the number of points where d_k is equal to or lower than a predetermined value, is counted.
- steps 1 to 4 are performed a predetermined number of times.
- the final homography matrix H is obtained by recalculating H using the feature points of the iteration in which NI_i is maximum.
- that is, H is calculated by repeatedly selecting arbitrary feature points a predetermined number of times, and the H for which NI_i is maximum is determined to be the homography matrix of the present invention.
- the scene structures of the advertisement insertion regions may be analyzed, the camera motions of the advertisement insertion region may be analyzed, and a clustering factor may be calculated using the scene structures and the camera motions.
- any one of the advertisement insertion target frames may be selected as a reference frame, the advertisement insertion region of the reference frame may be compared with the advertisement insertion regions of advertisement insertion target frames other than the reference frame, and the value of a difference with the reference frame may be calculated as the clustering factor.
- the scene structure and the camera motion in the reference frame may be compared with the scene structures and the camera motions in the advertisement insertion target frames other than the reference frame.
- any one or more of location, size, rotation and perspective of each of the advertisement insertion regions may be acquired.
- any one or more of the focus and shaking of each of the advertisement insertion regions may be acquired.
- step S 420 global movement between successive frames may be determined for the clustered frames, and the clustering factor may be calculated based on the result of the determination of the global movement.
- step S 420 only a portion including an advertisement insertion region may be extracted from each of the advertisement insertion target frames, and then clustering may be performed.
- frames successive from the reference frame may be clustered.
- an advertisement is inserted for all the clustered frames in a uniform manner at step S 430 .
- when the advertisement insertion target frames are stitched together with respect to a single frame, the advertisement may be separated using the inverse matrix of the homography matrix after the insertion of the advertisement.
- inpainting may be performed on the advertisement insertion regions.
- a virtual image may be inserted into the advertisement insertion regions.
- FIG. 5 is an operation flowchart showing an example of clustering step S 420 shown in FIG. 4 .
- step S 420 shown in FIG. 4 the scene structures of the advertisement insertion regions are analyzed at step S 510 .
- any one or more of the location, size, rotation and perspective of each of the advertisement insertion regions may be acquired.
- step S 420 shown in FIG. 4 the camera motions of the advertisement insertion regions are analyzed at step S 520 .
- any one or more of the focus and shaking of each of the advertisement insertion regions may be acquired.
- a clustering factor is calculated using the scene structures and the camera motions at step S 530 .
- any one of the advertisement insertion target frames may be selected as a reference frame, the advertisement insertion region in the reference frame may be compared with the advertisement insertion regions in advertisement insertion target frames other than the reference frame, and the value of a difference with the reference frame may be calculated as the clustering factor.
- the scene structure and the camera motion in the reference frame may be compared with the scene structures and the camera motions in the advertisement insertion target frames other than the reference frame.
- step S 530 global movement between successive frames may be determined for the clustered frames, and the clustering factor may be calculated based on the result of the determination of the global movement.
- FIG. 6 is a diagram showing examples of a reference frame 620 and frame clustering according to the present invention.
- advertisement insertion target frames t 0 to t n-1 including advertisement insertion regions 610 may be searched for in video frames.
- any one of the advertisement insertion target frames t 0 to t n-1 may be made a reference frame 620 for the performance of clustering.
- the earliest one t 0 of the advertisement insertion target frames t 0 to t n-1 may be made the reference frame 620 .
- the advertisement insertion region 610 in the reference frame 620 may be compared with the advertisement insertion regions 610 in the advertisement insertion target frames t 1 to t n-1 other than the reference frame 620 , and the value of a difference with the reference frame may be calculated as a clustering factor.
- the advertisement insertion region 610 in the reference frame 620 may be compared with the advertisement insertion regions 610 in the advertisement insertion target frames t 1 to t n-1 other than the reference frame 620 , and the relative locations of the advertisement insertion regions 610 in the advertisement insertion target frames t 1 to t n-1 may be acquired and made the clustering factor.
- the frame of n advertisement insertion target frames t 0 to t n-1 in which a user has set the advertisement insertion region 610 may be made the reference frame 620 .
- any one or more of the advertisement insertion target frames t 0 to t n-1 may be visually displayed to the user, and a frame selected by the user may be made the reference frame 620 .
- a frame having an advertisement insertion region 610 that is the widest of the advertisement insertion target frames t 0 to t n-1 may be made the reference frame 620 .
- FIG. 7 is a diagram showing an example of a method of inserting an advertisement using frame clustering according to the present invention.
- advertisement insertion target frames 710 , 720 and 730 including advertisement insertion regions 610 are clustered.
- a stitching frame 700 , i.e., the result of the stitching of the advertisement insertion target frames 710 , 720 and 730 , may be displayed to a user as the result of the performance of the clustering.
- the advertisement insertion target frames 710 , 720 and 730 may be some of the advertisement insertion target frames t 0 to t n-1 shown in FIG. 6 .
- FIG. 8 is a diagram showing another example of a method of inserting an advertisement using frame clustering according to the present invention.
- the frames 810, 820 and 830, each including a region 611 into which the advertisement has been inserted, are acquired by inserting the advertisement in a uniform manner into all the clustered advertisement insertion target frames 710, 720 and 730.
- the stitching frame 700 shown in FIG. 7 may be displayed to a user, and the result of the insertion of the advertisement into the stitching frame 700 may be displayed when the user inserts the advertisement into the stitching frame 700 .
- the method of inserting an advertisement using frame clustering according to the present invention may be implemented as a program or smart phone app that can be executed by various computer means.
- the program or smart phone app may be recorded on a computer-readable storage medium.
- the computer-readable storage medium may include program instructions, data files, and data structures solely or in combination. Program instructions recorded on the storage medium may have been specially designed and configured for the present invention, or may be known to or available to those who have ordinary knowledge in the field of computer software.
- Examples of the computer-readable storage medium include all types of hardware devices specially configured to record and execute program instructions, such as magnetic media, such as a hard disk, a floppy disk, and magnetic tape, optical media, such as compact disk (CD)-read only memory (ROM) and a digital versatile disk (DVD), magneto-optical media, such as a floptical disk, ROM, random access memory (RAM), and flash memory.
- Examples of the program instructions include machine code, such as code created by a compiler, and high-level language code executable by a computer using an interpreter. These hardware devices may be configured to operate as one or more software modules in order to perform the operation of the present invention, and vice versa.
- the device and method for inserting an advertisement using frame clustering according to the present invention are not limited to the configurations and methods of the above-described embodiments, but some or all of the embodiments may be configured to be selectively combined such that the embodiments can be modified in various manners.
- a device and method for providing insertion region guide information for the insertion of a virtual indirect advertisement according to embodiments of the present invention are described below.
- FIG. 9 is a block diagram showing a device for providing insertion region guide information for the insertion of a virtual indirect advertisement according to an embodiment of the present invention.
- the device for providing insertion region guide information for the insertion of a virtual indirect advertisement includes a candidate region determination unit 1110 , an advertisement insertion unit 1120 , an exposure measurement unit 1130 , and an information provision unit 1140 .
- the candidate region determination unit 1110 determines a candidate region, into which a virtual indirect advertisement will be inserted in video image content.
- the advertisement insertion unit 1120 processes the virtual indirect advertisement and inserts the virtual indirect advertisement into the candidate region.
- the advertisement insertion unit 1120 may select the virtual indirect advertisement to be inserted into the candidate region, may process the virtual indirect advertisement based on preliminary processing characteristics so that the virtual indirect advertisement can be inserted into the candidate region, and may insert the processed virtual indirect advertisement into the candidate region.
- the preliminary processing characteristics may include any one or more of the size of the portion of the virtual indirect advertisement which is exposed in the video image content, the deformation which is made based on the angle, the size of the portion of the virtual indirect advertisement which is covered with another object, and the speed at which the virtual indirect advertisement moves within a screen.
- the exposure measurement unit 1130 measures the exposure level of the virtual indirect advertisement based on exposure characteristics in which the virtual indirect advertisement is exposed in the video image content.
- the exposure characteristics may include a real-time characteristic which is measured in real time while the virtual indirect advertisement is being exposed in the video image content, and the preliminary processing characteristic.
- the real-time characteristic may include any one or more of the time for which the virtual indirect advertisement is exposed in the video image content, the frequency at which the virtual indirect advertisement is covered with another object, and the location at which the virtual indirect advertisement is disposed in the screen.
- the exposure measurement unit 1130 may measure the exposure level in proportion to the ratio of the size of the exposed region of the virtual indirect advertisement which is exposed in the video image content to the size of the overall screen.
- the exposure measurement unit 1130 may measure the exposure level in proportion to the time for which the virtual indirect advertisement is exposed in the video image content.
- the exposure measurement unit 1130 may measure the exposure level in inverse proportion to the difference between the angle at which the virtual indirect advertisement is exposed in the video image content and a preset reference angle.
- the exposure measurement unit 1130 may measure the exposure level in inverse proportion to the size of the portion of the virtual indirect advertisement which is covered with another object while the virtual indirect advertisement is being exposed in the video image content.
- the exposure measurement unit 1130 may measure the exposure level in inverse proportion to the frequency at which the virtual indirect advertisement is covered with another object while the virtual indirect advertisement is being exposed in the video image content.
- the exposure measurement unit 1130 may measure the exposure level in proportion to the proximity of the virtual indirect advertisement to the center of the screen.
- the exposure measurement unit 1130 may measure the exposure level in inverse proportion to the speed at which the virtual indirect advertisement moves within the screen while the virtual indirect advertisement is being exposed in the video image content.
- the exposure measurement unit 1130 may calculate a total exposure level by collecting the exposure levels measured based on the exposure characteristics.
- the information provision unit 1140 provides insertion region guide information, including the exposure level of the virtual indirect advertisement, to a user.
- the information provision unit 1140 may determine whether the exposure level of the virtual indirect advertisement exceeds a preset reference exposure level, and may provide the insertion region guide information, including the exposure level of the virtual indirect advertisement, to the user if the exposure level of the virtual indirect advertisement exceeds the reference exposure level.
- the information provision unit 1140 may display any one or more of a candidate region into which the virtual indirect advertisement has been inserted and a candidate region which is present before the insertion of the virtual indirect advertisement to the user.
- a user may be provided with a candidate region into which the virtual indirect advertisement will be inserted and with exposure level information corresponding to the candidate region by the device for providing insertion region guide information for the insertion of a virtual indirect advertisement according to the embodiment of the present invention.
- the user may select a region into which the virtual indirect advertisement will be finally inserted while viewing the candidate region and the exposure level together.
- the user may select a region into which the virtual indirect advertisement will be finally inserted while viewing the candidate region into which the virtual indirect advertisement will be inserted.
- insertion regions may be selected more conveniently.
- the user may receive only information corresponding to candidate regions whose exposure level exceeds a predetermined level using the device for providing insertion region guide information for the insertion of a virtual indirect advertisement according to the embodiment of the present invention.
- the user may receive only information corresponding to candidate regions whose exposure levels fall within the upper 30%.
- the user may more rapidly select a region having a high advertising effect from a plurality of candidate regions into which the virtual indirect advertisement can be inserted.
- the device for providing insertion region guide information for the insertion of a virtual indirect advertisement may further include an advertising expense calculation unit configured to calculate the advertising expenses of the virtual indirect advertisement based on the exposure level.
- the information provision unit 1140 may display the advertising expenses to the user.
- the advertising expense calculation unit may measure the advertising expenses of the virtual indirect advertisement in proportion to the exposure level, and may measure the advertising expenses in proportion to the total exposure level calculated by an exposure calculation unit.
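- a minimal sketch of the expense rule just described; `rate_per_unit` is a hypothetical tariff, not a value from the patent.

```python
def advertising_expense(total_exposure_level, rate_per_unit=100.0):
    # advertising expenses grow in proportion to the total exposure level
    return rate_per_unit * total_exposure_level
```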
- FIG. 10 is a block diagram showing an example of the advertisement insertion unit 1120 shown in FIG. 9 .
- the advertisement insertion unit 1120 shown in FIG. 9 includes an advertisement selection unit 1210 , an advertisement processing unit 1220 , and a processed advertisement insertion unit 1230 .
- the advertisement selection unit 1210 selects a virtual indirect advertisement which will be inserted into the candidate region.
- the advertisement processing unit 1220 processes the virtual indirect advertisement based on preliminary processing characteristics so that the virtual indirect advertisement can be inserted into the candidate region.
- the preliminary processing characteristics may include any one or more of the size of the portion of the virtual indirect advertisement which is exposed in the video image content, the deformation which is made based on the angle, the size of the portion of the virtual indirect advertisement which is covered with another object, and the speed at which the virtual indirect advertisement moves within a screen.
- the processed advertisement insertion unit 1230 inserts the processed virtual indirect advertisement into the candidate region.
- FIG. 11 is a block diagram showing an example of the exposure measurement unit 1130 shown in FIG. 9 .
- the exposure measurement unit 1130 shown in FIG. 9 includes a size measurement unit 1310 , a time measurement unit 1320 , a deformation measurement unit 1330 , a covered size measurement unit 1340 , a covering frequency measurement unit 1350 , a location measurement unit 1360 , a speed measurement unit 1370 , and an exposure calculation unit 1380 .
- the size measurement unit 1310 measures the exposure level in proportion to the ratio of the size of the exposed region of the virtual indirect advertisement which is exposed in the video image content to the size of the overall screen.
- the size of the exposed region of the virtual indirect advertisement which is exposed in the video image content may not be measured in real time, but may be determined from information obtained when the virtual indirect advertisement is processed by the advertisement processing unit.
- the time measurement unit 1320 measures the exposure level in proportion to the exposure time for which the virtual indirect advertisement is exposed in the video image content.
- the deformation measurement unit 1330 measures the level at which the virtual indirect advertisement is deformed based on the angle of the view point of a camera.
- the deformation measurement unit 1330 may measure the exposure level in inverse proportion to the difference between the angle at which the virtual indirect advertisement is exposed in the video image content and a preset reference angle.
- the preset reference angle may be an angle corresponding to the front surface of the virtual indirect advertisement.
- a deformation level is in proportion to the difference between the angle at which the virtual indirect advertisement is exposed and the angle corresponding to the front surface of the virtual indirect advertisement, and the exposure level is measured in inverse proportion to the difference.
- the angle at which the virtual indirect advertisement is exposed in the video image content may not be measured in real time, but may be determined from information obtained when the virtual indirect advertisement is processed by the advertisement processing unit.
- the covered size measurement unit 1340 measures the exposure level in inverse proportion to the size of the region of the virtual indirect advertisement which is covered with another object while the virtual indirect advertisement is being exposed in the video image content.
- the exposure level is in inverse proportion to the level at which the virtual indirect advertisement is covered with another object.
- the size of the region of the virtual indirect advertisement which is covered with another object while the virtual indirect advertisement is being exposed in the video image content may not be measured in real time, but may be determined from information obtained when the virtual indirect advertisement is processed by the advertisement processing unit.
- the covering frequency measurement unit 1350 measures the exposure level in inverse proportion to the frequency at which the virtual indirect advertisement is covered with another object while the virtual indirect advertisement is being exposed in the video image content.
- the exposure level is in inverse proportion to the frequency at which the virtual indirect advertisement is covered with another object.
- the location measurement unit 1360 measures the exposure level in proportion to the proximity of the disposed virtual indirect advertisement to the center of a screen.
- a virtual indirect advertisement that is disposed closer to the center of a screen may be a virtual indirect advertisement having higher prominence.
- a viewer will be more interested in a virtual indirect advertisement having higher prominence. Accordingly, the exposure level is measured in proportion to the prominence of a virtual indirect advertisement.
- the speed measurement unit 1370 measures the exposure level in inverse proportion to the speed at which the virtual indirect advertisement moves within a screen while the virtual indirect advertisement is being exposed in the video image content.
- the exposure level is measured in inverse proportion to the speed.
- the speed at which the virtual indirect advertisement moves within a screen while the virtual indirect advertisement is being exposed in the video image content may not be measured in real time, but may be determined from information obtained when the virtual indirect advertisement is processed by the advertisement processing unit.
- the exposure calculation unit 1380 calculates a total exposure level by collecting the exposure levels measured based on the exposure characteristics.
- for example, an exposure level measured based on the size of an exposed region may be given a larger weight than an exposure level measured based on angle-based deformation.
- FIG. 12 is an equation showing an example of an equation computed by the exposure measurement unit 1130 shown in FIG. 9 .
- the equation calculated by the exposure measurement unit 1130 includes coefficients a to g and variables E, S, T, A, C, F, D and V.
- variable E denotes the exposure level, which is derived from the equation shown in FIG. 12 .
- variable S denotes the size of the region of the virtual indirect advertisement which is exposed in the video image content.
- S is calculated by dividing the number of pixels of the exposed region by the number of pixels of an overall screen.
- the exposure level is calculated in proportion to the variable S.
- variable T denotes the time during which the virtual indirect advertisement is exposed in the video image content.
- variable T may be calculated in seconds, milliseconds, on a frame count basis, or the like.
- the exposure level is calculated in proportion to the variable T.
- variable A denotes the level at which the virtual indirect advertisement is deformed based on an angle while the virtual indirect advertisement is being exposed in the video image content.
- variable A is calculated based on the difference between a preset reference angle and the angle at which the virtual indirect advertisement is exposed in the video image content.
- the preset reference angle may be an angle corresponding to the front surface of the virtual indirect advertisement.
- the exposure level is calculated in inverse proportion to the variable A.
- variable C denotes the size of the region of the virtual indirect advertisement which is covered with another object while the virtual indirect advertisement is being exposed in the video image content.
- variable C is calculated by dividing the number of pixels included in the region covered with another object by the number of pixels included in the region of the virtual indirect advertisement that is exposed in the video image content.
- the exposure level is calculated in inverse proportion to the variable C.
- variable F denotes the frequency at which the virtual indirect advertisement is covered with another object while the virtual indirect advertisement is being exposed in the video image content.
- variable F is calculated by dividing the time during which the virtual indirect advertisement is covered with another object in the video image content by an overall exposure time.
- the exposure level is calculated in inverse proportion to the variable F.
- variable D denotes the distance between the location at which the virtual indirect advertisement is exposed in the video image content and the center of a screen.
- the exposure level is calculated in inverse proportion to the variable D.
- variable V denotes the speed at which the virtual indirect advertisement moves within a screen while the virtual indirect advertisement is being exposed in the video image content.
- the exposure level is calculated in inverse proportion to the variable V.
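- the exact equation of FIG. 12 is not reproduced in the text; the sketch below is only one possible combination consistent with the stated dependencies (proportional to S and T, inversely proportional to A, C, F, D and V), with coefficients a to g as the weights mentioned and a `(1 + ...)` damping form assumed to keep the divisions well behaved.

```python
def exposure_level(S, T, A, C, F, D, V,
                   a=1.0, b=1.0, c=1.0, d=1.0, e=1.0, f=1.0, g=1.0):
    """Illustrative only: E rises with exposed size S and exposure time T,
    and falls as deformation A, covered ratio C, covering frequency F,
    distance from the screen center D, and on-screen speed V grow."""
    numerator = (a * S) * (b * T)
    denominator = ((1 + c * A) * (1 + d * C) * (1 + e * F)
                   * (1 + f * D) * (1 + g * V))
    return numerator / denominator
```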
- the method by which the exposure level is calculated by the exposure measurement unit shown in FIG. 9 is not limited to the equation shown in FIG. 12 ; the exposure level may be calculated differently, in accordance with a different equation.
- FIG. 13 is an operation flowchart showing a method of providing insertion region guide information for the insertion of a virtual indirect advertisement according to an embodiment of the present invention.
- a candidate region into which a virtual indirect advertisement will be inserted in video image content is determined at step S 1510 .
- the virtual indirect advertisement is processed and inserted into the candidate region at step S 1520 .
- the virtual indirect advertisement to be inserted into the candidate region may be selected, the virtual indirect advertisement may be processed based on preliminary processing characteristics so that the virtual indirect advertisement can be inserted into the candidate region, and the processed virtual indirect advertisement may be inserted into the candidate region.
- the preliminary processing characteristics may include any one or more of the size of the portion of the virtual indirect advertisement which is exposed in the video image content, the deformation which is made based on the angle, the size of the portion of the virtual indirect advertisement which is covered with another object, and the speed at which the virtual indirect advertisement moves within a screen.
- the exposure level of the virtual indirect advertisement is measured based on exposure characteristics in which the virtual indirect advertisement is exposed in the video image content at step S 1530 .
- the exposure characteristics may include a real-time characteristic which is measured in real time while the virtual indirect advertisement is being exposed in the video image content and the preliminary processing characteristic.
- the real-time characteristic may include any one or more of the time for which the virtual indirect advertisement is exposed in the video image content, the frequency at which the virtual indirect advertisement is covered with another object, and the location at which the virtual indirect advertisement is disposed in the screen.
- the exposure level may be measured in proportion to the ratio of the size of the exposed region of the virtual indirect advertisement which is exposed in the video image content to the size of the overall screen.
- the exposure level may be measured in proportion to the time for which the virtual indirect advertisement is exposed in the video image content.
- the exposure level may be measured in inverse proportion to the difference between the angle at which the virtual indirect advertisement is exposed in the video image content and a preset reference angle.
- the exposure level may be measured in inverse proportion to the size of the portion of the virtual indirect advertisement which is covered with another object while the virtual indirect advertisement is being exposed in the video image content.
- the exposure level may be measured in inverse proportion to the frequency at which the virtual indirect advertisement is covered with another object while the virtual indirect advertisement is being exposed in the video image content.
- the exposure level may be measured in proportion to the proximity of the virtual indirect advertisement to the center of the screen.
- the exposure level may be measured in inverse proportion to the speed at which the virtual indirect advertisement moves within the screen while the virtual indirect advertisement is being exposed in the video image content.
- a total exposure level may be calculated by collecting the exposure levels measured based on the exposure characteristics.
- in the method of providing insertion region guide information for the insertion of a virtual indirect advertisement according to the embodiment of the present invention, insertion region guide information, including the exposure level of the virtual indirect advertisement, is provided to a user at step S 1540 .
- step S 1540 it may be determined whether the exposure level of the virtual indirect advertisement exceeds a preset reference exposure level, and the insertion region guide information, including the exposure level of the virtual indirect advertisement, may be provided to the user if the exposure level of the virtual indirect advertisement exceeds the reference exposure level.
- any one or more of a candidate region into which the virtual indirect advertisement has been inserted and a candidate region which is present before the insertion of the virtual indirect advertisement may be displayed to the user.
- the advertising expenses of the virtual indirect advertisement may be calculated based on the exposure level in the method of providing insertion region guide information for the insertion of a virtual indirect advertisement according to an embodiment of the present invention.
- the advertising expenses may be displayed to the user.
- FIG. 14 is an operation flowchart showing an example of step S 1540 of providing insertion region guide information, which is shown in FIG. 13 .
- step S 1540 of providing insertion region guide information it is determined whether the exposure level of the virtual indirect advertisement exceeds a preset reference exposure level at step S 1610 .
- the reference exposure level may be an absolute value or a relative value.
- when the reference exposure level is an absolute value, the reference exposure level may be set to a value corresponding to 50% of the maximum value of the exposure level.
- when the reference exposure level is a relative value, the reference exposure level may be set to a value which falls within the upper 30% of the exposure levels of the virtual indirect advertisement inserted into all the candidate regions.
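- a small sketch of the two reference-level conventions just described, assuming NumPy; the 50% and upper-30% figures follow the examples above, and the function name is illustrative.

```python
import numpy as np

def exceeds_reference(exposure, all_exposures, mode="relative"):
    """Absolute mode: compare against 50% of the maximum exposure level.
    Relative mode: keep candidates in the upper 30% of all exposure levels."""
    if mode == "absolute":
        return exposure > 0.5 * float(np.max(all_exposures))
    return exposure > float(np.percentile(all_exposures, 70))
```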
- step S 1540 of providing insertion region guide information which is shown in FIG. 13 , if, as a result of the determination at step S 1610 , the exposure level of the virtual indirect advertisement does not exceed the reference exposure level, the provision of insertion region guide information is skipped at step S 1621 .
- the reference exposure level may be adjusted, and the provision of insertion region guide information may be skipped for some of the candidate regions having low exposure levels.
- step S 1540 of providing insertion region guide information shown in FIG. 13 , if, as a result of the determination at step S 1610 , the exposure level of the virtual indirect advertisement exceeds the reference exposure level, candidate regions into which the virtual indirect advertisement has been inserted are displayed at step S 1622 .
- step S 1540 of providing insertion region guide information shown in FIG. 13 , the exposure level of the virtual indirect advertisement is displayed at step S 1630 .
- step S 1540 of providing insertion region guide information shown in FIG. 13 , the advertising expenses of the virtual indirect advertisement are displayed at step S 1640 .
- the reference exposure level is adjusted and then insertion region guide information is provided only for some of the candidate regions having exposure levels that exceed the reference exposure level.
- FIG. 15 is an operation flowchart showing an example of step S 1520 of processing and inserting virtual indirect advertisement, which is shown in FIG. 13 .
- step S 1520 of processing and inserting virtual indirect advertisement shown in FIG. 13 , the virtual indirect advertisement which will be inserted into the candidate region is selected at step S 1710 .
- step S 1520 of processing and inserting virtual indirect advertisement shown in FIG. 13 , the virtual indirect advertisement is processed based on preliminary processing characteristics so that the virtual indirect advertisement can be inserted into the candidate region at step S 1720 .
- the preliminary processing characteristics may include any one or more of the size of the portion of the virtual indirect advertisement which is exposed in the video image content, the deformation which is made based on the angle, the size of the portion of the virtual indirect advertisement which is covered with another object, and the speed at which the virtual indirect advertisement moves within a screen.
- step S 1520 of processing and inserting virtual indirect advertisement shown in FIG. 13 , the processed virtual indirect advertisement is inserted into the candidate region at step S 1730 .
- the steps shown in each of FIGS. 13 to 15 may be performed in the sequence shown in each of FIGS. 13 to 15 , in a sequence reverse to the former sequence, or concurrently.
- the method of providing insertion region guide information for the insertion of a virtual indirect advertisement according to the present invention may be implemented as a program or smart phone app that can be executed by various computer means.
- the program or smart phone app may be recorded on a computer-readable storage medium.
- the computer-readable storage medium may include program instructions, data files, and data structures solely or in combination. Program instructions recorded on the storage medium may have been specially designed and configured for the present invention, or may be known to or available to those who have ordinary knowledge in the field of computer software.
- Examples of the computer-readable storage medium include all types of hardware devices specially configured to record and execute program instructions, such as magnetic media, such as a hard disk, a floppy disk, and magnetic tape, optical media, such as compact disk (CD)-read only memory (ROM) and a digital versatile disk (DVD), magneto-optical media, such as a floptical disk, ROM, random access memory (RAM), and flash memory.
- Examples of the program instructions include machine code, such as code created by a compiler, and high-level language code executable by a computer using an interpreter. These hardware devices may be configured to operate as one or more software modules in order to perform the operation of the present invention, and vice versa.
- the device and method for providing insertion region guide information for the insertion of a virtual indirect advertisement according to the present invention are not limited to the configurations and methods of the above-described embodiments, but some or all of the embodiments may be configured to be selectively combined such that the embodiments can be modified in various manners.
- a device and method for eliminating markers for the insertion of a virtual indirect advertisement according to embodiments of the present invention are described below.
- FIG. 16 is a block diagram showing a device for eliminating markers for the insertion of a virtual indirect advertisement according to an embodiment of the present invention.
- the device for eliminating markers for the insertion of a virtual indirect advertisement includes a marker recognition unit 2110 , a region determination unit 2120 , a marker elimination unit 2130 , and an advertisement insertion unit 2140 .
- the marker recognition unit 2110 recognizes markers included in the frames of a video.
- the marker recognition unit 2110 may recognize markers included in the frames of a video using a preset algorithm or input received from a user.
- the region determination unit 2120 determines a marker region corresponding to each marker, an insertion region into which a virtual indirect advertisement will be inserted, and a marker elimination region using the marker region and the insertion region.
- the region determination unit 2120 may determine the marker region and the insertion region using a preset algorithm or input received from the user.
- the region determination unit 2120 may divide the marker region into an inpainting region and a non-inpainting region, and may determine the inpainting region to be a marker elimination region.
- the region determination unit 2120 may determine whether each of marker pixels within the marker region is present in the insertion region, and may determine marker pixels, not present in the insertion region, to be the inpainting region.
- the region determination unit 2120 may determine whether each of marker pixels within the marker region is present in the insertion region, and may determine marker pixels, present in the insertion region, to be the non-inpainting region.
- since the non-inpainting region is a marker portion that is covered with and hidden by the virtual indirect advertisement, inpainting may be performed on only the non-overlapping region.
- this can reduce inpainting operation costs compared to performing inpainting on the overall marker.
- the region determination unit 2120 may generate an insertion boundary region including pixels forming an insertion boundary, and may include part of the insertion boundary region in the inpainting region.
- the region determination unit 2120 may determine whether each of the pixels forming the boundary of the insertion region is present within a preset distance from the boundary of the insertion region, may determine a region composed of pixels present within a preset distance to be the insertion boundary region, and may determine the marker elimination region by considering the insertion boundary region.
- the region determination unit 2120 may determine whether each of the pixels within the insertion boundary region is present in the marker region and the insertion region, and may include pixels, present in the marker region and the insertion region, in the inpainting region.
- the region determination unit 2120 may generate the marker boundary region including the pixels forming the marker boundary, and may include part of the marker boundary region in the inpainting region.
- the region determination unit 2120 may determine whether each of pixels forming the boundary of the marker region is present within a preset distance from the boundary of the marker region, may determine a region composed of pixels, present within the preset distance, to be the marker boundary region, and may determine the marker elimination region by considering the marker boundary region.
- the region determination unit 2120 may determine whether each of pixels within the marker boundary region is present in the marker region and the insertion region, and may include pixels, not present in the marker region and the insertion region, in the inpainting region.
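- a hedged sketch of the pixel tests above, assuming binary uint8 masks and OpenCV; the morphological gradient stands in for the "within a preset distance" boundary test, with `band` as the assumed preset distance, and the function name is illustrative.

```python
import cv2
import numpy as np

def marker_elimination_mask(marker_mask, insertion_mask, band=5):
    """marker_mask / insertion_mask: uint8 {0, 255} images.
    Returns the region to inpaint: marker pixels not hidden by the ad,
    widened by bands along the insertion and marker boundaries."""
    kernel = np.ones((2 * band + 1, 2 * band + 1), np.uint8)
    # marker pixels outside the insertion region form the inpainting region
    inpaint = cv2.bitwise_and(marker_mask, cv2.bitwise_not(insertion_mask))
    # insertion boundary band: pixels near the insertion edge that lie in
    # both the marker region and the insertion region are also inpainted
    ins_edge = cv2.morphologyEx(insertion_mask, cv2.MORPH_GRADIENT, kernel)
    inpaint |= ins_edge & marker_mask & insertion_mask
    # marker boundary band: pixels near the marker edge but outside both
    # regions are included so no marker residue survives at the outline
    mk_edge = cv2.morphologyEx(marker_mask, cv2.MORPH_GRADIENT, kernel)
    inpaint |= mk_edge & cv2.bitwise_not(marker_mask) & cv2.bitwise_not(insertion_mask)
    return inpaint
```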
- the marker elimination unit 2130 performs inpainting on only the marker elimination region.
- the marker elimination unit 2130 may divide an image, corresponding to the inpainting region, into a structure image and a texture image, may generate a processed structure image by performing inpainting on the structure image in such a manner that a pixel proximate to the periphery of the inpainting region diffuses into the inpainting region along a line of an equal gray value, may generate a processed texture image by performing texture synthesis on the texture image, and may synthesize the processed structure image with the processed texture image.
- the structure image may be an image including only the structure of an object included in the image. That is, the structure image includes the shape, color, brightness and the like of the object.
- the texture image may be an image including only the texture of the object included in the image. That is, the texture image includes the pattern, texture and the like of the object.
- inpainting based on an attribute of a region, such as its structure or texture, may generate a more natural image than inpainting based only on gray values.
- the texture synthesis refers to generating the texture information of a large image from the texture information of a small sample image. That is, the texture information of a large image which will fill the inpainting region is generated from the texture information of a sample image selected from a region excluding the inpainting region.
- a single sample image may be selected and then texture synthesis may be performed, or a plurality of sample images may be selected and then texture synthesis may be performed, thereby generating a single large image.
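- a rough sketch of the structure/texture pipeline under stated assumptions: OpenCV has no built-in structure-texture decomposition, so a bilateral filter approximates the structure layer and the residual approximates the texture layer; `cv2.INPAINT_NS` diffuses boundary pixels along isophotes (lines of equal gray value), as the text describes, while `cv2.INPAINT_TELEA` is only a stand-in for true patch-based texture synthesis.

```python
import cv2
import numpy as np

def structure_texture_inpaint(img, mask, radius=3):
    """img: BGR frame; mask: uint8 {0, 255} marking the inpainting region."""
    # structure layer: edge-preserving smoothing keeps shape and brightness
    structure = cv2.bilateralFilter(img, d=9, sigmaColor=75, sigmaSpace=75)
    # texture layer: residual detail, biased by 128 so it stays 8-bit
    texture = np.clip(img.astype(np.int16) - structure + 128,
                      0, 255).astype(np.uint8)
    # Navier-Stokes inpainting diffuses boundary pixels along isophotes
    structure_filled = cv2.inpaint(structure, mask, radius, cv2.INPAINT_NS)
    # stand-in for the patch-based texture synthesis described in the text
    texture_filled = cv2.inpaint(texture, mask, radius, cv2.INPAINT_TELEA)
    out = (structure_filled.astype(np.int16)
           + texture_filled.astype(np.int16) - 128)
    return np.clip(out, 0, 255).astype(np.uint8)
```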
- the marker elimination unit 2130 may segment each of the inpainting region and the region excluding the inpainting region into unit patches, may select the most consistent patch in the region excluding the inpainting region with respect to each patch of the inpainting region, and may synthesize each patch of the inpainting region with a selected patch.
- the size of the patch may not be fixed, but may vary depending on the situation.
- the most consistent patch may be selected based on assigned priorities or weights.
- a higher priority may be assigned to a patch having high sparsity different from surroundings.
- the weight may be assigned based on the distance.
- a single patch may not be selected, but the sum of the weights of a plurality of candidate patches may be obtained using nonlocal means and then selection may be performed.
- local patch consistency may be used instead of the nonlocal means.
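- a minimal greedy exemplar-filling sketch of the patch scheme above: an SSD match over the known pixels replaces the priority, sparsity and nonlocal-means weighting described, the patch size is fixed rather than situation-dependent, sources are subsampled for speed, and enough source area outside the hole is assumed.

```python
import numpy as np

def fill_patches(img, mask, patch=9):
    """Greedy exemplar filling. img: HxWx3 array; mask: bool (True = hole).
    Boundary patches are matched against source patches by SSD over their
    known pixels; a simplification of the weighted selection in the text."""
    h = patch // 2
    img, mask = img.astype(np.float32), mask.copy()
    ys, xs = np.where(~mask)
    # source patch centers whose windows lie fully outside the hole
    src = [(y, x) for y, x in zip(ys, xs)
           if h <= y < img.shape[0] - h and h <= x < img.shape[1] - h
           and not mask[y-h:y+h+1, x-h:x+h+1].any()]
    while mask.any():
        # pick a hole pixel on the current boundary (has a known neighbor)
        ys, xs = np.where(mask)
        y, x = next((y, x) for y, x in zip(ys, xs)
                    if (~mask[max(0, y-1):y+2, max(0, x-1):x+2]).any())
        y = min(max(y, h), img.shape[0] - h - 1)
        x = min(max(x, h), img.shape[1] - h - 1)
        tgt = img[y-h:y+h+1, x-h:x+h+1]
        known = ~mask[y-h:y+h+1, x-h:x+h+1]
        best, best_d = None, np.inf
        for sy, sx in src[::17]:          # subsample sources for speed
            cand = img[sy-h:sy+h+1, sx-h:sx+h+1]
            d = ((cand - tgt)[known] ** 2).sum()
            if d < best_d:
                best, best_d = cand, d
        hole = mask[y-h:y+h+1, x-h:x+h+1]
        img[y-h:y+h+1, x-h:x+h+1][hole] = best[hole]
        mask[y-h:y+h+1, x-h:x+h+1] = False
    return img
```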
- the marker elimination unit 2130 may perform inpainting from a unit region corresponding to the periphery of the inpainting region in an internal direction.
- this can produce more natural inpainting than starting in a unit region corresponding to the interior of the inpainting region, where no image that can be referred to is present in the surroundings, and is more efficient because inpainting can be terminated when the target region ends or a non-inpainting region is met.
- the advertisement insertion unit 2140 inserts a virtual indirect advertisement into the insertion region after the inpainting of the marker elimination region.
- FIG. 17 is a block diagram showing an example of the marker elimination unit 2130 shown in FIG. 16 .
- the marker elimination unit 2130 shown in FIG. 16 includes an image separation unit 2210 , a structure inpainting unit 2220 , a texture synthesis unit 2230 , and an image synthesis unit 2240 .
- the image separation unit 2210 divides an image, corresponding to an inpainting region, into a structure image and a texture image.
- the structure image may be an image including only the structure of an object included in the image. That is, the structure image includes the shape, color, brightness and the like of the object.
- the texture image may be an image including only the texture of the object included in the image. That is, the texture image includes the pattern, texture and the like of the object.
- inpainting based on an attribute of a region, such as its structure or texture, may generate a more natural image than inpainting based only on gray values.
- inpainting is performed on the structure and the texture. Since an algorithm for generating an optimum result for the structure of the region and an algorithm for generating an optimum result for the texture of a region are different, separate optimum inpainting is performed on each of the structure and texture of the region, and then results are synthesized into a single image. In this case, more natural inpainting can be performed.
- the structure inpainting unit 2220 generates a processed structure image by performing inpainting on the structure image in such a manner that a pixel proximate to the periphery of the inpainting region diffuses into the inpainting region along a line of an equal gray value.
- inpainting cannot be performed on the texture information of the image corresponding to the inpainting region, but inpainting can be performed on the structure information thereof.
- the texture synthesis unit 2230 generates a processed texture image by performing texture synthesis on the texture image.
- the texture synthesis refers to generating the texture information of a large image from the texture information of a small sample image. That is, the texture information of a large image which will fill the inpainting region is generated from the texture information of a sample image selected from a region excluding the inpainting region.
- a single sample image may be selected and then texture synthesis may be performed, or a plurality of sample images may be selected and then texture synthesis may be performed, thereby generating a single large image.
- inpainting cannot be performed on the structure information of the image corresponding to the inpainting region, but inpainting can be performed on the texture information thereof.
- the image synthesis unit 2240 synthesizes the processed structure image with the processed texture image.
- an image in which inpainting has been performed on both the structure and texture of the image corresponding to the inpainting region can be acquired.
- FIG. 18 is a block diagram showing another example of the marker elimination unit 2130 shown in FIG. 16 .
- the marker elimination unit 2130 shown in FIG. 16 includes a patch segmentation unit 2310 , a patch selection unit 2320 , and a patch synthesis unit 2330 .
- the patch segmentation unit 2310 segments each of the inpainting region and a region excluding the inpainting region into unit patches.
- the size of the patch may not be fixed, but may vary depending on the situation.
- the patch selection unit 2320 may select the most consistent patch from the region excluding the inpainting region for each patch of the inpainting region.
- the most consistent patch may be selected based on assigned priorities or weights.
- a higher priority may be assigned to a patch having high sparsity different from surroundings.
- the weight may be assigned based on the distance.
- a single patch may not be selected, but the sum of the weights of a plurality of candidate patches may be obtained using nonlocal means and then selection may be performed.
- local patch consistency may be used instead of the nonlocal means.
- the patch synthesis unit 2330 synthesizes each patch of the inpainting region with a selected patch.
- FIG. 19 is a diagram showing an example of a video frame 2400 including markers 2410 for the insertion of a virtual indirect advertisement according to the present invention.
- the device for eliminating markers for the insertion of a virtual indirect advertisement recognizes the markers 2410 in the video frame 2400 .
- the device for eliminating markers for the insertion of a virtual indirect advertisement may recognize the markers 2410 using a preset algorithm or input received from a user.
- the video frame 2400 includes markers 2410 directly indicated in a shooting spot.
- the markers 2410 are directly indicated in a shooting spot, and are indications that can be easily distinguished from surroundings.
- the markers 2410 may be tapes including a specific color or pattern.
- FIG. 20 is a diagram showing an example of a video frame 2400 into which a virtual indirect advertisement 2510 has been inserted according to the present invention.
- the device for eliminating markers for the insertion of a virtual indirect advertisement determines an insertion region into which a virtual indirect advertisement 2510 will be inserted based on markers 2410 .
- the device for eliminating markers for the insertion of a virtual indirect advertisement may determine the insertion region into which the virtual indirect advertisement 2510 will be inserted using a preset algorithm or input received from a user.
- the device for eliminating markers for the insertion of a virtual indirect advertisement may determine a region of a shape, having the center points of the markers 2410 as its vertices, to be the insertion region, and may insert the virtual indirect advertisement 2510 into the insertion region.
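- a brief sketch of this quad-from-marker-centers rule, assuming OpenCV: the advertisement image is warped onto the region whose vertices are the four marker center points; the function name and corner ordering are illustrative assumptions.

```python
import cv2
import numpy as np

def insert_ad_at_markers(frame, ad_img, marker_centers):
    """marker_centers: four (x, y) marker center points, ordered
    top-left, top-right, bottom-right, bottom-left, forming the quad."""
    h, w = ad_img.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    dst = np.float32(marker_centers)
    H = cv2.getPerspectiveTransform(src, dst)
    fh, fw = frame.shape[:2]
    warped = cv2.warpPerspective(ad_img, H, (fw, fh))
    mask = cv2.warpPerspective(np.full((h, w), 255, np.uint8), H, (fw, fh))
    out = frame.copy()
    out[mask > 0] = warped[mask > 0]   # composite the ad into the quad
    return out
```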
- FIG. 21 is a diagram showing an example of a video frame 2400 into which a virtual indirect advertisement 2510 has been inserted and from which markers 2410 have been eliminated according to the present invention.
- the device for eliminating markers for the insertion of a virtual indirect advertisement eliminates the markers 2410 from the video frame 2400 using inpainting.
- the device for eliminating markers for the insertion of a virtual indirect advertisement inserts the virtual indirect advertisement 2510 into the insertion region determined using the markers 2410, and eliminates the markers 2410, based on the video frame 2400 including the markers 2410 directly indicated in a shooting spot.
- FIG. 22 is a diagram showing examples of a marker region 2710 and an insertion region 2720 according to the present invention.
- the marker region 2710 corresponds to the marker 2410 shown in FIG. 19 .
- the insertion region 2720 is a region into which the virtual indirect advertisement 2510 shown in FIG. 20 will be inserted.
- the marker region 2710 and the insertion region 2720 according to the present invention are determined by the region determination unit 2120 shown in FIG. 16 .
- the region determination unit 2120 shown in FIG. 16 may determine a region composed of pixels including recognized markers to be the marker region 2710 .
- the marker region 2710 may be determined using a preset algorithm or input received from a user.
- the region determination unit 2120 shown in FIG. 16 may determine a region, calculated using the recognized markers, to be the insertion region 2720 .
- the insertion region 2720 may be a region of a shape having the center points of the recognized markers as its vertices.
- the insertion region 2720 may be determined using a preset algorithm or input received from a user.
- the marker region 2710 is divided into a portion overlapping the insertion region 2720 and a non-overlapping portion.
- since a region in which the marker region 2710 and the insertion region 2720 overlap each other is covered with and hidden by the virtual indirect advertisement that is inserted or to be inserted, inpainting may be performed on only the non-overlapping region.
- with reference to FIGS. 23 and 24, a method of performing inpainting on only part of the marker region 2710 is described in detail below.
- FIG. 23 is a diagram showing an example of an inpainting region 2711 according to the present invention.
- the inpainting region 2711 according to the present invention is part of the marker region 2710 shown in FIG. 22 .
- the inpainting region 2711 according to the present invention may be determined by the region determination unit 2120 shown in FIG. 16 .
- the region determination unit 2120 shown in FIG. 16 may divide the marker region 2710 into the inpainting region 2711 and the non-inpainting region 2712 , and may determine the inpainting region 2711 to be a marker elimination region.
- the region determination unit 2120 may determine whether each of marker pixels within the marker region 2710 is present in the insertion region 2720 , and may determine marker pixels, not present in the insertion region 2720 , to be the inpainting region 2711 .
- the region determination unit 2120 may determine whether each of marker pixels within the marker region 2710 is present in the insertion region 2720 , and may determine marker pixels, present in the insertion region 2720 , to be the non-inpainting region 2712 .
- since the non-inpainting region 2712 is the marker portion that is covered with and hidden by the virtual indirect advertisement, inpainting may be performed on only the non-overlapping inpainting region 2711.
- this can reduce inpainting operation costs compared to performing inpainting on the overall marker.
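- In mask form, this division reduces to two boolean operations; the sketch below assumes the marker region and the insertion region are given as NumPy boolean arrays over the frame.

```python
def split_marker_region(marker_mask, insertion_mask):
    """Divide the marker region into the inpainting region (marker
    pixels outside the insertion region) and the non-inpainting region
    (marker pixels the inserted advertisement will cover anyway).
    Both inputs are boolean arrays of the frame's shape."""
    inpainting = marker_mask & ~insertion_mask
    non_inpainting = marker_mask & insertion_mask
    return inpainting, non_inpainting
```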
- FIG. 24 is a diagram showing another example of an inpainting region according to the present invention.
- the inpainting region 2711 according to the present invention is part of the marker region 2710 shown in FIG. 22 .
- the inpainting region 2711 according to the present invention includes part of the insertion region 2720 , unlike the inpainting region shown in FIG. 23 .
- the inpainting region 2711 according to the present invention may be determined by the region determination unit 2120 shown in FIG. 16 .
- the region determination unit 2120 shown in FIG. 16 may generate an insertion boundary region 2725 including pixels forming an insertion boundary, and may include part of the insertion boundary region 2725 in the inpainting region 2711 .
- the region determination unit 2120 shown in FIG. 16 may determine whether each of the pixels of the frame is present within a preset distance from the boundary of the insertion region 2720, may determine a region composed of pixels present within the preset distance to be the insertion boundary region 2725, and may determine a marker elimination region by considering the insertion boundary region 2725.
- the region determination unit 2120 shown in FIG. 16 may determine whether each of pixels within the insertion boundary region 2725 is present in the marker region 2710 and the insertion region 2720 , and may include pixels, present in the marker region 2710 and the insertion region 2720 , in the inpainting region 2711 .
- the traces of markers that may occur on the boundary surface of the insertion region 2720 can be more efficiently eliminated by extending the inpainting region 2711 according to the present invention.
- the region determination unit 2120 shown in FIG. 16 may extend the inpainting region 2711 in order to perform inpainting on predetermined pixels outside the marker boundary.
- the region determination unit 2120 shown in FIG. 16 may generate a marker boundary region including pixels forming the marker boundary, and may include part of the marker boundary region in the inpainting region 2711 .
- the region determination unit 2120 shown in FIG. 16 may determine whether each of the pixels of the frame is present within a preset distance from the boundary of the marker region 2710, may determine a region composed of pixels present within the preset distance to be a marker boundary region, and may determine a marker elimination region by considering the marker boundary region.
- the region determination unit 2120 shown in FIG. 16 may determine whether each of the pixels within the marker boundary region is present in the marker region 2710 and the insertion region 2720, and may include pixels, not present in the marker region 2710 and the insertion region 2720, in the inpainting region 2711.
- the traces of markers that may remain around the boundary of the marker region 2710 can be more efficiently eliminated by extending the inpainting region 2711 in this manner.
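- One way to realize both boundary extensions, again assuming boolean masks; morphological dilation stands in for the preset-distance test (a Chebyshev rather than Euclidean distance), and the 2-pixel distance is illustrative only.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def extend_inpainting_region(inpainting, marker_mask, insertion_mask,
                             distance=2):
    """Grow the inpainting region across the insertion and marker
    boundaries so marker traces near the seams are also repainted."""
    grow = np.ones((2 * distance + 1, 2 * distance + 1), dtype=bool)

    def boundary_band(mask):
        # Pixels within `distance` of the mask's boundary, on either side.
        return binary_dilation(mask, grow) & binary_dilation(~mask, grow)

    extended = inpainting.copy()
    # FIG. 24: insertion-boundary pixels lying inside both regions.
    extended |= boundary_band(insertion_mask) & marker_mask & insertion_mask
    # Marker-boundary pixels lying just outside both regions.
    extended |= boundary_band(marker_mask) & ~marker_mask & ~insertion_mask
    return extended
```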
- FIG. 25 is an operation flowchart showing a method of eliminating markers for the insertion of a virtual indirect advertisement according to an embodiment of the present invention.
- markers included in the frames of a video are recognized at step S 2010 .
- markers included in the frames of a video may be recognized using a preset algorithm or input received from a user.
- a marker region corresponding to each marker and an insertion region into which a virtual indirect advertisement will be inserted are determined, and a marker elimination region is determined using the marker region and the insertion region, at step S2020.
- the marker region and the insertion region may be determined using a preset algorithm or input received from the user.
- the marker region may be divided into an inpainting region and a non-inpainting region, and the inpainting region may be determined to be a marker elimination region.
- at step S2020, it may be determined whether each of the marker pixels within the marker region is present in the insertion region, and marker pixels not present in the insertion region may be determined to be the inpainting region.
- at step S2020, it may be determined whether each of the marker pixels within the marker region is present in the insertion region, and marker pixels present in the insertion region may be determined to be the non-inpainting region.
- since the non-inpainting region is the marker portion that is covered with and hidden by the virtual indirect advertisement, inpainting may be performed on only the non-overlapping inpainting region.
- this can reduce inpainting operation costs compared to performing inpainting on the overall marker.
- an insertion boundary region including pixels forming an insertion boundary may be generated, and part of the insertion boundary region may be included in the inpainting region.
- at step S2020, it may be determined whether each of the pixels of the frame is present within a preset distance from the boundary of the insertion region, a region composed of pixels present within the preset distance may be determined to be the insertion boundary region, and the marker elimination region may be determined by considering the insertion boundary region.
- at step S2020, it may be determined whether each of the pixels within the insertion boundary region is present in the marker region and the insertion region, and pixels present in the marker region and the insertion region may be included in the inpainting region.
- the marker boundary region including the pixels forming the marker boundary may be generated, and part of the marker boundary region may be included in the inpainting region.
- at step S2020, it may be determined whether each of the pixels of the frame is present within a preset distance from the boundary of the marker region, a region composed of pixels present within the preset distance may be determined to be the marker boundary region, and the marker elimination region may be determined by considering the marker boundary region.
- at step S2020, it may be determined whether each of the pixels within the marker boundary region is present in the marker region and the insertion region, and pixels not present in the marker region and the insertion region may be included in the inpainting region.
- inpainting is performed on only the marker elimination region at step S 2030 .
- an image corresponding to the inpainting region may be divided into a structure image and a texture image,
- a processed structure image may be generated by performing inpainting on the structure image in such a manner that a pixel proximate to the periphery of the inpainting region diffuses into the inpainting region along a line of an equal gray value,
- a processed texture image may be generated by performing texture synthesis on the texture image, and
- the processed structure image may be synthesized with the processed texture image.
- the structure image may be an image including only the structure of an object included in the image. That is, the structure image includes the shape, color, brightness and the like of the object.
- the texture image may be an image including only the texture of the object included in the image. That is, the texture image includes the pattern, texture and the like of the object.
- inpainting based on attributes of a region, such as its structure and texture, may generate a more natural image than inpainting based on gray values alone.
- the texture synthesis refers to generating the texture information of a large image from the texture information of a small sample image. That is, the texture information of a large image which will fill the inpainting region is generated from the texture information of a sample image selected from a region excluding the inpainting region.
- a single sample image may be selected and then texture synthesis may be performed, or a plurality of sample images may be selected and then texture synthesis may be performed, thereby generating a single large image.
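- A toy sketch of the two passes under strong simplifications: isotropic diffusion stands in for diffusion along lines of equal gray value, and random resampling of known texture values stands in for sample-based texture synthesis. Combined with an additive structure/texture decomposition such as the one sketched earlier, the two processed layers would simply be summed.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def diffuse_structure(structure, mask, iters=200):
    """Inpaint the structure layer by diffusion: known values around the
    periphery of the hole propagate inward as the loop iterates (an
    isotropic stand-in for isophote-guided diffusion)."""
    img = structure.astype(np.float64).copy()
    for _ in range(iters):
        blurred = gaussian_filter(img, sigma=1.0)
        img[mask] = blurred[mask]        # only hole pixels are updated
    return img

def synthesize_texture(texture, mask, seed=0):
    """Fill the texture layer by sampling values from outside the hole,
    a crude placeholder for sample-based texture synthesis."""
    rng = np.random.default_rng(seed)
    img = texture.astype(np.float64).copy()
    img[mask] = rng.choice(texture[~mask].ravel(), size=int(mask.sum()))
    return img
```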
- each of the inpainting region and the region excluding the inpainting region may be segmented into unit patches, the most consistent patch may be selected in the region excluding the inpainting region with respect to each patch of the inpainting region, and each patch of the inpainting region may be synthesized with a selected patch.
- the size of the patch may not be fixed, but may vary depending on the situation.
- the most consistent patch may be selected based on assigned priorities or weights.
- a higher priority may be assigned to a patch having high sparsity, which differs from its surroundings.
- the weight may be assigned based on the distance.
- rather than selecting a single patch, the weighted sum of a plurality of candidate patches may be obtained using nonlocal means, and the selection may then be performed.
- local patch consistency may be used instead of the nonlocal means.
- inpainting may be performed from a unit region corresponding to the periphery of the inpainting region in an internal direction.
- this produces more natural results than starting inpainting from a unit region in the interior of the inpainting region, where no image that can be referred to is present in the surroundings, and is more efficient because inpainting can be terminated when the target region ends or a non-inpainting region is met.
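- The periphery-inward order can be derived from a distance transform, as in this sketch; fill_order is a hypothetical helper, and ordering hole pixels by their distance to the nearest known pixel is one plausible reading of the unit-region schedule.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def fill_order(mask):
    """Return hole pixel coordinates ordered from the periphery inward.
    The Euclidean distance to the nearest known pixel gives an
    onion-peel schedule, so filling can stop as soon as the target
    region is exhausted or a non-inpainting region is reached."""
    depth = distance_transform_edt(mask)   # 0 outside the hole
    ys, xs = np.nonzero(mask)
    order = np.argsort(depth[ys, xs])      # shallow (outermost) first
    return list(zip(ys[order], xs[order]))
```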
- a virtual indirect advertisement is inserted into the insertion region after the inpainting of the marker elimination region at step S2040.
- the steps shown in FIG. 25 may be performed in the sequence shown in FIG. 25 , in a sequence reverse to the former sequence, or concurrently.
- the method of eliminating markers for the insertion of a virtual indirect advertisement according to the present invention may be implemented as a program or smart phone app that can be executed by various computer means.
- the program or smart phone app may be recorded on a computer-readable storage medium.
- the computer-readable storage medium may include program instructions, data files, and data structures solely or in combination. Program instructions recorded on the storage medium may have been specially designed and configured for the present invention, or may be known to or available to those who have ordinary knowledge in the field of computer software.
- Examples of the computer-readable storage medium include all types of hardware devices specially configured to record and execute program instructions, such as magnetic media, such as a hard disk, a floppy disk, and magnetic tape, optical media, such as compact disk (CD)-read only memory (ROM) and a digital versatile disk (DVD), magneto-optical media, such as a floptical disk, ROM, random access memory (RAM), and flash memory.
- Examples of the program instructions include machine code, such as code created by a compiler, and high-level language code executable by a computer using an interpreter. The hardware devices may be configured to operate as one or more software modules in order to perform the operation of the present invention, and vice versa.
- the device and method of eliminating markers for the insertion of a virtual indirect advertisement according to the present invention are not limited to the configurations and methods of the above-described embodiments, but some or all of the embodiments may be configured to be selectively combined such that the embodiments can be modified in various manners.
- a virtual indirect advertisement service device and method according to embodiments of the present invention are described below.
- FIG. 26 is a block diagram showing a virtual indirect advertisement service device according to an embodiment of the present invention.
- the virtual indirect advertisement service device includes an advertisement selection unit 3110 , an inpainting region generation unit 3120 , an inpainting unit 3130 , and an advertisement insertion unit 3140 .
- the advertisement selection unit 3110 selects a virtual indirect advertisement which will be inserted into video image content.
- the inpainting region generation unit 3120 generates an inpainting region and a non-inpainting region by comparing the virtual indirect advertisement with an existing advertisement included in the video image content.
- the inpainting region generation unit 3120 may generate a target region including the region of the existing advertisement, and may generate the inpainting region and the non-inpainting region by dividing the target region.
- the target region is a region including the region of the existing advertisement, and may be set using a preset algorithm or input received from a user.
- the non-inpainting region is a region that is completely covered when the virtual indirect advertisement is inserted.
- the non-inpainting region may be smaller than or equal to the region of the virtual indirect advertisement to be inserted.
- the inpainting region may be a region obtained by excluding the non-inpainting region from the target region.
- since the non-inpainting region is completely covered by the inserted advertisement, inpainting may be performed on only the non-overlapping inpainting region.
- this can reduce inpainting operation costs compared to performing inpainting on the overall target region.
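- A compact sketch of this region bookkeeping, assuming boolean masks; the padded bounding box is one plausible way to build a target region independently of the existing advertisement's shape, and pad is an illustrative margin.

```python
import numpy as np

def service_regions(existing_ad_mask, new_ad_mask, pad=10):
    """Build a target region as a padded bounding box around the
    existing advertisement, then split it against the incoming
    advertisement into inpainting and non-inpainting parts."""
    ys, xs = np.nonzero(existing_ad_mask)
    target = np.zeros_like(existing_ad_mask)
    target[max(ys.min() - pad, 0):ys.max() + pad + 1,
           max(xs.min() - pad, 0):xs.max() + pad + 1] = True
    non_inpainting = target & new_ad_mask   # fully covered afterwards
    inpainting = target & ~new_ad_mask      # only this part is repainted
    return target, inpainting, non_inpainting
```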
- the inpainting unit 3130 performs inpainting on only the inpainting region.
- the inpainting unit 3130 may perform inpainting from a unit region corresponding to the periphery of the inpainting region in an internal direction.
- this produces more natural results than starting inpainting from a unit region in the interior of the inpainting region, where no image that can be referred to is present in the surroundings, and is more efficient because inpainting can be terminated when the target region ends or a non-inpainting region is met.
- the advertisement insertion unit 3140 inserts the virtual indirect advertisement into the video image content after the inpainting of the inpainting region.
- FIG. 27 is a diagram showing an example of an inpainting region according to an embodiment of the present invention.
- video image content into which a virtual indirect advertisement has been inserted includes an existing advertisement region 3410 , a target region 3420 , a virtual indirect advertisement 3430 , and a non-inpainting region 3440 .
- the target region 3420 including the existing advertisement region 3410 is generated.
- the target region 3420 may be set using a preset algorithm or input received from a user.
- the target region 3420 may be generated to include the existing advertisement region 3410, either independently of the shape of the existing advertisement region 3410 or while maintaining that shape.
- the non-inpainting region 3440 is a region that is completely covered when the virtual indirect advertisement 3430 is inserted.
- the non-inpainting region 3440 may be the same as or smaller than the region of the virtual indirect advertisement 3430 that is inserted.
- the inpainting region is a region obtained by excluding the non-inpainting region 3440 from the target region 3420 .
- Performing inpainting on only the inpainting region can reduce inpainting operation costs compared to performing inpainting on the overall target region 3420 .
- FIG. 28 is a diagram showing an example of video image content into which virtual indirect advertisement has been inserted according to an embodiment of the present invention.
- the video image content into which a virtual indirect advertisement has been inserted includes an inpainting region 3510 and a virtual indirect advertisement 3520 .
- the inpainting region 3510 is a region generated by comparing the virtual indirect advertisement 3520 with an existing advertisement included in the video image content.
- the inpainting region may be separated into two or more discontinuous regions 3510 and 3511.
- a target region including the existing advertisement region is generated.
- the target region may be set using a preset algorithm or input received from the user.
- the target region may be generated to include the existing advertisement region, either independently of the shape of the existing advertisement region or while maintaining that shape.
- the non-inpainting region is a region that is completely covered when the virtual indirect advertisement 3520 is inserted.
- the non-inpainting region may be the same as or smaller than the region of the virtual indirect advertisement 3520 that is inserted.
- the inpainting regions 3510 and 3511 are regions obtained by excluding the non-inpainting region from the target region.
- Performing inpainting on only the inpainting regions 3510 and 3511 can reduce inpainting operation costs compared to performing inpainting on the overall target region.
- FIG. 29 is an operation flowchart showing an example of a virtual indirect advertisement service method according to an embodiment of the present invention.
- a virtual indirect advertisement which will be inserted into video image content is selected at step S 3610 .
- an inpainting region and a non-inpainting region are generated by comparing the virtual indirect advertisement with an existing advertisement included in the video image content at step S 3620 .
- a target region including the existing advertisement region may be generated, and the inpainting region and the non-inpainting region may be generated by dividing the target region.
- the target region is a region including the existing advertisement region, and may be set using a preset algorithm or input received from the user.
- the non-inpainting region is a region that is completely covered when the virtual indirect advertisement is inserted.
- the non-inpainting region may be the same as or smaller than the virtual indirect advertisement that is inserted.
- the inpainting region is a region obtained by excluding the non-inpainting region from the target region.
- since the non-inpainting region is completely covered by the inserted advertisement, inpainting may be performed on only the non-overlapping inpainting region.
- this can reduce inpainting operation costs compared to performing inpainting on the overall target region.
- the non-inpainting region may be a region which belongs to the target region and overlaps the region of the virtual indirect advertisement.
- that is, the region in which the target region and the virtual indirect advertisement overlap may be generated as a non-inpainting region on which inpainting is not performed.
- inpainting is performed on only the inpainting region at step S 3630 .
- inpainting may be performed from a unit region corresponding to the periphery of the inpainting region in an internal direction.
- this produces more natural results than starting inpainting from a unit region in the interior of the inpainting region, where no image that can be referred to is present in the surroundings, and is more efficient because inpainting can be terminated when the target region ends or a non-inpainting region is met.
- a virtual indirect advertisement is inserted into the video image content after the inpainting of the inpainting region at step S 3640 .
- FIG. 30 is an operation flowchart showing an example of the step of performing inpainting, which is shown in FIG. 29 .
- an image corresponding to the inpainting region is divided into a structure image and a texture image at step S 3710 .
- the structure image may be an image including only the structure of an object included in the image. That is, the structure image includes the shape, color, brightness and the like of the object.
- the texture image may be an image including only the texture of the object included in the image. That is, the texture image includes the pattern, texture and the like of the object.
- inpainting based on attributes of a region, such as its structure and texture, may generate a more natural image than inpainting based on gray values alone.
- inpainting is performed on the structure and the texture. Since an algorithm for generating an optimum result for the structure of the region and an algorithm for generating an optimum result for the texture of a region are different, separate optimum inpainting is performed on each of the structure and texture of the region, and then results are synthesized into a single image. In this case, more natural inpainting can be performed.
- a processed structure image is generated by performing inpainting on the structure image in such a manner that a pixel proximate to the periphery of the inpainting region diffuses into the inpainting region along a line of an equal gray value.
- inpainting cannot be performed on the texture information of the image corresponding to the inpainting region, but inpainting can be performed on the structure information thereof.
- a processed texture image is generated by performing texture synthesis on the texture image.
- the texture synthesis refers to generating the texture information of a large image from the texture information of a small sample image. That is, the texture information of a large image which will fill the inpainting region is generated from the texture information of a sample image selected from a region excluding the inpainting region.
- a single sample image may be selected and then texture synthesis may be performed, or a plurality of sample images may be selected and then texture synthesis may be performed, thereby generating a single large image.
- inpainting cannot be performed on the structure information of the image corresponding to the inpainting region, but inpainting can be performed on the texture information thereof.
- the processed structure image is synthesized with the processed texture image at step S 3740 .
- an image in which inpainting has been performed on both the structure and texture of the image corresponding to the inpainting region can be acquired.
- FIG. 31 is an operation flowchart showing another example of the step of performing inpainting, which is shown in FIG. 29 .
- each of the inpainting region and a region excluding the inpainting region is divided into unit patches at step S 3810 .
- the size of the patch may not be fixed, but may vary depending on the situation.
- the most consistent patch may be selected in the region excluding the inpainting region with respect to each patch of the inpainting region at step S 3820 .
- the most consistent patch may be selected based on assigned priorities or weights.
- a higher priority may be assigned to a patch having high sparsity, which differs from its surroundings.
- the weight may be assigned based on the distance.
- rather than selecting a single patch, the weighted sum of a plurality of candidate patches may be obtained using nonlocal means, and the selection may then be performed.
- local patch consistency may be used instead of the nonlocal means.
- each patch of the inpainting region is synthesized with the selected patch at step S 3830 .
- the steps shown in each of FIGS. 29, 30 and 31 may be performed in the sequence shown in each of FIGS. 29, 30 and 31 , in a sequence reverse to the former sequence, or concurrently.
- the virtual indirect advertisement service method according to the present invention may be implemented as a program or smart phone app that can be executed by various computer means.
- the program or smart phone app may be recorded on a computer-readable storage medium.
- the computer-readable storage medium may include program instructions, data files, and data structures solely or in combination. Program instructions recorded on the storage medium may have been specially designed and configured for the present invention, or may be known to or available to those who have ordinary knowledge in the field of computer software.
- Examples of the computer-readable storage medium include all types of hardware devices specially configured to record and execute program instructions, such as magnetic media, such as a hard disk, a floppy disk, and magnetic tape, optical media, such as compact disk (CD)-read only memory (ROM) and a digital versatile disk (DVD), magneto-optical media, such as a floptical disk, ROM, random access memory (RAM), and flash memory.
- Examples of the program instructions include machine code, such as code created by a compiler, and high-level language code executable by a computer using an interpreter. The hardware devices may be configured to operate as one or more software modules in order to perform the operation of the present invention, and vice versa.
- the virtual indirect advertisement service method and device according to the present invention are not limited to the configurations and methods of the above-described embodiments, but some or all of the embodiments may be configured to be selectively combined such that the embodiments can be modified in various manners.
- a device and method for calculating a boundary value for the insertion of a virtual indirect advertisement according to embodiments of the present invention are described below.
- FIG. 32 is a block diagram showing a device for calculating a boundary value for the insertion of a virtual indirect advertisement according to an embodiment of the present invention.
- the device for calculating a boundary value for the insertion of a virtual indirect advertisement includes a region determination unit 4110 , a reference pixel selection unit 4120 , and a boundary value calculation unit 4130 .
- the region determination unit 4110 determines a boundary region including a boundary pixel of a virtual indirect advertisement that is inserted into a frame of a video.
- the region determination unit 4110 may determine whether each of the pixels of the frame of the video is present within a preset distance from the boundary pixel, and may determine a region, composed of pixels present within the preset distance, to be a boundary region.
- the region determination unit 4110 may set the preset distance by considering the overall resolution of the video frame.
- the region determination unit 4110 may set a longer preset distance for the higher overall resolution of the frame of the video.
- the region determination unit 4110 may set the preset distance in proportion to the overall resolution of the frame of the video.
- the region determination unit 4110 may set the preset distance to 4 pixels when the overall resolution of the frame of the video is 1280*720, and may set the preset distance to 6 pixels when the overall resolution of the frame of the video is 1920*1080.
- the region determination unit 4110 may divide a remaining region other than the boundary region into a foreground region and a background region corresponding to the virtual indirect advertisement.
- the region determination unit 4110 may determine a remaining region, into which the virtual indirect advertisement has been inserted, excluding the boundary region, to be a foreground region, and may determine a region, obtained by excluding the boundary region and the foreground region from the frame of the video, to be a background region.
- the device for calculating a boundary value for the insertion of a virtual indirect advertisement may designate pixels surrounding a virtual indirect advertisement insertion boundary line as a boundary region, and may newly calculate the pixel value of the boundary region, thereby enabling a virtual indirect advertisement insertion boundary surface to be naturally viewed.
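- A sketch of this partitioning using a Euclidean distance transform; the resolution-to-distance mapping follows the 1280*720 and 1920*1080 examples above, and partition_frame is a hypothetical helper.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def partition_frame(ad_mask, frame_width):
    """Split a frame into boundary region, foreground, and background
    around an inserted advertisement given as a boolean mask."""
    dist = 4 if frame_width <= 1280 else 6    # preset distance by resolution
    to_bg = distance_transform_edt(ad_mask)   # inside: distance to outside
    to_fg = distance_transform_edt(~ad_mask)  # outside: distance to inside
    dist_to_edge = np.where(ad_mask, to_bg, to_fg)
    boundary = dist_to_edge <= dist
    foreground = ad_mask & ~boundary          # advertisement proper
    background = ~ad_mask & ~boundary         # original scene
    return boundary, foreground, background
```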
- the reference pixel selection unit 4120 selects, with respect to each boundary region pixel within the boundary region, a foreground reference pixel and a background reference pixel from the remaining region other than the boundary region by considering the location of the boundary region pixel.
- the reference pixel selection unit 4120 may select one of the pixels of the foreground region, which is closest to the boundary region pixel, as the foreground reference pixel.
- the reference pixel selection unit 4120 may select one of the pixels of the background region, which is closest to the boundary region pixel, as the background reference pixel.
- the foreground reference pixel is a pixel of the foreground region
- the background reference pixel is a pixel of the background region.
- the reference pixel selection unit 4120 may generate a perpendicular line between the boundary region pixel and the boundary line of the remaining region.
- the reference pixel selection unit 4120 may draw a perpendicular line from the boundary region pixel to the boundary line of the foreground region.
- the reference pixel selection unit 4120 may select one of the pixels of the foreground region, corresponding to the foot of the perpendicular line, which is closest to the boundary region pixel, as the foreground reference pixel.
- the reference pixel selection unit 4120 may draw a perpendicular line from the boundary region pixel to the boundary line of the background region.
- the reference pixel selection unit 4120 may select one of the pixels of the background region, corresponding to the foot of the perpendicular line, which is closest to the boundary region pixel, as the background reference pixel.
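- Nearest-reference selection can be sketched with SciPy's indexed distance transform, which reports, for every pixel, the coordinates of the closest foreground (or background) pixel; treating that closest pixel as the foot of the perpendicular is an approximation assumed here, and a grayscale frame is assumed.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def nearest_reference_pixels(boundary, foreground, background, image):
    """For each boundary region pixel, return the values of the closest
    foreground and background pixels (the reference pixels)."""
    # return_indices gives, per pixel, the coordinates of the nearest
    # True pixel of the respective region.
    _, (fy, fx) = distance_transform_edt(~foreground, return_indices=True)
    _, (by, bx) = distance_transform_edt(~background, return_indices=True)
    fg_ref = image[fy, fx]   # nearest-foreground value at every pixel
    bg_ref = image[by, bx]   # nearest-background value at every pixel
    return fg_ref[boundary], bg_ref[boundary]
```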
- the device for calculating a boundary value for the insertion of a virtual indirect advertisement newly calculates the pixel value of the boundary region by referring to the pixel values of the background and the foreground, with respect to pixels surrounding the virtual indirect advertisement insertion boundary line, thereby enabling a virtual indirect advertisement insertion boundary surface to be naturally viewed.
- the reference pixel selection unit 4120 also selects, with respect to each boundary region pixel within the boundary region, a previous frame reference pixel from a frame previous to the video frame and a subsequent frame reference pixel from a frame subsequent to the video frame by considering the location of the boundary region pixel.
- the device for calculating a boundary value for the insertion of a virtual indirect advertisement newly calculates the pixel value of the current frame by referring to not only pixels surrounding the virtual indirect advertisement insertion boundary line but also the pixel values of the previous/subsequent frames, thereby enabling a virtual indirect advertisement insertion boundary surface to be naturally viewed during the playback of video.
- the reference pixel selection unit 4120 may select a pixel corresponding to the location of the boundary region pixel from the previous frame as the previous frame reference pixel, and may select a pixel corresponding to the location of the boundary region pixel from the subsequent frame as the subsequent frame reference pixel.
- the boundary value calculation unit 4130 calculates the boundary region pixel value using the pixel values of the foreground reference pixel and the background reference pixel and the weights of the foreground reference pixel and the background reference pixel, as in Equation 2 below:

A = w_f*A_f + w_b*A_b (2)

- where A is the boundary region pixel value, A_f is the foreground reference pixel value, A_b is the background reference pixel value, w_f is the foreground reference pixel weight, and w_b is the background reference pixel weight.
- the boundary value calculation unit 4130 may set the weight of the foreground reference pixel based on the distance between the boundary region pixel and the foreground reference pixel, and may set the weight of the background reference pixel based on the distance between the boundary region pixel and the background reference pixel.
- the boundary value calculation unit 4130 may set the weights of the foreground reference pixel and the background reference pixel in inverse proportion to their distances from the boundary region pixel.
- the boundary value calculation unit 4130 may set the weights so that the sum of the weight of the foreground reference pixel and the weight of the background reference pixel becomes 1.
- the boundary value calculation unit 4130 may set the weight of the foreground reference pixel to 0.75 and the weight of the background reference pixel to 0.25.
- the device for calculating a boundary value for the insertion of a virtual indirect advertisement newly calculates the pixel value of the boundary region by referring to the pixel values of the background and the foreground, with respect to pixels surrounding the virtual indirect advertisement insertion boundary line, thereby enabling a virtual indirect advertisement insertion boundary surface to be naturally viewed.
- the boundary value calculation unit 4130 may calculate the boundary region pixel value using the pixel values of the previous frame reference pixel and the subsequent frame reference pixel and the weights of the previous frame reference pixel and the subsequent frame reference pixel, as in Equation 3 below:

A = w_(t-1)*A(t-1) + w_t*A(t) + w_(t+1)*A(t+1), where A(t) = w_(f,t)*A_(f,t) + w_(b,t)*A_(b,t) (3)

- where A is the final boundary region pixel value, A(t) is the reference frame reference pixel value, A(t-1) is the boundary region pixel value of the previous frame, A(t+1) is the boundary region pixel value of the subsequent frame, w_t is the weight of the reference frame reference pixel, w_(t-1) is the weight of the previous frame reference pixel, w_(t+1) is the weight of the subsequent frame reference pixel, A_(f,t) is the foreground reference pixel value of the reference frame, A_(b,t) is the background reference pixel value of the reference frame, w_(f,t) is the foreground reference pixel weight of the reference frame, and w_(b,t) is the background reference pixel weight of the reference frame.
- the boundary value calculation unit 4130 may set the weights so that the sum of the weight of the previous frame reference pixel, the weight of the subsequent frame reference pixel and the weight of the reference frame reference pixel becomes 1.
- the boundary value calculation unit 4130 may set the weight of the foreground reference pixel to 0.75, the weight of the background reference pixel to 0.25, the weight of the previous frame reference pixel to 0.25, the weight of the subsequent frame reference pixel to 0.25, and the weight of the reference frame reference pixel to 0.5.
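- The weight settings and the blends of Equations 2 and 3 reduce to a few lines, as sketched below; the default arguments are the example weights given above (0.75/0.25 spatially and 0.25/0.5/0.25 temporally), and inverse_distance_weights reproduces the 0.75/0.25 split when the background reference lies three times as far away as the foreground reference.

```python
def inverse_distance_weights(d_f, d_b):
    """Weights inversely proportional to the reference distances,
    normalized to sum to 1 (d_f=1, d_b=3 gives 0.75 and 0.25)."""
    inv_f, inv_b = 1.0 / d_f, 1.0 / d_b
    total = inv_f + inv_b
    return inv_f / total, inv_b / total

def blend_boundary_value(a_f, a_b, a_prev, a_next,
                         w_f=0.75, w_b=0.25,
                         w_prev=0.25, w_cur=0.5, w_next=0.25):
    """Equation 2 blends the foreground/background reference values;
    Equation 3 then blends that result with the boundary region pixel
    values of the previous and subsequent frames."""
    a_t = w_f * a_f + w_b * a_b                              # Equation 2
    return w_prev * a_prev + w_cur * a_t + w_next * a_next   # Equation 3
```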
- the device for calculating a boundary value for the insertion of a virtual indirect advertisement newly calculates the pixel value of the current frame by referring to not only pixels surrounding the virtual indirect advertisement insertion boundary line but also the pixel values of the previous/subsequent frames, thereby enabling a virtual indirect advertisement insertion boundary surface to be naturally viewed during the playback of video.
- FIG. 33 is a block diagram showing an example of the reference pixel selection unit 4120 shown in FIG. 32 .
- the reference pixel selection unit 4120 shown in FIG. 32 includes a perpendicular line generation unit 4210 , a foreground reference pixel selection unit 4220 , and a background reference pixel selection unit 4230 .
- the perpendicular line generation unit 4210 may draw a perpendicular line from a boundary region pixel to the boundary line of a foreground region.
- the foreground reference pixel selection unit 4220 may select one of the pixels of the foreground region, corresponding to the foot of the perpendicular line, which is closest to a boundary region pixel, as a foreground reference pixel.
- the background reference pixel selection unit 4230 may select one of the pixels of the background region, corresponding to the foot of the perpendicular line, which is closest to the boundary region pixel, as a background reference pixel.
- the reference pixel selection unit 4120 shown in FIG. 32 selects the foreground reference pixel and the background reference pixel using the foot of the perpendicular line.
- the reference pixel selection unit 4120 shown in FIG. 32 may select a pixel corresponding to the location of the boundary region pixel from a previous frame as a previous frame reference pixel, and may select a pixel corresponding to the location of the boundary region pixel from a subsequent frame as a subsequent frame reference pixel.
- the reference pixel selection unit 4120 shown in FIG. 32 selects not only the foreground reference pixel and the background reference pixel but also the previous frame reference pixel and the subsequent frame reference pixel.
- FIG. 34 is a diagram showing an example of a video frame 4300 into which a virtual indirect advertisement 4310 has been inserted according to the present invention.
- the device for calculating a boundary value for the insertion of a virtual indirect advertisement 4310 recognizes the boundary pixels of the virtual indirect advertisement 4310 in the video frame 4300 into which the virtual indirect advertisement 4310 has been inserted, and determines a boundary region 4320 including the boundary pixels.
- the video frame 4300 includes the virtual indirect advertisement 4310 .
- after the insertion step, the device for calculating a boundary value for the insertion of the virtual indirect advertisement 4310 calculates the pixel values surrounding the boundary line along which the virtual indirect advertisement 4310 has been inserted.
- the boundary region 4320 may be a region composed of pixels present within a preset distance from the boundary line within which the virtual indirect advertisement 4310 has been inserted.
- the boundary region 4320 is intended to enable the boundary line within which the virtual indirect advertisement 4310 has been inserted to be naturally viewed to a user, and thus the width of the boundary region 4320 may be determined by considering the overall resolution of the video frame 4300 .
- the boundary region 4320 has a width of 9 pixels when the overall resolution of the video frame 4300 is 1280*720, and has a width of 13 pixels when the overall resolution of the video frame 4300 is 1920*1080.
- FIG. 35 is a diagram showing examples of a boundary region, a foreground region and a background region according to the present invention.
- FIG. 35 shows the extensions of part of the boundary line of the virtual indirect advertisement 4310 and the boundary region 4320 shown in FIG. 34 .
- the device for calculating a boundary value for the insertion of a virtual indirect advertisement 4310 determines the region which belongs to the region excluding the boundary region 4320 and corresponds to the virtual indirect advertisement 4310 to be the foreground region 4415, and determines the remaining portion, which corresponds to the background before the insertion of the virtual indirect advertisement 4310, to be the background region 4425.
- the device for calculating a boundary value for the insertion of the virtual indirect advertisement 4310 selects reference pixels from the boundary line 4410 of the foreground region and the boundary line 4420 of the background region.
- the device for calculating a boundary value for the insertion of the virtual indirect advertisement 4310 determines a region, surrounding a boundary line within which the virtual indirect advertisement 4310 has been inserted, to be the boundary region 4320 and calculates a pixel value within the boundary region 4320 from pixels constituting the boundary line 4410 of the foreground region and the boundary line 4420 of the background region located outside the boundary region 4320 .
- FIG. 36 is a diagram showing an example of selecting a foreground reference pixel and a background reference pixel according to the present invention.
- the device for calculating a boundary value for the insertion of a virtual indirect advertisement selects a foreground reference pixel 4510 and a background reference pixel 4520 from a foreground region boundary line 4410 and a background region boundary line 4420 , respectively, by considering the location of the boundary region pixel 4500 .
- the device for calculating a boundary value for the insertion of a virtual indirect advertisement may select one of the pixels of the foreground region boundary line 4410 , which is closest to the boundary region pixel 4500 , as the foreground reference pixel 4510 .
- the device for calculating a boundary value for the insertion of a virtual indirect advertisement may select one of the pixels of the background region boundary line 4420 , which is closest to the boundary region pixel 4500 , as the background reference pixel 4520 .
- the device for calculating a boundary value for the insertion of a virtual indirect advertisement according to the present invention may generate perpendicular lines between the boundary region pixel 4500 and the foreground region boundary line 4410 and between the boundary region pixel 4500 and the background region boundary line 4420 .
- the device for calculating a boundary value for the insertion of a virtual indirect advertisement may select one of the pixels of the foreground region boundary line 4410 , corresponding to the foot of the perpendicular line, which is closest to the boundary region pixel 4500 , as the foreground reference pixel 4510 .
- the device for calculating a boundary value for the insertion of a virtual indirect advertisement may select one of the pixels of the background region boundary line 4420 , corresponding to the foot of the perpendicular line, which is closest to the boundary region pixel 4500 , as the background reference pixel 4520 .
- the reference pixel selection unit 4120 shown in FIG. 32 selects the foreground reference pixel and the background reference pixel using the foot of a perpendicular line.
- the device for calculating a boundary value for the insertion of a virtual indirect advertisement selects the foreground reference pixel and the background reference pixel using the foot of a perpendicular line.
- FIG. 37 is a diagram showing an example of selecting a previous frame reference pixel and a subsequent frame reference pixel according to the present invention.
- the device for calculating a boundary value for the insertion of a virtual indirect advertisement according to the present invention may select a previous frame reference pixel 4615 from a frame 4610 previous to the current frame 4600 of a video and a subsequent frame reference pixel 4625 from a frame 4620 subsequent to the current frame 4600 of the video by considering the location of the boundary region pixel 4500 .
- the device for calculating a boundary value for the insertion of a virtual indirect advertisement may select a pixel corresponding to the location of the boundary region pixel 4500 from the previous frame 4610 as the previous frame reference pixel 4615 , and may select a pixel corresponding to the location of the boundary region pixel 4500 from the subsequent frame 4620 as the subsequent frame reference pixel 4625 .
- FIG. 38 is an operation flowchart showing a method of calculating a boundary value for the insertion of a virtual indirect advertisement according to an embodiment of the present invention.
- a boundary region including a boundary pixel of a virtual indirect advertisement that is inserted into a frame of a video is determined at step S4710.
- at step S4710, it may be determined whether each of the pixels of the frame of the video is present within a preset distance from the boundary pixel, and a region composed of pixels present within the preset distance may be determined to be the boundary region.
- the preset distance may be set by considering the overall resolution of the video frame.
- a longer preset distance may be set for the higher overall resolution of the frame of the video.
- the preset distance may be set in proportion to the overall resolution of the frame of the video.
- the preset distance may be set to 4 pixels when the overall resolution of the frame of the video is 1280*720, and may be set to 6 pixels when the overall resolution of the frame of the video is 1920*1080.
- a remaining region other than the boundary region may be divided into a foreground region and a background region corresponding to the virtual indirect advertisement at step S4720.
- a remaining region, into which the virtual indirect advertisement has been inserted, excluding the boundary region may be determined to be a foreground region, and a region, obtained by excluding the boundary region and the foreground region from the frame of the video, may be determined to be a background region.
- pixels surrounding a virtual indirect advertisement insertion boundary line are designated as a boundary region, and the pixel values of the boundary region are newly calculated, thereby enabling the virtual indirect advertisement insertion boundary surface to be naturally viewed.
- a foreground reference pixel and a background reference pixel are selected from a remaining region other than the boundary region by considering the location of the boundary region pixel at step S 4730 .
- one of the pixels of the foreground region, which is closest to the boundary region pixel, may be selected as the foreground reference pixel.
- one of the pixels of the background region, which is closest to the boundary region pixel, may be selected as the background reference pixel.
- the foreground reference pixel is a pixel of the foreground region
- the background reference pixel is a pixel of the background region.
- a perpendicular line may be generated between the boundary region pixel and the boundary line of the remaining region.
- a perpendicular line may be drawn from the boundary region pixel to the boundary line of the foreground region.
- one of the pixels of the foreground region, corresponding to the foot of the perpendicular line, which is closest to the boundary region pixel, may be selected as the foreground reference pixel.
- a perpendicular line may be drawn from the boundary region pixel to the boundary line of the background region.
- one of the pixels of the background region, corresponding to the foot of the perpendicular line, which is closest to the boundary region pixel, may be selected as the background reference pixel.
- the pixel value of the boundary region is newly calculated by referring to the pixel values of the background and the foreground, with respect to pixels surrounding the virtual indirect advertisement insertion boundary line, thereby enabling a virtual indirect advertisement insertion boundary surface to be naturally viewed.
- a previous frame reference pixel may be selected from a frame previous to the video frame and a subsequent frame reference pixel may be selected from a frame subsequent to the video frame, by considering the location of the boundary region pixel.
- the pixel value of the current frame is newly calculated by referring to not only pixels surrounding the virtual indirect advertisement insertion boundary line but also the pixel values of the previous/subsequent frames, thereby enabling a virtual indirect advertisement insertion boundary surface to be naturally viewed during the playback of video.
- a pixel corresponding to the location of the boundary region pixel may be selected from the previous frame as the previous frame reference pixel, and a pixel corresponding to the location of the boundary region pixel may be selected from the subsequent frame as the subsequent frame reference pixel.
- the boundary region pixel value is calculated using the pixel values of the foreground reference pixel and the background reference pixel and the weights of the foreground reference pixel and the background reference pixel at step S 4740 .
- the boundary region pixel value may be calculated, as in Equation 2.
- the weight of the foreground reference pixel may be set based on the distance between the boundary region pixel and the foreground reference pixel
- the weight of the background reference pixel may be set based on the distance between the boundary region pixel and the background reference pixel.
- the weights of the foreground reference pixel and the background reference pixel may be set in inverse proportion to their distances from the boundary region pixel.
- the weights may be set such that the sum of the weight of the foreground reference pixel and the weight of the background reference pixel becomes 1.
- the weight of the foreground reference pixel may be set to 0.75 and the weight of the background reference pixel may be set to 0.25.
- the pixel value of the boundary region is newly calculated by referring to the pixel values of the background and the foreground, with respect to pixels surrounding the virtual indirect advertisement insertion boundary line, thereby enabling a virtual indirect advertisement insertion boundary surface to be naturally viewed.
- the boundary region pixel value may be calculated using the pixel values of the previous frame reference pixel and the subsequent frame reference pixel and the weights of the previous frame reference pixel and the subsequent frame reference pixel.
- the boundary region pixel value may be calculated, as in Equation 3.
- the weights may be set such that the sum of the weight of the previous frame reference pixel, the weight of the subsequent frame reference pixel and the weight of the reference frame reference pixel becomes 1.
- the weight of the foreground reference pixel may be set to 0.75
- the weight of the background reference pixel may be set to 0.25
- the weight of the previous frame reference pixel may be set to 0.25
- the weight of the subsequent frame reference pixel may be set to 0.25
- the weight of the reference frame reference pixel may be set to 0.5.
- the pixel value of the current frame is newly calculated by referring to not only pixels surrounding the virtual indirect advertisement insertion boundary line but also the pixel values of the previous/subsequent frames, thereby enabling a virtual indirect advertisement insertion boundary surface to be naturally viewed during the playback of video.
- FIG. 39 is an operation flowchart showing an example of step S 4730 of selecting a foreground reference pixel and a background reference pixel, which is shown in FIG. 38 .
- a perpendicular line may be drawn from a boundary region pixel to the boundary line of a foreground region at step S 4810 .
- at step S4820, one of the pixels of the foreground region, corresponding to the foot of the perpendicular line, which is closest to the boundary region pixel, may be selected as a foreground reference pixel.
- at step S4830, one of the pixels of the background region, corresponding to the foot of the perpendicular line, which is closest to the boundary region pixel, may be selected as a background reference pixel.
- the steps shown in FIGS. 38 and 39 may be performed in the sequence shown in FIG. 38 or 39, in the reverse sequence, or concurrently.
- the method of calculating a boundary value for the insertion of a virtual indirect advertisement according to the present invention may be implemented as a program or smart phone app that can be executed by various computer means.
- the program or smart phone app may be recorded on a computer-readable storage medium.
- the computer-readable storage medium may include program instructions, data files, and data structures solely or in combination. Program instructions recorded on the storage medium may have been specially designed and configured for the present invention, or may be known to or available to those who have ordinary knowledge in the field of computer software.
- Examples of the computer-readable storage medium include all types of hardware devices specially configured to record and execute program instructions, such as magnetic media, such as a hard disk, a floppy disk, and magnetic tape, optical media, such as compact disk (CD)-read only memory (ROM) and a digital versatile disk (DVD), magneto-optical media, such as a floptical disk, ROM, random access memory (RAM), and flash memory.
- Examples of the program instructions include machine code, such as code created by a compiler, and high-level language code executable by a computer using an interpreter. The hardware devices may be configured to operate as one or more software modules in order to perform the operation of the present invention, and vice versa.
- the device and method for calculating a boundary value for the insertion of a virtual indirect advertisement according to the present invention are not limited to the configurations and methods of the above-described embodiments, but some or all of the embodiments may be configured to be selectively combined such that the embodiments can be modified in various manners.
- advertisement insertion target frames including advertisement insertion regions are searched for and clustered in order to insert the advertisement into all the frames in a uniform manner.
- the utilization of the technology for inserting an advertisement using frame clustering according to the present invention will be high.
- candidate regions into which a virtual indirect advertisement will be inserted are determined, the exposure levels of the candidate regions are measured, and the measured exposure levels are provided to a user, thereby providing guide information so that the user can select a virtual indirect advertisement insertion region while more intuitively recognizing the advertising effects of the respective candidate regions.
- the utilization of the technology for providing insertion region guide information for the insertion of a virtual indirect advertisement according to the present invention will be high.
- A region into which a virtual indirect advertisement will be inserted is determined using the markers, and the markers are then eliminated; the region from which the markers are eliminated is limited to the portion that does not overlap the region into which the virtual indirect advertisement will be inserted, and inpainting is then performed.
- The utilization of the technology for eliminating markers for the insertion of a virtual indirect advertisement according to the present invention will therefore be high.
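- The restriction of the inpainting region can be expressed compactly: only marker pixels outside the advertisement insertion region are inpainted, since the rest will be covered by the advertisement anyway. The sketch below uses OpenCV's cv2.inpaint as a stand-in for the inpainting described here; the function and mask names are assumptions.

```python
# Sketch of marker elimination restricted to the non-overlapping region.
import cv2
import numpy as np

def erase_markers(frame, marker_mask, ad_region_mask):
    """frame: HxWx3 uint8 image; masks: HxW boolean arrays.
    Inpaints only marker pixels that fall outside the ad insertion region."""
    inpaint_mask = (marker_mask & ~ad_region_mask).astype(np.uint8) * 255
    return cv2.inpaint(frame, inpaint_mask, inpaintRadius=3,
                       flags=cv2.INPAINT_TELEA)
```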
- A virtual indirect advertisement can be inserted into video image content in place of an existing advertisement.
- In this case, inpainting must be performed on the region of the existing advertisement so that it harmonizes with the surrounding image.
- The region on which inpainting will be performed can be minimized by comparing the region of the existing advertisement with the region of the virtual indirect advertisement. Accordingly, the operation costs required for inpainting can be reduced, and the advertising profits from a virtual indirect advertisement can be maximized.
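- In mask terms, the minimized inpainting region is simply the set difference between the two regions, as in the sketch below (mask names assumed for illustration):

```python
# Sketch: only the part of the existing advertisement left exposed by the
# virtual advertisement needs inpainting.
import numpy as np

def minimal_inpaint_mask(existing_ad_mask, virtual_ad_mask):
    """Both arguments are HxW boolean masks; returns the mask to inpaint."""
    return existing_ad_mask & ~virtual_ad_mask

# The saving relative to inpainting the whole existing-ad region is then
#   1 - minimal_inpaint_mask(e, v).sum() / e.sum()
```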
- The boundary region pixel value of a virtual indirect advertisement to be inserted into video image content is automatically calculated and processed so that the boundary of the region into which the virtual indirect advertisement has been inserted harmonizes with the surrounding image.
- The utilization of the technology for calculating a boundary value for the insertion of a virtual indirect advertisement according to the present invention will therefore be high.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Marketing (AREA)
- Business, Economics & Management (AREA)
- Computer Security & Cryptography (AREA)
- Bioinformatics & Computational Biology (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Geometry (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
- Information Transfer Between Computers (AREA)
Abstract
Description
A = w_f * A_f + w_b * A_b    (2)
where A is the boundary region pixel value, A_f is the foreground reference pixel value, A_b is the background reference pixel value, w_f is the foreground reference pixel weight, and w_b is the background reference pixel weight.
A = w_{t-1} * A(t-1) + w_t * A(t) + w_{t+1} * A(t+1), where A(t) = w_{f,t} * A_{f,t} + w_{b,t} * A_{b,t}    (3)
where A is the final boundary region pixel value, A(t) is the boundary region pixel value of the reference frame, A(t-1) is the boundary region pixel value of the previous frame, A(t+1) is the boundary region pixel value of the subsequent frame, w_t is the weight of the reference frame reference pixel, w_{t-1} is the weight of the previous frame reference pixel, w_{t+1} is the weight of the subsequent frame reference pixel, A_{f,t} is the foreground reference pixel value of the reference frame, A_{b,t} is the background reference pixel value of the reference frame, w_{f,t} is the foreground reference pixel weight of the reference frame, and w_{b,t} is the background reference pixel weight of the reference frame.
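Equations (2) and (3) translate directly into code. The sketch below is a plain transcription with assumed function names; in practice the weights would be derived from the distances to the reference pixels and frames.

```python
# Transcription of equations (2) and (3); variable names follow the equations.
def boundary_value(a_f, a_b, w_f, w_b):
    """Equation (2): spatial blend of the foreground and background
    reference pixel values within a single frame."""
    return w_f * a_f + w_b * a_b

def boundary_value_temporal(a_prev, a_next, a_f_t, a_b_t,
                            w_prev, w_t, w_next, w_f_t, w_b_t):
    """Equation (3): the reference frame's value A(t) from equation (2),
    blended with the boundary values of the previous and subsequent frames."""
    a_t = boundary_value(a_f_t, a_b_t, w_f_t, w_b_t)
    return w_prev * a_prev + w_t * a_t + w_next * a_next
```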
Claims (5)
Applications Claiming Priority (11)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020140009194A KR101564313B1 (en) | 2014-01-24 | 2014-01-24 | Insertion area guide information apparatus for inserting digital product placement and method for the same |
KR10-2014-0009196 | 2014-01-24 | ||
KR1020140009196A KR101561083B1 (en) | 2014-01-24 | 2014-01-24 | Boundary value calculation apparatus for inserting digital product placement and method for the same |
KR10-2014-0009195 | 2014-01-24 | ||
KR10-2014-0009194 | 2014-01-24 | ||
KR1020140009193A KR101573482B1 (en) | 2014-01-24 | 2014-01-24 | Apparatus for inserting advertisement using frame clustering and method thereof |
KR10-2014-0009193 | 2014-01-24 | ||
KR20140009195A KR101503029B1 (en) | 2014-01-24 | 2014-01-24 | Marker removal apparatus for inserting digital product placement and method for the same |
KR10-2014-0013842 | 2014-02-06 | ||
KR1020140013842A KR102135671B1 (en) | 2014-02-06 | 2014-02-06 | Method of servicing virtual indirect advertisement and apparatus for the same |
PCT/KR2014/012159 WO2015111840A1 (en) | 2014-01-24 | 2014-12-10 | Device and method for inserting advertisement by using frame clustering |
Publications (2)
Publication Number | Publication Date |
---|---|
US20160142792A1 US20160142792A1 (en) | 2016-05-19 |
US10904638B2 true US10904638B2 (en) | 2021-01-26 |
Family
ID=53681611
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/898,450 Active 2037-01-23 US10904638B2 (en) | 2014-01-24 | 2014-12-10 | Device and method for inserting advertisement by using frame clustering |
Country Status (3)
Country | Link |
---|---|
US (1) | US10904638B2 (en) |
CN (1) | CN105284122B (en) |
WO (1) | WO2015111840A1 (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20200293791A1 (en) * | 2016-10-28 | 2020-09-17 | Axon Enterprise, Inc. | Identifying and redacting captured data |
US11234027B2 (en) * | 2019-01-10 | 2022-01-25 | Disney Enterprises, Inc. | Automated content compilation |
US12061862B2 (en) * | 2020-06-11 | 2024-08-13 | Capital One Services, Llc | Systems and methods for generating customized content based on user preferences |
Families Citing this family (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- US9852523B2 (en) | 2016-02-24 | 2017-12-26 | Ondrej Jamriška | Appearance transfer techniques maintaining temporal coherence |
- US9870638B2 (en) * | 2016-02-24 | 2018-01-16 | Ondrej Jamriška | Appearance transfer techniques |
WO2018033137A1 (en) * | 2016-08-19 | 2018-02-22 | 北京市商汤科技开发有限公司 | Method, apparatus, and electronic device for displaying service object in video image |
CN108875692B (en) * | 2018-07-03 | 2020-10-16 | 中影数字巨幕(北京)有限公司 | Thumbnail film generation method, medium and computing device based on key frame processing technology |
US11032607B2 (en) * | 2018-12-07 | 2021-06-08 | At&T Intellectual Property I, L.P. | Methods, devices, and systems for embedding visual advertisements in video content |
US11074457B2 (en) * | 2019-04-17 | 2021-07-27 | International Business Machines Corporation | Identifying advertisements embedded in videos |
US11042969B2 (en) * | 2019-05-23 | 2021-06-22 | Adobe Inc. | Automatic synthesis of a content-aware sampling region for a content-aware fill |
US10950022B2 (en) * | 2019-06-06 | 2021-03-16 | Sony Interactive Entertainment Inc. | Using machine learning and image recognition for automatic relocation of camera display area and sizing of camera image |
CN110278446B (en) * | 2019-06-20 | 2022-01-28 | 北京字节跳动网络技术有限公司 | Method and device for determining virtual gift display information and electronic equipment |
CN110290426B (en) * | 2019-06-24 | 2022-04-19 | 腾讯科技(深圳)有限公司 | Method, device and equipment for displaying resources and storage medium |
CN110213629B (en) | 2019-06-27 | 2022-02-11 | 腾讯科技(深圳)有限公司 | Information implantation method, device, server and storage medium |
CN110381369B (en) * | 2019-07-19 | 2022-02-22 | 腾讯科技(深圳)有限公司 | Method, device and equipment for determining recommended information implantation position and storage medium |
CN110996121A (en) * | 2019-12-11 | 2020-04-10 | 北京市商汤科技开发有限公司 | Information processing method and device, electronic equipment and storage medium |
CN111818364B (en) * | 2020-07-30 | 2021-08-06 | 广州云从博衍智能科技有限公司 | Video fusion method, system, device and medium |
CN116074582B (en) * | 2023-01-31 | 2024-08-30 | 北京奇艺世纪科技有限公司 | Implant position determining method and device, electronic equipment and storage medium |
CN116074581B (en) * | 2023-01-31 | 2024-08-30 | 北京奇艺世纪科技有限公司 | Implant position determining method and device, electronic equipment and storage medium |
Citations (49)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5107252A (en) * | 1988-09-20 | 1992-04-21 | Quantel Limited | Video processing system |
US5263933A (en) * | 1988-12-14 | 1993-11-23 | Patco Ventures Ltd. | Safety syringe needle device with interchangeable and retractable needle platform |
US5436672A (en) * | 1994-05-27 | 1995-07-25 | Symah Vision | Video processing system for modifying a zone in successive images |
WO1995025399A1 (en) | 1994-03-14 | 1995-09-21 | Scitex America Corporation | A system for implanting an image into a video stream |
US5751838A (en) * | 1996-01-26 | 1998-05-12 | Nec Research Institute, Inc. | Correction of camera motion between two image frames |
US6181345B1 (en) * | 1998-03-06 | 2001-01-30 | Symah Vision | Method and apparatus for replacing target zones in a video sequence |
US6297853B1 (en) * | 1993-02-14 | 2001-10-02 | Orad Hi-Tech Systems Ltd. | Apparatus and method for detecting, identifying and incorporating advertisements in a video image |
US20020008697A1 (en) * | 2000-03-17 | 2002-01-24 | Deering Michael F. | Matching the edges of multiple overlapping screen images |
KR20020047271A (en) | 1999-10-25 | 2002-06-21 | 추후제출 | Method and system for advertising |
KR20030002919A (en) | 2001-07-02 | 2003-01-09 | 에이알비전 (주) | realtime image implanting system for a live broadcast |
US6987520B2 (en) | 2003-02-24 | 2006-01-17 | Microsoft Corporation | Image region filling by exemplar-based inpainting |
US20060026628A1 (en) * | 2004-07-30 | 2006-02-02 | Kong Wah Wan | Method and apparatus for insertion of additional content into video |
WO2006105660A1 (en) | 2005-04-05 | 2006-10-12 | Google Inc. | Method and system supporting audited reporting of advertising impressions from video games |
KR20060105910A (en) | 2005-04-01 | 2006-10-12 | 한국전자통신연구원 | Interactive digital broadcasting system and method for playing/storing particular contents, and interactive digital broadcasting receiving system and method according to it |
US20070014554A1 (en) | 2004-12-24 | 2007-01-18 | Casio Computer Co., Ltd. | Image processor and image processing program |
US7199793B2 (en) * | 2002-05-21 | 2007-04-03 | Mok3, Inc. | Image-based modeling and photo editing |
US20070092145A1 (en) * | 2005-10-26 | 2007-04-26 | Casio Computer Co., Ltd. | Image processing device and program |
US20070182861A1 (en) * | 2006-02-03 | 2007-08-09 | Jiebo Luo | Analyzing camera captured video for key frames |
KR20070115348A (en) | 2006-06-02 | 2007-12-06 | 김일 | A method and system of exposure time analysis of internet advertisement |
US7375745B2 (en) * | 2004-09-03 | 2008-05-20 | Seiko Epson Corporation | Method for digital image stitching and apparatus for performing the same |
WO2008111860A1 (en) | 2007-03-12 | 2008-09-18 | Vortex Technology Services Limited | Intentionality matching |
US20080255943A1 (en) | 2007-04-10 | 2008-10-16 | Widevine Technologies, Inc. | Refreshing advertisements in offline or virally distributed content |
KR20090007401A (en) * | 2006-04-07 | 2009-01-16 | 바이엘 머티리얼사이언스 아게 | Nitrocellulose-based binding agents for aqueous nail polishes |
KR100886149B1 (en) | 2007-10-18 | 2009-02-27 | (주)이엠티 | Method for forming moving image by inserting image into original image and recording media |
WO2009032993A2 (en) | 2007-09-07 | 2009-03-12 | Yahoo! Inc. | Delayed advertisement insertion in videos |
US20090167763A1 (en) * | 2000-06-19 | 2009-07-02 | Carsten Waechter | Quasi-monte carlo light transport simulation by efficient ray tracing |
US20100074548A1 (en) * | 2008-09-23 | 2010-03-25 | Sharp Laboratories Of America, Inc. | Image sharpening technique |
US20100111396A1 (en) * | 2008-11-06 | 2010-05-06 | Los Alamos National Security | Object and spatial level quantitative image analysis |
KR20100084075A (en) | 2009-01-15 | 2010-07-23 | 연세대학교 산학협력단 | Multi-frame combined video object matting system and method thereof |
KR20100088282A (en) | 2009-01-30 | 2010-08-09 | 서강대학교산학협력단 | Method and apparatus of inpainting for video data |
KR20100101397A (en) | 2009-03-09 | 2010-09-17 | 김태규 | Real-time online ad insert system using digital contents and method thereof |
US20100296571A1 (en) * | 2009-05-22 | 2010-11-25 | Microsoft Corporation | Composite Video Generation |
JP2011055378A (en) | 2009-09-04 | 2011-03-17 | Yahoo Japan Corp | Content insertion management apparatus, method and program |
WO2012011180A1 (en) | 2010-07-22 | 2012-01-26 | パイオニア株式会社 | Augmented reality device and method of controlling the same |
- KR20120011216 (en) | 2010-07-28 | 2012-02-07 | 엘지전자 주식회사 | Method for synthesizing image in mobile terminal |
KR20120071226A (en) | 2010-12-22 | 2012-07-02 | 한국전자통신연구원 | Apparatus and method for extracting object |
US20120180084A1 (en) * | 2011-01-12 | 2012-07-12 | Futurewei Technologies, Inc. | Method and Apparatus for Video Insertion |
KR20120131425A (en) | 2011-05-25 | 2012-12-05 | 연세대학교 산학협력단 | Interactive advertisement authoring device and interactive advertisement authoring method |
US8374457B1 (en) * | 2008-12-08 | 2013-02-12 | Adobe Systems Incorporated | System and method for interactive image-noise separation |
KR20130047846A (en) | 2011-11-01 | 2013-05-09 | 정인수 | Internet advertising system |
KR20130056407A (en) | 2011-11-22 | 2013-05-30 | 건국대학교 산학협력단 | Inpainting system and method for h.264 error concealment image |
KR20130056532A (en) | 2011-11-22 | 2013-05-30 | 주식회사 씬멀티미디어 | Selective ppl inserting in motion video and advertisement service method |
US20130162787A1 (en) * | 2011-12-23 | 2013-06-27 | Samsung Electronics Co., Ltd. | Method and apparatus for generating multi-view |
KR20130089715A (en) | 2011-12-29 | 2013-08-13 | 주식회사 씨에이취피커뮤니케이션 | Method for detecting blank area in window, system and method for internet advertisement using the same |
KR20130104215A (en) | 2012-03-13 | 2013-09-25 | 계원예술대학교 산학협력단 | Method for adaptive and partial replacement of moving picture, and method of generating program moving picture including embedded advertisement image employing the same |
KR101353038B1 (en) | 2012-08-06 | 2014-01-17 | 광주과학기술원 | Apparatus and method for processing boundary part of depth image |
US8717412B2 (en) * | 2007-07-18 | 2014-05-06 | Samsung Electronics Co., Ltd. | Panoramic image production |
US20160267349A1 (en) * | 2015-03-11 | 2016-09-15 | Microsoft Technology Licensing, Llc | Methods and systems for generating enhanced images using multi-frame processing |
US20180167610A1 (en) * | 2015-06-10 | 2018-06-14 | Lg Electronics Inc. | Method and apparatus for inter prediction on basis of virtual reference picture in video coding system |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101621636B (en) * | 2008-06-30 | 2011-04-20 | 北京大学 | Method and system for inserting and transforming advertisement sign based on visual attention module |
2014
- 2014-12-10 CN CN201480033739.0A patent/CN105284122B/en active Active
- 2014-12-10 WO PCT/KR2014/012159 patent/WO2015111840A1/en active Application Filing
- 2014-12-10 US US14/898,450 patent/US10904638B2/en active Active
Patent Citations (56)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5107252A (en) * | 1988-09-20 | 1992-04-21 | Quantel Limited | Video processing system |
US5263933A (en) * | 1988-12-14 | 1993-11-23 | Patco Ventures Ltd. | Safety syringe needle device with interchangeable and retractable needle platform |
US6297853B1 (en) * | 1993-02-14 | 2001-10-02 | Orad Hi-Tech Systems Ltd. | Apparatus and method for detecting, identifying and incorporating advertisements in a video image |
WO1995025399A1 (en) | 1994-03-14 | 1995-09-21 | Scitex America Corporation | A system for implanting an image into a video stream |
- US5731846A (en) * | 1994-03-14 | 1998-03-24 | Scidel Technologies Ltd. | Method and system for perspectively distorting an image and implanting same into a video stream |
US5436672A (en) * | 1994-05-27 | 1995-07-25 | Symah Vision | Video processing system for modifying a zone in successive images |
US5751838A (en) * | 1996-01-26 | 1998-05-12 | Nec Research Institute, Inc. | Correction of camera motion between two image frames |
US6181345B1 (en) * | 1998-03-06 | 2001-01-30 | Symah Vision | Method and apparatus for replacing target zones in a video sequence |
KR20020047271A (en) | 1999-10-25 | 2002-06-21 | 추후제출 | Method and system for advertising |
US20020008697A1 (en) * | 2000-03-17 | 2002-01-24 | Deering Michael F. | Matching the edges of multiple overlapping screen images |
US20090167763A1 (en) * | 2000-06-19 | 2009-07-02 | Carsten Waechter | Quasi-monte carlo light transport simulation by efficient ray tracing |
KR20030002919A (en) | 2001-07-02 | 2003-01-09 | 에이알비전 (주) | realtime image implanting system for a live broadcast |
US7199793B2 (en) * | 2002-05-21 | 2007-04-03 | Mok3, Inc. | Image-based modeling and photo editing |
US6987520B2 (en) | 2003-02-24 | 2006-01-17 | Microsoft Corporation | Image region filling by exemplar-based inpainting |
US20060026628A1 (en) * | 2004-07-30 | 2006-02-02 | Kong Wah Wan | Method and apparatus for insertion of additional content into video |
US7375745B2 (en) * | 2004-09-03 | 2008-05-20 | Seiko Epson Corporation | Method for digital image stitching and apparatus for performing the same |
US20070014554A1 (en) | 2004-12-24 | 2007-01-18 | Casio Computer Co., Ltd. | Image processor and image processing program |
KR20070088528A (en) | 2004-12-24 | 2007-08-29 | 가시오게산키 가부시키가이샤 | Image processor and image processing program |
KR20060105910A (en) | 2005-04-01 | 2006-10-12 | 한국전자통신연구원 | Interactive digital broadcasting system and method for playing/storing particular contents, and interactive digital broadcasting receiving system and method according to it |
KR20080004548A (en) | 2005-04-05 | 2008-01-09 | 구글 인코포레이티드 | Method and system supporting audited reporting of advertising impressions from video games |
WO2006105660A1 (en) | 2005-04-05 | 2006-10-12 | Google Inc. | Method and system supporting audited reporting of advertising impressions from video games |
US20070092145A1 (en) * | 2005-10-26 | 2007-04-26 | Casio Computer Co., Ltd. | Image processing device and program |
US20070182861A1 (en) * | 2006-02-03 | 2007-08-09 | Jiebo Luo | Analyzing camera captured video for key frames |
KR20090007401A (en) * | 2006-04-07 | 2009-01-16 | 바이엘 머티리얼사이언스 아게 | Nitrocellulose-based binding agents for aqueous nail polishes |
KR20070115348A (en) | 2006-06-02 | 2007-12-06 | 김일 | A method and system of exposure time analysis of internet advertisement |
KR20100015479A (en) | 2007-03-12 | 2010-02-12 | 볼텍스 테크놀로지 서비스 리미티드 | Intentionality matching |
WO2008111860A1 (en) | 2007-03-12 | 2008-09-18 | Vortex Technology Services Limited | Intentionality matching |
US20080255943A1 (en) | 2007-04-10 | 2008-10-16 | Widevine Technologies, Inc. | Refreshing advertisements in offline or virally distributed content |
US8717412B2 (en) * | 2007-07-18 | 2014-05-06 | Samsung Electronics Co., Ltd. | Panoramic image production |
WO2009032993A2 (en) | 2007-09-07 | 2009-03-12 | Yahoo! Inc. | Delayed advertisement insertion in videos |
KR20100056549A (en) | 2007-09-07 | 2010-05-27 | 야후! 인크. | Delayed advertisement insertion in videos |
KR100886149B1 (en) | 2007-10-18 | 2009-02-27 | (주)이엠티 | Method for forming moving image by inserting image into original image and recording media |
US20100074548A1 (en) * | 2008-09-23 | 2010-03-25 | Sharp Laboratories Of America, Inc. | Image sharpening technique |
US20100111396A1 (en) * | 2008-11-06 | 2010-05-06 | Los Alamos National Security | Object and spatial level quantitative image analysis |
US8374457B1 (en) * | 2008-12-08 | 2013-02-12 | Adobe Systems Incorporated | System and method for interactive image-noise separation |
KR20100084075A (en) | 2009-01-15 | 2010-07-23 | 연세대학교 산학협력단 | Multi-frame combined video object matting system and method thereof |
KR20100088282A (en) | 2009-01-30 | 2010-08-09 | 서강대학교산학협력단 | Method and apparatus of inpainting for video data |
KR20100101397A (en) | 2009-03-09 | 2010-09-17 | 김태규 | Real-time online ad insert system using digital contents and method thereof |
US20100296571A1 (en) * | 2009-05-22 | 2010-11-25 | Microsoft Corporation | Composite Video Generation |
JP2011055378A (en) | 2009-09-04 | 2011-03-17 | Yahoo Japan Corp | Content insertion management apparatus, method and program |
WO2012011180A1 (en) | 2010-07-22 | 2012-01-26 | パイオニア株式会社 | Augmented reality device and method of controlling the same |
EP2597621A1 (en) | 2010-07-22 | 2013-05-29 | Pioneer Corporation | Augmented reality device and method of controlling the same |
- KR20120011216 (en) | 2010-07-28 | 2012-02-07 | 엘지전자 주식회사 | Method for synthesizing image in mobile terminal |
KR20120071226A (en) | 2010-12-22 | 2012-07-02 | 한국전자통신연구원 | Apparatus and method for extracting object |
CN103299610A (en) | 2011-01-12 | 2013-09-11 | 华为技术有限公司 | Method and apparatus for video insertion |
US20120180084A1 (en) * | 2011-01-12 | 2012-07-12 | Futurewei Technologies, Inc. | Method and Apparatus for Video Insertion |
KR20120131425A (en) | 2011-05-25 | 2012-12-05 | 연세대학교 산학협력단 | Interactive advertisement authoring device and interactive advertisement authoring method |
KR20130047846A (en) | 2011-11-01 | 2013-05-09 | 정인수 | Internet advertising system |
KR20130056407A (en) | 2011-11-22 | 2013-05-30 | 건국대학교 산학협력단 | Inpainting system and method for h.264 error concealment image |
KR20130056532A (en) | 2011-11-22 | 2013-05-30 | 주식회사 씬멀티미디어 | Selective ppl inserting in motion video and advertisement service method |
US20130162787A1 (en) * | 2011-12-23 | 2013-06-27 | Samsung Electronics Co., Ltd. | Method and apparatus for generating multi-view |
KR20130089715A (en) | 2011-12-29 | 2013-08-13 | 주식회사 씨에이취피커뮤니케이션 | Method for detecting blank area in window, system and method for internet advertisement using the same |
KR20130104215A (en) | 2012-03-13 | 2013-09-25 | 계원예술대학교 산학협력단 | Method for adaptive and partial replacement of moving picture, and method of generating program moving picture including embedded advertisement image employing the same |
KR101353038B1 (en) | 2012-08-06 | 2014-01-17 | 광주과학기술원 | Apparatus and method for processing boundary part of depth image |
US20160267349A1 (en) * | 2015-03-11 | 2016-09-15 | Microsoft Technology Licensing, Llc | Methods and systems for generating enhanced images using multi-frame processing |
US20180167610A1 (en) * | 2015-06-10 | 2018-06-14 | Lg Electronics Inc. | Method and apparatus for inter prediction on basis of virtual reference picture in video coding system |
Non-Patent Citations (5)
Title |
---|
Ardis, P. A. (2009). Controlling and evaluating inpainting with attentional models (Order No. 3395371). Available from ProQuest Dissertations and Theses Professional. (89170326). Retrieved from https://dialog.proquest.com/professional/docview/89170326?accountid=131444 (Year: 2009). * |
Autostitch, Aug. 22, 2015, https://web.archive.org/web/20150822224307/http://matthewalunbrown.com/autostitch/autostitch.html (Year: 2015). * |
Chinese Office Action in Chinese Application No. 201480033739.0, dated Dec. 4, 2017, 10 pages (with English translation). |
International Search Report dated Feb. 5, 2015 for PCT/KR2014/012159. |
Richard Szeliski, Image Alignment and Stitching: A Tutorial, Jan. 26, 2005, Microsoft Research Microsoft Research Corporation, pp. 1-10, https://courses.cs.washington.edu/courses/cse576/05sp/papers/MSR-TR-2004-92.pdf (Year: 2005). * |
Also Published As
Publication number | Publication date |
---|---|
CN105284122B (en) | 2018-12-04 |
CN105284122A (en) | 2016-01-27 |
WO2015111840A1 (en) | 2015-07-30 |
US20160142792A1 (en) | 2016-05-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10904638B2 (en) | Device and method for inserting advertisement by using frame clustering | |
Margolin et al. | Saliency for image manipulation | |
US8494256B2 (en) | Image processing apparatus and method, learning apparatus and method, and program | |
US10332159B2 (en) | Apparatus and method for providing virtual advertisement | |
US7760956B2 (en) | System and method for producing a page using frames of a video stream | |
US10540791B2 (en) | Image processing apparatus, and image processing method for performing scaling processing based on image characteristics | |
Yildirim et al. | FASA: fast, accurate, and size-aware salient object detection | |
KR20090006068A (en) | Method and apparatus for modifying a moving image sequence | |
US20110050939A1 (en) | Image processing apparatus, image processing method, program, and electronic device | |
JP2006331460A (en) | Image similarity calculation system, image search system, image similarity calculation method, and image similarity calculation program | |
US10402698B1 (en) | Systems and methods for identifying interesting moments within videos | |
JP6610535B2 (en) | Image processing apparatus and image processing method | |
US11978216B2 (en) | Patch-based image matting using deep learning | |
CN109982036A (en) | A kind of method, terminal and the storage medium of panoramic video data processing | |
CN111985419B (en) | Video processing method and related equipment | |
CN113822898A (en) | Intelligent cropping of images | |
EP2530642A1 (en) | Method of cropping a 3D content | |
KR20190087711A (en) | Method, apparatus and computer program for pre-processing video | |
US20240135552A1 (en) | Object feature extraction device, object feature extraction method, and non-transitory computer-readable medium | |
CN112954443A (en) | Panoramic video playing method and device, computer equipment and storage medium | |
US11647294B2 (en) | Panoramic video data process | |
KR20080011050A (en) | Viedo window detector | |
Chen et al. | Preserving motion-tolerant contextual visual saliency for video resizing | |
JP6511950B2 (en) | Image processing apparatus, image processing method and program | |
WO2012153744A1 (en) | Information processing device, information processing method, and information processing program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: SK PLANET CO., LTD., KOREA, REPUBLIC OF Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LEE, SU-BIN;SHIN, HYOUNG-CHUL;HAN, JU-HYEUN;AND OTHERS;REEL/FRAME:037287/0538 Effective date: 20151110 |
|
AS | Assignment |
Owner name: SK PLANET CO., LTD., KOREA, REPUBLIC OF Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SK PLANET CO., LTD.;REEL/FRAME:048446/0289 Effective date: 20190225 Owner name: ELEVEN STREET CO., LTD., KOREA, REPUBLIC OF Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SK PLANET CO., LTD.;REEL/FRAME:048446/0289 Effective date: 20190225 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: ADVISORY ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |