KR20130068182A - Methods of extracting saliency region and apparatuses for using the same - Google Patents

Info

Publication number
KR20130068182A
Authority
KR
South Korea
Prior art keywords
variation
segment
importance
stereo image
calculate
Prior art date
Application number
KR1020110134674A
Other languages
Korean (ko)
Inventor
엄기문
김찬
이현
방건
이응돈
신홍창
정원식
허남호
Original Assignee
한국전자통신연구원 (Electronics and Telecommunications Research Institute)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 한국전자통신연구원 (Electronics and Telecommunications Research Institute)
Priority to KR1020110134674A priority Critical patent/KR20130068182A/en
Publication of KR20130068182A publication Critical patent/KR20130068182A/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30: Image reproducers
    • H04N13/349: Multi-view displays for displaying three or more geometrical viewpoints without viewer tracking
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402: involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440245: the reformatting operation being performed only on part of the stream, e.g. a region of the image or a time segment
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N2013/0074: Stereoscopic image analysis
    • H04N2013/0085: Motion estimation from stereoscopic image signals
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N2013/0074: Stereoscopic image analysis
    • H04N2013/0092: Image segmentation from stereoscopic image signals

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

PURPOSE: A method of extracting salient regions and an apparatus using the same are provided to detect the regions of a scene whose depth perception should be maintained when a multi-view image is generated, thereby improving the depth perception of a stereo image. CONSTITUTION: A method of extracting salient regions by a salient region determining apparatus comprises the steps of: segmenting a stereo image (S130); producing a sparse disparity (S150); producing an average disparity for each segment (S160); normalizing and storing the average disparity of each segment (S165); producing an average motion for each segment (S170); and producing a saliency for each segment (S180). [Reference numerals] (AA) Input left and right stereo images; (S100) Extract feature points and calculate the disparity between them; (S110) Calculate the optical-flow-based low-resolution stereo disparity; (S120) Calculate the inter-frame motion for each of the left and right images; (S130) Segment the stereo image and extract segments; (S140) Combine the disparities; (S145) Remove outliers; (S150) Calculate and store the sparse disparity; (S160) Calculate the average disparity of each segment; (S165) Normalize and store the average disparity of each segment; (S170) Calculate, normalize and store the average motion of each segment; (S180) Calculate the overall importance of each segment

Description

Methods of determining scene salient regions and apparatuses using the methods {METHODS OF EXTRACTING SALIENCY REGION AND APPARATUSES FOR USING THE SAME}

The present invention relates to a method for determining salient regions of a scene and an apparatus using the method, and more particularly to a method and apparatus for determining important regions of an image.

In order to efficiently search multimedia such as images for desired data, methods for automatically segmenting regions or extracting objects of interest must first be studied. This problem has attracted the attention of many researchers because it is technically difficult; a key component is threshold-based object extraction using a saliency map.

There are several ways to extract important objects from an image. The present invention concerns extracting important objects using visual saliency, i.e., visual attention.

In the human cognitive process, a large amount of information from the sensory organs is transmitted to the brain through the nerves, but humans select and attend to only a small part of these signals. This is called attention. Visual attention theory holds that the human visual system selects only meaningful features from the numerous images it receives and concentrates attention on a specific object so that processing can be performed faster and more efficiently. Research on selective attention in the brain is actively conducted in biology, cognitive engineering, and computer vision. In particular, a saliency map based on this theory is commonly used to extract important objects of interest by binarizing an image and dividing it into object and non-object regions.

It is a first object of the present invention to provide a method for determining an important region in a stereo image.

In addition, a second object of the present invention is to provide an apparatus for performing a method for determining an important region in a stereo image.

According to an aspect of the present invention for achieving the first object, a method for determining salient regions is provided. The method may include extracting segments by dividing a stereo image, and calculating the disparity and the motion magnitude of the stereo image. The method may further include calculating a disparity importance based on the segments of the image and the disparity of the stereo image, and calculating a motion importance based on the segments of the image and the motion magnitude. The method may further include calculating an overall importance for each segment of the image based on the disparity importance and the motion importance. Calculating the disparity importance may include removing outliers and calculating an average disparity value for each segment based on the sparse disparity. The method may further include merging segments by calculating the similarity between adjacent segments, and updating the importance of the segments based on the merge result. Calculating the disparity and the motion magnitude of the stereo image may include calculating the disparity between feature points in the stereo image, converting the stereo image to a low resolution, and calculating the optical flow of the converted low-resolution stereo image to obtain its disparity. Calculating the disparity and the motion magnitude may further include combining the disparity between the feature points and the disparity of the low-resolution stereo image.

The apparatus for determining salient regions according to another aspect of the present invention for achieving the second object includes an importance calculator that classifies a stereo image into segments and calculates the disparity importance and motion importance of each segment, and a first segment updater that updates the segments of the current frame by merging segments based on similarity values between the segments of the current frame. The apparatus may include a second segment updater that compares the importance values of the current frame updated by the first segment updater with the importance values of the previous frame and, where appropriate, carries the importance of the previous frame over to the current frame. The importance calculator may further include a segment extractor that divides the stereo image into a plurality of parts, a disparity calculator that calculates an average disparity value for each segment based on the segments produced by the segment extractor, and a motion calculator that calculates the motion magnitude of each segment based on those segments. The importance calculator may further include a disparity importance calculator that calculates the disparity importance based on the average disparity value of each segment computed by the disparity calculator, and a motion importance calculator that calculates the motion importance based on the per-segment motion magnitude computed by the motion calculator. The importance calculator may further include an overall importance calculator that calculates the overall importance of the current frame based on the disparity importance and the motion importance.
The disparity calculator may further include a feature point extractor that extracts feature points from the stereo image, a downsampling unit that downsamples the stereo image into a low-resolution image, and an optical flow calculator that calculates the optical flow of the low-resolution image. The disparity calculator may further include a disparity value calculator that calculates the disparity between the feature points extracted by the feature point extractor and calculates the disparity in the low-resolution image based on the optical flow. The disparity calculator may further include an outlier remover that removes abnormal values among the disparity values, and a sparse disparity calculator that calculates the average disparity for each segment by averaging the sparse disparity values produced by the disparity value calculator.

As described above, according to the method for determining scene salient regions and the apparatus using the method in accordance with embodiments of the present invention, the depth perception of a stereo image can be improved by detecting the salient regions of a scene that are important for maintaining the 3D effect when a multi-view image is generated from the stereo image.

FIG. 1 is a flowchart illustrating a method of calculating the importance of each segment according to an embodiment of the present invention.
FIG. 2 is a flowchart illustrating a method of calculating the importance of each segment according to another embodiment of the present invention.
FIG. 3 is a flowchart briefly illustrating a method of calculating the importance of each segment according to an embodiment of the present invention.
FIG. 4 is a block diagram illustrating a segment importance calculator according to an embodiment of the present invention.
FIG. 5 is a block diagram illustrating an importance calculator according to an embodiment of the present invention.
FIG. 6 is a conceptual diagram illustrating a disparity calculator according to an embodiment of the present invention.

Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings. In the following description of the embodiments of the present invention, a detailed description of known functions and configurations incorporated herein will be omitted when it may make the subject matter of the present disclosure rather unclear.

It is to be understood that when an element is referred to as being "connected" or "coupled" to another element, it may be directly connected or coupled to the other element, or intervening elements may be present. In addition, the description of "including" a specific component in the present invention does not exclude other components; it means that additional components may be included in the practice of the present invention or within its technical scope.

The terms first, second, etc. may be used to describe various components, but the components should not be limited by these terms. The terms are used only to distinguish one component from another. For example, without departing from the scope of the present invention, a first component may be referred to as a second component, and similarly, a second component may be referred to as a first component.

In addition, the components shown in the embodiments of the present invention are illustrated independently to represent different characteristic functions; this does not mean that each component is implemented as a separate hardware or software unit. That is, the components are listed separately for convenience of explanation; at least two of them may be combined into one component, or one component may be divided into a plurality of components that each perform part of the function. Embodiments in which the components are integrated and embodiments in which they are separated are also included within the scope of the present invention, as long as they do not depart from its essence.

In addition, some components may not be essential for performing the essential functions of the present invention but may be optional components that merely improve performance. The present invention can be implemented with only the components essential for realizing its essence, excluding those used merely for performance improvement, and a structure including only the essential components is also included within the scope of the present invention.

FIG. 1 is a flowchart illustrating a method of calculating the importance of each segment according to an embodiment of the present invention.

The method according to an embodiment of the present invention detects salient regions, i.e., the regions of a scene that are important for maintaining the 3D effect when a multi-view image is generated.

Referring to FIG. 1, the disparity between feature points in a stereo image is calculated (step S100).

Given a stereo image, a feature point extraction method such as the scale-invariant feature transform (SIFT) may be applied to detect feature points, and the disparity between the matched feature points may then be calculated. Various feature point extraction methods may be used, and such embodiments are included in the scope of the present invention.
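As a concrete illustration, once left and right feature points have been matched (the text names SIFT but does not fix a particular matcher), the disparity of each pair in a rectified stereo setup is simply the horizontal coordinate difference. A minimal sketch; the match-list layout is an assumption of this example:

```python
def feature_point_disparities(matches):
    """Horizontal disparity for matched feature point pairs.

    matches -- list of ((xl, yl), (xr, yr)) left/right keypoint
    coordinates, e.g. produced by SIFT matching (hypothetical layout).
    For a rectified stereo pair, the disparity is xl - xr.
    """
    return [xl - xr for (xl, _yl), (xr, _yr) in matches]

# Two matched points whose left-image x coordinates sit 20 and 15
# pixels to the right of their right-image counterparts.
matches = [((120.0, 40.0), (100.0, 40.0)),
           ((200.0, 90.0), (185.0, 90.0))]
print(feature_point_disparities(matches))  # [20.0, 15.0]
```

Only the sparse disparities at the matched points are produced here; the dense low-resolution disparity of step S110 complements them.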

The disparity of a low-resolution stereo image is calculated (step S110).

The disparity of the low-resolution stereo image may be calculated by converting the received stereo image into low-resolution images and computing the optical flow between the resulting low-resolution stereo images.
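The downsampling step and the dense low-resolution disparity search can be sketched as follows. The text does not fix an optical flow algorithm, so this example substitutes a naive per-pixel horizontal search as a stand-in for a real optical flow routine:

```python
import numpy as np

def downsample(img, factor=2):
    """Box-filter average pooling to produce the low-resolution image."""
    h, w = img.shape
    h, w = h - h % factor, w - w % factor
    return img[:h, :w].reshape(h // factor, factor,
                               w // factor, factor).mean(axis=(1, 3))

def lowres_disparity(left, right, max_d=4):
    """Naive dense horizontal search: for each pixel of the left image,
    pick the shift d that minimizes the absolute intensity difference
    against the right image (a crude substitute for optical flow)."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=int)
    for y in range(h):
        for x in range(w):
            errs = [abs(left[y, x] - right[y, x - d]) if x - d >= 0
                    else np.inf for d in range(max_d + 1)]
            disp[y, x] = int(np.argmin(errs))
    return disp
```

A real implementation would run a proper optical flow method on the downsampled pair; the box filter and the exhaustive search above only show the data flow from full-resolution images to a dense low-resolution disparity map.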

The motion magnitude is calculated by computing motion vectors between two consecutive frames of the same view in the stereo image (step S120).

The motion magnitude may be obtained with the optical flow method described above, or simply by computing the difference between consecutive frames.
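The simple inter-frame difference variant mentioned above can be sketched in a few lines; the per-pixel magnitude produced here is later averaged per segment:

```python
import numpy as np

def frame_difference_motion(prev_frame, cur_frame):
    """Per-pixel motion magnitude approximated by the absolute
    difference between two consecutive frames of the same view."""
    return np.abs(cur_frame.astype(float) - prev_frame.astype(float))

prev = np.zeros((2, 2))
cur = np.array([[0.0, 3.0], [0.0, 0.0]])
print(frame_difference_motion(prev, cur)[0, 1])  # 3.0
```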

The stereo image is segmented (step S130).

The input stereo image is divided into segments. As the image segmentation technique, a mean shift method may be used.

Image segmentation may be performed on both the left and right images, or only on the reference image.

The disparity of the low-resolution stereo images obtained from the optical flow and the disparity between the feature points are combined, and outliers are removed (steps S140 and S145).

A sparse disparity is calculated (step S150).

If pixels with sparse disparity values exist, the average disparity for each segment may be calculated by averaging the disparities of these pixels, and the result may be normalized and stored. In other words, the average disparity for each segment is calculated using the segment labels produced by the image segmentation and the computed disparity values (step S160).
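Steps S160 and S165, averaging the sparse disparities inside each segment and normalizing the per-segment means, can be sketched as follows (the array-and-dict data layout is an assumption of this example):

```python
import numpy as np

def segment_mean_disparity(labels, disparity, valid):
    """Average the sparse disparity values inside each segment and
    normalize the per-segment means to [0, 1].

    labels    -- integer segment label per pixel (from segmentation)
    disparity -- disparity map; only entries flagged by `valid` count
    valid     -- boolean mask of pixels with a reliable (sparse) value
    """
    means = {}
    for seg in np.unique(labels):
        mask = (labels == seg) & valid
        means[int(seg)] = float(disparity[mask].mean()) if mask.any() else 0.0
    lo, hi = min(means.values()), max(means.values())
    scale = (hi - lo) or 1.0          # avoid division by zero
    return {seg: (m - lo) / scale for seg, m in means.items()}
```

The normalized per-segment means returned here play the role of the disparity saliency stored in step S165.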

The calculated per-segment disparity may be stored as the disparity importance and used to calculate the overall importance of each segment of the stereo image.

The calculated average disparity for each segment may be normalized and stored (step S165). The normalized average disparity value may be defined and stored as the disparity saliency. Since a region that lies relatively closer to the viewer is more likely to draw attention, a larger disparity is assigned a higher importance. Next, the average motion for each segment is calculated (step S170).

The average motion for each segment may be calculated based on the segment labels obtained in step S130 and the motion magnitude calculated in step S120, and the calculated per-segment average motion may be normalized and stored. The average motion magnitude for each segment may be stored as the motion importance. When the motion or color change is large due to camera movement or object motion, the region is judged likely to draw the viewer's attention, so the motion importance may be allocated in proportion to the motion magnitude. That is, the motion importance may be allocated based on the calculated average motion value.

The importance of each segment is calculated on the basis of the normalized per-segment average disparity and the normalized per-segment average motion (step S180).

The overall importance may be calculated using Equation 1 below.

S = w_d · S_d + w_m · S_m    (1)

Here S_d denotes the disparity importance of a segment and S_m denotes its motion importance. The overall importance value S is obtained by weighting the per-segment disparity importance and motion importance with weights w_d and w_m. The weights can be set to any values. Equation 1 is only one possibility; other methods of combining the disparity importance and the motion importance may be used.
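Assuming the weighted-sum reading of Equation 1 (the exact combination rule is left open by the text), the per-segment overall importance can be computed as:

```python
def overall_importance(s_d, s_m, w_d=0.5, w_m=0.5):
    """Equation 1 (as reconstructed): S = w_d * S_d + w_m * S_m,
    where S_d is the normalized disparity importance of a segment,
    S_m its normalized motion importance, and w_d, w_m free weights."""
    return w_d * s_d + w_m * s_m

# Per-segment normalized importances -> overall importance per segment.
disp_imp = {0: 1.0, 1: 0.25}
motion_imp = {0: 0.0, 1: 1.0}
total = {seg: overall_importance(disp_imp[seg], motion_imp[seg])
         for seg in disp_imp}
print(total)  # {0: 0.5, 1: 0.625}
```

Different weight choices shift the balance between depth-based and motion-based saliency; the text leaves the weights arbitrary.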

FIG. 2 is a flowchart illustrating a method of calculating the importance of each segment according to another embodiment of the present invention.

Referring to FIG. 2, segments are merged by calculating the similarity between adjacent segments (step S200).

Segment merging may be performed by computing the similarity between adjacent segments based on the calculated per-segment importance values. For example, two segments may be combined into one segment if the difference in their average disparity is below a certain threshold or the difference in their average motion magnitude is below a certain threshold.
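A sketch of the threshold-based merge. The adjacency/statistics layout and the union-find grouping are assumptions of this example; the text only specifies the two threshold tests:

```python
def merge_similar_segments(stats, adjacency, disp_thr=0.1, motion_thr=0.1):
    """Merge adjacent segments whose average disparity difference or
    average motion difference falls below a threshold.

    stats     -- {segment_id: (mean_disparity, mean_motion)}
    adjacency -- iterable of (seg_a, seg_b) neighbour pairs
    Returns {segment_id: merged_group_id}, computed with union-find.
    """
    parent = {seg: seg for seg in stats}

    def find(s):                      # follow parents to the group root
        while parent[s] != s:
            s = parent[s]
        return s

    for a, b in adjacency:
        da, ma = stats[a]
        db, mb = stats[b]
        if abs(da - db) <= disp_thr or abs(ma - mb) <= motion_thr:
            parent[find(b)] = find(a)
    return {seg: find(seg) for seg in stats}

stats = {0: (0.50, 0.2), 1: (0.55, 0.9), 2: (0.90, 0.5)}
groups = merge_similar_segments(stats, [(0, 1), (1, 2)])
# Segments 0 and 1 merge (disparity difference 0.05); segment 2 stays apart.
```

After merging, the per-segment importance values are recalculated for the merged groups, as step S210 describes.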

The importance value for each segment is updated for the merged segments (step S210).

For the segments merged in step S200, the per-segment importance values may be calculated again.

The differences in disparity importance and motion importance between frames are calculated (step S220), and the importance of each segment is updated (step S230).

The difference in disparity importance between the previous frame and the current frame is calculated, and if it is below a certain threshold, or if the difference in average motion magnitude between the previous frame and the current frame is below a certain threshold, the importance of the corresponding segment in the current frame is replaced with the importance of the co-located segment in the previous frame.
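This replacement rule amounts to per-segment temporal smoothing. A minimal sketch with dict-keyed segment importances (the data layout is an assumption of this example):

```python
def temporally_smooth(prev_imp, cur_imp, threshold=0.05):
    """Replace the current frame's importance for a segment with the
    co-located previous-frame value when the change is below
    `threshold`, keeping the importance temporally stable.

    prev_imp, cur_imp -- {segment_id: importance} for the two frames.
    """
    out = {}
    for seg, cur in cur_imp.items():
        prev = prev_imp.get(seg)
        if prev is not None and abs(cur - prev) <= threshold:
            out[seg] = prev           # small change: keep previous value
        else:
            out[seg] = cur            # large change: accept new value
    return out

prev = {0: 0.50, 1: 0.20}
cur = {0: 0.52, 1: 0.80}
print(temporally_smooth(prev, cur))  # {0: 0.5, 1: 0.8}
```

Segment 0 changed by only 0.02, so the previous value is kept; segment 1 changed substantially, so the new value is accepted.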

The maximum and minimum segment importance values are updated (step S230).

The maximum and minimum importance values may be updated, normalized, and stored based on the per-segment importance values of the current frame determined in step S220.

FIG. 3 is a flowchart briefly illustrating a method of calculating the importance of each segment according to an embodiment of the present invention.

Referring to FIG. 3, segments are extracted by dividing the input stereo image (step S300).

The input stereo image is divided into a plurality of segments using an image segmentation method.

The disparity of the input stereo image is calculated (step S310).

As described above with reference to FIG. 1, feature points are extracted from the input stereo image to calculate a disparity, and an optical-flow-based low-resolution stereo disparity may also be calculated. Based on these two disparity calculations, the average disparity for each segment can be calculated, normalized, and stored. If pixels with sparse disparity values exist in a segment, the average disparity for the segment may be calculated by averaging the disparities of these pixels, and the calculated per-segment average disparity may be defined as the disparity importance.

The motion magnitude of the input stereo image is calculated (step S320).

The motion magnitude is calculated by computing motion vectors between two frames of the input stereo image. The motion importance of each segment may be allocated based on the calculated motion magnitude.

The overall importance is calculated from the disparity importance and the motion importance of each segment of the image (step S330).

The disparity importance of each segment is calculated based on the segment labels obtained in step S300 and the disparity values of the stereo image calculated in step S310, and the motion importance of each segment is calculated based on the segment labels obtained in step S300 and the motion magnitude of the stereo image calculated in step S320.

The similarity between adjacent segments is calculated to merge segments, and the importance of each segment is updated based on the merge result (step S340).

If the difference in average disparity between adjacent segments is below a certain threshold, or the difference in their average motion magnitude is below a certain threshold, the two segments are merged into one, and the per-segment importance is updated by recalculating the importance of each segment based on the merged result.

The segment importance values of the current frame are adjusted using the segment importance values of the previous frame (step S350).

To maintain the temporal continuity of importance between two consecutive frames, the difference in disparity importance between the previous frame and the current frame is calculated; if that difference is below a certain threshold, or if the difference in motion importance between the previous frame and the current frame is below a certain threshold, the importance of the corresponding segment of the current frame is replaced with the importance of the co-located segment of the previous frame. This is repeated for all segments.

In this way, the temporal continuity of the importance between frames can be maintained.

FIG. 4 is a block diagram illustrating a segment importance calculator according to an embodiment of the present invention.

Referring to FIG. 4, the segment importance calculator may include an importance calculator 400, a first segment updater 420, a second segment updater 440, and a memory 460.

The importance calculator 400 may classify an image into segments and calculate the overall importance value by computing the disparity importance and the motion importance of each segment. The overall importance value may be obtained by multiplying the disparity importance value and the motion importance value by predetermined weights, as in Equation 1 above. The importance calculator may further include a disparity calculator, a segment extractor, a motion calculator, a disparity importance calculator, a motion importance calculator, a memory, and the like.

The first segment updater 420 may calculate the degree of similarity between segments in the current frame. Segments with similar values may be merged based on the calculated per-segment similarity values. A merged segment may be given a new importance value updated through the importance calculator.

The second segment updater 440 may compare the importance values of the frame updated by the first segment updater 420 with the importance values of the previous frame. When the difference between the importance values of the previous frame and the current frame is less than or equal to a predetermined threshold, the importance of the corresponding segment of the current frame may be replaced with the importance of the previous frame. In this way the temporal continuity of importance between frames can be maintained.

The memory 460 may store the importance value of each segment of the frame.

FIG. 5 is a block diagram illustrating an importance calculator according to an embodiment of the present invention.

Referring to FIG. 5, the importance calculator may include a disparity calculator 510, a segment extractor 500, a motion calculator 520, a disparity importance calculator 530, a motion importance calculator 540, an overall importance calculator 550, and a memory 560.

The segment extractor 500 may divide the left and right images, or the reference image, into a plurality of parts using a mean shift technique.

The disparity calculator 510 may calculate disparity values by extracting disparities with a feature point extraction method applied to the stereo image, or by extracting disparities from an optical-flow-based low-resolution image. The disparity calculator includes a feature point extractor, a downsampling unit, an optical flow calculator, a disparity value calculator, an outlier remover, and the like. The disparity calculator 510 may calculate the disparity value of each segment based on the segments extracted by the segment extractor 500.

The motion calculator 520 may calculate the motion magnitude of each segment based on the segments extracted by the segment extractor 500.

The disparity importance calculator 530 may calculate the disparity importance value based on the per-segment average disparity calculated by the disparity calculator 510.

The motion importance calculator 540 may calculate the average motion magnitude of each segment based on the motion magnitude values calculated by the motion calculator 520 and derive the motion importance value.

The overall importance calculator 550 may calculate an overall importance value for each segment based on the disparity importance value calculated by the disparity importance calculator 530 and the motion importance value calculated by the motion importance calculator 540.

The memory 560 may store the average disparity value, the average motion magnitude, and the overall importance value of each segment.

FIG. 6 is a conceptual diagram illustrating a disparity calculator according to an embodiment of the present invention.

Referring to FIG. 6, the disparity calculator may include a feature point extractor 600, a downsampling unit 610, an optical flow calculator 620, a disparity value calculator 630, an outlier remover 640, and a sparse disparity calculator 650.

The feature point extractor 600 may extract feature points by applying a feature point extraction technique such as the scale-invariant feature transform (SIFT) to a given stereo image. The feature points extracted by the feature point extractor 600 may be transmitted to the disparity value calculator 630 to calculate the disparity between them.

The downsampling unit 610 may downsample the input stereo image to produce low-resolution images. The optical flow calculator 620 may calculate the optical flow between the low-resolution stereo images produced by the downsampling unit 610. The optical flow calculated by the optical flow calculator may be transmitted to the disparity value calculator 630 to calculate the disparity.

As described above, the disparity value calculator 630 calculates the disparity between the feature points received from the feature point extractor 600, and also calculates the disparity based on the optical flow received from the optical flow calculator.

The outlier remover 640 may remove outliers in order to produce highly accurate disparities.

The sparse disparity calculator 650 may calculate the average disparity of each segment by averaging the disparities of the pixels within the segment that have sparse disparity values.

It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (15)

Extracting a segment by dividing a stereo image; And
And calculating a variation and a motion size of the stereo image.
The method of claim 1,
Calculating a variation importance based on the segment of the extracted image and the variation of the stereo image; And
And calculating a motion importance based on the segment of the extracted image and the motion size.
The method of claim 2,
And calculating the overall importance of the segment of the image based on the variation importance and the movement importance.
The method of claim 2, wherein the calculating of the importance of the variation is based on the segment of the extracted image and the variation of the stereo image.
Removing the outliers; And
Computing the average variation value for each segment based on the rare variation.
The method of claim 1,
Calculating the similarity between adjacent segments to perform intersegment merging; And
And updating the importance of the segment based on the merging result.
The method of claim 1, wherein the calculating of the variation and the motion size of the stereo image comprises:
Calculating a variation between feature points in the stereo image; and
Converting the stereo image into a low-resolution image and calculating an optical flow of the converted low-resolution stereo image to calculate a variation of the low-resolution stereo image.
The method of claim 6, wherein the calculating of the variation and the motion size of the stereo image further comprises:
Combining the variation between the feature points and the variation of the low-resolution stereo image.
An apparatus for extracting a saliency region, the apparatus comprising:
An importance calculator configured to classify a stereo image into segments and calculate a variation importance and a motion importance of each segment; and
A first segment updater configured to update the segments of a current frame by merging segments based on similarity values between the segments of the current frame of the stereo image.
The apparatus of claim 8, further comprising:
A second segment updater configured to update the importance of a previous frame to the importance of the current frame by comparing the importance value of the current frame updated by the first segment updater with the importance value of the previous frame.
The apparatus of claim 8, wherein the importance calculator comprises:
A segment extractor configured to divide the stereo image into a plurality of segments;
A variation calculator configured to calculate an average variation value for each segment based on the segments divided by the segment extractor; and
A motion calculator configured to calculate a motion size for each segment based on the segments divided by the segment extractor.
The apparatus of claim 10, wherein the importance calculator further comprises:
A variation importance calculator configured to calculate a variation importance based on the average variation value for each segment calculated by the variation calculator; and
A motion importance calculator configured to calculate a motion importance based on the motion size for each segment calculated by the motion calculator.
The apparatus of claim 11, wherein the importance calculator further comprises:
An overall importance calculator configured to calculate an overall importance of the current frame based on the variation importance and the motion importance.
The apparatus of claim 10, wherein the variation calculator comprises:
A feature point extractor configured to extract feature points from the stereo image;
A down-sampling unit configured to down-sample the stereo image to produce a low-resolution image; and
An optical flow calculator configured to calculate an optical flow in the low-resolution image.
The apparatus of claim 13, wherein the variation calculator further comprises:
A variation value calculator configured to calculate a variation between the feature points extracted by the feature point extractor and to calculate a variation in the low-resolution image based on the optical flow calculated by the optical flow calculator.
The apparatus of claim 14, wherein the variation calculator further comprises:
An outlier remover configured to remove abnormal values from the variation values; and
A sparse variation calculator configured to average the sparse variation values among the variation values calculated by the variation value calculator to calculate an average variation value for each segment.
KR1020110134674A 2011-12-14 2011-12-14 Methods of extracting saliency region and apparatuses for using the same KR20130068182A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020110134674A KR20130068182A (en) 2011-12-14 2011-12-14 Methods of extracting saliency region and apparatuses for using the same

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020110134674A KR20130068182A (en) 2011-12-14 2011-12-14 Methods of extracting saliency region and apparatuses for using the same

Publications (1)

Publication Number Publication Date
KR20130068182A true KR20130068182A (en) 2013-06-26

Family

ID=48863862

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020110134674A KR20130068182A (en) 2011-12-14 2011-12-14 Methods of extracting saliency region and apparatuses for using the same

Country Status (1)

Country Link
KR (1) KR20130068182A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106682599A (en) * 2016-12-15 2017-05-17 浙江科技学院 Stereo image visual saliency extraction method based on sparse representation
CN106682599B (en) * 2016-12-15 2020-04-17 浙江科技学院 Sparse representation-based stereo image visual saliency extraction method
CN107423765A (en) * 2017-07-28 2017-12-01 福州大学 Based on sparse coding feedback network from the upper well-marked target detection method in bottom
CN108470176A (en) * 2018-01-24 2018-08-31 浙江科技学院 A kind of notable extracting method of stereo-picture vision indicated based on frequency-domain sparse
CN108470176B (en) * 2018-01-24 2020-06-26 浙江科技学院 Stereo image visual saliency extraction method based on frequency domain sparse representation

Similar Documents

Publication Publication Date Title
JP6439820B2 (en) Object identification method, object identification device, and classifier training method
JP2016095849A (en) Method and device for dividing foreground image, program, and recording medium
KR20200060194A (en) Method of predicting depth values of lines, method of outputting 3d lines and apparatus thereof
JP2016085742A (en) Foreground image diving method and foreground image dividing device
KR101082046B1 (en) Method and apparatus for converting 2D images to 3D images
Yang et al. All-in-focus synthetic aperture imaging
EP2781099B1 (en) Apparatus and method for real-time capable disparity estimation for virtual view rendering suitable for multi-threaded execution
KR101537559B1 (en) Device for detecting object, device for detecting object for vehicle and method thereof
KR101699014B1 (en) Method for detecting object using stereo camera and apparatus thereof
KR101125061B1 (en) A Method For Transforming 2D Video To 3D Video By Using LDI Method
KR20130068182A (en) Methods of extracting saliency region and apparatuses for using the same
Mukherjee et al. A hybrid algorithm for disparity calculation from sparse disparity estimates based on stereo vision
KR20160085708A (en) Method and apparatus for generating superpixels for multi-view images
WO2015198592A1 (en) Information processing device, information processing method, and information processing program
KR101511315B1 (en) Method and system for creating dynamic floating window for stereoscopic contents
JP2014110020A (en) Image processor, image processing method and image processing program
KR102240570B1 (en) Method and apparatus for generating spanning tree,method and apparatus for stereo matching,method and apparatus for up-sampling,and method and apparatus for generating reference pixel
Danciu et al. Improved contours for ToF cameras based on vicinity logic operations
JP6668740B2 (en) Road surface estimation device
JP2016004382A (en) Motion information estimation device
Erofeev et al. Automatic logo removal for semitransparent and animated logos
Nasonov et al. Edge width estimation for defocus map from a single image
KR101505360B1 (en) A disparity search range estimation apparatus and method
KR101278636B1 (en) Object detection method for converting 2-dimensional image to stereoscopic image
Mun et al. Guided image filtering based disparity range control in stereo vision

Legal Events

Date Code Title Description
WITN Withdrawal due to no request for examination