CA2674104A1 - Method and graphical user interface for modifying depth maps - Google Patents
- Publication number
- CA2674104A1
- Authority
- CA
- Canada
- Prior art keywords
- color
- pixel
- depth map
- image
- depth
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Abstract
The invention relates to a method and a graphical user interface for modifying a depth map for a digital monoscopic color image. The method includes interactively selecting a region of the depth map based on the color of a target region in the color image, and modifying depth values in the selected region of the depth map using a depth modification rule. The color-based pixel selection rules for the depth map and the depth modification rule, selected based on one color image from a video sequence, may be saved and applied to automatically modify the depth maps of other color images from the same sequence.
Claims (19)
1. A computer-implemented method for modifying a depth map of a two-dimensional color image for enhancing a 3D image rendered therefrom, comprising:
A) obtaining a first color image and a depth map associated therewith containing depth values for pixels of the first color image;
B) displaying at least one of the first color image and the depth map on a computer display;
C) selecting a depth adjustment region (DAR) in the depth map for modifying depth values therein by performing the steps of:
a) receiving a first user input identifying a first pixel color within a range of colors of a target region in the first color image;
b) upon receiving a second user input defining a pixel selection rule for selecting like-coloured pixels based on the first pixel color, using said pixel selection rule for identifying a plurality of the like-coloured pixels in the first color image;
c) displaying a region visualization image (RVI) representing pixel locations of the plurality of like-coloured pixels;
d) repeating steps (b) and (c) to display a plurality of different region visualization images corresponding to a plurality of different pixel selection rules for selection by a user; and,
e) identifying a region in the depth map corresponding to a user selected region visualization image from the plurality of different region visualization images, and adopting said region in the depth map as the DAR;
D) generating a modified depth map by modifying the depth values in the DAR using a selected depth modification rule.
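Steps (a)-(c) of claim 1 amount to building a binary mask of like-coloured pixels from a user-picked seed colour and per-channel tolerances, and displaying that mask as the region visualization image. A minimal sketch in Python/NumPy, assuming an RGB colour space; the function name and tolerance values are illustrative, not taken from the patent:

```python
import numpy as np

def select_like_coloured(image, seed_rc, tolerances):
    """Return a boolean mask (the region visualization image, RVI) of pixels
    whose colour lies within per-channel tolerances of the colour at
    seed_rc = (row, col), i.e. the user-picked first pixel colour."""
    seed = image[seed_rc].astype(np.int16)           # first pixel colour
    diff = np.abs(image.astype(np.int16) - seed)     # per-channel distance
    return np.all(diff <= np.asarray(tolerances), axis=-1)

# 2x2 RGB image: two reddish pixels, one green, one blue
img = np.array([[[200, 10, 10], [190, 20, 15]],
                [[10, 200, 10], [10, 10, 200]]], dtype=np.uint8)
rvi = select_like_coloured(img, (0, 0), (30, 30, 30))   # select the reds
```

Re-running with different tolerances and redisplaying the mask reproduces the iterate-and-compare loop of steps (b)-(d).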
2. The method of claim 1, further comprising providing one or more GUI tools for displaying the first color image, the region visualization image, and the depth map on the computer screen, and for receiving the first and second user inputs.
3. The method of claim 2, wherein step (b) comprises:
b1) obtaining n color component values for the first pixel color, said n color component values defining a pixel color in a selected color space, wherein n>=2; and,
b2) applying user defined ranges of the n color component values about the values of respective color components obtained in step (b1) to identify the like-coloured pixels;
and, wherein receiving the second user input comprises receiving parameter values defining the user defined ranges of the n color component values.
4. The method of claim 3, wherein the pixel selection rule comprises an instruction to perform a user selected image editing operation, and wherein step (b) further includes:
applying, in response to the second user input, a user-selected image editing operation upon the first color image, wherein the image editing operation includes at least one of:
a colour space conversion of the first color image;
a modification of a colour component histogram;
a modification of the histogram of the pixel intensities; and,
a color correction operation on a color component of the first color image;
5. The method of claim 2, wherein the selected depth modification rule comprises at least one of:
adjusting pixel values of the depth map within the DAR by a same value or in a same proportion;
assigning a same new pixel value to each pixel within the DAR; and,
applying a gradient to pixel values of the depth map within the DAR.
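The three rule families of claim 5 are simple array operations over the DAR mask. A hedged sketch in Python/NumPy; the function names are mine, and 8-bit depth values are an assumption:

```python
import numpy as np

def offset_depth(depth, dar, delta):
    """Shift every depth value inside the DAR by the same amount."""
    out = depth.astype(np.int16)
    out[dar] += delta
    return np.clip(out, 0, 255).astype(np.uint8)

def assign_depth(depth, dar, value):
    """Give every pixel inside the DAR the same new depth value."""
    out = depth.copy()
    out[dar] = value
    return out

def gradient_depth(depth, dar, top, bottom):
    """Apply a vertical gradient across the DAR, from `top` at its first
    row to `bottom` at its last row."""
    out = depth.astype(np.float64)
    rows = np.nonzero(dar.any(axis=1))[0]            # rows the DAR touches
    for r, v in zip(rows, np.linspace(top, bottom, len(rows))):
        out[r, dar[r]] = v
    return out.round().astype(np.uint8)

depth = np.full((4, 4), 100, dtype=np.uint8)
dar = np.zeros((4, 4), dtype=bool)
dar[1:3, 1:3] = True                                  # a 2x2 DAR
nearer = offset_depth(depth, dar, 40)                 # push the region nearer
```

Under the usual grey-scale depth-map convention (brighter = nearer), the offset rule moves the whole region forward or back, while the gradient rule tilts it in depth.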
6. The method of claim 2, wherein step (D) comprises:
D1) applying at least two different candidate depth modification rules to modify the depth map at depth map locations defined by the selected region visualization image to obtain at least two different candidate depth maps;
D2) displaying at least one of: the at least two different candidate depth maps on a computer display, or two different 3D images rendered therewith; and,
D3) utilizing a user selected candidate depth map as the modified depth map for rendering the enhanced 3D image therewith, and adopting one of the at least two different candidate depth modification rules corresponding to the user selected candidate depth map as the selected depth modification rule.
7. The method of claim 6, further comprising providing a GUI tool for displaying a candidate depth map.
8. The method of claim 2, wherein step (b) further comprises excluding pixels of a second region in the first color image from the plurality of like-coloured pixels.
9. The method of claim 2, further comprising defining a third region in the first color image encompassing the target region, wherein pixel locations of the like-coloured pixels in step (b) are determined while excluding pixels outside of the third region.
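Claims 8 and 9 restrict the like-coloured selection with additional masks: subtracting an exclusion (second) region and intersecting with a bounding (third) region. This is plain boolean-mask algebra; a brief illustrative sketch with assumed names:

```python
import numpy as np

def restrict_selection(like_coloured, exclude=None, within=None):
    """Apply the restrictions of claims 8 and 9 to a like-coloured mask:
    drop pixels of an exclusion region, keep only pixels of a bounding region."""
    out = like_coloured.copy()
    if exclude is not None:
        out &= ~exclude                   # claim 8: remove the second region
    if within is not None:
        out &= within                     # claim 9: clip to the third region
    return out

sel = np.array([[True, True], [True, False]])
excl = np.array([[False, True], [False, False]])
out = restrict_selection(sel, exclude=excl)
```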
10. The method of claim 2, wherein the first color image corresponds to one frame in a video sequence of frames representing a scene, and the method further comprises:
saving one of the plurality of the different color selection rules as selected by the user, and the selected depth modification rule obtained based on the first image, in computer readable memory; and,
applying the saved color selection and depth modification rules to modify depth values of like-coloured pixels of other frames in the video sequence.
11. The method of claim 1, wherein step (b) comprises displaying the depth map in the form of a grey-scale image having pixel intensity values representing the depth values of respective pixels of the first colour image.
12. The method of claim 1, wherein step (a) comprises receiving the first user input identifying a user selected pixel within the target region in the first color image, identifying a pixel color of the user selected pixel, and adopting said pixel color as the first color.
13. A method for modifying depth maps for 2D color images for enhancing 3D images rendered therewith, comprising:
a) selecting a first color image from a video sequence of color images and obtaining a depth map associated therewith, wherein said video sequence includes at least a second color image corresponding to a different frame from a same scene and having a different depth map associated therewith;
b) selecting a first pixel color in the first color image within a target region;
c) determining pixel locations of like-coloured pixels of the first color image using one or more color selection rules, the like-coloured pixels having a pixel color the same as the first pixel color or in a specified color tolerance range thereabout; and,
d) applying a selected depth modification rule to modify the depth map of the first color image at depth map locations corresponding to the pixel locations of the like-coloured pixels to obtain a modified depth map of the first color image;
e) applying the one or more color selection rules and the selected depth modification rule to identify like-coloured pixels in the second color image of the video sequence and to modify the depth map of the second color image at depth map locations corresponding to the pixel locations of the like-coloured pixels in the second color image to obtain a modified depth map of the second color image; and,
f) outputting the first and second color images and the modified depth maps associated therewith for rendering an enhanced video sequence of 3D images;
and, wherein the one or more color selection rules and the selected depth modification rule are obtained based on the first color image.
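The propagation in steps (e)-(f) can be sketched by freezing the rule learned on the first frame and reapplying it to every frame's depth map. An illustrative Python/NumPy sketch; the dict-based "saved rule" and the constant-depth modification are assumptions for the example, not the only rules the claim covers:

```python
import numpy as np

def apply_saved_rules(frames, depth_maps, rule):
    """Re-apply a colour-selection + depth-modification rule, chosen on the
    first frame, to the depth map of every frame in the sequence."""
    seed = np.asarray(rule["seed_colour"], dtype=np.int16)
    tol = np.asarray(rule["tolerances"])
    modified = []
    for frame, dm in zip(frames, depth_maps):
        # colour selection rule: per-channel tolerance about the seed colour
        mask = np.all(np.abs(frame.astype(np.int16) - seed) <= tol, axis=-1)
        out = dm.copy()
        out[mask] = rule["new_depth"]                # depth modification rule
        modified.append(out)
    return modified

rule = {"seed_colour": (200, 10, 10), "tolerances": (30, 30, 30),
        "new_depth": 220}
f1 = np.array([[[200, 10, 10], [10, 10, 200]]], dtype=np.uint8)  # frame 1
f2 = np.array([[[10, 10, 200], [195, 15, 12]]], dtype=np.uint8)  # object moved
d1 = np.full((1, 2), 100, dtype=np.uint8)
d2 = np.full((1, 2), 100, dtype=np.uint8)
m1, m2 = apply_saved_rules([f1, f2], [d1, d2], rule)
```

Because selection is by colour rather than by position, the rule follows the target object as it moves between frames, which is what lets one interactive session modify a whole sequence.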
14. The method of claim 13, wherein the first region in the depth map corresponds to a target object whose depth in the scene depicted in the video sequence is to be modified.
15. The method of claim 14, wherein the one or more color selection rules are adaptively defined based on the first color image using the steps of:
c1) determining n color component values for the first pixel color, said n color component values defining the pixel color in a selected color space, wherein n>=2;
c2) for each of the n color components, selecting a range of color component values about the value of said color component of the first pixel color;
c3) displaying a region visualization image indicating pixel locations of the like-coloured pixels for comparison with the target object in the first color image or in the depth map thereof;
c4) repeating steps (c2) and (c3) until at least a portion of the region visualization image is substantially congruent with the object.
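One way to make the "substantially congruent" stopping test of step (c4) concrete is an intersection-over-union score between the RVI and a mask of the target object. This metric is my assumption, not part of the claim:

```python
import numpy as np

def iou(rvi, target):
    """Intersection-over-union of the RVI with the target-object mask;
    values near 1.0 mean the selected region matches the object."""
    union = np.logical_or(rvi, target).sum()
    if union == 0:
        return 1.0               # both masks empty: trivially congruent
    return np.logical_and(rvi, target).sum() / union

a = np.array([[True, True], [False, False]])
b = np.array([[True, True], [True, False]])
score = iou(a, b)                # 2 pixels overlap, 3 in the union
```

In the interactive loop of claim 15, the user plays the role of this metric by eye; an automated variant could iterate tolerances until the score exceeds a chosen threshold.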
16. The method of claim 1, further comprising:
displaying a plurality of grey scale images representing individual color components of the first color image rendered in a plurality of different color spaces for selection by the user;
generating the depth map based on one or more of the grey scale images selected by the user.
17. The method of claim 1, further comprising generating the depth map based on a grey scale image representing one or more color components of the first color image using the steps of:
displaying on a computer display a plurality of grey scale images representing individual color components of the first color image rendered in a plurality of different color spaces;
selecting one of the grey scale images for generating the depth map therefrom.
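The depth-map bootstrap of claims 16 and 17 can be sketched as: split the image into per-component grey-scale images, let the user pick one, and adopt it as the initial depth map. Names are illustrative, and only RGB is shown, though the claims cover components from multiple colour spaces:

```python
import numpy as np

def component_images(image):
    """One grey-scale image per RGB colour component, for display and user
    selection. (Components of other colour spaces, e.g. YCbCr or HSV,
    could be added the same way.)"""
    return {c: image[..., i].copy() for i, c in enumerate("RGB")}

def depth_from_component(image, component):
    """Adopt the user-selected component image as the initial depth map."""
    return component_images(image)[component]

img = np.array([[[200, 10, 10], [10, 200, 10]]], dtype=np.uint8)
depth0 = depth_from_component(img, "R")   # user picked the red channel
```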
18. The method of claim 1, wherein the first color image corresponds to one frame in a video sequence of frames representing a scene, and wherein the method further comprises:
saving one of the plurality of the different pixel selection rules as selected by the user in computer readable memory; and,
applying the saved pixel selection rule to identify like-coloured pixels in specified regions of color images corresponding to other frames from the sequence.
19. The method of claim 18, further comprising specifying a rule for defining the specified regions within each frame relative to a position of the identified like-coloured pixels within said frame.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12986908P | 2008-07-25 | 2008-07-25 | |
US61/129,869 | 2008-07-25 |
Publications (2)
Publication Number | Publication Date |
---|---|
CA2674104A1 (en) | 2010-01-25 |
CA2674104C CA2674104C (en) | 2012-03-13 |
Family
ID=41611029
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CA 2674104 Expired - Fee Related CA2674104C (en) | 2008-07-25 | 2009-07-24 | Method and graphical user interface for modifying depth maps |
Country Status (1)
Country | Link |
---|---|
CA (1) | CA2674104C (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103198486A (en) * | 2013-04-10 | 2013-07-10 | 浙江大学 | Depth image enhancement method based on anisotropic diffusion |
CN103198486B (en) * | 2013-04-10 | 2015-09-09 | 浙江大学 | A kind of depth image enhancement method based on anisotropy parameter |
CN111033569A (en) * | 2017-08-18 | 2020-04-17 | 三星电子株式会社 | Apparatus for editing image using depth map and method thereof |
CN111033569B (en) * | 2017-08-18 | 2024-02-13 | 三星电子株式会社 | Apparatus for editing image using depth map and method thereof |
CN115348435A (en) * | 2019-07-26 | 2022-11-15 | 谷歌有限责任公司 | Geometric fusion of multiple image-based depth images using ray casting |
CN115348435B (en) * | 2019-07-26 | 2023-11-24 | 谷歌有限责任公司 | Geometric fusion of multiple image-based depth images using ray casting |
CN113591640A (en) * | 2021-07-20 | 2021-11-02 | 湖南三一华源机械有限公司 | Road guardrail detection method and device and vehicle |
CN113591640B (en) * | 2021-07-20 | 2023-11-17 | 湖南三一华源机械有限公司 | Road guardrail detection method and device and vehicle |
Also Published As
Publication number | Publication date |
---|---|
CA2674104C (en) | 2012-03-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8213711B2 (en) | Method and graphical user interface for modifying depth maps | |
US9286665B2 (en) | Method for dynamic range editing | |
CN102761766B (en) | Method for depth map generation | |
KR101427607B1 (en) | Multi-primary conversion | |
CN105138317B (en) | Window display processing method and device for terminal device | |
CN101360250B (en) | Immersion method and system, factor dominating method, content analysis method and parameter prediction method | |
US8477135B2 (en) | Method and apparatus for volume rendering using depth weighted colorization | |
JP5037311B2 (en) | Color reproduction system and method | |
CN105185352B (en) | The edge method of modifying and edge decorating device of image | |
KR101377733B1 (en) | Up-scaling | |
CN105574918A (en) | Material adding method and apparatus of 3D model, and terminal | |
JP6222939B2 (en) | Unevenness correction apparatus and control method thereof | |
EP3262630B1 (en) | Steady color presentation manager | |
CA2674104A1 (en) | Method and graphical user interface for modifying depth maps | |
JP5862635B2 (en) | Image processing apparatus, three-dimensional data generation method, and program | |
KR101279576B1 (en) | Method for generating panorama image within digital image processing apparatus | |
KR101958263B1 (en) | The control method for VR contents and UI templates | |
TWI356394B (en) | Image data generating device, image data generatin | |
JP4359662B2 (en) | Color image exposure compensation method | |
CN103514593B (en) | Image processing method and device | |
US9721328B2 (en) | Method to enhance contrast with reduced visual artifacts | |
KR101893793B1 (en) | Methdo and apparatus for photorealistic enhancing of computer graphic image | |
JP5050141B2 (en) | Color image exposure evaluation method | |
WO2012096065A1 (en) | Parallax image display device and parallax image display method | |
JP4696669B2 (en) | Image adjustment method and image adjustment apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
EEER | Examination request | ||
MKLA | Lapsed | Effective date: 20160725 |