WO2013054462A1 - User interface control device, user interface control method, computer program, and integrated circuit
- Publication number: WO2013054462A1 (application PCT/JP2012/005109)
- Authority: WIPO (PCT)
- Prior art keywords: depth, graphic, user interface, image, subject
Classifications
- G06F3/013: Eye tracking input arrangements
- G06T19/00: Manipulating 3D models or images for computer graphics
- H04N13/128: Adjusting depth or disparity
- H04N13/156: Mixing image signals
- H04N13/183: On-screen display [OSD] information, e.g. subtitles or menus
- H04N13/275: Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
- H04N13/356: Image reproducers having separate monoscopic and stereoscopic modes
- H04N13/398: Synchronisation or control of image reproducers
- H04N2013/0081: Depth or disparity estimation from stereoscopic image signals
- H04N2013/0092: Image segmentation from stereoscopic image signals
Description
- The present invention relates to user interface technology, and more particularly to the display of a user interface used when processing a stereoscopic image.
- Patent Document 1 discloses a technique for arranging and compositing various graphics, such as balloons and text, on a stereoscopic image captured by a camera or the like. Specifically, in the technique of Patent Document 1, the relative size and the front-rear relationship of a plurality of graphics are determined according to the depth information (depth map) of the stereoscopic image at the positions where the graphics are arranged, giving a three-dimensional visual effect.
- However, Patent Document 1 does not provide a means for the user to specify the depth information of a graphic. Consequently, the user cannot determine the depth at which a graphic is arranged when retouching a photograph, and the processing the user desires cannot be realized.
- Moreover, the retouching operation is not necessarily performed while the stereoscopic image is displayed stereoscopically; it may be performed while one of the viewpoint images constituting the stereoscopic image is displayed in planar view.
- A means is therefore required by which the depth information of the graphic to be composited can be easily specified even when the viewpoint image is displayed in planar view.
- In view of the above, the present invention provides a user interface control device, a user interface control method, a computer program for controlling a user interface, and an integrated circuit that realize a GUI on which the depth of a graphic can be easily set when the graphic is combined with a stereoscopic image.
- To solve the above problem, a user interface control device according to the present invention provides a user interface for setting the depth in the depth direction at which a graphic is arranged when the graphic is combined with a stereoscopic image.
- The user interface control device comprises: plane position specifying means for specifying, when a graphic is arranged on one viewpoint image constituting a stereoscopic image, the range occupied by the graphic; viewpoint image depth acquisition means for acquiring the depth of the subject shown in the one viewpoint image within the specified range; and presentation means for presenting an option indicating the acquired depth together with options indicating other depths, different from the acquired depth, at which the graphic can be arranged.
- With this configuration, the user interface control device presents the user with options indicating the depths at which the graphic can be arranged, derived from the distribution of depth information at the position where the graphic is placed, so the graphic depth can be set easily.
- Furthermore, since the depth can be set by selecting an option, the depth information of the graphic to be composited can be set easily regardless of whether the display mode during image processing is stereoscopic or planar.
- A diagram showing the hardware configuration of a smartphone that functions as the user interface control device according to the present embodiment.
- A diagram showing the stereoscopic image to be processed in the present embodiment.
- Display examples on the display 200, showing the graphic drawing range and center coordinates: (a) an example in which the depth of a graphic part is set in front of two subjects; (b) an example in which the depth of a graphic part is set to the same depth as the nearer subject; (c) an example in which the depth of a graphic part is set to an intermediate depth between the two subjects.
- A diagram showing the depth distribution of the viewpoint image at the arrangement position of the graphic part, and a display example of the depth setting pop-up menu including multiple options.
- A diagram explaining the depth adjustment of a graphic part after the depth setting menu is selected, and a flowchart showing the flow of the depth information generation process in the depth information calculation unit 103.
- A diagram showing the flow of the depth setting pop-up menu display process, a flowchart showing the flow of the process of extracting candidate arrangement depths for a graphic part, and a flowchart showing the flow of the graphic part depth adjustment process after the depth setting menu is selected.
- A diagram showing the pixel shift used to calculate the parallax from the determined depth information, a flowchart showing the flow of the graphic part composition process, and a display example of the depth setting pop-up menu including two options.
- (a) A diagram showing an arrangement; (b) a diagram showing a viewpoint image.
- (a) An example screen that color-codes and displays the subject regions extracted from the image based on the depth distribution; (b) an example screen that displays the extracted subject regions with numbers; and a flowchart showing the flow of the stereoscopic image processing in the user interface control device 300.
- An object of the present invention is to provide a user interface control device, a user interface control method, a computer program for controlling a user interface, and an integrated circuit that provide a GUI on which the depth at which a graphic is displayed after compositing can be easily set when the graphic is combined with a stereoscopic image.
- To achieve this object, a user interface control apparatus according to the present invention provides a user interface for setting the depth in the depth direction at which a graphic is arranged when the graphic is combined with a stereoscopic image.
- The apparatus comprises: plane position specifying means for specifying the range occupied by the graphic; a viewpoint image depth acquisition unit that acquires the depth of the subject shown in the one viewpoint image within the specified range; and a presentation unit that presents an option indicating the acquired depth together with options indicating other depths, different from the acquired depth, at which the graphic can be arranged.
- With this configuration, since the user is presented with options indicating the depths at which the graphic can be arranged, based on the distribution of depth information at the position where the graphic is placed, the user can easily set the graphic depth by selecting one of the options.
- Furthermore, since the depth can be set by selecting an option, the depth information of the graphic to be composited can be set easily regardless of whether the display mode during image processing is stereoscopic or planar.
- According to a second aspect of the present invention, in the first aspect, the option indicating the acquired depth may indicate the depth of the subject located closest to the viewer within the range occupied by the graphic, and the option indicating another depth may indicate a depth closer to the viewer than the depth of that foremost subject.
- With this configuration, the user can easily choose between compositing the graphic so that it appears pasted onto the subject and compositing it so that it is arranged in front of the subject.
- According to a third aspect of the present invention, in the second aspect, the presentation means may be further configured to present an option indicating an intermediate depth between the depths of two subjects.
- With this configuration, the user can easily select a composition in which the graphic is arranged between two subjects having different depths.
- According to a fourth aspect of the present invention, the apparatus may further comprise: a receiving unit that receives a selection of any of the plurality of options; expansion/contraction display means that, after the selection of an option is received and until a determination instruction is received from the user, repeatedly displays the graphic while enlarging and reducing it; and depth determining means that determines the graphic placement depth as a depth nearer than the depth indicated by the selected option if the graphic is displayed enlarged at the time the determination instruction is received, and as a depth farther than the depth indicated by the selected option if the graphic is displayed reduced at that time.
- With the configuration of the fourth aspect, the graphic arrangement depth can be adjusted not only to the depths indicated by the options but also further toward the front or the back of the depth indicated by each option.
- Since the precision of the graphic placement depth can be increased, the user's convenience is improved.
- During this adjustment the graphic is displayed while its size changes, but stereoscopic display is not required; therefore the graphic depth can be set easily even in an environment where the image cannot be displayed stereoscopically.
- According to a fifth aspect of the present invention, in the fourth aspect, when an option indicating an intermediate depth between two subjects is selected, the depth determining means may associate the depth of the nearer of the two subjects with the most enlarged size of the graphic displayed by the expansion/contraction display means, associate the depth of the farther subject with the most reduced size, and determine the depth corresponding to the size of the graphic at the time the determination instruction is received.
- With this configuration, the depth range between the two subjects can be expressed in terms of the graphic display size, so the user can intuitively set the graphic arrangement depth between the two subjects.
- According to a sixth aspect of the present invention, in the fourth aspect, when some subject exists behind the depth indicated by the selected option within the range occupied by the graphic, the depth determining means may associate the depth of that subject with the most reduced size of the graphic displayed by the expansion/contraction display means, and determine the depth corresponding to the size of the graphic at the time the determination instruction is received.
- With this configuration, the depth range from the depth indicated by the selected option to the subject behind it can be expressed in terms of the graphic display size, so the user can intuitively set the graphic arrangement depth within that range.
- According to a seventh aspect of the present invention, in the fourth aspect, when some subject exists in front of the depth indicated by the selected option within the range occupied by the graphic, the depth determining means may associate the depth of that subject with the most enlarged size of the graphic displayed by the expansion/contraction display means; when no such subject exists, it may associate a predetermined depth nearer than the depth indicated by the selected option with the most enlarged size. In either case, the depth corresponding to the size of the graphic at the time the determination instruction is received is determined.
- With this configuration, the depth range can likewise be expressed in terms of the graphic display size.
- According to an eighth aspect of the present invention, in the fourth aspect, the apparatus may further comprise: shift amount acquisition means that calculates the parallax for producing a stereoscopic effect at the determined depth and acquires a shift amount by converting the parallax into a number of pixels; and image composition means that combines the graphic with the range specified by the plane position specifying means in the one viewpoint image, and combines the graphic with the corresponding range, shifted by the shift amount, in the other viewpoint image constituting the stereoscopic image.
- According to a ninth aspect of the present invention, in the first aspect, the viewpoint image depth acquisition unit may acquire the depth of the subject by stereo matching between the one viewpoint image and the other viewpoint image constituting the stereoscopic image.
- With this configuration, even a stereoscopic image for which depth information such as a depth map has not been prepared in advance can be processed.
- According to a tenth aspect of the present invention, in the first aspect, the apparatus may further comprise: area dividing means for dividing the one viewpoint image into a plurality of areas whose depths in stereoscopic display differ from each other by more than a threshold; area presenting means for presenting the plurality of areas; and area receiving means for receiving a selection of any of the presented areas, wherein the plane position specifying means specifies the range occupied by the graphic so that it includes at least a part of the selected area.
- With this configuration, by presenting the user with the areas divided by depth in the viewpoint image displayed in planar view, the user can easily specify the planar position at which the graphic is arranged.
- According to an eleventh aspect of the present invention, in the tenth aspect, the area presenting means may present the plurality of divided areas by displaying adjacent areas in different colors.
- According to a twelfth aspect of the present invention, in the tenth aspect, the area presenting means may present the plurality of divided areas by attaching different text labels to the areas and displaying them.
- According to a thirteenth aspect of the present invention, in the tenth aspect, in the division of the one viewpoint image by the area dividing means, the boundary of each area may be specified by extracting edges, or intersections of edges, at which the luminance changes sharply between pixels of the one viewpoint image, and the depth of each pixel obtained by stereo matching between the one viewpoint image and the other viewpoint image constituting the stereoscopic image may be used as the depth for stereoscopic display.
- In a viewpoint image captured so that a plurality of subjects partially overlap, an edge occurs at the boundary between the subjects.
- FIG. 1 is a diagram illustrating a hardware configuration of a smartphone having a function as a user interface control device according to the present embodiment.
- The smartphone shown in FIG. 1 includes a camera 10, a speaker 20, a GPS 30, a sensor 40, a touch panel 50, a microphone 60, a recording medium 70, a processing unit 100, and a display 200.
- The camera 10 is a stereo camera that captures a stereoscopic image composed of two viewpoint images.
- The captured stereoscopic image is recorded on the recording medium 70.
- The recording medium 70 is a readable and writable nonvolatile recording medium built into the smartphone, realized by a hard disk, semiconductor memory, or the like.
- The processing unit 100 includes, for example, memory such as RAM and a processor such as a CPU; by executing on the CPU the programs recorded on the recording medium 70, it controls functions such as calling and photographing as well as the processing of stereoscopic images.
- The function as the user interface control device according to the present embodiment is likewise realized by the processing unit 100 executing a program recorded on the recording medium 70.
- FIG. 2 is a diagram showing a configuration of the user interface control device according to the present embodiment.
- This user interface control device provides a GUI that supports the user's retouching operations on a stereoscopic image composed of two viewpoint images, and is used by being incorporated into various electronic devices.
- Devices incorporating the user interface control device include, in addition to smartphones, general-purpose computers such as PCs (Personal Computers), communication terminals such as PDAs (Personal Digital Assistants), tablets, and mobile phones.
- The user interface control device includes an operation input receiving unit 101, a control unit 102, a depth information calculation unit 103, a graphic information acquisition unit 105, a depth information analysis unit 106, a depth setting presentation unit 107, a stereo image generation unit 108, and an output unit 109.
- Each of these units is realized by hardware resources (the cooperation of the CPU and a program on the RAM): the program corresponding to each unit is loaded into the RAM of the processing unit 100 and executed by the CPU of the processing unit 100.
- The depth information storage unit 104 is realized by a part of the recording area of the recording medium 70.
- The operation input receiving unit 101 has a function of receiving user operation input via a pointing device such as a touch panel or a mouse.
- User operations accepted in this embodiment include a drag operation for placing a graphic on the photograph to be retouched and a click operation for selecting the item or state pointed to by the pointing device. When a plurality of options are displayed on the screen, the operation input receiving unit 101 functions as receiving means by accepting the click operation that selects one of the options.
- On the display 200, the left-eye image 1, which is one viewpoint image constituting the stereoscopic image; a graphic part display unit 2 that displays template images of graphic parts 2a to 2d corresponding to the various graphics to be combined with the stereoscopic image; a pointer 3 indicating the pointing position of the pointing device; and the like are displayed.
- The operation of placing a graphic part on the photograph is realized by dragging any of the graphic parts 2a to 2d displayed on the graphic part display unit 2 and dropping it at an arbitrary position in the left-eye image 1.
- The control unit 102 has a function of controlling the processing of the present embodiment according to the input received by the operation input receiving unit 101.
- The depth information calculation unit 103 realizes part of the function of the viewpoint image depth acquisition means by generating, for each pixel of the left-eye image, depth information (a depth map) indicating the position of the subject in the depth direction from the stereo image. Specifically, a corresponding point search is first performed for each pixel between the left-eye image and the right-eye image constituting the stereo image. The distance of the subject in the depth direction is then calculated from the positional relationship between the corresponding points of the left-eye and right-eye images, based on the principle of triangulation.
- The depth information is a grayscale image in which the depth of each pixel is represented by 8-bit luminance; the depth information calculation unit 103 converts the calculated distance of the subject in the depth direction into one of 256 gradation values from 0 to 255.
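The triangulation and 8-bit quantization described above can be sketched as follows. This is a minimal sketch, not the patent's implementation: the focal length, baseline, near/far clipping planes, and the convention that 255 means nearest are all illustrative assumptions.

```python
# Sketch of depth-map generation: triangulate depth from per-pixel
# disparity, then quantize to the 8-bit grayscale depth map described
# in the text. Camera parameters here are illustrative assumptions.
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """Triangulation: depth Z = f * B / d for each pixel's disparity d."""
    d = np.maximum(disparity_px, 1e-6)        # avoid division by zero
    return focal_px * baseline_mm / d          # depth in millimetres

def quantize_depth(depth_mm, near_mm, far_mm):
    """Map depths to an 8-bit depth map (assumed: 255 = nearest, 0 = farthest)."""
    z = np.clip(depth_mm, near_mm, far_mm)
    t = (far_mm - z) / (far_mm - near_mm)      # 1.0 at the near plane
    return np.round(t * 255).astype(np.uint8)
```

For example, a pixel with disparity 10 px, focal length 1000 px, and baseline 50 mm triangulates to 5000 mm, which lands at gradation 0 when the far plane is 5000 mm.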
- For the corresponding point search, either method may be used: area-based matching, in which a small region is set around the point of interest and matching is performed based on the shading pattern of the pixel values in that region, or feature-based matching, in which features such as edges are extracted from the images and correspondences are established between the features.
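The area-based matching mentioned above can be sketched with a sum-of-absolute-differences (SAD) search; the window size, search range, and function name are illustrative assumptions, not details from the patent.

```python
# Minimal area-based (block) matching sketch: for a pixel in the left
# image, slide a window leftward across the right image and return the
# disparity with the lowest SAD cost over the window.
import numpy as np

def match_disparity(left, right, y, x, window=3, max_disp=32):
    """Return the disparity whose right-image block best matches (y, x)."""
    h = window // 2
    ref = left[y - h:y + h + 1, x - h:x + h + 1].astype(np.int32)
    best_d, best_cost = 0, None
    for d in range(max_disp):
        if x - d - h < 0:                      # window would leave the image
            break
        cand = right[y - h:y + h + 1, x - d - h:x - d + h + 1].astype(np.int32)
        cost = np.abs(ref - cand).sum()        # SAD over the window
        if best_cost is None or cost < best_cost:
            best_d, best_cost = d, cost
    return best_d
```

Running this for every pixel of the left-eye image yields the per-pixel disparities from which the depth map is triangulated.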
- A stereo image is a pair of images obtained by capturing the same scene from different viewpoints; in the first embodiment, image data of a stereoscopic image captured by the camera 10 and recorded on the recording medium 70 is used.
- The stereo image is not limited to a photographed image and may be CG (Computer Graphics) created assuming different virtual viewpoints.
- In the following, image processing is described for a stereoscopic image obtained by photographing, with a stereo camera, a scene in which one person stands in front of a bus.
- The depth information storage unit 104 is realized by a part of the recording area of the recording medium 70 and has a function of storing the depth information calculated by the depth information calculation unit 103 in that recording area.
- The graphic information acquisition unit 105 has a function as plane position specifying means that acquires the coordinates of the area occupied by the graphic the user has arranged on the left-eye image.
- Specifically, in the XY coordinate system of the left-eye image whose origin is the upper-left corner of the image, the pointer position at the moment the graphic is dropped is acquired as the center coordinates (xg, yg) at which the graphic part is placed, and the upper-left corner coordinates (x1, y1) and lower-right corner coordinates (x2, y2) of the rectangular frame surrounding the graphic part are calculated as the arrangement range occupied by the graphic part in that coordinate system.
- The graphic information acquisition unit 105 holds, for each graphic part, the values of the upper-left and lower-right corner coordinates relative to the center coordinates; the upper-left corner coordinates (x1, y1) and lower-right corner coordinates (x2, y2) of the arrangement range can therefore be calculated easily from the coordinates of the pointer position and these relative values.
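The arrangement-range calculation above amounts to adding per-part corner offsets to the drop point; the function name and the offset representation are illustrative assumptions.

```python
# Sketch of the arrangement-range calculation: the drop point gives the
# center (xg, yg), and each graphic part stores its corner coordinates
# relative to that center, so the occupied rectangle is two additions.
def placement_range(center, rel_tl, rel_br):
    """Return (x1, y1, x2, y2) for a part dropped at `center`, given its
    top-left and bottom-right corner offsets relative to the center."""
    xg, yg = center
    x1, y1 = xg + rel_tl[0], yg + rel_tl[1]
    x2, y2 = xg + rel_br[0], yg + rel_br[1]
    return x1, y1, x2, y2
```

For a 32x24 part dropped at (100, 80), the offsets (-16, -12) and (16, 12) give the rectangle (84, 68) to (116, 92).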
- The depth information analysis unit 106 has a function of obtaining the depth information of the left-eye image within the arrangement range of the graphic part, and a function of instructing the depth setting presentation unit 107 to present the user with options indicating the depths, expressed as positions relative to the subjects, at which the graphic part can be arranged.
- Specifically, the depth information analysis unit 106 reads the depth information of the left-eye image within the graphic part arrangement range calculated by the graphic information acquisition unit 105 from the recording medium 70 via the depth information storage unit 104; by analyzing the depth information it detects the subjects present in the arrangement range, thereby functioning as viewpoint image depth acquisition means, and further determines the relative positions at which the graphic part can be arranged with respect to the detected subjects.
- In the present embodiment, the depth distribution of the left-eye image is analyzed over the pixels within the arrangement range of the graphic part that run horizontally through the graphic part center coordinates (xg, yg) (hereinafter, the horizontal pixel group).
- The depth information analysis unit 106 sets, as depth positions at which the graphic part can be placed: a depth in front of the shallowest depth found between (x1, yg) and (x2, yg); the same depth as that shallowest depth; and the average depth of any two adjacent pixels whose depths differ by more than the threshold Th. It then instructs the depth setting presentation unit 107 to present an option corresponding to each.
- In this example, the positions in the depth direction at which the graphic part can be arranged are roughly: a depth 4a in front of the person, who is the foremost subject, as shown in FIG. 6(a); a depth 4b equal to that of the person, as shown in FIG. 6(b); and a depth 4c intermediate between the person and the bus, as shown in FIG. 6(c).
- That is, since the depth changes by more than the threshold Th at the boundary between the person and the bus in the left-eye image, the depth information analysis unit 106 determines that two subjects exist in the graphic part arrangement range, and determines three depths at which the graphic part can be arranged: the depth 4a shown in FIG. 6(a), which is in front of the depth at x1 on the near side; the depth 4b shown in FIG. 6(b), which is the depth at x1 on the near side; and the depth 4c shown in FIG. 6(c), which is the average of the depth at x1 on the near side and the depth at x2 on the far side.
- For each pair of subjects that are adjacent in depth, the depth information analysis unit 106 sets the intermediate depth between the two subjects as a depth at which the graphic part can be placed, and instructs the depth setting presentation unit 107 to add a corresponding option.
- Conversely, when the graphic part is arranged so that the entire part overlaps the range in which a single subject appears in the left-eye image displayed on the display 200, placing the graphic part behind that subject would leave it entirely hidden and thus meaningless; the relative positions at which the graphic part can be placed are therefore two: the same depth as the subject and a depth in front of the subject. In such a case, the depth information analysis unit 106 instructs the depth setting presentation unit 107 to present options corresponding to these two depths.
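The analysis described above (scan the horizontal pixel group, treat a depth jump larger than Th as a subject boundary, and derive the placement options) can be sketched as follows. The 8-bit convention that larger values are nearer, the threshold value, and the fixed "front" offset are assumptions for illustration.

```python
# Sketch of the depth-distribution analysis: given the 8-bit depths along
# the horizontal pixel group from (x1, yg) to (x2, yg), build the options
# "front" (ahead of the foremost subject), "paste" (on it), and one
# "between" option per detected subject boundary.
def placement_options(row_depths, th=32, front_offset=16):
    """row_depths: 8-bit depths (assumed 255 = nearest) along the row."""
    nearest = max(row_depths)                  # foremost subject depth
    options = [("front", min(nearest + front_offset, 255)),
               ("paste", nearest)]
    # A boundary is a pair of adjacent pixels whose depths differ by
    # more than Th; the option sits at their average depth.
    for a, b in zip(row_depths, row_depths[1:]):
        if abs(a - b) > th:
            options.append(("between", (a + b) // 2))
    return options
```

For the person-and-bus example, a row like [200, 200, 100, 100] yields "front" ahead of the person, "paste" at the person's depth 200, and one "between" option at the average depth 150.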
- The depth setting presentation unit 107 includes a GUI presentation unit 111, an expansion/contraction display unit 112, and a depth determination unit 113; it has a function of controlling the GUI for setting the depth information of the graphic part and of determining the depth at which the graphic part is arranged.
- The GUI presentation unit 111 functions as presentation means: it receives an instruction from the depth information analysis unit 106, generates a GUI image including the instructed options, and notifies the output unit 109 to draw it as a pop-up menu.
- In this example, the arrangement range of the flower-shaped graphic part includes the head of the person and the bus behind it, so the depth information analysis unit 106 instructs that "front", "paste", and "back", relative to the person, who is the foremost subject, be presented as the options for the graphic part placement depth. Based on this instruction, the depth setting presentation unit 107 generates a GUI image of a pop-up menu including the three options "front", "paste", and "back".
- When the "front" item is selected from the menu, a depth a predetermined amount in front of the shallowest depth of the left-eye image within the graphic part placement range is selected as the graphic part placement depth.
- When the "paste" item is selected, the shallowest depth of the left-eye image within the placement range is selected as the graphic part placement depth, so that the graphic part does not sink into the subject.
- When the "back" item is selected from the menu, the depth midway between the two subjects, namely the person's head, which is the foremost subject in the graphic part placement range, and the bus behind it, is selected as the placement depth of the graphic part.
- When an option is selected from the menu, the expansion/contraction display unit 112 functions as expansion/contraction display means: until an instruction confirming the graphic part arranged in the left-eye image on the display 200 is input by a click operation, it instructs the output unit 109 to redraw the graphic part repeatedly while changing its size.
- Specifically, the graphic part is repeatedly enlarged and reduced, centered on the graphic part center coordinates (xg, yg), within the range of 2 times to 1/2 times the original size displayed on the graphic part display unit 2.
- When the determination operation is input, the depth determination unit 113 functions as depth determination means: it takes the depth corresponding to the selected option as a provisional depth position and determines the final depth by adjusting the provisional depth position according to the display size of the graphic part at the time of the determination operation.
- The state in which the graphic part is displayed at its original size is assigned the depth corresponding to the option selected in the menu; the state in which it is displayed enlarged to twice the original size corresponds to a depth nearer than the selected depth, and the state in which it is displayed reduced corresponds to a depth farther than the selected depth.
- When the graphic part cannot be placed nearer than the selected depth, the expansion/contraction display unit 112 repeats the enlargement and reduction of the graphic part only within the range from the original size to 1/2 times the original size.
- Conversely, when a depth behind the selected depth would cause the graphic part to sink into the subject, the expansion/contraction display unit 112 repeats the enlargement and reduction only within the range from the original size to 2 times the original size.
- Alternatively, the graphic part may be repeatedly enlarged and reduced over the full range from 2 times to 1/2 times the original size; in that case, the depth determination unit 113 assigns a position nearer than the selected depth by a predetermined amount to the state in which the graphic part is displayed enlarged to twice the original size, and may calculate the depth according to the enlargement/reduction ratio at the time of the determination operation.
- The stereo image generation unit 108 includes a shift amount acquisition unit 114 and an image synthesis unit 115, and has a function of combining the graphic part with the photograph with parallax, based on the placement depth of the graphic part determined by the depth setting presentation unit 107, to generate a left-eye image and a right-eye image into which the graphic part has been composited.
- The shift amount acquisition unit 114 has a function as shift amount acquisition means for calculating the parallax that produces the stereoscopic effect at the placement depth of the graphic part, and acquires the shift amount by converting the calculated parallax into a number of pixels.
- The image synthesis unit 115 functions as image synthesis means for generating a stereoscopic image into which the graphic part has been composited, by combining the graphic part with the graphic part arrangement range in the left-eye image and, for the right-eye image, combining the graphic part in the range obtained by horizontally moving the arrangement range by the shift amount calculated by the shift amount acquisition unit 114.
- The output unit 109 is a driver that controls display on the display 200, and displays on the display the left-eye image being processed, the graphic part, the GUI image instructed by the depth setting presentation unit 107, the stereoscopic image into which the graphic part has been composited by the stereo image generation unit 108, and the like.
- FIG. 10 is a flowchart showing the flow of the depth information generation process.
- the depth information calculation unit 103 first acquires the captured left-eye image and right-eye image (step S1). Next, the depth information calculation unit 103 searches the right eye image for pixels corresponding to the pixels constituting the left eye image (step S2). Then, the depth information calculation unit 103 calculates the distance in the depth direction of the subject based on the triangulation principle from the positional relationship between corresponding points of the left-eye image and the right-eye image (step S3).
- the stereo matching process including the above steps S2 and S3 is performed on all the pixels constituting the left-eye image.
- The depth information calculation unit 103 quantizes the information on the distance in the depth direction of the subject obtained by the process of step S3 into 8 bits (step S4). Specifically, the calculated distance in the depth direction of the subject is converted into 256 gradation values from 0 to 255, and a grayscale image in which the depth of each pixel is represented by 8-bit luminance is generated. The grayscale image generated in this way is recorded in the depth information storage unit 104 as depth information.
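Steps S3 and S4 can be sketched as follows. The parallel-rig triangulation form Z = f·B/d, the parameter names, and the linear mapping onto 0–255 are illustrative assumptions; the text only invokes the triangulation principle and 256 gradations:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_cm):
    """Triangulation (step S3): Z = f * B / d for a parallel stereo rig.

    d is the horizontal offset (in pixels) between a pixel in the
    left-eye image and its corresponding point in the right-eye image.
    Parameter names and the parallel-rig form are assumptions; the
    embodiment only invokes the triangulation principle.
    """
    return focal_px * baseline_cm / disparity_px


def quantize_depth(depth_map, d_min, d_max):
    """Quantization (step S4): map distances to 256 gradations (0-255).

    The linear mapping and the (d_min, d_max) normalization range are
    assumptions; the embodiment states only that the distance is
    converted into 256 gradation values represented as 8-bit luminance.
    """
    return [
        [round(255 * (d - d_min) / (d_max - d_min)) for d in row]
        for row in depth_map
    ]


# Example: subject distances (cm) for one image row, normalized to 1-5 m
gray = quantize_depth([[100.0, 200.0, 300.0, 500.0]], d_min=100.0, d_max=500.0)
```

The resulting nested list plays the role of the grayscale image recorded in the depth information storage unit.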
- FIG. 11 is a flowchart showing the flow of the depth setting pop-up menu display process executed in response to the graphic part placement operation by the user.
- When the user drops a graphic part onto the left-eye image, the graphic information acquisition unit 105 acquires the coordinates at which the graphic part was placed (step S12), and calculates the graphic part arrangement range centered on the placement coordinates (step S13).
- the arrangement range is calculated as upper left corner coordinates and lower right corner coordinates of a rectangular area surrounding the graphic part as shown in FIG.
- Next, the depth information analysis unit 106 reads out from the depth information storage unit 104 the depth information of the horizontal pixel group passing through the center (x g , y g ) of the arrangement range of the graphic part (step S14), and executes a process of extracting graphic part placement depth candidates based on the read depth information (step S15).
- In this process, the number L of options indicating the graphic part placement depth candidates is determined, and the depth of each subject included in the graphic part arrangement range is recorded on the recording medium 70.
- Based on the processing result of the graphic part placement depth candidate extraction process in step S15, the GUI presentation unit 111 generates a pop-up menu including the determined L options and presents it to the user (step S16).
- the L options are associated with depths as follows.
- The “paste” option is associated with the depth of the nearest subject recorded on the recording medium 70, and the “front” option is associated with a depth that is a predetermined depth nearer than the depth associated with the “paste” option. Further, L − 2 average depths are calculated between adjacent pairs of the depths of the L − 1 subjects recorded on the recording medium 70, in order from the front, and the calculated L − 2 average depths are associated, in order, with the options “back 1”, “back 2”, ..., “back L−2”.
- The pop-up menu is displayed, by default, at the upper left corner of the left-eye image as shown in FIG. 8; however, when the superimposed pop-up menu would overlap the position where the graphic is arranged, the pop-up menu is moved to a position where it does not overlap.
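The option-to-depth association above can be sketched as follows. The function name, the `front_offset` parameter, and the convention that smaller values are nearer are assumptions for illustration:

```python
def build_depth_options(subject_depths, front_offset):
    """Associate the L pop-up menu options with depths.

    subject_depths: depths of the L-1 subjects in the placement range
    (smaller = nearer, an assumed convention).  "paste" gets the depth
    of the nearest subject, "front" a predetermined offset nearer, and
    "back 1" .. "back L-2" the averages of adjacent subject depths.
    """
    depths = sorted(subject_depths)          # front-to-back order
    options = {"front": depths[0] - front_offset, "paste": depths[0]}
    for i in range(len(depths) - 1):
        # average depth between the i-th subject and the one behind it
        options[f"back {i + 1}"] = (depths[i] + depths[i + 1]) / 2
    return options


# Example: three subjects -> L = 4 options
opts = build_depth_options([40, 120, 200], front_offset=10)
```

With three subject depths this yields the four options front / paste / back 1 / back 2, matching L = number of subjects + 1.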
- FIG. 12 is a flowchart showing details of the processing for extracting the graphic part arrangement depth candidates in step S15 of FIG.
- The depth information analysis unit 106 first initializes the variable L that manages the number of options to 2 (step S21), and initializes the variable n that manages the search coordinate to x 1 , the x-coordinate value of the upper left corner coordinates of the graphic part arrangement range (step S22).
- the depth information analysis unit 106 repeats the loop from step S23 to step S27.
- In step S23, it is determined whether the absolute value |D n − D n+w | of the difference between the depth D n of the left-eye image at the coordinates (n, y g ) and the depth D n+w of the left-eye image at the coordinates (n + w, y g ), located a predetermined number of pixels w (for example, 5 pixels) to the right, exceeds the threshold Th.
- If |D n − D n+w | exceeds the threshold Th (step S23: Yes), the number of options L is incremented (step S24), and the value of the left-eye image depth D n at the search coordinates is recorded on the recording medium 70 as the depth of the subject (step S25).
- Next, n is updated to n + w (step S26), and it is determined whether the updated variable n exceeds x 2 , the x-coordinate value of the lower right corner coordinates of the arrangement range of the graphic part (step S27).
- If the variable n does not exceed x 2 in step S27, the loop processing is repeated from step S23; if it exceeds x 2 , the extraction of graphic part placement depth candidates is terminated.
- The search pixel width w is not limited to the above-described five pixels; any value suitable for detecting subjects in an image can be used. However, in an image of two people standing side by side at the same depth, if a small value such as one pixel is used as the search width w, even a slight strip of background between the two people is detected as a graphic part placement candidate, and a practically meaningless option may be presented to the user.
- Conversely, if the search width w is increased, an area whose depth changes gradually and continuously from shallow to deep, such as a wall photographed from an oblique direction, may be detected as a plurality of subjects, one per search width w. Therefore, when the search width w is increased in this way, it is preferable to use a correspondingly large value for the depth threshold Th.
- In the present embodiment, the depth distribution is analyzed using the horizontal pixel group passing through the center (x g , y g ) of the arrangement range of the graphic part.
- However, the analysis of the depth distribution for extracting placement depth candidates may be performed on another horizontal pixel group within the graphic part arrangement range, or on a pixel group continuous in the vertical direction. Furthermore, the depth distribution of a plurality of horizontal and vertical pixel groups within the arrangement range of the graphic part may be analyzed.
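The scan of steps S21–S27 can be sketched as follows. The threshold value is an arbitrary example, and the loop bound `n + w <= x2` is a safe variant of the flowchart's "n exceeds x 2" test so that the lookahead index stays inside the row:

```python
def extract_depth_candidates(depth_row, x1, x2, w=5, th=16):
    """Scan one horizontal pixel group for depth discontinuities.

    depth_row: 8-bit depth values indexed by x coordinate; w is the
    search width (e.g. 5 pixels) and th the threshold Th.  th=16 is an
    assumed example value.
    """
    num_options = 2          # variable L, initialized with 2 (step S21)
    subject_depths = []      # depths recorded on the recording medium 70
    n = x1                   # search coordinate (step S22)
    while n + w <= x2:       # loop of steps S23-S27
        if abs(depth_row[n] - depth_row[n + w]) > th:   # step S23
            num_options += 1                            # step S24
            subject_depths.append(depth_row[n])         # step S25
        n += w                                          # step S26
    return num_options, subject_depths


# Example: a row with one depth discontinuity (person in front of a bus)
num, depths = extract_depth_candidates([100] * 10 + [30] * 10, x1=0, x2=19)
```

One discontinuity yields L = 3, matching the three-option menu of FIG. 8.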
- FIG. 13 is a flowchart showing the flow of the graphic part depth adjustment process executed in response to an option selection operation from the pop-up menu.
- the depth determination unit 113 acquires the depth associated with the option selected by the user (step S31).
- After the expansion / contraction display unit 112 superimposes the graphic part on its arrangement range (step S32), it repeatedly enlarges and reduces the graphic part around the coordinates where the graphic part is placed, updating the display (step S33).
- The size of the enlarged / reduced graphic part is tied to its depth: the depth is adjusted to the near side as the size increases and to the far side as it decreases, and the user performs the determination operation when the graphic part is displayed at the desired size.
- the depth determination unit 113 corrects the depth associated with the option acquired in step S31 according to the graphic part display size at the time of the determination operation (step S34).
- Step S41 is a loop process waiting for a user to select a pop-up menu option.
- the expansion / contraction display unit 112 initializes the enlargement flag with ON (step S42). After the flag initialization, the expansion / contraction display unit 112 repeatedly executes the loop processing of steps S43 to S50.
- Step S43 determines whether or not the enlargement flag is set to ON. If the enlargement flag is set to ON (step S43: Yes), it is determined whether or not the graphic part is displayed at the maximum size (twice the original size of the graphic part displayed in the graphic part display unit 2 in FIG. 4) (step S44). If the graphic part is not displayed at the maximum size (step S44: No), the display is updated by increasing the enlargement ratio of the graphic part by 10% (step S45); if it is displayed at the maximum size (step S44: Yes), the enlargement flag is set to OFF (step S46). After the processing of step S45 or step S46, it is determined in step S50 whether there is an input of a determination operation by the user.
- If the enlargement flag is set to OFF (step S43: No), it is determined whether or not the graphic part is displayed at the minimum size (1/2 times the original size) (step S47). If the graphic part is not displayed at the minimum size (step S47: No), the display is updated by reducing the size of the graphic part by a further 5% (step S48); if it is displayed at the minimum size (step S47: Yes), the enlargement flag is set to ON (step S49). After the processing of step S48 or step S49, it is determined in step S50 whether there is an input of a determination operation by the user.
- If there is no input of a determination operation by the user (step S50: No), the process is repeated from step S43.
- When there is an input of the determination operation by the user (step S50: Yes), the depth determination unit 113 acquires the graphic part display size at the time of the determination operation (step S51) and determines the depth according to that size (step S52). Specifically, if the graphic part display size acquired in step S51 is larger than the graphic part displayed in the graphic part display unit 2 of FIG. 4, a depth corrected to the near side, in proportion to the enlargement ratio, relative to the depth corresponding to the option acquired in step S31 of the flowchart shown in FIG. 13, is determined as the graphic part placement depth. Conversely, if the graphic part display size acquired in step S51 is smaller than the graphic part displayed in the graphic part display unit 2, a depth corrected to the far side, in proportion to the reduction ratio, relative to the depth acquired in step S31 is determined as the graphic part placement depth.
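The size-dependent correction of step S52 can be sketched as follows. The linear form of the correction and the `gain` constant are assumptions; the text states only that the depth is corrected "in proportion to" the enlargement or reduction ratio:

```python
def final_depth(option_depth, scale, gain=20.0):
    """Correct the menu-selected depth by the display size at the
    determination operation (steps S51-S52).

    scale: display size / original size, in [0.5, 2.0].  Smaller depth
    values are assumed to be nearer to the viewer.
    """
    if scale > 1.0:
        # enlarged: move to the near side in proportion to the ratio
        return option_depth - gain * (scale - 1.0)
    if scale < 1.0:
        # reduced: move to the far side in proportion to the ratio
        return option_depth + gain * (1.0 - scale)
    return option_depth  # original size: use the option's depth as-is
```

With `gain=20.0`, confirming at twice the size places the graphic 20 depth units nearer than the selected option, and confirming at half the size places it 10 units deeper.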
- FIG. 15 is a flowchart showing a flow of processing for generating a stereoscopic image obtained by synthesizing graphic parts based on the depth of the graphic parts determined by the depth setting presentation unit 107.
- the shift amount acquisition unit 114 acquires the depth of the graphic part determined by the depth setting presentation unit 107 (step S61).
- the image synthesizing unit 115 synthesizes the graphic parts with the arrangement range of the graphic parts of the left-eye image, and generates a left-eye image after synthesis (step S62).
- Next, the shift amount acquisition unit 114 calculates the pixel shift amount from the depth of the graphic part determined by the depth setting presentation unit 107 (step S63). The image synthesis unit 115 then synthesizes the graphic part into the right-eye image at the coordinates obtained by shifting the arrangement range by the pixel shift amount calculated in step S63, and generates the synthesized right-eye image (step S64).
- FIG. 16 is a diagram illustrating a relationship between the arrangement depth of the graphic parts and the pixel shift amount.
- The stereoscopic effect includes a pop-out effect (pop-out stereoscopic viewing) and a retraction effect (retracted stereoscopic viewing).
- FIG. 16A shows a pixel shift in the case of a pop-up stereo view.
- 16 (b) shows a pixel shift in the case of retracted stereoscopic viewing.
- Px is the horizontal shift amount
- L-View-Point is the left eye pupil position
- R-View-Point is the right eye pupil position
- L-Pixel is the left eye pixel
- R-Pixel is the right eye pixel
- e is the interpupillary distance
- H is the height of the display screen
- W is the horizontal width of the display screen
- S is the distance from the viewer to the display screen
- Z is the distance from the viewer to the imaging point, that is, the placement depth of the graphic parts .
- The straight line connecting the left eye pixel L-Pixel and the left eye pupil L-View-Point is the line of sight of the left eye, and the straight line connecting the right eye pixel R-Pixel and the right eye pupil R-View-Point is the line of sight of the right eye. The separation of these two lines of sight is realized, for example, by switching between translucent and light-shielding states with 3D glasses, or by a parallax barrier, a lenticular lens, or the like.
- In the case of pop-out stereoscopic viewing shown in FIG. 16(a), the horizontal shift amount Px takes a negative value.
- By performing such a pixel shift on all the pixels constituting the left-eye image, a right-eye image corresponding to the left-eye image can be generated.
- Next, a specific calculation formula for the horizontal shift amount will be described.
- In the case of pop-out stereoscopic viewing, the pixel shift amount is obtained as follows.
- The subject distance Z is obtained from the placement depth of the graphic part.
- For the interpupillary distance e, the average value for adult males, 6.4 cm, is used.
- The distance S from the viewer to the display screen is set to 3H, because the optimum viewing distance is generally three times the height of the display screen.
- From the similar triangles formed by the two lines of sight, the shift amount on the display screen is Px = e × (Z − S) / Z, which takes a negative value in the case of pop-out stereoscopic viewing (Z < S).
- The length per pixel in the horizontal direction is the horizontal width W of the display screen / the number K of pixels in the horizontal direction, and the length per pixel in the vertical direction is the height H of the display screen / the number L of pixels in the vertical direction. One inch is 2.54 cm, which is used to convert screen dimensions given in inches into centimeters. Therefore, when the horizontal shift amount Px is expressed in units of pixels, Px [pixels] = e × (Z − S) / Z ÷ (W / K).
- The horizontal shift amount Px can thus be calculated based on the above formula. This is a specific calculation method of the pixel shift amount in the horizontal direction.
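Putting the geometry together, the shift amount can be sketched as follows. The similar-triangle relation Px = e × (Z − S) / Z and the derivation of W and H from a diagonal size in inches with aspect ratio m:n are reconstructions consistent with the definitions in the text, not formulas quoted from it:

```python
import math


def horizontal_shift_pixels(Z, X_inch, m, n, K, e=6.4, S=None):
    """Horizontal shift amount Px in pixels (geometry of FIG. 16).

    Z: placement depth of the graphic part (cm from the viewer);
    e: interpupillary distance (6.4 cm, adult male average);
    S: viewing distance (cm), defaulting to 3H;
    X_inch, m, n, K: display diagonal in inches, aspect ratio m:n,
    and horizontal pixel count.  Negative Px = pop-out, positive =
    retracted, per the sign convention reconstructed above.
    """
    diag_cm = 2.54 * X_inch              # one inch is 2.54 cm
    W = diag_cm * m / math.hypot(m, n)   # screen width (cm)
    H = diag_cm * n / math.hypot(m, n)   # screen height (cm)
    if S is None:
        S = 3 * H                        # optimum viewing distance 3H
    px_cm = e * (Z - S) / Z              # shift on the screen (cm)
    return px_cm / (W / K)               # cm -> pixels (W/K cm per pixel)
```

A graphic placed exactly at the screen distance (Z = S) produces zero shift, and one placed nearer than the screen produces a negative (pop-out) shift.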
- FIG. 17 is a flowchart showing details of the graphic part synthesis process executed in steps S62 and S64. Here, a case where a graphic part is combined with a left-eye image will be described.
- First, the composition positions x and y are initialized with the upper left corner coordinates (x 1 , y 1 ) of the graphic part arrangement range (step S71), and then the loop processing of steps S72 to S78 is executed.
- Step S72 is a determination as to whether or not the depth D (x, y) of the left-eye image at the coordinates (x, y) is behind the graphic part arrangement depth d. If the depth D (x, y) of the left-eye image at the coordinates (x, y) is deeper than the graphic part arrangement depth d (step S72: Yes), the left-eye image at the coordinates (x, y) The pixel of the image is overwritten with the pixel of the graphic part (step S73).
- After the pixel of the left-eye image is overwritten in step S73, or when the depth D (x, y) of the left-eye image at the coordinates (x, y) is on the near side of the graphic part arrangement depth d (step S72: No), the x coordinate of the composite position is incremented (step S74), and it is determined whether the incremented x coordinate exceeds x 2 , the x-coordinate value of the lower right corner coordinates (x 2 , y 2 ) of the graphic part arrangement range (step S75).
- If the x coordinate of the new composite position does not exceed x 2 (step S75: No), the processing is repeated from step S72 for the new composite position. If it exceeds x 2 (step S75: Yes), the x coordinate of the composite position is reinitialized with x 1 (step S76), the y coordinate of the composite position is incremented (step S77), and it is determined whether the incremented y coordinate exceeds y 2 , the y-coordinate value of the lower right corner coordinates (x 2 , y 2 ) of the graphic part arrangement range (step S78).
- If the y coordinate of the new composite position does not exceed y 2 in step S78 (step S78: No), the processing is repeated from step S72 for the new composite position. If it exceeds y 2 (step S78: Yes), image composition has been completed for all the pixels in the graphic part arrangement range, and the graphic part composition processing is terminated.
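The loop of steps S71–S78 is equivalent to a nested scan over the arrangement range with a per-pixel depth test, sketched below. The convention that larger depth values are farther from the viewer is an assumption:

```python
def composite_graphic(image, depth_map, graphic, x1, y1, x2, y2, d):
    """Composite a graphic part into the left-eye image (steps S71-S78).

    A pixel of the image is overwritten with the graphic part only where
    the image depth D(x, y) is behind the placement depth d (step S72),
    so that nearer subjects occlude the graphic part.  Bounds are
    inclusive, matching the corner-coordinate arrangement range.
    """
    for y in range(y1, y2 + 1):
        for x in range(x1, x2 + 1):
            if depth_map[y][x] > d:                      # step S72: behind d
                image[y][x] = graphic[y - y1][x - x1]    # step S73: overwrite
    return image


# Example: one background pixel (depth 1) is nearer than d=5 and survives
out = composite_graphic(
    image=[[0, 0], [0, 0]],
    depth_map=[[10, 1], [10, 10]],
    graphic=[[7, 7], [7, 7]],
    x1=0, y1=0, x2=1, y2=1, d=5,
)
```

The same routine serves for step S64 by first shifting the x-coordinates of the arrangement range by the pixel shift amount.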
- As described above, according to the present embodiment, the possibilities for graphic placement are determined from the distribution of depth information at the position where the user places the graphic part, and depth options are provided to the user for setting the depth direction, so that the placement depth of the graphic part can be set easily.
- In addition, after an option is selected, the graphic part is displayed while being repeatedly enlarged and reduced, awaiting a further determination operation by the user; the placement depth of the graphic part is then adjusted to the near side or the far side of the depth indicated by the option, according to the graphic part display size at the time of the determination operation.
- The pop-up menu for selecting the placement depth of the graphic part has been described for the case shown in FIG. 8, in which two subjects, a person and a bus behind the person, exist in the graphic part arrangement range; however, the number of options in the pop-up menu is not necessarily three.
- For example, when the graphic part is arranged so as to overlap only the head of the person, the pop-up menu may be configured to present two options, “front” and “paste”.
- Conversely, the pop-up menu may be configured to include four or more options, beyond “front”, “paste”, and “back”.
- When many subjects appear in an image, the graphic part can be placed at various depths, such as in front of and behind each subject, so the options presented to the user become numerous. In such a case, it takes time and effort for the user to select the option indicating the desired depth.
- Therefore, the user interface control apparatus according to the present embodiment extracts subjects having different depths from the depth map data obtained by stereo matching of the two viewpoint images, and highlights the extracted subjects in the viewpoint image displayed in plan view. This makes it easy for the user to specify the plane position where the graphic part is placed. In addition, by receiving, from among the highlighted subjects, designation of a subject close to the depth at which the graphic part is to be placed, the options are narrowed down before being presented to the user.
- FIG. 19 is a diagram illustrating a configuration of the user interface control device 300 according to the second embodiment.
- The user interface control device 300 includes an operation input reception unit 201, a graphic superimposition control unit 202, a depth information calculation unit 203, a depth information storage unit 204, a depth information analysis unit 205, a graphic information acquisition unit 206, a depth setting presentation unit 207, a stereo image generation unit 208, an output unit 209, and a region division unit 1201.
- The functions of these units are recorded in advance in the recording medium 70 shown in FIG. 1 as programs, for example, and each function is realized by hardware resources, that is, by loading the corresponding program from the recording medium 70 into the RAM in the processing unit 100 and executing it by the CPU in the processing unit 100 (cooperation between the CPU and the program on the RAM).
- the depth information storage unit 204 is realized by a part of the recording area of the recording medium 70.
- Among these, the elements other than the operation input reception unit 201, the graphic information acquisition unit 206, the depth setting presentation unit 207, and the region division unit 1201 are the same as those of the user interface control device according to the first embodiment illustrated in FIG. 1, and their description is omitted in this embodiment.
- the operation input reception unit 201, the graphic information acquisition unit 206, the depth setting presentation unit 207, and the region division unit 1201 will be described.
- The area dividing unit 1201 has a function as area dividing means for dividing the left-eye image into a plurality of subject areas according to the luminance distribution and depth information distribution of the stereoscopic image. Specifically, by comparing the luminance of each pixel in the left-eye image with that of the surrounding pixels, edge portions where the luminance changes sharply, beyond a predetermined threshold, are detected. The area dividing unit 1201 classifies the left-eye image into areas surrounded by these edges, reads out the depth information of the left-eye image from the recording device, and, when the depths on the two sides of an edge differ by more than a predetermined threshold, determines that the area surrounded by the edge is a subject area.
- Suppose that a left-eye image as shown in FIG. 20B is obtained.
- By using an appropriate luminance threshold, the area dividing unit 1201 detects the areas 11a, 12a, and 13a in the left-eye image as distinct areas.
- The depth information of the left-eye image is then read from the recording device for these areas, and the depths of the areas 11a, 12a, and 13a are compared with the depths of their respective adjacent areas; as a result, the areas 11a, 12a, and 13a are determined to be subject areas.
- The coordinate information of each detected subject area is recorded on the recording medium 70 via the depth information storage unit 204.
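The area division described above can be sketched as follows. The flood-fill formulation, 4-connectivity, horizontal-only edge checks, and the threshold values are simplifying assumptions; the text specifies only luminance-edge detection followed by a depth comparison across each edge:

```python
from collections import deque


def divide_subject_regions(luma, depth, lum_th=30, depth_th=16):
    """Divide the left-eye image into regions (area dividing unit 1201).

    Flood-fills regions bounded by luminance edges (adjacent-pixel
    difference > lum_th), then marks a region as a subject area when
    its mean depth differs from a horizontally adjacent region's mean
    depth by more than depth_th.
    """
    h, w = len(luma), len(luma[0])
    label = [[-1] * w for _ in range(h)]
    region_depth = []                       # mean depth per region
    for sy in range(h):
        for sx in range(w):
            if label[sy][sx] != -1:
                continue
            rid, cells, q = len(region_depth), [], deque([(sx, sy)])
            label[sy][sx] = rid
            while q:                        # BFS within one region
                x, y = q.popleft()
                cells.append((x, y))
                for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if 0 <= nx < w and 0 <= ny < h and label[ny][nx] == -1 \
                            and abs(luma[ny][nx] - luma[y][x]) <= lum_th:
                        label[ny][nx] = rid
                        q.append((nx, ny))
            region_depth.append(sum(depth[y][x] for x, y in cells) / len(cells))
    # an area is a subject area if the depths on the two sides of an
    # edge differ by more than depth_th (both sides kept as candidates)
    subjects = set()
    for y in range(h):
        for x in range(w - 1):
            a, b = label[y][x], label[y][x + 1]
            if a != b and abs(region_depth[a] - region_depth[b]) > depth_th:
                subjects.update((a, b))
    return label, sorted(subjects)
```

For an image with a bright foreground subject against a darker, deeper background, the two regions are labeled separately and both sides of the depth edge are returned as subject-area candidates.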
- Like the depth setting presentation unit 107 of the first embodiment, the depth setting presentation unit 207 includes the GUI presentation unit 211, the expansion / contraction display unit 212, and the depth determination unit 213, and further includes an area presentation unit 214.
- The area presentation unit 214 has a function as area presentation means for presenting subjects having different depths to the user in the left-eye image displayed on the display. Specifically, the area presentation unit 214 presents each subject area detected by the area dividing unit 1201 to the user by displaying the areas with different patterns and colors, as in the areas 11b, 12b, and 13b shown in FIG.
- Alternatively, the areas may be presented by superimposing text such as a number on each area, or by processing and displaying the image so as to emphasize the edge portions of the areas 11c, 12c, and 13c determined to be subjects; various methods for assisting the user's recognition of the areas determined to be subjects can be used.
- The operation input reception unit 201 has a function as area reception means for receiving an operation of selecting any one of the subjects with different depths presented by the area presentation unit 214 as described above.
- the graphic information acquisition unit 206 functions as a plane position specifying unit that acquires the coordinates of the arrangement range occupied by the graphic part on the left-eye image displayed on the display 200.
- However, the method for specifying the arrangement range differs from that of the graphic information acquisition unit 105. Whereas the graphic information acquisition unit 105 calculates the arrangement range of the graphic part from the coordinates where the user dropped the graphic part on the left-eye image, the graphic information acquisition unit 206 calculates the arrangement range of the graphic part using the center coordinates of the subject area selected by the operation received by the operation input reception unit 201 as the center coordinates (x g , y g ) of the graphic part.
- First, the region dividing unit 1201 detects the subject areas from the left-eye image using the luminance and depth information of the image, and the area presentation unit 214 superimposes a different pattern on each detected subject area displayed on the display (step S81).
- The user can select any of the differently painted areas, as shown in FIG. 21A, and thereby designate on which subject the graphic part is to be superimposed.
- When the user selects an area, the area presentation unit 214 removes the patterns superimposed on each subject area, and the graphic part is then drawn on the selected subject area (step S84).
- This operation can substitute for the operation in the first embodiment in which the user designates the placement position of the graphic part by dropping it on the left-eye image.
- Thereafter, the placement depth of the graphic part is determined by the same processing as the procedure shown in and after step S12 of FIG. 11, and the stereoscopic image processing can be continued (step S84).
- As described above, according to the present embodiment, the placement position of the graphic part can be selected in units of areas, so that photographs of various compositions and types can be handled, further improving user convenience.
- The present invention is not limited to the above embodiments. The following cases are also included in the present invention.
- One aspect of the present invention may be an application execution method disclosed by the processing procedure described in each embodiment. Further, the present invention may be a computer program including program code that causes a computer to operate according to the processing procedure.
- One aspect of the present invention can also be implemented as an LSI that controls the user interface control device described in each of the above embodiments.
- Such an LSI can be realized by integrating the functional blocks included in the user interface control devices 100 and 300 shown in FIGS. These functional blocks may be individually made into one chip, or may be made into one chip so as to include a part or all of them.
- Here, the term LSI is used, but depending on the degree of integration, it may be called IC, system LSI, super LSI, or ultra LSI.
- the method of circuit integration is not limited to LSI, and implementation with a dedicated circuit or a general-purpose processor is also possible.
- An FPGA (Field Programmable Gate Array) that can be programmed after LSI manufacturing, or a reconfigurable processor in which the connection and setting of circuit cells inside the LSI can be reconfigured, may be used.
- Specifically, the following functions can be realized by an integrated circuit, a dedicated circuit, or the like as described above: a function of specifying the range occupied by the graphic; a function of acquiring the depth of a subject in the one viewpoint image within the specified range; a function of presenting an option indicating the acquired depth and options indicating other depths, different from the acquired depth, at which the graphic can be placed; a function of accepting a selection from among the plurality of options; a function of, after receiving the selection, repeatedly changing the size of the graphic until a determination instruction is received from the user; a function of determining, as the depth at which the graphic is placed, a depth in front of the depth indicated by the selected option when the graphic is displayed enlarged at the time the determination instruction is received, and a depth behind the depth indicated by the selected option when the graphic is displayed reduced at that time; a function of calculating the parallax that produces the stereoscopic effect at the determined depth and acquiring the shift amount by converting it into a number of pixels; a function of generating a stereoscopic image by combining the graphic with the range specified in the one viewpoint image and with the range, shifted horizontally by the shift amount, in the other viewpoint image constituting the stereoscopic image; a function of presenting the areas; and a function of receiving selection of any of the presented areas.
- Each of the above functions can also be realized by cooperation of a processor and a program on a memory.
- the corresponding point search is performed in units of pixels.
- the present invention is not necessarily limited to this case.
- the corresponding point search may be performed in pixel block units (for example, 4 ⁇ 4 pixels, 16 ⁇ 16 pixels).
- the depth information is converted into a grayscale image in which the distance in the depth direction of the subject is converted into 256 gradation values from 0 to 255, and the depth of each pixel is represented by 8-bit luminance.
- the distance in the depth direction of the subject may be converted into 128 gradation values from 0 to 127.
- In each of the above embodiments, the graphic part is superimposed on the right-eye image with parallax added, on the basis of the arrangement position of the graphic part in the left-eye image; however, in the reverse order, the graphic part may be superimposed on the left-eye image with parallax, on the basis of the arrangement position of the graphic part in the right-eye image. In this case, it is preferable to display the right-eye image on the display when receiving the designation of the placement position of the graphic part from the user.
- In Embodiment 1, a case has been described in which a stereo image consisting of a left-eye image and a right-eye image of the same resolution is acquired, but the present invention is not necessarily limited to this case.
- The left-eye image and the right-eye image may have different resolutions. Depth information can be generated by performing resolution conversion between the two images and then searching for corresponding points, and a high-resolution stereo image can be generated by applying the pixel-shift process to the high-resolution image. Since the computationally heavy depth-information generation can be performed at the low-resolution image size, the processing load is reduced. In addition, part of the imaging device can be a low-performance one, which reduces cost.
- (G) In Embodiment 1, information on the model number X of the display device, the aspect ratio m:n, and the resolution of the display screen (the number of vertical pixels L and the number of horizontal pixels K) is obtained from the external display, but the present invention is not necessarily limited to this case. The viewer may instead input information such as the display-device model number X, the aspect ratio m:n, and the display-screen resolution (vertical pixel count L, horizontal pixel count K) directly.
- In Embodiment 1, a case has been described in which the pixel shift amount is calculated with the distance S from the viewer to the display screen set to three times the height H of the display screen (3H), but the present invention is not limited to this. The distance S from the viewer to the display screen may instead be measured by a distance sensor such as a TOF (Time Of Flight) sensor.
- In the embodiment, the pixel shift amount is calculated with the interpupillary distance e set to the average value for adult males, but the present invention is not necessarily limited to this case. A camera may be installed on the display device, and the interpupillary distance may be calculated from a face image acquired by the camera. It may also be determined whether the viewer is an adult, a child, a man, or a woman, and the interpupillary distance e calculated accordingly.
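The roles of the viewing distance S and the interpupillary distance e in the pixel-shift calculation can be illustrated with a simple similar-triangles sketch. The formula and every numeric value below are illustrative assumptions, not the patent's exact computation:

```python
# Illustrative sketch (assumed geometry): converting a target perceived depth
# into a pixel shift amount from the viewing distance S, the interpupillary
# distance e, and the display resolution. By similar triangles, an object
# perceived at distance D from a viewer who sits at distance S from the screen
# needs an on-screen parallax of e * (D - S) / D (positive = behind the screen).

def pixel_shift(D, S, e, screen_width_m, horizontal_pixels):
    parallax_m = e * (D - S) / D                  # physical parallax on screen
    pixel_pitch = screen_width_m / horizontal_pixels
    return parallax_m / pixel_pitch               # shift amount in pixels

# Assumed example values: a 16:9 display of height H = 0.6 m viewed at S = 3H
# (1.8 m), interpupillary distance e = 0.065 m, 1920 horizontal pixels.
H = 0.6
S = 3 * H
shift = pixel_shift(D=2.4, S=S, e=0.065, screen_width_m=H * 16 / 9,
                    horizontal_pixels=1920)
print(round(shift))  # → 29 (pixels of shift, placing the object behind the screen)
```

This is why both S (here tied to the screen height) and e enter the shift-amount calculation, and why measuring either one more accurately, as the bullets above suggest, changes the computed shift.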
- In the embodiment, when the subject is divided into regions, the regions are divided based on the luminance distribution and the depth-information distribution, but the region-division method is not limited to this. The regions may be divided from the distribution of depth information alone. Alternatively, the regions may be divided by extracting, as feature points, edges (locations where the luminance changes sharply) or intersections of edges. Edge detection may be performed by obtaining the luminance difference (first derivative) between pixels and calculating the edge strength from that difference; other edge-detection methods may also be used to extract feature points.
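A minimal sketch of this first-derivative edge detection follows; the threshold and the test image are assumed for illustration:

```python
# Illustrative sketch: edge detection from the first derivative of luminance,
# taking the absolute horizontal and vertical pixel differences as the edge
# strength and thresholding it to extract feature points.

def edge_strength(img, r, c):
    """Edge strength at (r, c) from first-order luminance differences."""
    dx = img[r][c + 1] - img[r][c]   # horizontal first derivative
    dy = img[r + 1][c] - img[r][c]   # vertical first derivative
    return abs(dx) + abs(dy)

def edge_points(img, threshold):
    h, w = len(img), len(img[0])
    return [(r, c) for r in range(h - 1) for c in range(w - 1)
            if edge_strength(img, r, c) >= threshold]

# A flat dark region next to a flat bright region: the only strong responses
# lie along the vertical boundary between the two.
img = [[10] * 4 + [200] * 4 for _ in range(4)]
print(edge_points(img, 100))  # → [(0, 3), (1, 3), (2, 3)]
```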
- In the embodiment, a GUI menu is displayed as the means for allowing the user to select the placement depth. Alternatively, the background, subject, and foreground may be highlighted in color one after another at predetermined intervals, and the user may make a rough selection by pressing a button or performing a similar operation when the desired depth is highlighted. Even when the background, subject, and foreground are highlighted in turn, no meaningless option for placing a graphic part at a depth hidden by the subject is displayed.
- The depth information calculation unit 103 may generate the depth information by measuring the distance to each subject with a distance sensor such as a TOF (Time Of Flight) distance sensor. Depth information may also be acquired together with a monocular image from an external network, a server, a recording medium, or the like. Alternatively, the acquired monocular image may be analyzed to generate depth information. Specifically, the image is first divided into sets of pixels called "superpixels", each of which has highly homogeneous attributes such as color and brightness, and each superpixel is compared with its neighbors to analyze changes in texture gradation and other cues, from which the distance to the subject is estimated.
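A full superpixel pipeline is beyond a short sketch, but the grouping idea can be illustrated crudely. Here a fixed grid with per-cell mean luminance stands in for real superpixels (an actual implementation would use an algorithm such as SLIC); the cell size and merge tolerance are assumed values:

```python
# Illustrative simplification of the superpixel idea in the text: group pixels
# into small homogeneous cells, then merge neighboring cells whose attributes
# are similar, which is the precursor to comparing groups for depth estimation.

def cell_means(img, cell=4):
    """Mean luminance of each cell of a fixed cell x cell grid."""
    h, w = len(img), len(img[0])
    means = []
    for r0 in range(0, h, cell):
        row = []
        for c0 in range(0, w, cell):
            vals = [img[r][c] for r in range(r0, r0 + cell)
                    for c in range(c0, c0 + cell)]
            row.append(sum(vals) / len(vals))
        means.append(row)
    return means

def merge_similar(means, tol=15):
    """Label cells so that horizontally adjacent cells whose mean luminance
    differs by at most `tol` share a label (a crude superpixel merge)."""
    labels = []
    for row in means:
        lab, cur = [0], 0
        for a, b in zip(row, row[1:]):
            if abs(b - a) > tol:
                cur += 1
            lab.append(cur)
        labels.append(lab)
    return labels

# Left half dark, right half bright: the cells merge into two groups per row.
img = [[20] * 8 + [220] * 8 for _ in range(8)]
print(merge_similar(cell_means(img)))  # each row → [0, 0, 1, 1]
```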
- The monocular image may be image data captured by an imaging device such as a monocular camera. It is not limited to a real photograph and may be CG (Computer Graphics) or the like.
- In the embodiment, the GUI menu is displayed at the upper-left corner as its default position, but the display position is not limited to this. The menu may be moved to a position where it does not hide the subject, and may be arranged so as not to overlap the placement position of the graphic part.
- In the embodiment, the stereoscopic image is divided into subject regions based on luminance and depth information, without considering what each subject is. Alternatively, objects may be detected using person/object recognition technology, and region division may be performed by combining such detection with the region-division method described in (k) and with the distribution of depth information used in Embodiment 2.
- A number may be assigned to each divided region and superimposed on the corresponding subject, so that the user selects the subject on which the graphic part is to be superimposed by selecting its number.
- A person-recognition function may also be used to recognize a person's region and divide the regions so that graphic parts can easily be superimposed on the person.
- In the embodiment, a flower mark is taken as an example of a graphic part. When the graphic part represents a part of the human body, face recognition may be performed and the graphic part superimposed so that it is placed at an appropriate position on the face.
- In the embodiment, a flower mark is taken as an example of a graphic part. When the graphic part is a speech balloon, the face-recognition function may be used to search for the mouth region, and the balloon may be arranged so that its tip points at the mouth and the face of the subject is not hidden.
- The present invention makes it easy to set the placement depth of graphic parts when compositing them into a stereoscopic image, and can be applied to devices that process stereoscopic images, such as PCs, tablets, smartphones, and mobile phones. It is also particularly useful for retouching applications.
Abstract
Description
An object of the present invention is to provide a user interface control device, a user interface control method, a computer program for controlling a user interface, and an integrated circuit that, when a graphic is composited into a stereoscopic image, provide a GUI with which the depth at which the graphic is displayed after compositing can easily be set.
The device may also be configured to further comprise the above.
(Embodiment 1)
FIG. 1 shows the hardware configuration of a smartphone that functions as the user interface control device according to this embodiment. The smartphone shown in FIG. 1 includes a camera 10, a speaker 20, a GPS 30, a sensor 40, a touch panel 50, a microphone 60, a recording medium 70, a processing unit 100, and a display 200.
The operation input reception unit 101 has a function of receiving user operations input via a pointing device such as a touch panel or a mouse.
The control unit 102 has a function of controlling the processing of this embodiment according to the input received by the operation input reception unit 101.
The depth information calculation unit 103 realizes part of the function of the viewpoint-image depth acquisition means by generating, for each pixel of the left-eye image, depth information (a depth map) indicating the position of the subject in the depth direction from a stereo image. Specifically, it first performs a corresponding-point search for each pixel between the left-eye and right-eye images constituting the stereo image. It then calculates the distance of the subject in the depth direction from the positional relationship of the corresponding points in the left-eye and right-eye images, based on the principle of triangulation. The depth information is a grayscale image in which the depth of each pixel is expressed by 8-bit luminance; the depth information calculation unit 103 converts the calculated distance of the subject in the depth direction into one of 256 gradation values from 0 to 255. Corresponding-point search methods fall roughly into two classes: area-based matching, in which a small region is set around the point of interest and matching is performed based on the gray-level pattern of the pixel values in that region, and feature-based matching, in which features such as edges are extracted from the images and matched against each other; either method may be used. A stereo image is a pair of images obtained by imaging a scene from different viewpoints; in Embodiment 1, image data of a stereoscopic image captured by the camera 10 and recorded on the recording medium 70 is used. The stereo image is not limited to a real photograph and may be CG (Computer Graphics) created assuming different virtual viewpoints.
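The triangulation step can be sketched with the standard rectified-stereo relation Z = f·B / d; the focal length and baseline values below are assumed examples, not taken from the patent:

```python
# Illustrative sketch: recovering depth by triangulation from the disparity of
# corresponding points in a rectified stereo pair, where f is the focal length
# in pixels, B the camera baseline in metres, and d the disparity in pixels.

def depth_from_disparity(d_pixels, focal_px, baseline_m):
    if d_pixels <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / d_pixels

# With assumed f = 1000 px and a 6.5 cm baseline, a 20-px disparity puts the
# subject 3.25 m away; halving the disparity doubles the distance.
print(round(depth_from_disparity(20, 1000, 0.065), 2))  # → 3.25
print(round(depth_from_disparity(10, 1000, 0.065), 2))  # → 6.5
```

The resulting metric distances are then quantized to the 0–255 gradation values described above to form the grayscale depth map.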
The depth information storage unit 104 is realized by part of the recording area of the recording medium 70 and has a function of storing the depth information calculated by the depth information calculation unit 103 in that recording area.
The graphic information acquisition unit 105 functions as planar-position specifying means that acquires the coordinates of the area occupied on the left-eye image by the graphic placed by the user.
The depth information analysis unit 106 has a function of acquiring the depth information of the left-eye image within the placement range of the graphic part, and a function of instructing the depth setting presentation unit 107 to present to the user options indicating, by their position relative to the subject, the depths at which the graphic part can be placed.
The depth setting presentation unit 107 internally includes a GUI presentation unit 111, an expansion/contraction display unit 112, and a depth determination unit 113, and has a function of controlling the GUI for setting the depth information of the graphic part and determining the depth at which the graphic part is placed.
The stereo image generation unit 108 internally includes a shift amount acquisition unit 114 and an image composition unit 115, and has a function of compositing the graphic part into the photograph with parallax, based on the placement depth determined by the depth setting presentation unit 107, to generate a left-eye image and a right-eye image into which the graphic part has been composited.
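A minimal sketch of this parallax compositing follows; the sign convention, data layout, and all values are assumptions for illustration, not the units' actual implementation:

```python
# Illustrative sketch: composite a graphic part at its designated range in the
# left-eye image, and at a range shifted horizontally by the acquired shift
# amount in the right-eye image. Images are rows of luminance values and the
# graphic simply overwrites the pixels it covers.

def composite(image, graphic, top, left):
    out = [row[:] for row in image]              # copy so the input is kept
    for r, grow in enumerate(graphic):
        for c, v in enumerate(grow):
            out[top + r][left + c] = v
    return out

def composite_stereo(left_img, right_img, graphic, top, left, shift):
    left_out = composite(left_img, graphic, top, left)
    # Assumed convention: a positive shift moves the graphic leftward in the
    # right-eye image (crossed parallax), making it appear in front of the
    # screen plane.
    right_out = composite(right_img, graphic, top, left - shift)
    return left_out, right_out

blank = [[0] * 10 for _ in range(4)]
graphic = [[9, 9], [9, 9]]
l, r = composite_stereo(blank, blank, graphic, top=1, left=5, shift=2)
print(l[1])  # → [0, 0, 0, 0, 0, 9, 9, 0, 0, 0]
print(r[1])  # → [0, 0, 0, 9, 9, 0, 0, 0, 0, 0]
```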
The output unit 109 is a driver that controls the display 200, and causes the display to show the left-eye image during editing, the graphic parts, the GUI image instructed by the depth setting presentation unit 107, the stereoscopic image into which the graphic parts have been composited by the stereo image generation unit 108, and so on.
<Operation>
Next, the operation of the user interface control device having the above configuration will be described.
First, the depth-information generation processing by the depth information calculation unit 103 will be described. FIG. 10 is a flowchart showing the flow of the depth-information generation processing.
FIG. 11 is a flowchart showing the flow of the depth-setting pop-up menu display processing executed in response to the user's graphic-part placement operation.
FIG. 12 is a flowchart showing the details of the processing for extracting candidate placement depths for the graphic part in step S15 of FIG. 11.
FIG. 13 is a flowchart showing the flow of the graphic-part depth adjustment processing executed in response to an option-selection operation on the pop-up menu.
FIG. 15 is a flowchart showing the flow of the processing for generating a stereoscopic image into which the graphic part has been composited, based on the depth of the graphic part determined by the depth setting presentation unit 107.
(Embodiment 2)
In the user interface control device according to Embodiment 1, when the planar position at which a graphic part is to be placed is designated with a pointing device or the like on the left-eye image displayed two-dimensionally on the display, depths at which the graphic part can be placed, such as a depth in front of or behind the subject or the same depth as the subject, are presented as options, taking into account the depth of the subject present in the placement range of the graphic part in the left-eye image.
(Supplement)
Although the present invention has been described based on the above embodiments, it is of course not limited to them. The following cases are also included in the present invention.
(r) In Embodiment 1, a flower mark was taken as an example of a graphic part; when the graphic part is a speech balloon, the face-recognition function may be used to search for the mouth region, and the balloon may be arranged so that its tip points at the mouth and the face of the subject is not hidden.
20 speaker
30 GPS
40 sensor
50 touch panel
60 microphone
70 recording medium
100 processing unit
101, 201 operation input reception unit
102 control unit
103, 203 depth information calculation unit
104, 204 depth information storage unit
105, 206 graphic information acquisition unit
106, 205 depth information analysis unit
107, 207 depth setting presentation unit
108, 208 stereo image generation unit
109, 209 output unit
111 GUI presentation unit
112 expansion/contraction display unit
113 depth determination unit
114 shift amount acquisition unit
115 image composition unit
200 display
202 graphic superimposition control unit
211 GUI presentation unit
212 expansion/contraction display unit
213 depth determination unit
214 region presentation unit
300 user interface display device
1201 region division unit
Claims (16)
- A user interface control device that provides a user interface for setting the depth in the depth direction at which a graphic is placed when the graphic is composited into a stereoscopic image, the device comprising:
planar-position specifying means for specifying the range occupied by the graphic when the graphic is placed on one viewpoint image constituting the stereoscopic image;
viewpoint-image depth acquisition means for acquiring, within the specified range, the depth of a subject appearing in the one viewpoint image; and
presentation means for presenting an option indicating the acquired depth and an option indicating another depth, different from the acquired depth, at which the graphic can be placed.
- The user interface control device according to claim 1, wherein the option indicating the acquired depth indicates the depth of the subject located foremost among the subjects appearing in the range occupied by the graphic, and the option indicating another depth at which the graphic can be placed indicates a depth further toward the viewer than the depth of the foremost subject.
- The user interface control device according to claim 2, wherein, when another subject exists in the specified range of the one viewpoint image in addition to the foremost subject and the depths of the two subjects differ by more than a threshold, the presentation means further presents an option indicating a depth intermediate between the depths of the two subjects.
- The user interface control device according to claim 3, further comprising:
reception means for receiving a selection of one of the plurality of options;
expansion/contraction display means for, after the selection of an option is received, repeatedly changing the size of the displayed graphic until a confirmation instruction is received from the user; and
depth determination means for determining, as the depth at which the graphic is placed, a depth in front of the depth indicated by the selected option if the graphic was displayed enlarged at the time the confirmation instruction was received, and a depth behind the depth indicated by the selected option if the graphic was displayed reduced at that time.
- The user interface control device according to claim 4, wherein, when an option indicating a depth intermediate between the depths of the two subjects is selected, the depth determination means determines the depth corresponding to the size of the graphic at the time the confirmation instruction is received by associating the depth of the nearer of the two subjects with the largest size at which the expansion/contraction display means displayed the graphic, and the depth of the farther of the two subjects with the smallest size at which the expansion/contraction display means displayed the graphic.
- The user interface control device according to claim 4, wherein, when some subject exists behind the depth indicated by the selected option within the range occupied by the graphic, the depth determination means determines the depth corresponding to the size of the graphic at the time the confirmation instruction is received by associating the depth of that rearward subject with the smallest size at which the expansion/contraction display means displayed the graphic.
- The user interface control device according to claim 4, wherein, when some subject exists in front of the depth indicated by the selected option within the range occupied by the graphic, the depth determination means associates the depth of that forward subject with the largest size at which the expansion/contraction display means displayed the graphic, and, when no subject exists in front of the depth indicated by the selected option within that range, associates a predetermined depth in front of the depth indicated by the selected option with that largest size, thereby determining the depth corresponding to the size of the graphic at the time the confirmation instruction is received.
- The user interface control device according to claim 4, further comprising:
shift amount acquisition means for calculating the parallax that produces a stereoscopic effect at the determined depth and acquiring a shift amount by converting the parallax into a number of pixels; and
image composition means for compositing the graphic into the range specified by the planar-position specifying means in the one viewpoint image, and into the range obtained by shifting that range horizontally by the shift amount in the other viewpoint image constituting the stereoscopic image.
- The user interface control device according to claim 1, wherein the viewpoint-image depth acquisition means acquires the depth of the subject by stereo matching between the one viewpoint image and the other viewpoint image constituting the stereoscopic image.
- The user interface control device according to claim 1, further comprising:
region division means for dividing the one viewpoint image into a plurality of regions whose depths for stereoscopic display differ from those of adjacent regions by more than a threshold;
region presentation means for presenting the plurality of divided regions; and
region reception means for receiving a selection of one of the presented regions,
wherein the planar-position specifying means specifies the range occupied by the graphic so as to include at least part of the selected region.
- The user interface control device according to claim 10, wherein the region presentation means presents the plurality of divided regions by displaying adjacent regions in different colors.
- The user interface control device according to claim 10, wherein the region presentation means presents the plurality of divided regions by adding different text to each region and displaying it.
- The user interface control device according to claim 10, wherein, in the division of the one viewpoint image by the region division means, the boundary of each region is specified by extracting edges at which the luminance changes sharply between pixels of the one viewpoint image, or intersections of such edges, and the depth for stereoscopic display is the per-pixel depth obtained by stereo matching between the one viewpoint image and the other viewpoint image constituting the stereoscopic image.
- A user interface control method for controlling a user interface for setting the depth in the depth direction at which a graphic is placed when the graphic is composited into a stereoscopic image, the method comprising:
a planar-position specifying step of specifying the range occupied by the graphic when the graphic is placed on one viewpoint image constituting the stereoscopic image;
a viewpoint-image depth acquisition step of acquiring, within the specified range, the depth of a subject appearing in the one viewpoint image; and
a presentation step of presenting an option indicating the acquired depth and an option indicating another depth, different from the acquired depth, at which the graphic can be placed.
- A computer program for controlling a user interface for setting the depth in the depth direction at which a graphic is placed when the graphic is composited into a stereoscopic image, the program causing a computer to execute:
a planar-position specifying step of specifying the range occupied by the graphic when the graphic is placed on one viewpoint image constituting the stereoscopic image;
a viewpoint-image depth acquisition step of acquiring, within the specified range, the depth of a subject appearing in the one viewpoint image; and
a presentation step of presenting an option indicating the acquired depth and an option indicating another depth, different from the acquired depth, at which the graphic can be placed.
- An integrated circuit of a user interface control device that provides a user interface for setting the depth in the depth direction at which a graphic is placed when the graphic is composited into a stereoscopic image, the integrated circuit comprising:
planar-position specifying means for specifying the range occupied by the graphic when the graphic is placed on one viewpoint image constituting the stereoscopic image;
viewpoint-image depth acquisition means for acquiring, within the specified range, the depth of a subject appearing in the one viewpoint image; and
presentation means for presenting an option indicating the acquired depth and an option indicating another depth, different from the acquired depth, at which the graphic can be placed.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2012800020451A CN103168316A (zh) | 2011-10-13 | 2012-08-10 | 用户界面控制装置、用户界面控制方法、计算机程序以及集成电路 |
US13/808,145 US9791922B2 (en) | 2011-10-13 | 2012-08-10 | User interface control device, user interface control method, computer program and integrated circuit |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2011226092 | 2011-10-13 | ||
JP2011-226092 | 2011-10-13 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2013054462A1 true WO2013054462A1 (ja) | 2013-04-18 |
Family
ID=48081534
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2012/005109 WO2013054462A1 (ja) | 2011-10-13 | 2012-08-10 | ユーザーインタフェース制御装置、ユーザーインタフェース制御方法、コンピュータプログラム、及び集積回路 |
Country Status (4)
Country | Link |
---|---|
US (1) | US9791922B2 (ja) |
JP (1) | JPWO2013054462A1 (ja) |
CN (1) | CN103168316A (ja) |
WO (1) | WO2013054462A1 (ja) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2015084150A (ja) * | 2013-10-25 | 2015-04-30 | セイコーエプソン株式会社 | 頭部装着型表示装置および頭部装着型表示装置の制御方法 |
JP2015158743A (ja) * | 2014-02-21 | 2015-09-03 | キヤノン株式会社 | 表示制御装置及び表示制御方法 |
US11054650B2 (en) | 2013-03-26 | 2021-07-06 | Seiko Epson Corporation | Head-mounted display device, control method of head-mounted display device, and display system |
JP7428687B2 (ja) | 2021-11-08 | 2024-02-06 | 株式会社平和 | 遊技機 |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10579904B2 (en) | 2012-04-24 | 2020-03-03 | Stmicroelectronics S.R.L. | Keypoint unwarping for machine vision applications |
GB2511526A (en) | 2013-03-06 | 2014-09-10 | Ibm | Interactor for a graphical object |
JP5834253B2 (ja) | 2013-03-27 | 2015-12-16 | パナソニックIpマネジメント株式会社 | 画像処理装置、画像処理方法、及び画像処理プログラム |
JP5849206B2 (ja) * | 2013-03-27 | 2016-01-27 | パナソニックIpマネジメント株式会社 | 画像処理装置、画像処理方法、及び画像処理プログラム |
US20150052460A1 (en) * | 2013-08-13 | 2015-02-19 | Qualcomm Incorporated | Method for seamless mobile user experience between outdoor and indoor maps |
CN104469338B (zh) * | 2013-09-25 | 2016-08-17 | 联想(北京)有限公司 | 一种控制方法和装置 |
KR20150101915A (ko) * | 2014-02-27 | 2015-09-04 | 삼성전자주식회사 | 3차원 gui 화면의 표시 방법 및 이를 수행하기 위한 디바이스 |
US20160165207A1 (en) * | 2014-12-03 | 2016-06-09 | Kabushiki Kaisha Toshiba | Electronic device, method, and computer program product |
KR102423175B1 (ko) | 2017-08-18 | 2022-07-21 | 삼성전자주식회사 | 심도 맵을 이용하여 이미지를 편집하기 위한 장치 및 그에 관한 방법 |
CN109542293B (zh) * | 2018-11-19 | 2020-07-31 | 维沃移动通信有限公司 | 一种菜单界面设置方法及移动终端 |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004145832A (ja) * | 2002-08-29 | 2004-05-20 | Sharp Corp | コンテンツ作成装置、コンテンツ編集装置、コンテンツ再生装置、コンテンツ作成方法、コンテンツ編集方法、コンテンツ再生方法、コンテンツ作成プログラム、コンテンツ編集プログラム、および携帯通信端末 |
JP2005078424A (ja) * | 2003-09-01 | 2005-03-24 | Omron Corp | 写真シール作成装置および写真シール作成方法 |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8213711B2 (en) * | 2007-04-03 | 2012-07-03 | Her Majesty The Queen In Right Of Canada As Represented By The Minister Of Industry, Through The Communications Research Centre Canada | Method and graphical user interface for modifying depth maps |
US20090219383A1 (en) * | 2007-12-21 | 2009-09-03 | Charles Gregory Passmore | Image depth augmentation system and method |
JP4955596B2 (ja) * | 2008-03-21 | 2012-06-20 | 富士フイルム株式会社 | 画像出力方法、装置およびプログラム |
WO2010064774A1 (ko) * | 2008-12-02 | 2010-06-10 | (주)엘지전자 | 3차원 영상신호 전송 방법과, 3차원 영상표시 장치 및 그에 있어서의 신호 처리 방법 |
KR101729023B1 (ko) * | 2010-10-05 | 2017-04-21 | 엘지전자 주식회사 | 휴대 단말기 및 그 동작 제어방법 |
-
2012
- 2012-08-10 CN CN2012800020451A patent/CN103168316A/zh active Pending
- 2012-08-10 US US13/808,145 patent/US9791922B2/en not_active Expired - Fee Related
- 2012-08-10 WO PCT/JP2012/005109 patent/WO2013054462A1/ja active Application Filing
- 2012-08-10 JP JP2012556728A patent/JPWO2013054462A1/ja not_active Ceased
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004145832A (ja) * | 2002-08-29 | 2004-05-20 | Sharp Corp | コンテンツ作成装置、コンテンツ編集装置、コンテンツ再生装置、コンテンツ作成方法、コンテンツ編集方法、コンテンツ再生方法、コンテンツ作成プログラム、コンテンツ編集プログラム、および携帯通信端末 |
JP2005078424A (ja) * | 2003-09-01 | 2005-03-24 | Omron Corp | 写真シール作成装置および写真シール作成方法 |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11054650B2 (en) | 2013-03-26 | 2021-07-06 | Seiko Epson Corporation | Head-mounted display device, control method of head-mounted display device, and display system |
JP2015084150A (ja) * | 2013-10-25 | 2015-04-30 | セイコーエプソン株式会社 | 頭部装着型表示装置および頭部装着型表示装置の制御方法 |
JP2015158743A (ja) * | 2014-02-21 | 2015-09-03 | キヤノン株式会社 | 表示制御装置及び表示制御方法 |
US10244196B2 (en) | 2014-02-21 | 2019-03-26 | Canon Kabushiki Kaisha | Display control apparatus and display control method |
JP7428687B2 (ja) | 2021-11-08 | 2024-02-06 | 株式会社平和 | 遊技機 |
Also Published As
Publication number | Publication date |
---|---|
CN103168316A (zh) | 2013-06-19 |
JPWO2013054462A1 (ja) | 2015-03-30 |
US9791922B2 (en) | 2017-10-17 |
US20130293469A1 (en) | 2013-11-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2013054462A1 (ja) | ユーザーインタフェース制御装置、ユーザーインタフェース制御方法、コンピュータプログラム、及び集積回路 | |
US11756223B2 (en) | Depth-aware photo editing | |
JP4966431B2 (ja) | 画像処理装置 | |
WO2020192706A1 (zh) | 物体三维模型重建方法及装置 | |
JP7387202B2 (ja) | 3次元顔モデル生成方法、装置、コンピュータデバイス及びコンピュータプログラム | |
CN111541907B (zh) | 物品显示方法、装置、设备及存储介质 | |
KR102461232B1 (ko) | 화상 처리 방법 및 장치, 전자 디바이스, 및 저장 매체 | |
JP2011509451A (ja) | 画像データのセグメント化 | |
JP7387434B2 (ja) | 画像生成方法および画像生成装置 | |
JP5755571B2 (ja) | 仮想視点画像生成装置、仮想視点画像生成方法、制御プログラム、記録媒体、および立体表示装置 | |
TWI502546B (zh) | 推擠一模型通過二維場景的系統、方法和電腦程式商品 | |
WO2015156149A1 (ja) | 画像処理装置および画像処理方法 | |
US9767580B2 (en) | Apparatuses, methods, and systems for 2-dimensional and 3-dimensional rendering and display of plenoptic images | |
US11043019B2 (en) | Method of displaying a wide-format augmented reality object | |
CN111652792B (zh) | 图像的局部处理、直播方法、装置、设备和存储介质 | |
US20130266209A1 (en) | Image processing apparatus and image processing method | |
KR101632514B1 (ko) | 깊이 영상 업샘플링 방법 및 장치 | |
JP5464129B2 (ja) | 画像処理装置および視差情報生成装置 | |
KR101451236B1 (ko) | 3차원 영상 변환 방법 및 그 장치 | |
US8817081B2 (en) | Image processing apparatus, image processing method, and program | |
KR20110082907A (ko) | 입체 영상 변환 방법 및 입체 영상 변환 장치 | |
WO2014001062A2 (en) | Device for generating a depth map | |
KR101626679B1 (ko) | 2d 영상으로부터 입체 영상을 생성하는 방법 및 이를 기록한 기록 매체 | |
EP3367328A1 (en) | A method, apparatus and computer program product for generating composite images with three-dimensional effects and reducing pole contraction lines | |
WO2012096065A1 (ja) | 視差画像表示装置及び視差画像表示方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase |
Ref document number: 2012556728 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 13808145 Country of ref document: US |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 12839865 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 12839865 Country of ref document: EP Kind code of ref document: A1 |