CN103168316A - User interface control device, user interface control method, computer program, and integrated circuit - Google Patents


Info

Publication number
CN103168316A
CN103168316A, CN2012800020451A, CN201280002045A
Authority
CN
China
Prior art keywords
depth
image
user interface
subject
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2012800020451A
Other languages
Chinese (zh)
Inventor
箱田航太郎
山地治
绪方伸辅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Panasonic Intellectual Property Corp of America
Original Assignee
Matsushita Electric Industrial Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Matsushita Electric Industrial Co Ltd filed Critical Matsushita Electric Industrial Co Ltd
Publication of CN103168316A publication Critical patent/CN103168316A/en
Pending legal-status Critical Current

Classifications

    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013: Eye tracking input arrangements
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • H04N 13/128: Adjusting depth or disparity
    • H04N 13/156: Mixing image signals
    • H04N 13/183: On-screen display [OSD] information, e.g. subtitles or menus
    • H04N 13/275: Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
    • H04N 13/356: Image reproducers having separate monoscopic and stereoscopic modes
    • H04N 13/398: Synchronisation or control of image reproducers
    • H04N 2013/0081: Depth or disparity estimation from stereoscopic image signals
    • H04N 2013/0092: Image segmentation from stereoscopic image signals


Abstract

Provided is a user interface control device that realizes a GUI with which the depth of a graphic can be set easily when the graphic is composited into a stereoscopic image. The user interface control device is provided with a graphic information acquisition unit (105) that identifies the range occupied by a graphic when the graphic is placed on one viewpoint image constituting a stereoscopic image, a depth information analysis unit (106) that acquires the depth of a subject appearing in the one viewpoint image within the identified range, and a depth setting presentation unit (107) that presents an option indicating the acquired depth and an option indicating another depth, different from the acquired depth, at which the graphic can be placed.

Description

User interface control device, user interface control method, computer program and integrated circuit
Technical field
The present invention relates to user interface technology, and in particular to the display of a user interface used when editing a stereoscopic image.
Background art
In recent years, stereoscopic image display technology exploiting binocular retinal disparity has attracted much attention. This technology works as follows: because a person perceives depth from the difference between the images formed on the left and right retinas, independent images having parallax (a left-eye image and a right-eye image) are made to enter the viewer's left and right eyes, so that the object images formed on the two retinas are offset from each other and depth is perceived. Devices capable of shooting stereoscopic photographs are expected to grow beyond digital cameras to smartphones and other equipment in the future.
With the spread of such stereoscopic display technology, attempts have been made to offer new user experiences beyond the existing pattern of shooting and viewing stereoscopic images, by letting users edit the stereoscopic images they have shot. For example, Patent Document 1 discloses a technique for compositing various graphics, such as balloons and characters, into a stereoscopic image shot with a camera or the like. Specifically, in the technique disclosed in Patent Document 1, the relative sizes and the front-to-back order of a plurality of graphics are decided from the depth information (depth map) of the stereoscopic image at the positions where the graphics are placed, thereby producing a three-dimensional visual effect.
Prior art documents
Patent documents
Patent Document 1: Japanese Unexamined Patent Application Publication No. 2009-230431
Summary of the invention
Problems to be solved by the invention
However, the technique disclosed in Patent Document 1 provides no means for the user to specify the depth information of a graphic. This raises the problem that, when editing a photograph, the user cannot decide at what depth the graphic should be placed, so the edit the user intends cannot be achieved.
Moreover, when editing a stereoscopic image, the editing operation is not necessarily performed while the stereoscopic image is displayed stereoscopically; it is sometimes performed while one of the viewpoint images constituting the stereoscopic image is displayed two-dimensionally. For example, when editing a stereoscopic image in an environment where the only available display device cannot show images stereoscopically, a means is required by which the depth information of the graphic to be composited can be specified easily even while a viewpoint image is displayed two-dimensionally.
In view of these problems, an object of the present invention is to provide a user interface control device, a user interface control method, a computer program for controlling a user interface, and an integrated circuit that realize a GUI with which the depth of a graphic can be set easily when the graphic is composited into a stereoscopic image.
Means for solving the problems
To achieve the above object, a user interface control device according to one aspect of the present invention provides a user interface when a graphic is composited into a stereoscopic image, the user interface being used to set the depth at which the graphic is placed in the depth direction. The user interface control device comprises: a plane-position determining unit that, when the graphic is placed on one viewpoint image constituting the stereoscopic image, determines the range occupied by the graphic; a viewpoint-image depth acquiring unit that acquires the depth of the subject appearing in the one viewpoint image within the determined range; and a presenting unit that presents an option indicating the acquired depth and an option indicating another depth, different from the acquired depth, at which the graphic can be placed.
Effects of the invention
With the configuration described in the means for solving the problems, the user interface control device according to the above aspect presents the user with options indicating depths at which the graphic can be placed, based on the distribution of the depth information at the position where the graphic is placed, so the depth of the graphic can be set easily.
Furthermore, in the user interface control device according to the above aspect, the depth is set by selecting an option, so the depth information of the composited graphic can be set easily regardless of whether the display format during editing is stereoscopic or two-dimensional.
Brief description of the drawings
Fig. 1 shows the hardware configuration of a smartphone having the functions of the user interface control device according to the present embodiment.
Fig. 2 shows the configuration of the user interface control device according to Embodiment 1.
Fig. 3 shows the subjects of the stereo image treated as the object of editing in the present embodiment.
Fig. 4 shows a display example of the display 200.
Fig. 5 shows the drawing range and center coordinates of a graphic.
Fig. 6(a) shows an example in which the depth of a graphic part is set in front of two subjects, Fig. 6(b) shows an example in which the depth of the graphic part is set to the same depth as the nearer subject, and Fig. 6(c) shows an example in which the depth of the graphic part is set to a depth intermediate between the two subjects.
Fig. 7 shows the depth distribution of the viewpoint image at the placement position of the graphic part.
Fig. 8 is a display example of a depth-setting pop-up menu containing a plurality of options.
Fig. 9 illustrates the depth adjustment of the graphic part after a depth-setting menu item is selected.
Fig. 10 is a flowchart showing the flow of the depth information generation processing in the depth information calculating section 103.
Fig. 11 is a flowchart showing the flow of the depth-setting pop-up menu display processing.
Fig. 12 is a flowchart showing the flow of the processing that extracts candidate depths at which the graphic part can be placed.
Fig. 13 is a flowchart showing the flow of the graphic part depth adjustment processing after a depth-setting menu item is selected.
Fig. 14 is a flowchart showing the details of the processing of steps S33 and S34.
Fig. 15 is a flowchart showing the flow of the processing that generates a stereoscopic image into which the graphic part has been composited.
Fig. 16 illustrates the calculation of the pixel displacement for parallax from the decided depth information.
Fig. 17 is a flowchart showing the flow of the graphic part compositing processing.
Fig. 18 is a display example of a depth-setting pop-up menu containing two options.
Fig. 19 shows the configuration of the user interface control device 300 according to Embodiment 2.
Fig. 20(a) shows the arrangement of the subjects at the time of shooting, and Fig. 20(b) shows the viewpoint image.
Fig. 21(a) is a screen example in which the subject images extracted from the image are displayed in different colors according to the depth distribution, and Fig. 21(b) is a screen example in which the subject images extracted from the image are displayed with numbers attached.
Fig. 22 is a flowchart showing the flow of the stereoscopic image editing processing in the user interface control device 300.
Embodiments
(Overview of one aspect of the invention)
An object of the present invention is to provide a user interface control device that provides a GUI when a graphic is composited into a stereoscopic image, a user interface control method, a computer program for controlling a user interface, and an integrated circuit; with this GUI, the depth at which the composited graphic is displayed can be set easily.
A user interface control device according to a 1st aspect of the present invention provides a user interface when a graphic is composited into a stereoscopic image, the user interface being used to set the depth at which the graphic is placed in the depth direction. The user interface control device comprises: a plane-position determining unit that, when the graphic is placed on one viewpoint image constituting the stereoscopic image, determines the range occupied by the graphic; a viewpoint-image depth acquiring unit that acquires the depth of the subject appearing in the one viewpoint image within the determined range; and a presenting unit that presents an option indicating the acquired depth and an option indicating another depth, different from the acquired depth, at which the graphic can be placed.
With the above configuration, the user is presented with options indicating depths at which the graphic can be placed, based on the distribution of the depth information at the position where the graphic is placed, so the user can set the depth of the graphic easily by selecting any of the options. Moreover, because the depth is set by selecting an option, the depth information of the composited graphic can be set easily regardless of whether the display format during editing is stereoscopic or two-dimensional.
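As a concrete illustration of how such options could be derived from the depth distribution, the following Python sketch builds an option list from a 0-255 depth map. All function and parameter names, and the step constant, are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch: derive depth options for a graphic from the depth
# values found inside the rectangle the graphic will occupy.
def depth_options(depth_map, rect, step_toward_viewer=16):
    """depth_map: 2D list of 0-255 depths (255 = nearest to viewer).
    rect: (x, y, w, h) range the graphic occupies on one viewpoint image.
    Returns two candidates: the nearest subject's depth ("paste onto the
    subject") and a depth slightly nearer the viewer than that subject."""
    x, y, w, h = rect
    region = [depth_map[r][c] for r in range(y, y + h) for c in range(x, x + w)]
    nearest = max(region)  # subject closest to the viewer in that range
    in_front = min(255, nearest + step_toward_viewer)
    return [("on nearest subject", nearest), ("in front of subject", in_front)]

# Tiny 4x4 depth map: left half a near subject (200), right half far (60).
dmap = [[200, 200, 60, 60] for _ in range(4)]
options = depth_options(dmap, (0, 0, 4, 4))
```

Because only the option list depends on the depth map, the same menu can be shown whether the image itself is rendered stereoscopically or two-dimensionally.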
In a 2nd aspect of the present invention, the 1st aspect may be configured such that the option indicating the acquired depth indicates the depth of the subject located nearest the front among the subjects appearing in the range occupied by the graphic, and the option indicating another depth at which the graphic can be placed indicates a depth nearer the front than the depth of that nearest subject.
With the configuration of the 2nd aspect of the present invention described above, the user can easily choose between compositing the graphic so that it is pasted onto the subject and compositing it so that it is placed nearer the front than the subject.
In a 3rd aspect of the present invention, the 2nd aspect may be configured such that, when a subject other than the nearest subject also appears in the determined range of the one viewpoint image and the depths of the two subjects differ by more than a threshold, the presenting unit also presents an option indicating a depth intermediate between the depths of the two subjects.
With the configuration of the 3rd aspect of the present invention described above, in addition to the composites selectable in the 2nd aspect of the invention, the user can also easily choose a composite in which the graphic is placed between two subjects of different depths.
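The intermediate-depth option can be sketched as a small rule on the depths found in the graphic's range. This is a minimal illustration under assumed names and an assumed threshold, not the patent's implementation.

```python
def add_intermediate_option(depths_in_range, threshold=50):
    """Return candidate depths for the graphic: always the nearest
    subject's depth; additionally the midpoint between the nearest and
    farthest subjects when their depths differ by more than `threshold`
    (the 3rd-aspect condition). Depths are on a 0-255 scale."""
    near, far = max(depths_in_range), min(depths_in_range)
    options = [near]
    if near - far > threshold:
        options.append((near + far) // 2)
    return options
```

With two well-separated subjects (e.g. depths 200 and 60) the midpoint 130 is offered; when the subjects are close in depth, no intermediate option appears.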
In a 4th aspect of the present invention, the 3rd aspect may be configured to further comprise:
a receiving unit that accepts a selection of any of the plurality of options; a scaling display unit that, after the selection of the option is accepted, repeatedly displays the graphic while changing its size until a decision instruction is accepted from the user; and a depth deciding unit that, when the graphic is displayed enlarged at the point in time when the decision instruction is accepted, decides a depth nearer the front than the depth indicated by the selected option as the depth at which the graphic is placed, and, when the graphic is displayed reduced at that point in time, decides a depth farther back than the depth indicated by the selected option as the depth at which the graphic is placed.
With the configuration of the 4th aspect of the present invention described above, the placement depth of the graphic can be adjusted not only to the depths indicated by the plurality of options but also to depths nearer the front or farther back than the depth indicated by the selected option. This increases the freedom in setting the placement depth of the graphic, and thus improves the user's convenience. In addition, although the graphic is displayed with its size changing during this adjustment, no stereoscopic display is required, so the depth of the graphic can be set easily even in an environment where the image cannot be displayed stereoscopically.
In a 5th aspect of the present invention, the 4th aspect may be configured such that, when the option indicating the depth intermediate between the depths of two subjects is selected, the depth deciding unit associates the depth of the nearer of the two subjects with the largest size at which the scaling display unit displays the graphic, and the depth of the farther of the two subjects with the smallest size at which the scaling display unit displays the graphic, thereby deciding the depth corresponding to the size of the graphic at the point in time when the decision instruction is accepted.
With the configuration of the 5th aspect of the present invention described above, the span of depths between the two subjects can be displayed in association with the display size of the graphic. The user can therefore set the placement depth of the graphic between the two subjects intuitively.
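One plausible realization of this size-to-depth association is a linear interpolation between the two subjects' depths, with the smallest display size mapped to the farther subject and the largest to the nearer one. The function below is an assumed sketch, not the patent's formula.

```python
def size_to_depth(size, size_min, size_max, depth_far, depth_near):
    """Linearly map the graphic's current display size to a depth
    between two subjects: size_min -> depth of the farther subject,
    size_max -> depth of the nearer subject (0-255 depth scale)."""
    t = (size - size_min) / (size_max - size_min)
    return round(depth_far + t * (depth_near - depth_far))
```

For example, with subjects at depths 60 (far) and 200 (near) and display sizes ranging from 50 to 150 pixels, a size of 100 lands exactly between the two subjects.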
In a 6th aspect of the present invention, the 4th aspect may be configured such that, when some subject lies farther back than the depth indicated by the selected option within the range occupied by the graphic, the depth deciding unit associates the depth of that farther subject with the smallest size at which the scaling display unit displays the graphic, thereby deciding the depth corresponding to the size of the graphic at the point in time when the decision instruction is accepted.
With the configuration of the 6th aspect of the present invention described above, the span of depths from the depth indicated by the selected option back to the farther subject can be displayed in association with the display size of the graphic. The user can therefore set the placement depth of the graphic intuitively between the depth indicated by the selected option and the farther subject.
In a 7th aspect of the present invention, the 4th aspect may be configured such that, when some subject lies nearer the front than the depth indicated by the selected option within the range occupied by the graphic, the depth deciding unit associates the depth of that nearer subject with the largest size at which the scaling display unit displays the graphic; when no subject lies nearer the front than the depth indicated by the selected option within the range occupied by the graphic, the depth deciding unit associates a prescribed depth nearer the front than the depth indicated by the selected option with the largest size at which the scaling display unit displays the graphic; in either case, the depth corresponding to the size of the graphic at the point in time when the decision instruction is accepted is thereby decided.
With the configuration of the 7th aspect of the present invention described above, the span of depths from the depth indicated by the selected option forward to the nearer subject, or forward to the prescribed depth, can be displayed in association with the display size of the graphic. The user can therefore set the placement depth of the graphic intuitively between the depth indicated by the selected option and the nearer depth.
In an 8th aspect of the present invention, the 4th aspect may be configured to further comprise: a displacement acquiring unit that calculates the parallax that produces the stereoscopic effect at the decided depth, and obtains a displacement by converting this parallax into a number of pixels; and an image compositing unit that composites the graphic into the one viewpoint image within the range determined by the plane-position determining unit, and composites the graphic into the other viewpoint image constituting the stereoscopic image within a range displaced horizontally by the displacement from the range determined by the plane-position determining unit.
With the configuration of the 8th aspect of the present invention described above, a stereoscopic image in which the graphic is composited at the decided depth can be generated.
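The depth-to-displacement conversion and the compositing positions can be sketched as follows. The linear parallax model and all constants are illustrative assumptions; a real device would derive the shift from viewing geometry.

```python
def depth_to_shift(depth, max_shift_px=16, screen_depth=128):
    """Map a 0-255 depth to a signed horizontal pixel displacement.
    depth == screen_depth gives zero parallax (on the screen plane);
    nearer depths give larger shifts. Linear model, assumed constants."""
    return round(max_shift_px * (depth - screen_depth) / 127)

def graphic_positions(x, y, depth):
    """Positions at which to composite the graphic onto the left and
    right viewpoint images: the left copy stays at (x, y), the right
    copy is displaced horizontally by the computed parallax."""
    s = depth_to_shift(depth)
    return (x, y), (x - s, y)
```

Pasting the graphic at these two positions, one per viewpoint image, is what makes it appear at the chosen depth when the pair is viewed stereoscopically.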
In a 9th aspect of the present invention, the 1st aspect may be configured such that the viewpoint-image depth acquiring unit acquires the depth of the subject by stereo matching between the one viewpoint image and the other viewpoint image constituting the stereoscopic image.
With the configuration of the 9th aspect of the present invention described above, even a stereoscopic image for which no depth information such as a depth map has been prepared in advance can be treated as the object of editing.
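As a toy illustration of area-based stereo matching, the sketch below finds, for one pixel of the left image, the horizontal offset (disparity) whose window minimizes the sum of absolute differences in the right image. It is deliberately simplified; production matchers handle occlusion, subpixel accuracy, and smoothness.

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized windows."""
    return sum(abs(a - b) for ra, rb in zip(block_a, block_b)
               for a, b in zip(ra, rb))

def best_disparity(left, right, x, y, win=1, max_d=3):
    """For pixel (x, y) of the left image, search horizontally in the
    right image for the window with minimal SAD; the offset found is
    the disparity, from which depth follows by triangulation."""
    def window(img, cx):
        return [row[cx - win:cx + win + 1] for row in img[y - win:y + win + 1]]
    ref = window(left, x)
    scores = [(sad(ref, window(right, x - d)), d)
              for d in range(min(max_d, x - win) + 1)]
    return min(scores)[1]
```

On a synthetic pair in which a feature appears one pixel further left in the right image, the matcher recovers a disparity of 1 for that pixel.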
In a 10th aspect of the present invention, the 1st aspect may be configured to further comprise: a region dividing unit that divides the one viewpoint image into a plurality of regions whose depths during stereoscopic display differ from those of adjacent regions by more than a threshold; a region presenting unit that presents the divided regions; and a region receiving unit that accepts a selection of any of the presented regions, wherein the plane-position determining unit determines the range occupied by the graphic so as to include at least part of the selected region.
With the configuration of the 10th aspect of the present invention described above, the regions into which the two-dimensionally displayed viewpoint image is divided by depth are presented to the user, so the user can easily set the planar position of the graphic. For example, even in an image in which a plurality of subjects are photographed close to one another, specifying which subject the graphic is to overlap becomes easy, because the range in which the graphic is placed is selected region by region.
In an 11th aspect of the present invention, the 10th aspect may be configured such that the region presenting unit presents the plurality of divided regions by displaying adjacent regions in different colors.
In a 12th aspect of the present invention, the 10th aspect may be configured such that the region presenting unit presents the plurality of divided regions by displaying different text attached to each region.
The configurations of the 11th and 12th aspects of the present invention described above have the effect of making the regions of different depth in the viewpoint image easier to grasp visually.
In a 13th aspect of the present invention, the 10th aspect may be configured such that, in dividing the one viewpoint image, the region dividing unit determines the boundary of each region from edges at which the brightness changes sharply between pixels of the one viewpoint image, or from intersections of such edges, using as the depth of each pixel during stereoscopic display the depth obtained by stereo matching between the one viewpoint image and the other viewpoint image constituting the stereoscopic image.
With the configuration of the 13th aspect of the present invention described above, in a viewpoint image in which a plurality of subjects are captured partly overlapping one another, edges arise at the boundaries between subjects; therefore, when edges are used as region boundaries, the subject over which the graphic is to be placed can be specified by specifying one of the regions.
Embodiments of a user interface control device according to one aspect of the present invention are described below with reference to the drawings.
(embodiment 1)
Fig. 1 shows the hardware configuration of a smartphone having the functions of the user interface control device according to the present embodiment. The smartphone shown in Fig. 1 has a camera 10, a speaker 20, a GPS 30, a sensor 40, a touch panel 50, a microphone 60, a recording medium 70, a processing unit 100, and a display 200.
The camera 10 is a stereo camera that captures a stereoscopic image consisting of two viewpoint images. The captured stereoscopic image is recorded on the recording medium 70.
The recording medium 70 is a readable and writable nonvolatile recording medium built into the smartphone, realized by a hard disk, semiconductor memory, or the like.
The processing unit 100 has a memory such as a RAM and a processor such as a CPU, and controls functions such as telephone calls and the shooting and editing of stereoscopic images by the CPU executing programs recorded on the recording medium 70. The functions of the user interface control device according to the present embodiment are likewise realized by the processing unit 100 executing a program recorded on the recording medium 70.
Fig. 2 shows the configuration of the user interface control device according to the present embodiment. This user interface control device, which provides a GUI supporting the user's editing of a stereoscopic image consisting of two viewpoint images, is incorporated into various electrical appliances for use. Besides smartphones, devices incorporating the user interface control device include general-purpose computers such as PCs (Personal Computers) and communication terminals such as PDAs (Personal Digital Assistants), tablets, and mobile phones.
As shown in Fig. 2, the user interface control device comprises an operation input receiving section 101, a control section 102, a depth information calculating section 103, a graphic information acquiring section 105, a depth information analyzing section 106, a depth-setting presenting section 107, a stereo image generating section 108, and an output section 109.
The functions (described later) of the operation input receiving section 101, control section 102, depth information calculating section 103, graphic information acquiring section 105, depth information analyzing section 106, depth-setting presenting section 107, stereo image generating section 108, and output section 109 are recorded in advance as programs, for example, on the recording medium 70 shown in Fig. 1. In the present embodiment, the programs corresponding to these sections are loaded into the RAM in the processing unit 100 and executed by the CPU in the processing unit 100, so that the functions are realized by hardware resources (the cooperation of the CPU and the programs on the RAM).
In the example above, the configuration in which a program recorded in advance on the recording medium 70 is loaded into the RAM in the processing unit 100 and executed by the CPU in the processing unit 100 was described, but the program may instead be recorded in advance in the RAM in the processing unit 100. When the configuration of recording the program in advance in the RAM in the processing unit 100 is adopted, the program need not be recorded on the recording medium 70.
The depth information storing section 104 is realized using part of the recording area of the recording medium 70.
<Operation input receiving section 101>
The operation input receiving section 101 has the function of accepting user operations input with a pointing device such as a touch panel or mouse.
Specifically, the user operations accepted in the present embodiment include a drag operation that places a graphic for retouching a photograph, and a click operation that selects an item or state indicated with the pointing device. By accepting a click operation that selects one of a plurality of options displayed on the screen, the operation input receiving section 101 functions as the receiving unit.
As shown in Fig. 4, the display 200 shows: a left-eye image 1 as the one viewpoint image constituting the stereoscopic image; a graphic part display area 2 showing template images corresponding to the various graphic parts 2a-2d to be composited into the stereoscopic image; a pointer 3 indicating the position pointed to by the pointing device; and so on. For example, the operation of placing a graphic part in the photograph is realized by dragging one of the graphic parts 2a-2d shown in the graphic part display area 2 and dropping it at an arbitrary position on the left-eye image 1.
<control part 102>
The control part 102 has the function of controlling the processing of the present embodiment according to the input content received by the operation input receiving portion 101.
<depth information calculating section 103>
The depth information calculating section 103 realizes part of the function of the viewpoint-image depth obtaining unit by generating, from the stereoscopic image, depth information (a depth map) that indicates, for each pixel of the left-eye image, the position of the subject in the depth direction. Specifically, a corresponding-point search is first performed for each pixel between the left-eye image and the right-eye image constituting the stereoscopic image. Then, from the positional relationship of the corresponding points in the left-eye image and the right-eye image, the distance of the subject in the depth direction is calculated based on the principle of triangulation. The depth information is a grayscale image that represents the depth of each pixel with 8-bit luminance, and the depth information calculating section 103 converts the calculated distance of the subject in the depth direction into a value of 256 gray levels from 0 to 255. The corresponding-point search is roughly divided into the following two kinds: a region-based matching method, in which a small region is set around a point of interest and matching is performed based on the gradation pattern of the pixel values in that region; and a feature-based matching method, in which features such as edges are extracted from the images and correspondence is established between those features; either method may be used. A stereoscopic image is a pair of images obtained by imaging the same field of view from different viewpoints; in Embodiment 1, image data of a stereoscopic image captured by the camera 10 and recorded in the recording medium 70 is used. The stereoscopic image is not limited to actually photographed images; it may be CG (Computer Graphics) or the like created by assuming different virtual viewpoints.
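As an illustration of the region-based matching and triangulation described above, the following Python sketch finds the disparity of one left-eye pixel by SAD block matching against the right-eye image and converts it to a depth; the window size, search range, and camera parameters (focal length in pixels, baseline in metres) are assumptions for illustration, not values given in the embodiment.

```python
import numpy as np

def block_match_disparity(left, right, y, x, window=2, max_disp=16):
    """Find the disparity of pixel (y, x) of the left-eye image by
    comparing a small gradation-pattern window against the right-eye
    image with a sum-of-absolute-differences (SAD) cost."""
    h, w = left.shape
    y0, y1 = max(y - window, 0), min(y + window + 1, h)
    x0, x1 = max(x - window, 0), min(x + window + 1, w)
    patch = left[y0:y1, x0:x1].astype(np.int32)
    best_d, best_cost = 0, None
    for d in range(0, max_disp + 1):
        if x0 - d < 0:
            break
        cand = right[y0:y1, x0 - d:x1 - d].astype(np.int32)
        cost = np.abs(patch - cand).sum()
        if best_cost is None or cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

def depth_from_disparity(disparity, focal_length_px, baseline_m):
    """Triangulation: depth Z = f * B / d for a disparity d > 0."""
    if disparity <= 0:
        return float('inf')  # zero disparity: effectively at infinity
    return focal_length_px * baseline_m / disparity
```

In practice a feature-based method could replace the SAD window; either way the output of this step is one disparity (and hence one depth) per left-eye pixel.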
In the present embodiment, as shown in Figure 3, the processing is described taking as the image to be processed a stereoscopic image, captured with a stereoscopic camera, of a scene in which one person stands in front of a bus.
<depth information preservation section 104>
The depth information preservation section 104 is realized using a part of the recording area of the recording medium 70, and has the function of storing the depth information calculated by the depth information calculating section 103 in the recording area of the recording medium 70.
<graphical information obtaining section 105>
The graphical information obtaining section 105 has a function, as a planar position determining unit, of obtaining the coordinates of the region occupied on the left-eye image by the graphic placed by the user.
Specifically, when a graphic placement operation is received by the operation input receiving portion 101, in the X-Y coordinate system of the left-eye image whose origin is the upper-left corner of the left-eye image, the pointer position at the time the graphic is dropped is obtained as the center coordinate (x_g, y_g) of the placed graphic part, and the upper-left corner coordinate (x_1, y_1) and the lower-right corner coordinate (x_2, y_2) of the rectangular frame surrounding the graphic part, shown in Figure 5, are calculated as the placement range occupied by the graphic part in the above X-Y coordinate system.
In addition, the graphical information obtaining section 105 holds, for each graphic part, the relative values of the upper-left corner coordinate and the lower-right corner coordinate with respect to the center coordinate; since the upper-left corner coordinate (x_1, y_1) and the lower-right corner coordinate (x_2, y_2) are calculated from the pointer position and these relative values, the placement range of the graphic part can be obtained easily.
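The placement-range calculation from the drop coordinate and the held relative corner values can be sketched as follows; the function name and tuple layout are hypothetical, chosen only to mirror the description above.

```python
def placement_range(center, rel_top_left, rel_bottom_right):
    """Compute the rectangle surrounding a graphic part from the drop
    coordinate (x_g, y_g) and the per-graphic relative corner offsets
    held by the graphical information obtaining section."""
    xg, yg = center
    dx1, dy1 = rel_top_left       # offset of the upper-left corner
    dx2, dy2 = rel_bottom_right   # offset of the lower-right corner
    return (xg + dx1, yg + dy1), (xg + dx2, yg + dy2)
```

For example, a 60 x 40 graphic dropped at (100, 80) with symmetric offsets yields the corners (70, 60) and (130, 100).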
<depth information analysis unit 106>
The depth information analysis unit 106 has the following functions: obtaining the depth information of the left-eye image within the placement range of the graphic part; and instructing the depth setting prompting part 107 to present to the user options that indicate, in terms of relative position with respect to the subjects, the depths at which the graphic part can be placed.
Specifically, the depth information analysis unit 106 reads, from the depth information preservation section 104 on the recording medium 70, the depth information of the left-eye image within the placement range of the graphic part calculated by the graphical information obtaining section 105, and detects the subjects present in the placement range by analyzing the depth information, thereby functioning as the viewpoint-image depth obtaining unit; it then determines the relative positions at which the graphic part can be placed with respect to the detected subjects.
In the detection of subjects, first, the depth distribution of the left-eye image is analyzed using the pixels within the placement range of the graphic part that pass through the graphic-part center coordinate (x_g, y_g) and are continuous in the horizontal direction (hereinafter referred to as the horizontal pixel group); when the depth difference between neighboring pixels exceeds a threshold Th, it is determined that two different subjects exist on either side of the position where the depth changes by more than Th. When it is determined that subjects exist in the placement range of the graphic part, the depth information analysis unit 106 takes, as candidate depths at which the graphic part can be placed, a depth closer to the front than the shallowest depth between (x_1, y_g) and (x_2, y_g), the same depth as the shallowest depth, and the mean depth of each pair of neighboring pixels whose depths differ by more than the threshold Th, and instructs the depth setting prompting part 107 to present an option corresponding to each of them.
For example, consider the following situation: in the left-eye image shown on the display 200, the graphic part is placed so that part of it overlaps the position corresponding to the head of the person who is the subject. In this case, the possible positions in the depth direction at which the graphic part can be placed fall roughly into: a depth 4a in front of the person who is the subject closest to the front, as in Figure 6(a); the same depth 4b as the person, as in Figure 6(b); and an intermediate depth 4c between the person and the bus, as in Figure 6(c).
In the depth information analysis unit 106, as shown in Figure 7, the depth changes by more than the threshold Th at the boundary between the person and the bus appearing in the left-eye image; it is therefore determined that two subjects exist in the placement range of the graphic part, and the depths at which the graphic part can be placed are decided as follows: a depth closer to the front than the depth at x_1 on the near side is taken as the depth 4a shown in Figure 6(a); the depth at x_1 on the near side is taken as the depth 4b shown in Figure 6(b); and the mean of the depth at x_1 on the near side and the depth at x_2 on the far side is taken as the depth 4c shown in Figure 6(c).
In addition, when three or more subjects are detected in the placement range of one graphic part, the depth information analysis unit 106 takes, for each pair of subjects adjacent in depth, the intermediate depth between those subjects as a depth at which the graphic part can be placed, and instructs the depth setting prompting part 107 to add a corresponding option.
In addition, when the whole graphic part is placed so as to overlap the range in which a single subject image exists in the left-eye image shown on the display 200, setting the depth of the graphic part behind the subject would be meaningless, since the graphic would not appear in the image; therefore, the relative positions at which the graphic part can be placed with respect to the subject are limited to two: the same depth as the subject, and a depth in front of the subject. In such a case, the depth information analysis unit 106 instructs the depth setting prompting part 107 to present the two options corresponding to these two depths.
<depth setting prompting part 107>
The depth setting prompting part 107 internally has a GUI prompting part 111, a flexible display part 112, and a depth determination section 113, and has the functions of controlling the GUI used for setting the depth information of the graphic part and of determining the depth at which the graphic part is placed.
The GUI prompting part 111 functions as a presenting unit: it accepts instructions from the depth information analysis unit 106, generates a GUI image containing the indicated options, and notifies the output part 109 so that the options are drawn as a pop-up menu. In the example shown in Figure 8, the placement range of the flower-shaped graphic part on the left-eye image shown on the screen includes the person's head and the bus behind it; the depth information analysis unit 106 therefore instructs that "front", "paste", and "rear" be used as the options for the depths at which the graphic part can be placed, expressed relative to the person who is the subject closest to the front. Based on this instruction, the depth setting prompting part 107 generates a GUI image of a pop-up menu consisting of these three options, "front", "paste", and "rear". When, for example, the operation of selecting the "front" item from such a menu is received, a depth set closer to the front than the shallowest depth of the left-eye image within the placement range of the graphic part is selected as the placement depth of the graphic part. When the "paste" item (the same depth as the subject closest to the front) is selected from the menu, the shallowest depth of the left-eye image within the placement range of the graphic part is selected as the placement depth, so that the graphic part does not sink into the subject. When the "rear" item is selected from the menu, the intermediate depth between the two subjects present in the placement range of the graphic part, namely the person's head, which is the subject closest to the front, and the bus behind it, is selected as the placement depth of the graphic part.
The flexible display part 112 has the following function as a flexible display unit: when the user selects one of the options from the pop-up menu, as shown in Figure 9, it instructs the output part 109 to draw the graphic part placed on the left-eye image while repeatedly changing its size on the display 200 until the user inputs a decision instruction by a click operation. In this flexible display of the graphic part, the graphic part is repeatedly enlarged and reduced, centered on the graphic-part center coordinate (x_g, y_g), within a range of 2 times to 1/2 times the original size shown in the graphic display part 2.
The depth determination section 113 has the following function as a depth determining unit: when the user's decision operation is input while the graphic part is being repeatedly enlarged and reduced, the depth corresponding to the selected option is taken as a tentative depth position, and the final depth, obtained by adjusting the tentative depth position according to the display size of the graphic part at the time of the decision operation, is determined.
In detail, the display size of the graphic part is associated with the placement depth as follows: displaying the graphic part at its original size is assigned to the depth corresponding to the option selected from the menu; displaying the graphic part enlarged to 2 times is assigned to the depth of the subject located closer to the front than the depth selected from the menu; and displaying the graphic part reduced to 1/2 times is assigned to the depth of the subject located further back than the depth corresponding to the menu. According to this correspondence, the final depth at which the graphic part is placed is calculated from the enlargement/reduction ratio of the graphic part at the time of the decision operation.
However, when the "front" item is selected from the menu, no subject exists closer to the front than the selected depth, so the flexible display part 112 repeatedly enlarges and reduces the graphic part within the range of 1/2 times the original size to the original size. In addition, when "paste" is selected from the menu, a depth further back than the selected depth would mean a depth sunk into the subject, so the flexible display part 112 repeatedly enlarges and reduces the graphic part within the range of the original size to 2 times the original size.
Alternatively, when "front" is selected from the menu, the graphic part may also be repeatedly enlarged and reduced within the range of 2 times to 1/2 times the original size; in that case, the depth determination section 113 assigns the state in which the graphic part is displayed enlarged to 2 times to a position a prescribed depth closer to the front than the depth selected in the menu, so that the depth can still be calculated from the enlargement/reduction ratio at the time of the decision operation.
In this way, by displaying the change in the display size of the graphic part in association with its placement depth, the user can set the depth intuitively.
<stereo-picture generating unit 108>
The stereo-picture generating unit 108 internally has a displacement obtaining section 114 and an image synthesizing section 115, and has the following function: based on the placement depth of the graphic part determined by the depth setting prompting part 107, it composites the graphic part into the photograph with parallax, generating a left-eye image and a right-eye image into which the graphic part has been composited.
The displacement obtaining section 114 has a function as a displacement obtaining unit: it calculates the parallax that produces the stereoscopic effect corresponding to the placement depth of the graphic part, and converts the calculated parallax into a number of pixels, thereby obtaining the displacement.
The image synthesizing section 115 functions as an image synthesis unit: it composites the graphic part into the left-eye image within the placement range of the graphic part, and composites the graphic part into the right-eye image within a range obtained by shifting the placement range of the graphic part horizontally by the displacement calculated by the displacement obtaining section 114, thereby generating a stereoscopic image into which the graphic part has been composited.
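A minimal sketch of the compositing performed by the image synthesizing section 115, assuming single-channel images, a rectangular opaque graphic, and a shift that stays inside the image bounds (no clipping is attempted):

```python
import numpy as np

def composite_graphic(left_img, right_img, graphic, top_left, px_shift):
    """Paste the graphic into the left-eye image at its placement range,
    and into the right-eye image shifted horizontally by px_shift pixels;
    the horizontal offset between the two copies is the parallax that
    places the graphic at the chosen depth."""
    x1, y1 = top_left
    gh, gw = graphic.shape[:2]
    left_img[y1:y1 + gh, x1:x1 + gw] = graphic
    right_img[y1:y1 + gh, x1 + px_shift:x1 + px_shift + gw] = graphic
    return left_img, right_img
```

A real implementation would additionally handle alpha blending of non-rectangular graphics and clip the shifted range at the image border.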
<output part 109>
The output part 109 is a driver that controls the display of the display device 200; it causes the display to show the left-eye image during processing operations, the graphic parts, the GUI images indicated by the depth setting prompting part 107, the stereoscopic image into which the graphic part has been composited by the stereo-picture generating unit 108, and the like.
This concludes the explanation of the configuration of the user interface control device.
<action>
Next, the operation of the user interface control device having the above configuration will be described.
<depth information generation processing>
First, the depth information generation processing by the depth information calculating section 103 will be described. Figure 10 is a flowchart showing the flow of the depth information generation processing.
As shown in the figure, the depth information calculating section 103 first obtains the photographed left-eye image and right-eye image (step S1). Next, the depth information calculating section 103 searches the right-eye image for the pixel corresponding to each pixel constituting the left-eye image (step S2). Then, from the positional relationship of the corresponding points in the left-eye image and the right-eye image, the depth information calculating section 103 calculates the distance of the subject in the depth direction based on the principle of triangulation (step S3). The stereo matching processing consisting of the above steps S2 and S3 is performed for all pixels constituting the left-eye image. After finishing steps S2 and S3 for all pixels constituting the left-eye image, the depth information calculating section 103 quantizes into 8 bits the information on the distance of the subject in the depth direction obtained in step S3 (step S4). Specifically, the calculated distance of the subject in the depth direction is converted into a value of 256 gray levels from 0 to 255, and a grayscale image representing the depth of each pixel with 8-bit luminance is generated. The grayscale image generated in this way is recorded in the depth information preservation section 104 as the depth information.
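The 8-bit quantization of step S4 might be sketched as follows; whether the nearest or the farthest depth maps to luminance 255 is not stated in the embodiment, so the direction chosen here (nearer = brighter) is an assumption, as is the clamping of depths to a working range.

```python
import numpy as np

def quantize_depth(depth_m, d_min, d_max):
    """Map metric depths into the 256 gray levels 0..255 (step S4).
    Level 255 is assigned to the nearest depth d_min and level 0 to
    the farthest depth d_max."""
    depth = np.clip(depth_m, d_min, d_max)
    scaled = (d_max - depth) / (d_max - d_min)  # 1.0 = nearest
    return np.round(scaled * 255).astype(np.uint8)
```

The resulting array is exactly the grayscale depth map stored in the depth information preservation section 104.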
This concludes the explanation of the depth information generation processing by the depth information calculating section 103.
<depth setting pop-up menu display processing>
Figure 11 is a flowchart showing the flow of the depth setting pop-up menu display processing performed in response to the user's graphic placement operation.
In the depth setting pop-up menu display processing, first, when a graphic placement operation is received by the operation input receiving portion 101 (step S11), the graphical information obtaining section 105 obtains the coordinate at which the user placed the graphic on the left-eye image (step S12), and calculates the placement range of the graphic part centered on the placement coordinate (step S13). Here, the placement range is calculated as the upper-left corner coordinate and the lower-right corner coordinate of the rectangular area surrounding the graphic part, as shown in Figure 5.
When the coordinates of the placement range have been calculated, the depth information analysis unit 106 reads, from the depth information preservation section 104, the depth information of the horizontal pixel group passing through the center (x_g, y_g) of the placement range of the graphic part (step S14), and performs the processing of extracting candidates for the placement depth of the graphic part based on the read depth information (step S15).
In the graphic-part placement-depth candidate extraction processing of step S15, as will be described later using Figure 12, the number L of options indicating candidates for the placement depth of the graphic part is decided, and the respective depths of the L-1 subjects included in the placement range of the graphic part are recorded in the recording medium 70. Based on the result of the graphic-part placement-depth candidate extraction processing of step S15, the GUI prompting part 111 generates a pop-up menu containing the L decided options and presents it to the user (step S16).
In the generation of the pop-up menu by the GUI prompting part 111, specifically, the L options are associated with depths as follows. First, among the subject depths recorded in the recording medium 70 in the placement-depth candidate extraction processing, the depth closest to the front is associated with the "paste" option, and a depth a prescribed amount closer to the front than the depth associated with "paste" is associated with the "front" option. Then, the mean depth of each pair of subjects adjacent in depth, taken in order from the front among the L-1 subject depths recorded in the recording medium 70, is calculated, and the L-2 calculated mean depths are associated, in order, with the options "rear 1", "rear 2", ..., "rear L-2".
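The association of the L options with depths can be sketched as follows, assuming that smaller depth values are nearer the viewer; the "prescribed depth" margin for the "front" option is an assumed constant, and the function name is hypothetical.

```python
def build_depth_options(subject_depths, front_margin=0.5):
    """Build the pop-up menu entries from the L-1 subject depths recorded
    during candidate extraction. Smaller values are nearer the viewer;
    front_margin plays the role of the 'prescribed depth'."""
    depths = sorted(subject_depths)          # front to back
    options = [("front", depths[0] - front_margin),
               ("paste", depths[0])]
    # "rear 1" .. "rear L-2": mean depth of each adjacent subject pair
    for k in range(len(depths) - 1):
        mid = (depths[k] + depths[k + 1]) / 2.0
        options.append(("rear %d" % (k + 1), mid))
    return options
```

For the person/bus example, two subject depths yield the three options "front", "paste", and "rear 1", with "rear 1" halfway between the person and the bus.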
Here, the display position of the pop-up menu is, by default, the upper-left corner of the left-eye image as shown in Figure 8; however, when the displayed pop-up menu overlaps the position where the graphic is placed, the pop-up menu is moved to a position that does not overlap the subject.
This concludes the explanation of the depth setting pop-up menu display processing.
<graphic-part placement-depth candidate extraction processing>
Figure 12 is a flowchart showing the details of the processing for extracting candidates for the placement depth of the graphic part in step S15 of Figure 11.
In the graphic-part placement-depth candidate extraction processing, the depth information analysis unit 106 first initializes to 2 the variable L, which manages the number of options (step S21), and initializes to x_1, the x coordinate value of the upper-left corner coordinate of the placement range of the graphic part, the variable n, which manages the search coordinate (step S22).
After initializing the variables, the depth information analysis unit 106 repeatedly executes the loop processing of steps S23 to S27.
In step S23, it is judged whether the absolute value |D_n - D_(n+w)| of the difference between the depth D_n of the left-eye image at the coordinate (n, y_g) and the depth D_(n+w) of the left-eye image at the coordinate (n+w, y_g), located a predetermined number of pixels w (for example, 5 pixels) to the right, exceeds the threshold Th. When |D_n - D_(n+w)| exceeds the threshold Th (step S23: Yes), the option count L is incremented (step S24), and the value of the left-eye image depth D_n at the search coordinate is recorded in the recording medium 70 as the depth of a subject (step S25).
When |D_n - D_(n+w)| is equal to or below the threshold Th (step S23: No), or after the value of D_n has been recorded in step S25 as a depth to be associated with an option, the variable n managing the search coordinate is updated to n+w (step S26), and it is judged whether the updated variable n exceeds x_2, the x coordinate value of the lower-right corner coordinate of the placement range of the graphic part (step S27).
If the variable n does not exceed x_2 in step S27, the loop processing is repeated from step S23; if it does exceed x_2, the extraction of candidates for the placement depth of the graphic part ends.
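The loop of steps S21 to S27 can be sketched as follows; the bookkeeping follows the flowchart literally (L initialized to 2, one increment and one recorded depth per detected boundary), and the values of w and Th are the illustrative ones mentioned in the text.

```python
def extract_depth_candidates(depth_row, x1, x2, w=5, th=30):
    """Scan the horizontal pixel group of the 8-bit depth map from x1
    to x2 (steps S21-S27): whenever |D_n - D_(n+w)| exceeds the
    threshold Th, a subject boundary is assumed and D_n is recorded
    as a subject depth."""
    L = 2                   # step S21: at least "front" and "paste"
    subject_depths = []
    n = x1                  # step S22
    while n <= x2:          # loop of steps S23-S27
        if n + w < len(depth_row) and \
                abs(int(depth_row[n]) - int(depth_row[n + w])) > th:
            L += 1                               # step S24
            subject_depths.append(depth_row[n])  # step S25
        n += w                                   # step S26
    return L, subject_depths
```

With a near person (high luminance) in front of a far bus (low luminance), one boundary is found, giving L = 3 options as in the example of Figure 8.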
The pixel width w used for the search is not limited to the above 5 pixels; any value suitable for detecting the subjects appearing in the image may be used. However, in an image in which two persons standing side by side at the same depth are photographed, if a small value such as 1 pixel is used as the search width w, even the slight background visible between the two persons may be detected as a placement candidate for the graphic part, and an option that is meaningless for the image processing may be presented to the user. Conversely, if the search width w is made large, a region in which the depth changes slowly and continuously from shallow to deep, such as a wall photographed from an oblique direction, may be detected as a plurality of subjects, one per search width w. Therefore, when the search width w is made large in this way, it is preferable to also use a correspondingly large value for the depth threshold Th.
In addition, in the present embodiment, in order to extract candidates for the placement depth of the graphic part, the depth distribution is analyzed using the horizontal pixel group passing through the center (x_g, y_g) of the placement range of the graphic part; however, the analysis of the depth distribution for extracting placement-depth candidates may instead take as its object another horizontal pixel group within the placement range of the graphic part, or a pixel group continuous in the vertical direction. Furthermore, a plurality of horizontal pixel groups and vertical pixel groups within the placement range of the graphic part may be taken as the objects of the depth distribution analysis.
This concludes the explanation of the graphic-part placement-depth candidate extraction processing by the depth information analysis unit 106.
<graphic-part depth adjustment processing>
Figure 13 is a flowchart showing the flow of the graphic-part depth adjustment processing performed in response to the operation of selecting an option from the pop-up menu.
In the graphic-part depth adjustment processing, the depth determination section 113 obtains the depth associated with the option selected by the user (step S31).
Next, the flexible display part 112 displays the graphic part superimposed within its placement range (step S32), and then updates the displayed image while repeatedly enlarging and reducing the graphic part centered on the placement coordinate (step S33).
The size of the graphic part in this enlarged/reduced display is associated with the depth of the graphic part such that the larger the size, the nearer the front, and the smaller the size, the further back; when the user inputs a decision operation at the point in time when the graphic part is displayed at the desired size, the depth determination section 113 corrects the depth associated with the option obtained in step S31 according to the graphic-part display size at the time of the decision operation (step S34).
Here, the details of the processing of steps S33 and S34 will be described using the flowchart shown in Figure 14.
Step S41 is a loop that waits for the user's operation of selecting a pop-up menu option. When the selection operation is received (step S41: Yes), the flexible display part 112 initializes an enlargement flag to on (step S42). After the flag initialization, the flexible display part 112 repeatedly executes the loop processing of steps S43 to S50.
Step S43 judges whether the enlargement flag is set to on. When the enlargement flag is set to on (step S43: Yes), it is judged whether the graphic part is displayed at the maximum size (2 times the original size of the graphic part shown in the graphic display part 2 of Figure 4) (step S44). If the graphic part is not displayed at the maximum size (step S44: No), the magnification of the graphic part is increased by 10% and the display of the graphic part is updated (step S45); if the graphic part is displayed at the maximum size (step S44: Yes), the enlargement flag is set to off (step S46). After the processing of step S45 or step S46, the input of the user's decision operation is checked in step S50.
On the other hand, when the enlargement flag is set to off in the judgment of step S43 (step S43: No), it is judged whether the graphic part is displayed at the minimum size (1/2 times the original size) (step S47). If the graphic part is not displayed at the minimum size (step S47: No), the size of the graphic part is reduced by 5% and the display of the graphic part is updated (step S48); if the graphic part is displayed at the minimum size (step S47: Yes), the enlargement flag is set to on (step S49). After the processing of step S48 or step S49, the input of the user's decision operation is checked in step S50.
In the judgment of step S50, when there is no input of the user's decision operation (step S50: No), the processing is repeated from step S43.
In the judgment of step S50, when the user's decision operation is input (step S50: Yes), the depth determination section 113 obtains the graphic-part display size at the time of the decision operation (step S51) and determines the depth according to the size (step S52). Specifically, if the graphic-part display size obtained in step S51 is larger than that of the graphic part shown in the graphic display part 2 of Figure 4, a depth corrected in proportion to the magnification so as to be closer to the front than the depth associated with the option obtained in step S31 of the flowchart shown in Figure 13 is determined as the placement depth of the graphic part. Conversely, if the graphic-part display size obtained in step S51 is smaller than that of the graphic part shown in the graphic display part 2, a depth corrected in proportion to the reduction ratio so as to be further back than the depth obtained in step S31 is determined as the placement depth of the graphic part.
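The proportional correction of step S52 might look like the following sketch, in which display scale 1x maps to the selected depth, 2x to the subject in front, and 1/2x to the subject behind, following the correspondence described for the depth determination section 113; the linear interpolation between these anchor points is an assumption, as the embodiment only states that the correction is proportional to the scale.

```python
def correct_depth(scale, selected_depth, front_depth, rear_depth):
    """Correct the tentative depth by the display scale at decision time
    (step S52). Depths are metric: smaller values are nearer the viewer.
    1x maps to the selected depth, 2x to the subject in front, and
    0.5x to the subject behind; intermediate scales interpolate."""
    if scale >= 1.0:
        t = (scale - 1.0) / 1.0            # 0 at 1x, 1 at 2x
        return selected_depth + t * (front_depth - selected_depth)
    t = (1.0 - scale) / 0.5                # 0 at 1x, 1 at 0.5x
    return selected_depth + t * (rear_depth - selected_depth)
```

A scale of 1.5x thus lands the graphic halfway between the selected depth and the subject in front of it.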
This concludes the explanation of the graphic-part depth adjustment processing by the depth setting prompting part 107.
<stereo-picture generation processing>
Figure 15 is a flowchart showing the flow of the processing of generating a stereoscopic image into which the graphic part has been composited, based on the depth of the graphic part determined by the depth setting prompting part 107.
First, the displacement obtaining section 114 obtains the depth of the graphic part determined by the depth setting prompting part 107 (step S61). The image synthesizing section 115 composites the graphic part within the placement range of the graphic part on the left-eye image, generating the composited left-eye image (step S62).
Next, the displacement obtaining section 114 calculates the pixel displacement amount from the depth of the graphic part determined by the depth setting prompting part 107 (step S63), and the image synthesizing section 115 composites the graphic part into the right-eye image at the coordinates obtained by shifting the placement range of the graphic part by the pixel displacement amount calculated in step S63, generating the composited right-eye image (step S64).
Here, the method of calculating the pixel displacement amount from the depth of the graphics part is described with reference to Fig. 16. Fig. 16 shows the relationship between the placement depth of the graphics part and the pixel displacement. Stereoscopic effects include one that makes an object appear to pop out of the screen (pop-out stereoscopy) and one that makes it appear recessed behind the screen (recessed stereoscopy); Fig. 16(a) shows the pixel displacement in the pop-out case, and Fig. 16(b) shows the pixel displacement in the recessed case. In these figures, Px denotes the horizontal displacement, L-View-Point the position of the left pupil, R-View-Point the position of the right pupil, L-Pixel the left-eye pixel, R-Pixel the right-eye pixel, e the interocular distance, H the height of the display screen, W the width of the display screen, S the distance from the viewer to the display screen, and Z the distance from the viewer to the imaging point, that is, the placement depth of the graphics part. The straight line connecting the left-eye pixel L-Pixel and the left pupil L-View-Point is the line of sight of the left pupil, and the straight line connecting the right-eye pixel R-Pixel and the right pupil R-View-Point is the line of sight of the right pupil; this separation of the two views is realized by transmission/blocking switching with 3D glasses, by a parallax barrier, by a lenticular lens, or the like.
Note that Px is taken to be negative when the right-eye pixel R-Pixel and the left-eye pixel L-Pixel are in the positional relationship shown in display screen 701 of Fig. 16(a), and positive when they are in the positional relationship shown in display screen 702 of Fig. 16(b).
First, consider the height H and the width W of the display screen. In the case of an X-inch television, the model size of the television is expressed by the diagonal length of the screen in inches, so the relationship X² = H² + W² holds between the model size X, the height H, and the width W of the display screen. In addition, using the aspect ratio m:n, the width and height satisfy W:H = m:n. From these relations, the height H of the display screen shown in Figs. 16(a) and (b) is expressed by the following numerical expression 1,
[numerical expression 1]
H = (n / √(m² + n²)) × X
and the width W of the display screen is expressed by the following numerical expression 2,
[numerical expression 2]
W = (m / √(m² + n²)) × X
so both can be calculated from the model size X of the television and the aspect ratio m:n. For the model size X and the aspect ratio m:n, values obtained by negotiation with the external display are used. This concludes the description of the relationship between the height H and the width W of the display screen. Next, the horizontal displacement is described.
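The relationship between the model size X, the aspect ratio m:n, and the screen dimensions above can be sketched as follows. The function name is hypothetical; the sketch assumes W:H = m:n as stated in the text.

```python
import math

def screen_dimensions(diagonal_inches: float, m: int, n: int):
    """Width W and height H (in inches) of an m:n screen with diagonal X.

    Implements numerical expressions 1 and 2:
        W = m / sqrt(m^2 + n^2) * X,  H = n / sqrt(m^2 + n^2) * X
    """
    d = math.hypot(m, n)          # sqrt(m^2 + n^2)
    w = m / d * diagonal_inches
    h = n / d * diagonal_inches
    return w, h
```

For a 50-inch 16:9 screen this yields a width of about 43.6 inches and a height of about 24.5 inches, consistent with X² = H² + W².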
First, the pop-out case is described. Part 701 of Fig. 16 illustrates the pixel displacement when the viewer perceives a pop-out stereoscopic effect. By applying such a pixel displacement to every pixel constituting the left-eye image, a right-eye image corresponding to the left-eye image can be generated. A concrete formula for the horizontal displacement is described below.
For the pixel displacement in the pop-out case, referring to Fig. 16(a), from the similarity between the triangle formed by the left pupil L-View-Point, the right pupil R-View-Point, and the imaging point, and the triangle formed by the left-eye pixel L-Pixel, the right-eye pixel R-Pixel, and the imaging point, the following numerical expression 3 holds among the horizontal displacement Px (assuming the viewer's head is not tilted), the subject distance Z, the viewer-to-screen distance S, and the interocular distance e.
[numerical expression 3]
Px = e × (1 − S/Z) [cm]
The subject distance Z is obtained from the placement depth of the graphics part. For the interocular distance e, the adult-male average of 6.4 cm is adopted. The viewer-to-screen distance S is set to 3H, since three times the height of the display screen is generally regarded as the optimal viewing distance.
Here, when the vertical pixel count of the display screen is L and the horizontal pixel count is K, the length of one horizontal pixel is the screen width W divided by the horizontal pixel count K, and the length of one vertical pixel is the screen height H divided by the vertical pixel count L. In addition, one inch is 2.54 cm. Therefore, the horizontal displacement Px expressed in pixel units is given by the following numerical expression 4.
[numerical expression 4]
Px = (e / 2.54) × (1 − S/Z) × (K / W) [pixel]
For the information on the resolution of the display screen (vertical pixel count L, horizontal pixel count K), values obtained by negotiation with the external display are used. In this way, the horizontal displacement Px can be calculated from the above numerical expressions. The same relations also hold in the case of the recessed stereoscopy shown in Fig. 16(b). This concludes the concrete method for calculating the horizontal pixel displacement amount.
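Numerical expression 4 can be sketched directly in Python. The function name and argument units are assumptions; the sign convention (negative for pop-out, positive for recessed) follows the description of Fig. 16 above.

```python
def pixel_displacement(z_cm: float, s_cm: float,
                       width_inches: float, k_pixels: int,
                       e_cm: float = 6.4) -> float:
    """Horizontal displacement Px in pixels (numerical expression 4).

    z_cm         -- viewer-to-imaging-point distance Z (placement depth)
    s_cm         -- viewer-to-screen distance S (the text assumes 3H)
    width_inches -- screen width W in inches
    k_pixels     -- horizontal pixel count K
    e_cm         -- interocular distance (6.4 cm adult-male average)
    """
    # Px = (e / 2.54) * (1 - S/Z) * (K / W); W in inches, so 2.54 converts cm.
    return (e_cm / 2.54) * (1.0 - s_cm / z_cm) * (k_pixels / width_inches)
```

When Z < S (imaging point in front of the screen) the result is negative, matching the pop-out case of Fig. 16(a); when Z > S it is positive, matching the recessed case of Fig. 16(b).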
In the compositing of the graphics part, depending on the depth of the graphics part, part of it may be hidden by a subject that originally appears in the stereoscopic image. In the graphics compositing performed in steps S62 and S64 of Fig. 15, it is therefore necessary to composite the graphics part while taking into account its front-to-back relationship with the subjects originally appearing in the stereoscopic image. Fig. 17 is a flowchart showing the details of the graphics compositing performed in steps S62 and S64. Here, the case of compositing the graphics part into the left-eye image is described.
In the graphics compositing, the compositing position (x, y) is first initialized to the upper-left coordinates (x₁, y₁) of the placement range of the graphics part (step S71), and the loop of steps S72 to S78 is then executed.
Step S72 judges whether the depth D(x, y) of the left-eye image at coordinates (x, y) is further back than the placement depth d of the graphics part. If D(x, y) is further back than d (step S72: Yes), the pixel of the left-eye image at (x, y) is overwritten with the pixel of the graphics part (step S73).
After the pixel of the left-eye image has been overwritten in step S73, or when D(x, y) is closer to the viewer than the placement depth d of the graphics part (step S72: No), the x coordinate of the compositing position is incremented (step S74), and it is judged whether the changed x coordinate exceeds the x value x₂ of the lower-right coordinates (x₂, y₂) of the placement range of the graphics part (step S75).
If the new x coordinate of the compositing position does not exceed x₂ (step S75: No), the processing is repeated from step S72 for the new compositing position; if it exceeds x₂ (step S75: Yes), the x coordinate is re-initialized to x₁ (step S76), the y coordinate of the compositing position is incremented (step S77), and it is judged whether the changed y coordinate exceeds the y value y₂ of the lower-right coordinates (x₂, y₂) of the placement range (step S78).
If the new y coordinate does not exceed y₂ in step S78 (step S78: No), the processing is repeated from step S72 for the new compositing position; if it exceeds y₂ (step S78: Yes), image compositing has been completed for all pixels of the placement range of the graphics part, and the graphics compositing therefore ends.
The case of compositing the graphics part into the left-eye image has been described here; when compositing into the right-eye image, the composited right-eye image can be generated by executing the processing shown in Fig. 17 using the upper-left coordinates (x₁, y₁) and the lower-right coordinates (x₂, y₂) of the placement range shifted by the pixel displacement amount calculated in step S63 of Fig. 15.
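The per-pixel compositing loop of Fig. 17 (steps S71 to S78) can be sketched as follows. This is a simplified illustration: the nested-list data layout and the convention that larger depth values lie further from the viewer are assumptions.

```python
def composite_graphic(image, depth_map, graphic, top_left, graphic_depth):
    """Overwrite pixels of `image` with `graphic` wherever the scene depth
    at that pixel lies behind the graphic's placement depth.

    image, depth_map -- row-major nested lists indexed as [y][x]
    graphic          -- nested list of graphic pixels
    top_left         -- (x1, y1) of the placement range
    graphic_depth    -- placement depth d (larger = further from the viewer)
    """
    x1, y1 = top_left
    for gy, row in enumerate(graphic):
        for gx, pixel in enumerate(row):
            x, y = x1 + gx, y1 + gy
            if depth_map[y][x] > graphic_depth:  # scene is behind the graphic
                image[y][x] = pixel              # step S73: overwrite pixel
            # else: scene pixel is in front, so it hides the graphic
    return image
```

Pixels whose scene depth is in front of the graphic are left untouched, which produces the occlusion behavior described for step S72.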
This concludes the description of the stereoscopic image generation processing by the stereoscopic image generation unit 108.
As described above, according to the present embodiment, the user can judge where a graphic may be placed from the distribution of the depth information at the placement position of the graphics part, and for the setting in the depth direction, depth options are presented to the user, so the placement depth of the graphics part can be set easily.
In addition, after an option is selected, the graphics part is displayed repeatedly enlarged and reduced while waiting for the user's further confirm operation, and the placement depth of the graphics part is adjusted toward the front or the back of the depth indicated by the option according to the display size of the graphics part at the moment of the confirm operation. This increases the freedom in setting the placement depth of the graphics part, so the user's convenience is improved.
The present Embodiment 1 described the following example of the pop-up menu for selecting the placement depth of the graphics part: as shown in Fig. 8, when two subjects, a person and a bus behind the person, exist within the placement range of the graphics part, the options "front", "attach", and "behind" relative to the person are presented as options for the placement depth; however, the number of options in the pop-up menu need not be three. For example, as shown in Fig. 18, when the graphics part is to be placed so as to overlap the person's head, only the two options "front" and "attach" may be presented in the pop-up menu. The pop-up menu may also be configured to contain four or more options, not only "front", "attach", and "behind".
(Embodiment 2)
In the user interface control device according to Embodiment 1, when the planar position for placing the graphics part is designated with a pointing device or the like on the left-eye image displayed two-dimensionally on the display, the depths of the subjects existing within the placement range of the graphics part in the left-eye image are taken into account, and the depths at which the graphics part can be placed, such as in front of a subject, behind it, or at the same depth as the subject, are presented as options.
However, when many subjects are densely packed in a narrow area, it is sometimes difficult to visually identify the front-to-back relationship of the subjects in the two-dimensionally displayed viewpoint image. Moreover, when there are many subjects within the placement range of the graphics part, the graphics part could be placed at various depths, such as in front of or behind each subject, so the number of options presented to the user also becomes large. In such cases, selecting the option indicating the desired depth becomes troublesome for the user.
The user interface control device according to Embodiment 2 extracts subjects of different depths from the depth map data obtained by stereo matching of the two viewpoint images, and presents the extracted subjects to the user with emphasis in the two-dimensionally displayed viewpoint image, so that the user can easily designate the planar position at which to place the graphics part. Furthermore, by accepting the designation of the subject closest in depth to where the user wants to place the graphics part from among the emphasized subjects, the options can be narrowed down before being presented to the user.
Fig. 19 shows the configuration of the user interface control device 300 according to Embodiment 2.
The user interface control device 300 comprises an operation input accepting unit 201, a graphics overlay control unit 202, a depth information calculating unit 203, a depth information analyzing unit 205, a graphics information obtaining unit 206, a depth setting/presentation unit 207, a stereoscopic image generation unit 208, an output unit 209, and a region division unit 1201. The functions of these units are pre-recorded as programs in, for example, the recording medium 70 shown in Fig. 1.
In the present embodiment, in the smartphone configuration of Fig. 1, the programs corresponding to the operation input accepting unit 201, the graphics overlay control unit 202, the depth information calculating unit 203, the depth information analyzing unit 205, the graphics information obtaining unit 206, the depth setting/presentation unit 207, the stereoscopic image generation unit 208, the output unit 209, and the region division unit 1201 are loaded from the recording medium 70 into the RAM in the processing unit 100 and executed by the CPU in the processing unit 100, whereby the functions are realized by hardware resources (cooperation of the CPU and the programs on the RAM).
In the above example, the configuration in which programs pre-recorded in the recording medium 70 are loaded into the RAM in the processing unit 100 and executed by the CPU in the processing unit 100 was described, but the programs may instead be pre-recorded in the RAM in the processing unit 100. When the configuration in which the programs are pre-recorded in the RAM in the processing unit 100 is adopted, the programs need not be recorded in the recording medium 70.
In addition, the depth information storage unit 204 is realized by using part of the recording area of the recording medium 70.
Among the constituent elements of the user interface control device 300, those other than the operation input accepting unit 201, the graphics information obtaining unit 206, the depth setting/presentation unit 207, and the region division unit 1201 are the same as the constituent elements of the user interface control device according to Embodiment 1 shown in Fig. 2, and their description is omitted in the present embodiment. The operation input accepting unit 201, the graphics information obtaining unit 206, the depth setting/presentation unit 207, and the region division unit 1201 are described below.
The region division unit 1201 has the following function as a region division unit: dividing the left-eye image into a plurality of subject regions according to the luminance distribution of the stereoscopic image and the distribution of the depth information. Specifically, it compares the luminance of each pixel in the left-eye image with that of the surrounding pixels and detects, as edge portions, the portions where the luminance change exceeds a prescribed threshold. The region division unit 1201 divides the left-eye image by the regions surrounded by such edges, reads the depth information of the left-eye image from the recording device, and, when the depths on the two sides of an edge differ by more than a prescribed threshold, judges the region surrounded by the edges to be a subject region.
For example, when three boxes 11, 12, and 13 as shown in Fig. 20(a) are photographed as subjects from the direction indicated by the dotted line, a left-eye image like Fig. 20(b) is obtained. By applying the luminance threshold to this left-eye image, the region division unit 1201 detects the regions 11a, 12a, and 13a as distinct regions. The depth information of the left-eye image is read from the recording device, the depths of the regions 11a, 12a, and 13a are each compared with the depth of the adjacent region, and when the difference exceeds the prescribed threshold, the regions 11a, 12a, and 13a are judged to be subject regions. The coordinate information of each subject region detected in this way is recorded in the recording medium 70 by the depth information storage unit 204.
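A simplified stand-in for the region division described above is a flood fill that groups neighbouring pixels whose depths differ by less than a threshold. The real unit also uses luminance edges, which are omitted here for brevity; the function name and data layout are assumptions.

```python
def segment_by_depth(depth_map, threshold):
    """Label connected regions of a depth map (nested lists, [y][x]) in which
    neighbouring pixels differ in depth by less than `threshold`.

    Returns a same-shaped label map with labels starting at 1, so that
    pixels separated by a depth gap >= threshold fall in different regions.
    """
    h, w = len(depth_map), len(depth_map[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy][sx]:
                continue                      # already assigned to a region
            next_label += 1
            labels[sy][sx] = next_label
            stack = [(sx, sy)]
            while stack:                      # 4-connected flood fill
                x, y = stack.pop()
                for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if (0 <= nx < w and 0 <= ny < h and not labels[ny][nx]
                            and abs(depth_map[ny][nx] - depth_map[y][x]) < threshold):
                        labels[ny][nx] = next_label
                        stack.append((nx, ny))
    return labels
```

Applied to a depth map where one area sits much further back than another, this separates them into two labeled regions, analogous to regions 11a, 12a, and 13a being detected as distinct.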
The depth setting/presentation unit 207 internally has, in addition to a GUI presentation unit 211, a scaling display unit 212, and a depth determination unit 213 having the same functions as the GUI presentation unit 111, the scaling display unit 112, and the depth determination unit 113 described in Embodiment 1, a region presentation unit 214. The region presentation unit 214 has the following function as a region presentation unit: presenting the subjects of different depths to the user in the left-eye image shown on the display. Specifically, the region presentation unit 214 presents the subject regions 11b, 12b, and 13b detected by the region division unit 1201 to the user by marking each subject region with a different pattern or color, as shown in Fig. 21(a), and displaying them on the display.
For the presentation of the regions, various techniques that assist the user in identifying the regions judged to be subjects can be used: a technique of compositing text such as a number into each region for display, as shown in Fig. 21(b), a technique of drawing the image so as to emphasize the edge portions of the regions 11c, 12c, and 13c judged to be subjects, and so on.
The operation input accepting unit 201 has, in addition to the user operations accepted by the operation input accepting unit 101 described in Embodiment 1, the following function as a region accepting unit: accepting an operation in which the user selects any one of the subjects of different depths presented by the region presentation unit 214 as described above.
Like the graphics information obtaining unit 105 described in Embodiment 1, the graphics information obtaining unit 206 performs the function of a planar position determining unit, obtaining the coordinates of the placement range occupied by the graphics part on the left-eye image shown on the display 200, but its method of determining the placement range differs from that of the graphics information obtaining unit 105. The graphics information obtaining unit 105 calculates the placement range of the graphics part from the coordinates at which the user drops the graphics part onto the left-eye image, whereas the graphics information obtaining unit 206 calculates the placement range of the graphics part by taking the coordinates of the center of the subject region selected by the operation accepted by the operation input accepting unit 201 as the center coordinates (x_g, y_g) of the graphics part.
This concludes the description of the configuration of the user interface control device 300 according to Embodiment 2.
Next, the stereoscopic image processing in the user interface control device 300 is described with reference to Fig. 22.
In the stereoscopic image processing in the user interface control device 300, first, the region division unit 1201 detects the subject regions from the left-eye image using the luminance and depth information of the image, and the region presentation unit 214 displays a distinct pattern overlaid on each detected subject region in the left-eye image shown on the display (step S81). The user can select any of the regions marked with the different patterns as shown in Fig. 21(a) to designate which subject the graphics part should overlap.
When the operation input accepting unit 201 accepts the operation of selecting one of the subject regions in this way (step S82), the region presentation unit 214 removes the patterns displayed over the subject regions and then draws the graphics part overlaid on the selected subject region (step S83).
The steps up to this point can be used in place of the operation in Embodiment 1 in which the user designates the placement position by dropping the graphics part onto the left-eye image. After this, the placement depth of the graphics part is determined using the same processing as the steps from step S12 of Fig. 11 onward (step S84), and the stereoscopic image processing can thus proceed.
As described above, according to the present embodiment, even when a plurality of subjects are located close together and it is difficult to judge which subject the graphics part should overlap, by introducing a region division technique based on the luminance and depth information of the stereoscopic image, the placement position of the graphics part can be selected in units of regions, so photographs of various compositions and kinds can be handled, and the user's convenience is improved.
(Supplement)
The present invention has been described based on the above embodiments, but the present invention is of course not limited to those embodiments. The following cases are also included in the present invention.
(a) One aspect of the present invention may be an application execution method including the processing steps described in each embodiment. It may also be a computer program containing program code that causes a computer to operate according to the described processing steps.
(b) One aspect of the present invention may also be implemented as an LSI that controls the user interface control device described in each of the above embodiments. Such an LSI can be realized by integrating the functional blocks included in the user interface control devices 100 and 300 shown in Fig. 2 and Fig. 19. These functional blocks may be individually made into single chips, or may be made into a single chip containing some or all of them.
Although referred to here as an LSI, depending on the degree of integration, it may also be called an IC, a system LSI, a super LSI, or an ultra LSI.
Furthermore, the technique of circuit integration is not limited to LSI, and may be realized by a dedicated circuit or a general-purpose processor. An FPGA (Field Programmable Gate Array) that can be programmed after LSI manufacture, or a reconfigurable processor in which the connection and settings of circuit cells inside the LSI can be reconfigured, may also be used.
Moreover, if a circuit integration technology replacing LSI emerges through progress in semiconductor technology or another derived technology, the functional blocks and components may of course be integrated using that technology. Application of biotechnology or the like is conceivable.
Specifically, the following functions may be realized by an integrated circuit, a dedicated circuit, or the like: a function of determining the range occupied by a graphic when the graphic is placed on one viewpoint image constituting a stereoscopic image; a function of obtaining the depth of a subject appearing within the determined range of the one viewpoint image; a function of presenting an option indicating the obtained depth and options indicating other depths, different from the obtained depth, at which the graphic can be placed; a function of accepting the selection of any one of the plurality of options; a function of repeatedly changing the size of the graphic after the selection of an option is accepted, until a confirm instruction from the user is accepted; a function of determining, as the placement depth of the graphic, a depth closer to the viewer than the depth indicated by the selected option when the graphic is displayed enlarged at the time the confirm instruction is accepted, and a depth further back than the depth indicated by the selected option when the graphic is displayed reduced at that time; a function of calculating the parallax for producing a stereoscopic effect at the determined depth and obtaining the displacement amount by converting this parallax into a pixel count; a function of compositing the graphic into the determined range of the one viewpoint image, and compositing the graphic into the other viewpoint image constituting the stereoscopic image within a range shifted horizontally from the determined range by the displacement amount; a function of dividing the one viewpoint image into a plurality of regions whose depths in stereoscopic display differ from those of adjacent regions by more than a threshold value; a function of presenting the plurality of divided regions; and a function of accepting the selection of any one of the presented regions. Alternatively, each of the above functions may be realized by cooperation of a processor and a program on a memory.
(c) In the above Embodiment 1, the case where the corresponding-point search is performed in units of pixels was described, but it need not be limited to this case. For example, the corresponding-point search may be performed in units of pixel blocks (for example, 4 × 4 pixels or 16 × 16 pixels).
(d) In the above Embodiment 1, the following case was described: the distance in the depth direction of a subject is converted into a value of 256 gray levels from 0 to 255, and the depth information is generated as a grayscale image representing the depth of each pixel with 8-bit luminance; but it need not be limited to this case. For example, the distance in the depth direction of a subject may be converted into a value of 128 gray levels from 0 to 127.
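The quantization described in this note can be sketched as follows. The function name and the linear mapping from physical distance to gray level are assumptions; the text only specifies the number of gray levels.

```python
def quantize_depth(distance: float, max_distance: float, levels: int = 256) -> int:
    """Map a depth-direction distance onto `levels` gray levels (0..levels-1).

    Assumes a linear mapping with distances clamped to [0, max_distance];
    levels=256 gives the 8-bit grayscale of Embodiment 1, levels=128 the
    alternative mentioned in this note.
    """
    d = max(0.0, min(distance, max_distance))   # clamp out-of-range distances
    return min(levels - 1, int(d / max_distance * levels))
```

With levels=256 the result fits the 0 to 255 range of an 8-bit grayscale depth image; with levels=128 it fits 0 to 127.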
(e) In the above Embodiment 1, the graphics part is overlaid on the right-eye image with a parallax based on the placement position of the graphics part on the left-eye image as the reference, but the order may be reversed: the graphics part may be overlaid on the left-eye image with a parallax based on the placement position of the graphics part on the right-eye image as the reference. In that case, it is preferable to show the right-eye image on the display in advance when accepting the user's designation of the placement position of the graphics part.
(f) In the above Embodiment 1, the following case was described: a stereo image constituted by a pair of left-eye and right-eye images of equal resolution is obtained; but it need not be limited to this case. For example, the left-eye image and the right-eye image may be images of different resolutions. Even between images of different resolutions, the depth information based on the corresponding-point search can be generated by performing resolution conversion, and a high-resolution stereo image can be generated by applying the pixel displacement to the high-resolution image. Since the computationally heavy generation of depth information can be processed at the low-resolution image size, the processing load can be reduced. In addition, part of the imaging device can be a low-performance imaging device, so cost reduction can be achieved.
(g) In the above Embodiment 1, the following case was described: the model size X of the display device, the aspect ratio m:n, and the information on the resolution of the display screen (vertical pixel count L, horizontal pixel count K) are obtained by negotiation with the external display; but it need not be limited to this case. For example, the viewer may be made to input the model size X of the display device, the aspect ratio m:n, the information on the resolution of the display screen (vertical pixel count L, horizontal pixel count K), and the like.
(i) In the above Embodiment 1, the following case was described: the pixel displacement amount is calculated with the viewer-to-screen distance S set to three times (3H) the height H of the display screen; but it need not be limited to this case. For example, the viewer-to-screen distance S may be measured with a distance sensor such as a TOF (Time Of Flight) sensor.
(j) In the above Embodiment 1, the following case was described: the pixel displacement amount is calculated with the interocular distance e set to the adult-male average of 6.4 cm; but it need not be limited to this case. For example, a camera may be provided in the display device, and the interocular distance may be calculated from a face image obtained by this camera. It may also be judged whether the viewer is an adult or a child, and male or female, and the corresponding interocular distance e calculated.
(k) In the above Embodiment 2, when performing the region division of subjects, the regions are divided according to the luminance distribution and the distribution of the depth information, but the region division method is not limited to this. For example, the regions may be divided according to the distribution of the depth information alone. Alternatively, the regions may be divided using only the luminance distribution, extracting edges (positions where the luminance changes sharply) or the intersection positions of edges as feature points.
Edge detection can be performed by obtaining the difference (first derivative) of the luminance between pixels and calculating the edge strength from this difference. Feature points may also be extracted using other edge detection methods.
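A minimal illustration of this first-difference edge test on a single row of luminance values follows; the function name and threshold handling are assumptions.

```python
def horizontal_edge_positions(luma_row, threshold):
    """Return indices x where the first difference |I[x+1] - I[x]| of the
    luminance exceeds the threshold, i.e. where a sharp edge is detected."""
    return [x for x in range(len(luma_row) - 1)
            if abs(luma_row[x + 1] - luma_row[x]) > threshold]
```

In a full implementation the same test would be applied in the vertical direction as well, and the edge strength could be derived from the magnitude of the difference.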
(l) In the above Embodiment 1, a GUI menu is displayed as the means for letting the user select the placement depth, but as long as selection is possible, other techniques may be used. For example, the background, subject, and foreground may be highlighted in color in turn at prescribed time intervals, with the user pressing the enter button while the desired depth is highlighted, thereby roughly selecting at which position the graphic is to be placed. Even when the background, subject, and foreground are highlighted in turn, meaningless options, such as one at which the placed graphics part would be hidden behind a subject and not displayed, are excluded.
(m) In the above Embodiment 1, the depth information calculating unit 103 may measure the distance to each subject with a distance sensor such as a TOF (Time Of Flight) sensor and generate the depth information. A monocular image and its depth information may also be obtained together from an external network, a server, a recording medium, or the like. Alternatively, the obtained monocular image may be analyzed to generate depth information. Specifically, the image is first divided into "superpixels", sets of pixels that are highly uniform in attributes such as color and luminance, and each superpixel is compared with its adjacent superpixels to analyze changes such as texture gradations, thereby estimating the distance of the subject. The monocular image may be image data captured by an imaging device such as a single-lens camera. It is also not limited to an actually captured image and may be CG (Computer Graphics) or the like.
(n) In Embodiment 1 above, the GUI menu is displayed with the upper-left corner as its default display position, but the display position is not limited to that position. The menu may be moved to, and displayed at, a position where the subject is not hidden, so that it does not overlap the position at which the graphic is arranged.
(o) In Embodiment 2 above, the stereoscopic image is divided into subject regions, but the regions are segmented using only luminance and depth information, without considering what each subject is. However, in the region segmentation of the subjects, objects may be detected with any person/object recognition technique, and region segmentation may be performed by combining this with the region segmentation technique described in (k) or with the distribution of depth information used in Embodiment 2. Further, the segmented regions may be numbered and the numbers displayed superimposed on the subjects, so that the user selects the subject on which the graphic is to be superimposed by selecting its number.
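The numbered-region selection described above can be sketched as follows: given a label map from some region segmentation step (assumed available), compute where each region's number overlay would be drawn, and resolve the user's chosen number to a region mask. The function names are illustrative, not from the patent.

```python
import numpy as np

def number_regions(labels: np.ndarray) -> dict:
    """For each segmented region, return the centroid (row, col) where its
    number overlay would be drawn on the image."""
    overlays = {}
    for region_id in np.unique(labels):
        ys, xs = np.nonzero(labels == region_id)
        overlays[int(region_id)] = (float(ys.mean()), float(xs.mean()))
    return overlays

def select_region(labels: np.ndarray, chosen_number: int) -> np.ndarray:
    """Mask of the region the user picked by its overlay number."""
    return labels == chosen_number

# A 3x3 label map with three regions, numbered 0, 1, 2.
labels = np.array([[0, 0, 1],
                   [0, 0, 1],
                   [2, 2, 1]])
overlays = number_regions(labels)
mask = select_region(labels, 1)
```

A real UI would render the numbers at those centroids and superimpose the graphic on the masked pixels.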
(p) In Embodiment 2 above, a person's region may be recognized using a person recognition function and used in the region segmentation, so that a graphic can easily be superimposed on the person.
(q) In Embodiment 1 above, a flower mark was given as an example of the graphic, but in the case of a graphic representing a part of a person, such as hair or lips, a face recognition function may be used to superimpose the graphic so that it is arranged at the appropriate position on the face.
(r) In Embodiment 1 above, a flower mark was given as an example of the graphic, but when the graphic is a speech balloon, the region of the mouth may be retrieved using a face recognition function, and the balloon may be arranged so that its tip reaches the mouth while the face of the subject is not hidden.
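The balloon placement above can be sketched as a simple layout rule: anchor the tip at the mouth and put the balloon body on whichever side of the face has more room, so the face is not covered. The bounding boxes are assumed to come from an unspecified face-recognition step, and the whole layout policy is an illustrative choice, not the patent's method.

```python
def place_balloon(mouth_box, face_box, balloon_w, balloon_h, image_w):
    """Toy speech-balloon layout. Boxes are (x, y, w, h) tuples assumed
    to be supplied by a face-recognition step. Returns the balloon body's
    top-left corner and the tip position anchored at the mouth."""
    mx = mouth_box[0] + mouth_box[2] / 2       # mouth centre x
    my = mouth_box[1] + mouth_box[3] / 2       # mouth centre y
    face_right = face_box[0] + face_box[2]
    # Place the body on whichever side of the face has more free space.
    if image_w - face_right >= face_box[0]:
        bx = face_right                         # body to the right of the face
    else:
        bx = max(0, face_box[0] - balloon_w)    # body to the left of the face
    by = max(0, my - balloon_h)                 # body above the mouth line
    tip = (mx, my)                              # tip reaches the mouth
    return (bx, by), tip

# Face at x=30..70 in a 200-px-wide image: more room on the right.
body, tip = place_balloon((40, 60, 20, 10), (30, 20, 40, 60), 30, 20, 200)
```

The tip lands on the mouth centre while the body clears the face rectangle.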
Industrial Applicability
With the user interface control device according to one aspect of the present invention, the depth at which a graphic can be arranged is easily set when the graphic is composited into a stereoscopic image, and the device can be applied to devices that process stereoscopic images, such as PCs, tablets, smartphones, and mobile phones. It is also particularly useful for application programs that perform such image processing.
Reference Signs List
10: camera; 20: loudspeaker; 30: GPS; 40: sensor; 50: touch panel; 60: microphone; 70: recording medium; 100: processing section; 101, 201: operation input reception section; 102: control section; 103, 203: depth information calculation section; 104, 204: depth information storage section; 105, 206: graphic information acquisition section; 106, 205: depth information analysis section; 107, 207: depth setting presentation section; 108, 208: stereoscopic image generation section; 109, 209: output section; 111: GUI presentation section; 112: scaling display section; 113: depth determination section; 114: displacement acquisition section; 115: image composition section; 200: display; 202: graphic superimposition control section; 211: GUI presentation section; 212: scaling display section; 213: depth determination section; 214: region presentation section; 300: user interface display device; 1201: region segmentation section.

Claims (16)

1. A user interface control device that provides a user interface when a graphic is composited into a stereoscopic image, the user interface being for setting a depth, in the depth direction, at which the graphic is to be arranged, the user interface control device comprising:
a planar position determination unit that, when the graphic is arranged on one viewpoint image constituting the stereoscopic image, determines a range occupied by the graphic;
a viewpoint image depth acquisition unit that acquires a depth of a subject appearing within the determined range in the one viewpoint image; and
a presentation unit that presents an option indicating the acquired depth and an option indicating another depth, different from the acquired depth, at which the graphic can be arranged.
2. The user interface control device according to claim 1, wherein
the option indicating the acquired depth indicates the depth of the frontmost subject among the subjects appearing within the range occupied by the graphic, and
the option indicating another depth at which the graphic can be arranged indicates a depth further toward the front than the depth of the frontmost subject.
3. The user interface control device according to claim 2, wherein
when a subject other than the frontmost subject is also present within the determined range of the one viewpoint image and the depths of the two subjects differ by more than a threshold, the presentation unit further presents an option indicating a depth intermediate between the depths of the two subjects.
4. The user interface control device according to claim 3, further comprising:
a reception unit that receives a selection of any one of the plurality of options;
a scaling display unit that, after the selection of the option is received, repeatedly displays the graphic while changing its size until a decision instruction from the user is received; and
a depth determination unit that, when the graphic is being displayed enlarged at the time point at which the decision instruction is received, determines a depth further toward the front than the depth indicated by the selected option as the depth at which the graphic is arranged, and, when the graphic is being displayed reduced at the time point at which the decision instruction is received, determines a depth further toward the back than the depth indicated by the selected option as the depth at which the graphic is arranged.
5. The user interface control device according to claim 4, wherein
when the option indicating the depth intermediate between the depths of two subjects is selected, the depth determination unit associates the depth of the nearer of the two subjects with the largest size of the graphic displayed by the scaling display unit, and associates the depth of the farther of the two subjects with the smallest size of the graphic displayed by the scaling display unit, thereby determining the depth corresponding to the size of the graphic at the time point at which the decision instruction is received.
6. The user interface control device according to claim 4, wherein
when some subject is present within the range occupied by the graphic at a depth further toward the back than the depth indicated by the selected option, the depth determination unit associates the depth of that rear subject with the smallest size of the graphic displayed by the scaling display unit, thereby determining the depth corresponding to the size of the graphic at the time point at which the decision instruction is received.
7. The user interface control device according to claim 4, wherein
when some subject is present within the range occupied by the graphic at a depth further toward the front than the depth indicated by the selected option, the depth determination unit associates the depth of that near subject with the largest size of the graphic displayed by the scaling display unit, and, when no subject is present within the range occupied by the graphic at a depth further toward the front than the depth indicated by the selected option, associates a prescribed depth further toward the front than the depth indicated by the selected option with the largest size of the graphic displayed by the scaling display unit, thereby determining the depth corresponding to the size of the graphic at the time point at which the decision instruction is received.
8. The user interface control device according to claim 4, further comprising:
a displacement acquisition unit that calculates a parallax for producing a stereoscopic effect at the determined depth, and obtains a displacement amount by converting the parallax into a number of pixels; and
an image composition unit that composites the graphic into the one viewpoint image within the range determined by the planar position determination unit, and composites the graphic into the other viewpoint image constituting the stereoscopic image within a range displaced horizontally by the displacement amount from the range determined by the planar position determination unit.
9. The user interface control device according to claim 1, wherein
the viewpoint image depth acquisition unit acquires the depth of the subject by stereo matching between the one viewpoint image and the other viewpoint image constituting the stereoscopic image.
10. The user interface control device according to claim 1, further comprising:
a region division unit that divides the one viewpoint image into a plurality of regions such that the depths of adjacent regions during stereoscopic display differ by more than a threshold;
a region presentation unit that presents the divided plurality of regions; and
a region reception unit that receives a selection of any one of the presented regions,
wherein the planar position determination unit determines the range occupied by the graphic so as to include at least part of the selected region.
11. The user interface control device according to claim 10, wherein
the region presentation unit presents the divided plurality of regions by displaying adjacent regions in different colors.
12. The user interface control device according to claim 10, wherein
the region presentation unit presents the divided plurality of regions by displaying different text appended to each region.
13. The user interface control device according to claim 10, wherein
in the division of the one viewpoint image by the region division unit, the boundary of each region is determined by extracting, in the one viewpoint image, edges at which the luminance changes sharply between pixels, or the intersection points of such edges, and
the depth during stereoscopic display uses the depth of each pixel obtained by stereo matching between the one viewpoint image and the other viewpoint image constituting the stereoscopic image.
14. A user interface control method for controlling a user interface when a graphic is composited into a stereoscopic image, the user interface being for setting a depth, in the depth direction, at which the graphic is to be arranged, the user interface control method comprising:
a planar position determination step of, when the graphic is arranged on one viewpoint image constituting the stereoscopic image, determining a range occupied by the graphic;
a viewpoint image depth acquisition step of acquiring a depth of a subject appearing within the determined range in the one viewpoint image; and
a presentation step of presenting an option indicating the acquired depth and an option indicating another depth, different from the acquired depth, at which the graphic can be arranged.
15. A computer program for controlling a user interface when a graphic is composited into a stereoscopic image, the user interface being for setting a depth, in the depth direction, at which the graphic is to be arranged, the computer program causing a computer to execute:
a planar position determination step of, when the graphic is arranged on one viewpoint image constituting the stereoscopic image, determining a range occupied by the graphic;
a viewpoint image depth acquisition step of acquiring a depth of a subject appearing within the determined range in the one viewpoint image; and
a presentation step of presenting an option indicating the acquired depth and an option indicating another depth, different from the acquired depth, at which the graphic can be arranged.
16. An integrated circuit of a user interface control device that provides a user interface when a graphic is composited into a stereoscopic image, the user interface being for setting a depth, in the depth direction, at which the graphic is to be arranged, the integrated circuit comprising:
a planar position determination unit that, when the graphic is arranged on one viewpoint image constituting the stereoscopic image, determines a range occupied by the graphic;
a viewpoint image depth acquisition unit that acquires a depth of a subject appearing within the determined range in the one viewpoint image; and
a presentation unit that presents an option indicating the acquired depth and an option indicating another depth, different from the acquired depth, at which the graphic can be arranged.
CN2012800020451A 2011-10-13 2012-08-10 User interface control device, user interface control method, computer program, and integrated circuit Pending CN103168316A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2011-226092 2011-10-13
JP2011226092 2011-10-13
PCT/JP2012/005109 WO2013054462A1 (en) 2011-10-13 2012-08-10 User interface control device, user interface control method, computer program, and integrated circuit

Publications (1)

Publication Number Publication Date
CN103168316A true CN103168316A (en) 2013-06-19

Family

ID=48081534

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2012800020451A Pending CN103168316A (en) 2011-10-13 2012-08-10 User interface control device, user interface control method, computer program, and integrated circuit

Country Status (4)

Country Link
US (1) US9791922B2 (en)
JP (1) JPWO2013054462A1 (en)
CN (1) CN103168316A (en)
WO (1) WO2013054462A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104469338A (en) * 2013-09-25 2015-03-25 联想(北京)有限公司 Control method and device
CN109542293A (en) * 2018-11-19 2019-03-29 维沃移动通信有限公司 A kind of menu interface setting method and mobile terminal

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9569695B2 (en) 2012-04-24 2017-02-14 Stmicroelectronics S.R.L. Adaptive search window control for visual search
GB2511526A (en) 2013-03-06 2014-09-10 Ibm Interactor for a graphical object
US20160033770A1 (en) 2013-03-26 2016-02-04 Seiko Epson Corporation Head-mounted display device, control method of head-mounted display device, and display system
JP6369005B2 (en) * 2013-10-25 2018-08-08 セイコーエプソン株式会社 Head-mounted display device and method for controlling head-mounted display device
JP5849206B2 (en) * 2013-03-27 2016-01-27 パナソニックIpマネジメント株式会社 Image processing apparatus, image processing method, and image processing program
JP5834253B2 (en) 2013-03-27 2015-12-16 パナソニックIpマネジメント株式会社 Image processing apparatus, image processing method, and image processing program
US20150052460A1 (en) * 2013-08-13 2015-02-19 Qualcomm Incorporated Method for seamless mobile user experience between outdoor and indoor maps
JP6351295B2 (en) * 2014-02-21 2018-07-04 キヤノン株式会社 Display control apparatus and display control method
KR20150101915A (en) * 2014-02-27 2015-09-04 삼성전자주식회사 Method for displaying 3 dimension graphic user interface screen and device for performing the same
US20160165207A1 (en) * 2014-12-03 2016-06-09 Kabushiki Kaisha Toshiba Electronic device, method, and computer program product
KR102423175B1 (en) 2017-08-18 2022-07-21 삼성전자주식회사 An apparatus for editing images using depth map and a method thereof
JP7428687B2 (en) 2021-11-08 2024-02-06 株式会社平和 gaming machine

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005078424A (en) * 2003-09-01 2005-03-24 Omron Corp Device and method for preparing photographic seal
US20060143020A1 (en) * 2002-08-29 2006-06-29 Sharp Kabushiki Kaisha Device capable of easily creating and editing a content which can be viewed in three dimensional way
US20090219383A1 (en) * 2007-12-21 2009-09-03 Charles Gregory Passmore Image depth augmentation system and method
JP2009230431A (en) * 2008-03-21 2009-10-08 Fujifilm Corp Method, device and program for outputting image
US20100080448A1 (en) * 2007-04-03 2010-04-01 Wa James Tam Method and graphical user interface for modifying depth maps
US20110234760A1 (en) * 2008-12-02 2011-09-29 Jeong Hyu Yang 3d image signal transmission method, 3d image display apparatus and signal processing method therein

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101729023B1 (en) * 2010-10-05 2017-04-21 엘지전자 주식회사 Mobile terminal and operation control method thereof


Also Published As

Publication number Publication date
WO2013054462A1 (en) 2013-04-18
JPWO2013054462A1 (en) 2015-03-30
US20130293469A1 (en) 2013-11-07
US9791922B2 (en) 2017-10-17

Similar Documents

Publication Publication Date Title
CN103168316A (en) User interface control device, user interface control method, computer program, and integrated circuit
EP3742332B1 (en) Method and apparatus for training model for recognizing key points of hand, and method and apparatus for recognizing key points of hand
CN105637564B (en) Generate the Augmented Reality content of unknown object
CN101657839B (en) System and method for region classification of 2D images for 2D-to-3D conversion
CN102741879B (en) Method for generating depth maps from monocular images and systems using the same
CN105659200B (en) For showing the method, apparatus and system of graphic user interface
US20190147224A1 (en) Neural network based face detection and landmark localization
CN107484428B (en) Method for displaying objects
US20220237812A1 (en) Item display method, apparatus, and device, and storage medium
US9723295B2 (en) Image processing device, image processing method, image processing computer program, and information recording medium whereupon image processing computer program is stored
CN111710036B (en) Method, device, equipment and storage medium for constructing three-dimensional face model
CN106210538A (en) Show method and apparatus and the program of image based on light field on a user device
CN103582893A (en) Two-dimensional image capture for an augmented reality representation
CN101558404A (en) Image segmentation
CN104574267A (en) Guiding method and information processing apparatus
US20220309836A1 (en) Ai-based face recognition method and apparatus, device, and medium
CN105590309A (en) Method and device for segmenting foreground image
CN106200960A (en) The content display method of electronic interactive product and device
KR102443214B1 (en) Image processing apparatus and control method thereof
CN107609490A (en) Control method, control device, Intelligent mirror and computer-readable recording medium
CN104599317A (en) Mobile terminal and method for achieving 3D (three-dimensional) scanning modeling function
US11159717B2 (en) Systems and methods for real time screen display coordinate and shape detection
KR20200000106A (en) Method and apparatus for reconstructing three dimensional model of object
CN110945537A (en) Training device, recognition device, training method, recognition method, and program
CN114766042A (en) Target detection method, device, terminal equipment and medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
ASS Succession or assignment of patent right

Owner name: MATSUSHITA ELECTRIC (AMERICA) INTELLECTUAL PROPERT

Free format text: FORMER OWNER: MATSUSHITA ELECTRIC INDUSTRIAL CO, LTD.

Effective date: 20141009

C41 Transfer of patent application or patent right or utility model
TA01 Transfer of patent application right

Effective date of registration: 20141009

Address after: Seaman Avenue Torrance in the United States of California No. 2000 room 200

Applicant after: PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA

Address before: Osaka Japan

Applicant before: Matsushita Electric Industrial Co.,Ltd.

C53 Correction of patent of invention or patent application
CB02 Change of applicant information

Address after: Seaman Avenue Torrance in the United States of California No. 20000 room 200

Applicant after: PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA

Address before: Seaman Avenue Torrance in the United States of California No. 2000 room 200

Applicant before: PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA

COR Change of bibliographic data

Free format text: CORRECT: ADDRESS; FROM:

C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20130619