CN103329165A - Scaling pixel depth values of user-controlled virtual object in three-dimensional scene - Google Patents

Scaling pixel depth values of user-controlled virtual object in three-dimensional scene

Info

Publication number
CN103329165A
CN103329165A (application CN201180064484A)
Authority
CN
China
Prior art keywords
value
dimensional
user
pixel depth
virtual objects
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2011800644840A
Other languages
Chinese (zh)
Other versions
CN103329165B (en)
Inventor
B·M·吉诺瓦 (B. M. Genova)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Interactive Entertainment LLC
Original Assignee
Sony Computer Entertainment America LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US 12/986,814 (US 9,041,774 B2)
Priority claimed from US 12/986,854 (US 8,619,094 B2)
Priority claimed from US 12/986,827 (US 8,514,225 B2)
Priority claimed from US 12/986,872 (US 9,183,670 B2)
Application filed by Sony Computer Entertainment America LLC
Priority to CN201610191451.7A (CN105894567B)
Publication of CN103329165A
Application granted
Publication of CN103329165B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • G06T15/205 Image-based rendering
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H04N13/128 Adjusting depth or disparity
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H04N13/172 Image signals comprising non-image signal components, e.g. headers or format information
    • H04N13/178 Metadata, e.g. disparity information
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/275 Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30 Image reproducers
    • H04N2013/40 Privacy aspects, i.e. devices showing different images to different viewers, the images not being viewpoints of the same scene
    • H04N2013/405 Privacy aspects, i.e. devices showing different images to different viewers, the images not being viewpoints of the same scene, the images being stereoscopic or three dimensional

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Library & Information Science (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Processing Or Creating Images (AREA)
  • Image Generation (AREA)
  • Control Of Indicators Other Than Cathode Ray Tubes (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Pixel depth values of a user-controlled virtual object in a three-dimensional scene may be re-scaled to avoid artifacts when the scene is displayed. Minimum and maximum threshold values can be determined for the three-dimensional scene. Each pixel depth value of the user-controlled virtual object can be compared to the minimum threshold value and the maximum threshold value. A depth value of each pixel of the user-controlled virtual object that falls below the minimum threshold value can be set to a corresponding low value. Each pixel depth value of the user-controlled virtual object that exceeds the maximum threshold value can be set to a corresponding high value.

Description

Scaling pixel depth values of a user-controlled virtual object in a three-dimensional scene
Cross-reference to related applications
This application is related to commonly assigned, co-pending application number 12/986,814 (Attorney Docket SCEA10052US00), entitled "DYNAMIC ADJUSTMENT OF PREDETERMINED THREE-DIMENSIONAL VIDEO SETTINGS BASED ON SCENE CONTENT", filed January 7, 2011.
This application is related to commonly assigned, co-pending application number 12/986,854 (Attorney Docket SCEA10054US00), entitled "MORPHOLOGICAL ANTI-ALIASING (MLAA) OF A RE-PROJECTION OF A TWO-DIMENSIONAL IMAGE", filed January 7, 2011.
This application is related to commonly assigned, co-pending application number 12/986,872 (Attorney Docket SCEA10055US00), entitled "MULTI-SAMPLE RESOLVING OF RE-PROJECTION OF TWO-DIMENSIONAL IMAGE", filed January 7, 2011.
Technical field
Embodiments of the present invention relate to scaling pixel depth values of a user-controlled virtual object in a three-dimensional scene.
Background
Over the past few years, the ability to perceive two-dimensional images in three dimensions by way of a number of different technologies has become quite popular. Providing an aspect of depth to two-dimensional images can potentially create a greater sense of realism in any scene depicted. This introduction of three-dimensional visual representation has greatly enhanced viewer experiences, especially in the realm of video games.
There are a number of techniques for three-dimensional rendering of a given image. Recently, a technique for projecting one or more two-dimensional images into three-dimensional space, known as depth-image-based rendering (DIBR), has been proposed. In contrast to former proposals, which often relied on the basic concept of "stereoscopic" video, i.e., the capture, transmission, and display of two separate video streams (one for the left eye and one for the right eye), this new idea is based on a more flexible joint transmission of monoscopic video (i.e., a single video stream) and associated per-pixel depth information. From this data representation, one or more "virtual" views of the 3-D scene can then be generated in real time at the receiver side by means of so-called DIBR techniques. This new approach to three-dimensional image rendering brings several advantages over previous approaches.
First, this approach allows 3-D projections or displays to be adjusted to fit a wide range of different stereoscopic displays and projection systems. Because the required left-eye and right-eye views are generated only at the 3D-TV receiver, their appearance in terms of "perceived depth" can be adapted to the particular viewing conditions. This provides the viewer with a customized 3-D experience that is comfortably viewable on any kind of stereoscopic or autostereoscopic 3D-TV display.
DIBR also allows 2D-to-3D conversion based on "structure from motion" approaches, which can be used to generate the required depth information for already-recorded monoscopic video material. Thus, 3D video can be generated from 2D video for a wide range of programming, which may play a significant role in the success of 3D-TV.
Head-motion parallax (i.e., the apparent displacement or difference in the perceived position of an object caused by a change in viewing angle) can be supported under DIBR in order to provide an additional extra-stereoscopic depth cue. This eliminates the well-known "shear distortion" (i.e., a stereoscopic image appearing to follow an observer who changes viewing position) that is usually experienced with stereoscopic or autostereoscopic 3D-TV systems.
Furthermore, photometric asymmetries (e.g., in terms of brightness, contrast, or color) between the left-eye and right-eye views, which can destroy the stereoscopic sensation, are eliminated from the start, because both views are effectively synthesized from the same original image. In addition, the approach enables automatic object segmentation based on depth keying and allows easy integration of synthetic 3D objects into "real-world" sequences.
Finally, the approach allows the viewer to adjust the reproduction of depth to suit his or her personal preferences, much as every conventional 2D-TV allows the viewer to control color reproduction by adjusting a (de)saturation control. This is a very important feature, because depth appreciation differs between age groups. For example, recent research by Norman et al. confirms that the elderly are less sensitive to perceived stereoscopic depth than the young.
Just as each viewer can have a unique set of preferred depth settings, each scene presented to the viewer can also have a unique set of preferred depth settings. The content of each scene dictates which range of depth settings should be used for optimal viewing of that scene. A single set of re-projection parameters may not be ideal for every scene. For example, different parameters can work better depending on how much distant background is in the field of view. Because the content of a scene changes whenever the scene changes, existing 3D systems fail to take the content of a scene into account when determining re-projection parameters.
It is within this context that embodiments of the present invention arise.
Description of drawings
FIG. 1A is a flow/schematic diagram illustrating a method for dynamic adjustment of user-determined three-dimensional scene settings according to an embodiment of the present invention.
FIG. 1B is a schematic diagram illustrating the basic concept of three-dimensional re-projection.
FIG. 1C is a simplified diagram illustrating an example of virtual camera adjustment of 3D video settings according to an embodiment of the present invention.
FIG. 1D is a simplified diagram illustrating an example of mechanical camera adjustment of 3D video settings according to an embodiment of the present invention.
FIGs. 2A-2B are schematic diagrams illustrating the problem of a user-controlled virtual object in a three-dimensional scene penetrating elements of a virtual world.
FIG. 2C is a schematic diagram illustrating scaling of pixel depth values that solves the problem of a user-controlled virtual object penetrating elements of a virtual world in a three-dimensional scene.
FIG. 3 is a schematic diagram illustrating a method for scaling pixel depth values of a user-controlled virtual object in a three-dimensional scene according to an embodiment of the present invention.
FIG. 4 is a block diagram illustrating an apparatus for implementing dynamic adjustment of user-determined three-dimensional scene settings and/or scaling of pixel depth values of a user-controlled virtual object in a three-dimensional scene according to an embodiment of the present invention.
FIG. 5 is a block diagram illustrating an example of a Cell processor implementation of an apparatus for implementing dynamic adjustment of user-determined three-dimensional scene settings and/or scaling of pixel depth values of a user-controlled virtual object in a three-dimensional scene according to an embodiment of the present invention.
FIG. 6A illustrates an example of a non-transitory computer-readable storage medium with instructions for implementing dynamic adjustment of user-determined three-dimensional scene settings according to an embodiment of the present invention.
FIG. 6B illustrates an example of a non-transitory computer-readable storage medium with instructions for implementing scaling of pixel depth values of a user-controlled virtual object in a three-dimensional scene according to an embodiment of the present invention.
FIG. 7 is an isometric view of three-dimensional viewing glasses according to an aspect of the present invention.
FIG. 8 is a system-level block diagram of three-dimensional viewing glasses according to an aspect of the present invention.
Detailed description
For any viewer of projected 3-D images, certain characteristics or cues govern the perception of depth. Each viewer's ability to perceive depth in a three-dimensional projection is unique to his or her own eyes. Certain cues can provide the viewer with depth characteristics associated with a given scene. By way of example, and not by way of limitation, these binocular cues may include stereopsis, convergence, and shadow stereopsis.
Stereopsis refers to a viewer's ability to judge depth by processing information derived from the different projections of an object onto each retina. By using two images of the same scene obtained from slightly different angles, it is possible to triangulate the distance to an object with a high degree of accuracy. If an object is far away, the disparity of the image falling on the two retinas will be small. If the object is close or near, the disparity will be large. By adjusting the angular difference between the different projections of the same scene, a viewer may optimize his perception of depth.
Convergence is another binocular cue for depth perception. When two eyeballs fixate on the same object, they converge. This convergence stretches the extraocular muscles. Kinesthetic sensations from these extraocular muscles aid in the perception of depth. The angle of convergence is smaller when the eyes are fixating on distant objects and larger when fixating on nearer objects. By adjusting the convergence of the eyes for a given scene, a viewer may optimize his perception of depth.
Shadow stereopsis refers to the stereoscopic fusion of shadows in order to impart depth to a given scene. Enhancing or reducing the intensity of scene shadows may further optimize a viewer's perception of depth.
By adjusting the scene settings associated with these binocular cues, a viewer can optimize his overall perception of depth. Although a given user may be able to select a general set of three-dimensional scene settings for viewing all scenes, each scene is unique, and consequently certain visual cues/user settings may need to be dynamically adjusted depending on the content of that particular scene. For example, in the context of a virtual world, the particular object the viewer fixates on in a given scene may be important. However, the viewer's predetermined three-dimensional scene settings may not be optimal for viewing that particular object. Here, the viewer's settings would be dynamically adjusted in accordance with the scene, so that the particular object is perceived under a better set of three-dimensional scene settings.
FIG. 1A is a flow diagram illustrating a method for dynamic adjustment of user-determined three-dimensional scene settings according to an embodiment of the present invention. Initially, a viewer 115 communicates with a processor 113 configured to stream three-dimensional video data to a visual display 111. The processor 113 may take the form of a video game console, a computer apparatus, or any other device capable of processing three-dimensional video data. By way of example, and not by way of limitation, the visual display 111 may be in the form of a 3-D-ready television set on which text, numerals, graphical symbols, or other visual objects may be displayed as stereoscopic images to be perceived with a pair of 3-D viewing glasses 119. Examples of 3-D viewing glasses are depicted in FIGs. 7-8 and described in greater detail below. The 3-D viewing glasses 119 may be in the form of active liquid-crystal shutter glasses, active "red eye" shutter glasses, passive linearly polarized glasses, passive circularly polarized glasses, interference filter glasses, complementary-color anaglyph glasses, or any other pair of 3-D viewing glasses configured to view the images projected three-dimensionally by the visual display 111. The viewer 115 may communicate with the processor 113 through a user interface 117, which may take the form of a joystick, a controller, a remote control, a keyboard, or any other device usable in conjunction with a graphical user interface (GUI).
The viewer 115 may initially select a group of general three-dimensional video settings to be applied to each three-dimensional scene presented to the viewer 115. By way of example, and not by way of limitation, the viewer may select the outer boundaries of depth within which a three-dimensional scene is to be projected. As an additional example, the user may set predetermined values for stereopsis, convergence, or shadow stereopsis. Moreover, if the user does not set predetermined values for these parameters, the predetermined values may be factory default values.
Examples of other 3D video parameter settings that may be set by the user and dynamically adjusted based on scene content include, but are not limited to, both 3D depth effect and 3D range. The strength of the 3D effect controls how much 3D is presented to the user. The outer boundaries of depth basically represent range and parallax (our depth and effect sliders). In implementations involving re-projection, a re-projection curve may be adjusted as described below. The adjustment to the re-projection curve may be an adjustment to the nature of the shape of the re-projection curve, which may be linear or perhaps S-shaped to emphasize the center. In addition, the parameters of the shape may be adjusted. By way of example, and not by way of limitation, for a linear re-projection curve the endpoints or slope may be adjusted. For an S-shaped re-projection curve, adjustments may be made to how quickly the S ramps up, and so on.
In other implementations involving re-projection, some edge blurring may be provided to patch holes, and the viewer 115 may drive that patching. In addition, embodiments of the present invention may use re-projection or other means to help reduce ghosting, allowing color contrast to be driven with per-scene adjustment based on the user's scaling. Furthermore, in situations that do not involve re-projection, the user may adjust how far the scaling is from the input camera, or apply slight fine adjustments to the camera angle. Other camera settings that may be adjusted on a per-scene basis may include depth-of-field settings or camera aperture.
Because one or more viewers 115 perceive 3-D visual presentations differently, different viewers may have different combinations of general three-dimensional scene settings according to their preferences. For example, studies have demonstrated that the elderly are less sensitive to perceived stereoscopic depth than the young, and so the elderly may benefit from scene settings that increase the perception of depth. Likewise, the young may find that settings which decrease the perception of depth relieve eyestrain and fatigue while still providing a pleasant three-dimensional experience for the viewer.
While the viewer 115 is observing a steady stream of three-dimensional scenes 103, one or more scenes that have not yet been displayed to the viewer may be stored in an output buffer 101. The scenes may be arranged according to the order of presentation of the scenes 103. A scene 103 refers to one or more three-dimensional video frames characterized by a group of shared characteristics. For example, a group of video frames representing different views of the same landscape may be characterized as one scene. However, a close-up view of an object and a perspective view of the object may represent different scenes. It is important to note that any number of frames in combination may be characterized as a scene.
A scene 103 passes through two stages before being presented to the viewer. First, the scene is processed to determine one or more characteristics 105 associated with the given scene. One or more scale factors 107 to be applied to the user's predetermined settings are then determined from those characteristics. The scale factors may then be transmitted to the processor 113 as metadata 109 and applied to dynamically adjust the viewer's settings, as indicated at 110. The scene may then be presented on the display 111 using the adjusted settings, as indicated at 112. This allows each scene to be presented to the viewer in a manner that maintains the viewer's basic preferences while preserving the visual integrity of the scene by taking its scene-specific content into account. In situations that do not involve re-projection, the metadata may instead be transmitted to a capture device to make adjustments, adjusting our virtual camera position in a game or, for example, adjusting a physical camera such as the one used in the 3D chat example.
Before describing examples of the method of the invention, it is useful to discuss some background regarding three-dimensional video systems. Embodiments of the present invention may be applied to re-projection settings for 3D video generated from 2D video through a re-projection process. In re-projection, the left-eye and right-eye virtual views of a scene may be synthesized from an ordinary two-dimensional image and per-pixel depth information associated with each pixel in the image. This process may be implemented by the processor 113 as follows.
First, the original image points are re-projected into the 3D world, utilizing the depth data for each pixel in the original image. Thereafter, these 3D space points are projected onto the image plane of a "virtual" camera located at the required viewing position. The concatenation of re-projection (2D to 3D) and subsequent projection (3D to 2D) is sometimes called 3D image warping or re-projection. As shown in FIG. 1B, re-projection may be understood by comparison with the operation of a "real" stereo camera. In "real", high-quality stereo cameras, the so-called zero-parallax setting (ZPS), i.e., the choice of the convergence distance Z_c in the 3D scene, is usually established by one of two different methods. In the "toed-in" approach, the ZPS is chosen by a joint inward rotation of the left-eye and right-eye cameras. In the shift-sensor approach, a plane at the convergence distance Z_c may be established by a small shift h of the image sensors of the left-eye and right-eye "virtual" cameras, which are placed in parallel at a separation distance t_c, as shown in FIG. 1B. Each virtual camera may be characterized by a defined focal length f representing the distance between the virtual camera lens and the image sensor. This distance corresponds to the near-plane distance Z_n to the near plane P_n used in some of the implementations described herein.
Technically, the "toed-in" approach is easier to implement in "real" stereo cameras. However, the shift-sensor approach is sometimes preferred for re-projection because it does not introduce the unwanted vertical differences between the left-eye and right-eye views that can be a potential source of eyestrain.
Given the depth information Z for each pixel located at horizontal and vertical coordinates (u, v) in the original 2D image, the shift-sensor approach may be used to generate the corresponding pixel coordinates (u′, v′) and (u″, v″) of the left-eye and right-eye views according to the following equations:
For the left-eye view: u′ = u + (α_u·t_c + t_hmp)/(2Z) + h, v′ = v;
For the right-eye view: u″ = u − (α_u·t_c + t_hmp)/(2Z) + h, v″ = v.
In the foregoing equations, α_u is the convergence angle in the horizontal direction, as seen in FIG. 1B. The term t_hmp is an optional translation term (sometimes called a head-motion parallax term) that accounts for the actual viewing position of the viewer.
The shift h of the left-eye and right-eye views may be related to the convergence angle α_u, the sensor separation t_c, and the convergence distance Z_c by the following equations:
For the left-eye view: h = −(α_u·t_c)/(2Z_c);
For the right-eye view: h = +(α_u·t_c)/(2Z_c).
With these values of h, the depth-dependent term and the sensor shift cancel at Z = Z_c (with t_hmp = 0), establishing the zero-parallax setting described above.
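By way of example, and not by way of limitation, the foregoing equations might be applied per pixel as in the following sketch (all identifiers are illustrative assumptions and are not taken from the published document):

    // Shift-sensor re-projection of one pixel (illustrative sketch).
    // alphaU, tc, thmp, and Zc correspond to the symbols in the equations
    // above; h is the sensor shift, applied with opposite signs per view.
    struct StereoCoords { float uLeft, uRight, v; };

    StereoCoords reproject(float u, float v, float Z,
                           float alphaU, float tc, float thmp, float Zc) {
        float h = (alphaU * tc) / (2.0f * Zc);        // sensor shift magnitude
        float d = (alphaU * tc + thmp) / (2.0f * Z);  // depth-dependent shift
        StereoCoords out;
        out.uLeft  = u + d - h;  // left-eye view:  u' = u + d + h_left, h_left = -h
        out.uRight = u - d + h;  // right-eye view: u'' = u - d + h_right, h_right = +h
        out.v = v;               // vertical coordinate is unchanged
        return out;
    }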
The processor 113 may receive a scene 103 in terms of an original 2D image and per-pixel depth information, together with per-scene default scaling settings that may be applied to the 3D video parameters, such as α_u, t_c, Z_c, f, and t_hmp, or combinations (e.g., ratios) thereof. For example, a scaling setting may be represented as a multiplier that varies between 0 (for no 3D perception) and some value greater than 1 (for enhanced 3D perception). Changing the 3D video parameter settings of the virtual cameras affects the qualitative perception of the 3D video. By way of example, and not by way of limitation, some qualitative effects of increasing (+) or decreasing (−) selected 3D video parameters are described in Table I below.
Table I
[Table I appears only as an image in the published document; it summarizes the qualitative effects of increasing (+) or decreasing (−) the selected 3D video parameters on screen parallax, perceived depth, and object size.]
In Table I, the term "screen parallax" refers to horizontal differences between the left-eye and right-eye views; the term "perceived depth" refers to the apparent depth of the displayed scene as perceived by the viewer; and the term "object size" refers to the apparent size of objects displayed on the screen 111 as perceived by the viewer.
In some implementations, the near plane P_n and the far plane P_f, rather than the convergence angle α_u and the sensor separation t_c, are used to describe the mathematical equations used above. The term "near plane" refers to the closest point in the scene captured by the camera, i.e., by the image sensor. The term "far plane" refers to the farthest point in the scene captured by the camera. No attempt is made to render anything beyond the far plane P_f, i.e., beyond the far-plane distance Z_f (as depicted in FIG. 1B). With a system using the mathematical equations described above, the near and far planes may be selected indirectly by choosing the values of certain variables in the equations. Alternatively, the values of the convergence angle α_u and the sensor separation t_c may be adjusted based on the selected near and far planes.
The operational requirements of a stereoscopic re-projection system may be described as follows: 1) selection of a near plane for a given scene; 2) selection of a far plane for the given scene; 3) definition of a transform from the near plane to the far plane for the re-projection of the given scene. This transform, sometimes called the re-projection curve, basically relates the amount of horizontal and vertical pixel shift to pixel depth; 4) a method for filtering and/or weighting unimportant/important pixels; and 5) a system for smoothing any changes to 1-3 that may occur during scene transitions, in order to prevent jarring cuts in the depth perceived by the viewer 115. Stereoscopic video systems also usually include 6) some mechanism that allows the viewer to scale the 3-D effect.
A typical re-projection system specifies items 1-6 above as follows: 1) the near plane of the scene's camera; 2) the far plane of the scene's camera; 3) a horizontal-shift-only transform of the pixels. A fixed shift (commonly called convergence) is turned down by an amount inversely proportional to the depth value of each pixel: the deeper or farther the pixel, the less the pixel is shifted because of the convergence. This requirement may be described, for example, by the mathematical equations given above; 4) because 1-3 are constant, no weighting is necessary; 5) because 1-3 are constant, no smoothing is necessary; and 6) a slider may be used to adjust the transform, for example by linearly scaling the amount that pixels will shift. This in effect adds a constant scale factor to the second (and possibly third) term in the equations for u′ or u″ above. Such a constant scale factor may be implemented via a user-adjustable slider that tends to move the near and far planes (and thus the average effect) toward the screen plane.
This may result in a relatively poor use of stereo. A given scene may be unbalanced and cause unnecessary eye fatigue. A 3D video editor or 3D game developer must construct all scenes and films carefully so that all objects are properly laid out within the scene.
For a given 3-D video, there is a viewing comfort zone 121 located in a region near the visual display. The farther from the screen an image is perceived, the less comfortable it is to view (for most people). The three-dimensional scene settings associated with a given scene are therefore intended to maximize use of the comfort zone 121. Although some things may lie outside the comfort zone 121, it is usually desirable that most of the things the viewer fixates on lie within the comfort zone 121. By way of example, and not by way of limitation, the viewer may set the boundaries of the comfort zone 121 while the processor 113 dynamically adjusts the scene settings so that use of the comfort zone 121 is maximized for each scene.
A straightforward approach to maximizing use of the comfort zone 121 may involve setting the near plane equal to the minimum pixel depth associated with a given scene and setting the far plane equal to the maximum pixel depth associated with that scene, while keeping properties 3-6 as defined above for the typical re-projection system. This would maximize use of the comfort zone 121, but it fails to account for the effects of objects flying into or out of the scene, which may cause huge shifts in the three-dimensional space.
By way of example, and not by way of limitation, some embodiments of the method of the invention may additionally take the average depth of the scene into account. The average depth of a scene may be driven toward a target. The three-dimensional scene data may set a target for a given scene while allowing the user to scale how far from that target they perceive the scene (e.g., the boundaries of the comfort zone).
The pseudocode for calculating such an average may be envisioned as follows:
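A minimal sketch (the published document reproduces the pseudocode only as an image; all identifiers here are illustrative):

    // Average pixel depth of a scene, assuming the scene's depth buffer is
    // available as a flat array of per-pixel depth values.
    float averageDepth(const float* depth, int numPixels) {
        if (numPixels <= 0) return 0.0f;
        double sum = 0.0;
        for (int i = 0; i < numPixels; ++i) {
            sum += depth[i];
        }
        return static_cast<float>(sum / numPixels);
    }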
The near plane may be set to the minimum depth value of all pixels in the scene, and the far plane may be set to the maximum depth value of all the pixels in the scene. The target perceived depth may be specified by the content creator and additionally scaled by a user-preferred value. Using the calculated average together with transform property 3 from above, it is possible to calculate how far the average scene depth is from the target perceived depth. By way of example, and not by way of limitation, the overall perceived scene depth may then be shifted simply by adjusting the convergence by the target delta (as shown in Table I). The target delta may also be smoothed, as is done for the near and far planes below. Other methods of adjusting the target depth may also be used, such as those used in 3D films to ensure a consistent depth across scene changes. It should be noted, however, that 3D films currently provide the viewer with no method for adjusting the target scene depth.
By way of example, and not by way of limitation, one approach to determining one or more three-dimensional characteristics associated with a given scene is to determine and use the following two important scene characteristics: the average pixel depth of the scene and the standard deviation of the pixel depths of that scene. The pseudocode for calculating the average and standard deviation of the pixel depths may be envisioned as follows:
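A minimal sketch (again, the published pseudocode appears only as an image; identifiers are illustrative):

    // Mean and standard deviation of a scene's pixel depths.
    #include <cmath>

    void depthStatistics(const float* depth, int numPixels,
                         float& mean, float& stdDev) {
        if (numPixels <= 0) { mean = stdDev = 0.0f; return; }
        double sum = 0.0, sumSq = 0.0;
        for (int i = 0; i < numPixels; ++i) {
            sum   += depth[i];
            sumSq += static_cast<double>(depth[i]) * depth[i];
        }
        double m = sum / numPixels;
        mean   = static_cast<float>(m);
        stdDev = static_cast<float>(std::sqrt(sumSq / numPixels - m * m));
    }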
The near plane may then be set to the average pixel depth of the scene minus the standard deviation of the pixel depths of that scene. Similarly, the far plane may be set to the average pixel depth of the scene plus the standard deviation of the pixel depths of that scene. If these results prove insufficient, the re-projection system may convert the data representing the scene into the frequency domain for the calculation of the average pixel depth and standard deviation for the given scene. As in the example above, driving toward a target depth may be accomplished in the same manner.
To provide a method for filtering and weighting unimportant pixels, the scene may be examined in detail and unimportant pixels flagged. Unimportant pixels will likely include fly-by particles and other small, irrelevant geometry. In the case of a video game this can easily be done during rasterization; otherwise, an algorithm for finding small clusters of depth disparity would likely be used. If a method can determine where the user is fixating, the depths of nearby pixels should be considered more important: the farther from our focus point, the less important the pixel. Such a method may include, but is not limited to, determining whether a cursor or reticle is in the image and its position within the image, or measuring the rotation of the eyes using feedback from specialized glasses. Such glasses may include simple cameras pointed at the wearer's eyeballs. The cameras may provide images in which the whites of the user's eyes can be distinguished from the dark portions (e.g., the pupils). The rotation of the eyeballs may be determined by analyzing the images to determine the positions of the pupils and correlating those positions with eyeball angles. For example, a centered pupil would roughly correspond to an eyeball oriented straight ahead.
In some embodiments, pixels in the central portion of the display 111 may be emphasized, since values at the edges are likely to be less important. If the distance between pixels is taken as a fixed two-dimensional distance that ignores depth, a simple biased-weighting statistical model that emphasizes such center pixels or a focus point may be envisioned with the following pseudocode:
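A minimal sketch of such a center-biased weighting (the published pseudocode is an image; the falloff function and all identifiers here are illustrative assumptions):

    // Center-weighted depth statistics: pixels near a focus point (screen
    // center, cursor, or gaze position) count more; the weight falls off
    // with fixed two-dimensional screen distance, ignoring depth.
    #include <cmath>

    void weightedDepthStats(const float* depth, int width, int height,
                            float focusX, float focusY,
                            float& mean, float& stdDev) {
        if (width <= 0 || height <= 0) { mean = stdDev = 0.0f; return; }
        double wSum = 0.0, sum = 0.0, sumSq = 0.0;
        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                float dx = x - focusX, dy = y - focusY;
                // Weight is 1 at the focus point and decays with distance.
                double w = 1.0 / (1.0 + std::sqrt(double(dx * dx + dy * dy)));
                double z = depth[y * width + x];
                wSum += w; sum += w * z; sumSq += w * z * z;
            }
        }
        double m = sum / wSum;
        mean   = static_cast<float>(m);
        stdDev = static_cast<float>(std::sqrt(sumSq / wSum - m * m));
    }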
To provide a system that keeps the majority of the picture within the comfort zone 121, the near and far planes (or the other variables in the mathematical equations described above) should be adjusted in addition to, or instead of, the convergence described in the example above. The processor 113 may be configured to implement a process such as the one contemplated by the following pseudocode:
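A minimal sketch of such a smoothing process (the published pseudocode is an image; only viewerScale and contentScale are named in the surrounding text, and the rest is illustrative):

    // Per-scene smoothing of the near and far planes toward their target
    // values, preventing jarring depth cuts at scene transitions.
    void smoothPlanes(float& nearPlane, float& farPlane,
                      float targetNear, float targetFar,
                      float viewerScale, float contentScale) {
        float rate = viewerScale * contentScale;  // both in [0, 1]
        nearPlane += (targetNear - nearPlane) * rate;
        farPlane  += (targetFar  - farPlane)  * rate;
    }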
viewerScale and contentScale are both values between 0 and 1 that control the rate of change. The viewer 115 adjusts the value of viewerScale, and the content creator sets the value of contentScale. The same smoothing may be applied to the convergence adjustment above.
In some implementations (such as video games), because the processor 113 may need to be able to drive objects in the scene farther from, or closer to, the screen 111, an augmented target adjustment procedure such as the following may be useful:
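A minimal sketch (the published pseudocode is an image; identifiers are illustrative):

    // Augmented target adjustment: a signed displacement shifts both planes,
    // letting the program drive the scene farther from or closer to the screen.
    void displacePlanes(float& nearPlane, float& farPlane, float displacement) {
        nearPlane += displacement;
        farPlane  += displacement;
    }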
A positive displacement will tend to move nearPlane and farPlane back into the scene. Similarly, a negative displacement will move things closer.
After one or more characteristics (e.g., near plane, far plane, average pixel depth, standard deviation of the pixel depths, etc.) 105 of a given scene are determined, a set of scale factors 107 may be determined. These scale factors may dictate how the scene is maximized within the boundaries of the user-determined comfort zone 121. In addition, one of these scale factors may be used to control the rate at which the three-dimensional settings are modified during scene changes.
Once the scale factors corresponding to the characteristics of a given scene are determined, the scale factors may be stored as metadata 109 in the three-dimensional scene data. A scene 103 (with its accompanying three-dimensional data) may be transmitted to the processor 113 together with the metadata 109 associated with that scene. The processor 113 may then adjust the three-dimensional scene settings in accordance with the metadata.
It is important to note that a scene may be processed to determine scale factors and metadata at different stages of the three-dimensional data streaming process, and such processing is not limited to occurring after the scene has been placed in the output buffer 101. Moreover, the user-determined set of three-dimensional scene settings is not limited to setting the boundaries of the three-dimensional projection. By way of example, and not by way of limitation, the user-determined scene settings may also control the sharpness of objects in the three-dimensional scene or the intensity of shadows in the three-dimensional scene.
Although the previous examples are described in the context of re-projection, embodiments of the present invention are not limited to such implementations. The concepts of scaling the depth and range of a re-projection may be applied equally well to adjusting input parameters, e.g., for the positions of the virtual or real stereo cameras used for real-time 3D video. If the camera feed is dynamic, adjustments to the input parameters for real-time stereo content may be implemented. FIG. 1C and FIG. 1D illustrate examples of dynamic adjustment of camera feeds according to alternative embodiments of the present invention.
As seen in FIG. 1C, the processor 113 may generate left-eye and right-eye views of a scene 103 from three-dimensional data indicating the positions of objects and of a virtual stereo camera 114, which includes a left-eye camera 114A and a right-eye camera 114B, within a simulated environment 102, such as a position in a video game or virtual world. For purposes of example, the virtual stereo camera may be thought of as part of one unit with two individual cameras. However, embodiments of the invention include implementations in which the virtual stereo cameras are separate and not part of one unit. It is noted that the positions and orientations of the virtual cameras 114A, 114B determine what is shown in the scene. For example, suppose the simulated environment is a level of a first-person shooter (FPS) game in which an avatar 115A represents the user 115. The user controls the movements and actions of the avatar 115A using the processor 113 and a suitable controller 117. In response to user commands, the processor 113 may select the positions and orientations of the virtual cameras 114A, 114B. If the virtual cameras point toward a distant object, such as a non-player character 116, the scene may have a greater depth than if the cameras were pointed toward a nearby object, such as a non-player character 118. The positions of all of these objects relative to the virtual cameras may be determined by the processor from the three-dimensional information generated by the physics simulation components of the game. The depths of the objects within the cameras' field of view may be calculated for the scene. An average depth, maximum depth, depth range, and the like may then be calculated for the scene, and these per-scene values may be used to select default values and/or scale factors for the 3D parameters (such as α_u, t_c, Z_c, f, and t_hmp). By way of example, and not by way of limitation, the processor 113 may implement a look-up table or function that relates particular 3D parameters to particular combinations of per-scene values, as sketched below. The tabular or functional relationships between the 3D parameters and the default per-scene values and/or scale factors may be determined empirically. The processor 113 may then modify the default values and/or scale factors according to the user's preferred settings.
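By way of example, and not by way of limitation, such a look-up might be sketched as follows (all breakpoints and values are invented for illustration and are not taken from the published document; in practice they would be determined empirically):

    // Illustrative per-scene look-up: map a scene's average depth to a
    // default convergence distance Zc and a depth-effect scale factor.
    struct SceneParams { float Zc; float depthScale; };

    SceneParams lookupParams(float meanDepth) {
        if (meanDepth < 10.0f)  return {  5.0f, 1.0f };  // close-up scene
        if (meanDepth < 100.0f) return { 50.0f, 0.8f };  // mid-range scene
        return { 200.0f, 0.5f };                         // distant landscape
    }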
In a variation on the embodiments described with respect to FIGs. 1A-1C, it is also possible to implement similar adjustments of 3D parameter settings with motorized physical stereo cameras. For example, consider a video chat example, e.g., as depicted in FIG. 1D. In this case, a first user 115 and a second user 115′ interact via a first processor 113 and a second processor 113′, first and second 3D cameras 114 and 114′, and first and second controllers 117 and 117′, respectively. The processors 113, 113′ are coupled to each other, e.g., by a network 120, which may be a wired or wireless network, a local area network (LAN), a wide area network, or another communication network. The first user's 3D camera 114 includes a left-eye camera 114A and a right-eye camera 114B. Left-eye and right-eye images of the first user's environment are displayed on a video display 111′ attached to the second user's processor 113′. In the same manner, the second user's 3D camera 114′ includes a left-eye camera 114A′ and a right-eye camera 114B′. For purposes of example, the left-eye and right-eye stereo cameras may be physical parts of one unit with two integrated cameras (e.g., separate lens units and separate sensors for the left and right views). However, embodiments of the invention include implementations in which the left-eye and right-eye cameras are physically independent of each other and not part of one unit.
Left-eye and right-eye images of the second user's environment are displayed on the video display 111 attached to the first user's processor 113. The first user's processor 113 may determine per-scene 3D values from the left-eye and right-eye images. For example, the two cameras typically capture color buffers. With a suitable depth-recovery algorithm, depth information may be recovered from the color buffer information of the left-eye and right-eye cameras. The processor 113 may transmit the depth information together with the images to the second user's processor 113′. It is noted that the depth information may vary depending on the scene content. For example, a scene captured by the cameras 114A′, 114B′ may contain objects at different depths, such as the user 115′ and a distant object 118′. The different depths of these objects within the scene may affect the average pixel depth of the scene and the standard deviation of the pixel depths.
The left-eye and right-eye cameras in both the first user's camera 114 and the second user's camera 114′ may be motorized so that the values of the parameters for the left-eye and right-eye cameras (such as f, t_c, and the "toe-in" angle) can be adjusted on the fly. The first user may select initial settings of the 3D video parameters of the camera 114, such as the spacing t_c between the cameras and/or the relative horizontal rotation angle of the left-eye camera 114A and the right-eye camera 114B (for "toe-in"). For example, as discussed above, the second user 115′ may use the second controller 117′ and the second processor 113′ to adjust the settings of the 3D video parameters (e.g., f, t_c, or toe-in angle) of the first user's camera 114 so as to adjust the scale factors. Data representing the adjustments to the scale factors may then be transmitted to the first processor 113 via the network 120. The first processor may use the adjustments to adjust the 3D video parameter settings of the first user's camera 114. In a similar manner, the first user 115 may adjust the settings of the second user's 3D camera 114′. In this way, each user 115, 115′ may view 3D video images of the other's environment under comfortable 3D settings.
Scaling pixel depth values of a user-controlled virtual object in a three-dimensional scene
Improvements in three-dimensional image rendering have had a significant impact in the area of interactive virtual environments employing 3-D technology. Many video games implement 3-D image rendering to create virtual environments for user interaction. However, simulating real-world physics to facilitate user interaction with the virtual world is very expensive and quite difficult to implement. As such, certain unwanted visual disturbances may appear during the implementation of a game.
One problem occurs when artifacts of the 3-D video cause user-controlled virtual objects (e.g., a character and a gun) to penetrate other elements of the virtual world (e.g., background scenery). When a user-controlled virtual object penetrates other elements of the virtual world, the realism of the game is greatly undermined. In the case of a first-person shooter, the line of sight of the first person may be obstructed, or certain critical elements may be occluded. Thus, it is necessary for any program featuring the interaction of user-controlled virtual objects within a three-dimensional virtual environment to eliminate the occurrence of these visual disturbances.
Embodiments of the present invention may be configured to scale the pixel depths of a user-controlled virtual object in order to solve the problem of the user-controlled virtual object penetrating elements of the three-dimensional scene of the virtual world. In the context of a first-person shooter (FPS) video game, a possible example would be the end of a gun barrel as seen from the shooter's perspective.
FIGs. 2A-2B illustrate the problem of a user-controlled virtual object penetrating elements of the virtual world in a three-dimensional scene generated using re-projection. When a user-controlled virtual object penetrates other elements of the virtual world, the realism of the game is greatly undermined. As shown in FIG. 2A, in a virtual environment (e.g., a scene) in which no scaling of the pixel depth values of the user-controlled virtual object has been performed, the user-controlled virtual object 201 (e.g., a gun barrel) may penetrate another element 203 of the virtual world (e.g., a wall), potentially obstructing the view and undermining the sense of realism, as discussed above. In the case of a first-person shooter, the line of sight of the first person may be obstructed, or certain critical elements (e.g., the end of the gun barrel) may be occluded. In FIG. 2A, the occluded elements are shown in phantom.
A common solution for two-dimensional first-person video games is to scale the depths of objects in the virtual world in order to eliminate the visual artifacts in the two-dimensional image (or to change the artifacts into different artifacts that are not as significant). This scaling is usually applied during rasterization of the two-dimensional video image. In the first-person shooter example, this means that if the top of the gun barrel 201 does not pass through the wall 203, the viewer will see the top of the gun barrel. This solution works well for two-dimensional video; however, problems arise when it is applied to stereoscopic video. The problem is that the scaled depth values no longer represent real points in three-dimensional space relative to the remainder of the two-dimensional image. Consequently, when re-projection is used to generate the left-eye and right-eye views, the depth scaling causes objects to appear compressed in the depth dimension and to be located at the wrong positions. For example, as shown in FIG. 2B, the gun barrel 201 would now be perceived as "crushed" in the depth direction, and the gun barrel is positioned extremely close to the viewer when it should be closer to the physical screen. Another problem with re-projection is that depth scaling, when complete, can also leave large holes in the image that are difficult to fill.
Furthermore, scaling the depths back to their original values from the true three-dimensional scene information, or overwriting the depth values, means that the viewer will still see the gun barrel, but the gun barrel will be perceived as being behind the wall. The viewer will see a phantom portion of the virtual object 201 even though the virtual object should in fact be blocked by the wall 203. This depth-piercing effect is troubling because the viewer still expects to see the wall.
To solve this problem, embodiments of the present invention apply a second set of scaling to objects in the scene in order to place them at suitable perceived positions within the scene. The second scaling may be applied after rasterization of the two-dimensional image, but before or during re-projection of the image to generate the left-eye and right-eye views. FIG. 2C illustrates a virtual environment (e.g., a scene) in which scaling of the pixel depth values of the user-controlled virtual object is performed. Here, with the pixel depth scaling discussed above, the user-controlled virtual object 201 may come close to another element 203 of the virtual world but is restricted from penetrating the element 203. The second scaling constrains the depth values to lie between a near value N and a far value F. In essence, the object may still appear somewhat crushed in the depth dimension, but full control may be exercised over its thickness. This is a tradeoff; of course, the viewer may be given control over this second scaling, e.g., as discussed above.
In this way, visual disturbances caused by a user-controlled virtual object penetrating elements of the virtual world may be eliminated or significantly reduced.
FIG. 3 is a schematic diagram illustrating a method for scaling pixel depth values of a user-controlled virtual object in a three-dimensional scene according to an embodiment of the present invention.
To solve the problem, a program may apply a second scaling to the pixel depth values of the user-controlled virtual object in accordance with the content of the three-dimensional scene to be presented to the user.
Scenes 103 may be arranged in an output buffer 101 before being presented to the user. They may be arranged according to the order of presentation of the scenes 103. A scene 103 refers to one or more three-dimensional video frames characterized by a group of shared characteristics. For example, a group of video frames representing different views of the same landscape may be characterized as one scene. However, a close-up view and a perspective view of the same object may also represent different scenes. It is important to note that any number of frames in combination may be characterized as a scene.
As indicated at 133, an initial depth scaling is applied to the two-dimensional image of the three-dimensional scene 103. The initial depth scaling is usually performed with a modified view-projection matrix during rasterization of the two-dimensional image. This scaled depth information is written to the depth buffer for the scene.
Before a scene 103 is presented to the user three-dimensionally (e.g., as left-eye and right-eye views), the scene may be examined in detail to determine certain key characteristics that are crucial to solving the problem discussed above. For a given scene 103, a minimum threshold value is first determined, as indicated at 135. The minimum threshold value represents a minimum pixel depth value below which no fragment of the user-controlled virtual object may fall. Second, a maximum threshold value is determined, as indicated at 137. The maximum threshold value represents a maximum pixel depth value that no fragment of the user-controlled virtual object may exceed. These threshold values may set a limit on how far the user-controlled virtual object may travel within the virtual environment, such that the user-controlled virtual object is restricted from penetrating other elements of the virtual environment.
When the virtual objects of user control moves in virtual world, virtual objects is followed the tracks of their pixel depth value and made it and the pixel depth value of above determined threshold limit value (TLV) compares, as indicating in 139 places.No matter when the pixel depth value of any fragment of the virtual objects of user's control drops on below the Minimum Threshold limit value, all those pixel depth values is arranged to low value, as indicating in 141 places.For example and not with ways to restrain, this low value can be described Minimum Threshold limit value.Alternatively, this low value can be the scaling value of the virtual objects pixel depth value controlled of user.For example, drop on the pixel depth value below the described Minimum Threshold limit value and then smallest offset is added product by multiply by with inverse proportion, can determine described low value.
No matter when the pixel depth value of any fragment of the virtual objects of user's control surpasses the terminal threshold value, all those pixel depth values is arranged to high value, as indicating in 143 places.For example and not with ways to restrain, this high value can be described terminal threshold value.Alternatively, this high value can be the scaling value of the virtual objects pixel depth value controlled of user.For example, by multiply by the pixel depth value that surpasses described terminal threshold value with inverse proportion and then deducting product from peak excursion, can determine described high value.
For virtual objects that are small in nature and do not need an enhanced perception of depth, setting the low and high values to the minimum and maximum thresholds works particularly well. These values effectively shift the virtual object away from the virtual camera. However, for virtual objects that do need an enhanced perception of depth, such as a targeting reticle, the scaled low and high values described above can work more effectively.
The minimum and maximum threshold values may be determined by the program before it is executed by the processor 113, or they may be determined by the processor 113 in the course of executing the program. During execution of the program, the processor 113 performs the comparison between the pixel depth values of the user-controlled virtual object and the threshold values. Likewise, during execution, the processor establishes the low and high values for any pixel depths of the user-controlled virtual object that fall below or rise above the thresholds.
After this second scaling of the pixel depth values, the processor 113 may re-project the two-dimensional image using the resulting set of pixel depth values for the user-controlled virtual object, so as to generate two or more views of the three-dimensional scene (e.g., a left-eye view and a right-eye view), as indicated at 145. The two or more views may then be displayed on a three-dimensional display, as indicated at 147.
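By way of illustration only, the following C++ sketch shows a simplified re-projection of a two-dimensional image into left-eye and right-eye views. The disparity model (a horizontal shift inversely proportional to depth) is a common approximation assumed here for illustration, and the sketch omits refinements such as hole filling.

// Sketch: forward-map each pixel into two views using its depth.
#include <vector>
#include <cstdint>

void reproject(const std::vector<uint32_t>& color,
               const std::vector<float>& depth,
               int width, int height, float eyeSeparation,
               std::vector<uint32_t>& leftView,
               std::vector<uint32_t>& rightView) {
    leftView.assign(color.size(), 0);   // unfilled pixels stay black
    rightView.assign(color.size(), 0);
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            int i = y * width + x;
            // Nearer pixels (small depth) get a larger horizontal shift.
            int shift = static_cast<int>(eyeSeparation / (depth[i] + 1.0f));
            int xl = x - shift, xr = x + shift;
            if (xl >= 0 && xl < width) leftView[y * width + xl] = color[i];
            if (xr >= 0 && xr < width) rightView[y * width + xr] = color[i];
        }
    }
}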
Setting any pixel depth values of the user-controlled virtual object that cross the thresholds to the low or high value solves the problem of the object penetrating other virtual-world elements. Although simulating the physical interaction between the virtual object and its virtual world would also solve this problem, such simulation is quite difficult to implement in practice. The ability to scale the pixel depth values of the user-controlled virtual object according to the method described above therefore provides a simple, cost-effective solution to the problem.
Apparatus
FIG. 4 illustrates a block diagram of a computer apparatus that may be used to implement dynamic adjustment of user-determined three-dimensional scene settings and/or scaling of pixel depth values according to an embodiment of the present invention. The apparatus 200 may generally include a processor module 201 and a memory 205. The processor module 201 may include one or more processor cores. An example of a processing system that uses multiple processor modules is a Cell processor, examples of which are described in detail, e.g., in Cell Broadband Engine Architecture, which is available online at http://www-306.ibm.com/chips/techlib/techlib.nsf/techdocs/1AEEE1270EA2776387257060006E61BA/$file/CBEA_01_pub.pdf and is incorporated herein by reference.
The memory 205 may be in the form of an integrated circuit, e.g., RAM, DRAM, ROM, and the like. The memory 205 may also be a main memory that is accessible by all of the processor modules. In some embodiments, the processor module 201 may have local memories associated with each core. A program 203 may be stored in the main memory 205 in the form of processor-readable instructions that can be executed by the processor modules. The program 203 may be configured to perform dynamic adjustment of a user-determined set of three-dimensional scene settings. The program 203 may also be configured to perform scaling of the pixel depth values of user-controlled virtual objects in a three-dimensional scene, e.g., as described above with respect to FIG. 3. The program 203 may be written in any suitable processor-readable language, e.g., C, C++, JAVA, Assembly, MATLAB, FORTRAN, and a number of other languages. Input data 207 may also be stored in the memory. Such input data 207 may include the user-determined set of three-dimensional settings, three-dimensional characteristics associated with a given scene, or scale factors associated with certain three-dimensional characteristics. The input data 207 may also include the threshold values associated with a three-dimensional scene and the pixel depth values associated with the user-controlled object. During execution of the program 203, portions of program code and/or data may be loaded into the memory or into the local stores of the processor cores for parallel processing by multiple processor cores.
The apparatus 200 may also include well-known support functions 209, such as input/output (I/O) elements 211, power supplies (P/S) 213, a clock (CLK) 215 and a cache 217. The apparatus 200 may optionally include a mass storage device 219 such as a disk drive, CD-ROM drive, tape drive, or the like to store programs and/or data. The device 200 may optionally include a display unit 221 and a user interface unit 225 to facilitate interaction between the apparatus and a user. By way of example, and not by way of limitation, the display unit 221 may be in the form of a 3-D ready television set that displays text, numerals, graphical symbols or other visual objects as stereoscopic images to be perceived with a pair of 3-D viewing glasses 227, which may be coupled to the I/O elements 211. Stereoscopy refers to the enhancement of the illusion of depth in a two-dimensional image by presenting a slightly different image to each eye. The user interface 225 may include a keyboard, mouse, joystick, light pen, or other device that may be used in conjunction with a graphical user interface (GUI). The apparatus 200 may also include a network interface 223 to enable the device to communicate with other devices over a network, such as the internet.
The components of the system 200, including the processor 201, memory 205, support functions 209, mass storage device 219, user interface 225, network interface 223, and display 221, may be operably connected to each other via one or more data buses 227. These components may be implemented in hardware, software or firmware, or some combination of two or more of these.
There are a number of additional ways to streamline parallel processing with multiple processors in the apparatus. For example, in some implementations it is possible to "unroll" processing loops, e.g., by replicating code across two or more processor cores and having each core implement the code to process a different piece of data. Such an implementation may avoid the latency associated with setting up the loop. As applied to embodiments of the present invention, multiple processors could determine scale factors for different scenes in parallel, as sketched below. The ability to process data in parallel saves valuable processing time, leading to a more efficient and streamlined system for scaling the pixel depth values of user-controlled virtual objects in a three-dimensional scene. It likewise saves processing time for the dynamic adjustment of a user-determined set of scene settings, yielding a more efficient and streamlined system there as well.
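By way of illustration only, the following C++ sketch shows the kind of per-scene parallelism described above, with each scene's scale factor computed on its own thread. The Scene structure and computeScaleFactor function are hypothetical stand-ins for the program's actual scene data and analysis.

// Sketch: compute scale factors for different scenes concurrently.
#include <thread>
#include <vector>

struct Scene { float averageDepth = 1.0f; /* other per-scene data */ };

float computeScaleFactor(const Scene& s) {
    return 1.0f / s.averageDepth;  // placeholder for the real analysis
}

void computeAllScaleFactors(const std::vector<Scene>& scenes,
                            std::vector<float>& factors) {
    factors.resize(scenes.size());
    std::vector<std::thread> workers;
    for (std::size_t i = 0; i < scenes.size(); ++i)
        workers.emplace_back([&scenes, &factors, i] {
            factors[i] = computeScaleFactor(scenes[i]);  // independent per scene
        });
    for (auto& w : workers) w.join();  // wait for all scenes to finish
}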
One example of a processing system capable of implementing parallel processing on two or more processors is a Cell processor. There are a number of different processor architectures that may be categorized as Cell processors. By way of example, and without limitation, FIG. 5 illustrates one type of Cell processor. The Cell processor 300 includes a main memory 301, a single Power Processor Element (PPE) 307, and eight Synergistic Processor Elements (SPE) 311. Alternatively, the Cell processor may be configured with any number of SPEs. With respect to FIG. 5, the memory 301, PPE 307 and SPEs 311 can communicate with each other and with an I/O device 315 over a ring-type element interconnect bus 317. The memory 301 contains input data 303 having features in common with the input data described above and a program 305 having features in common with the program described above. At least one of the SPEs 311 may include in its local store (LS) program instructions 313 and/or a portion of the input data 303 that is to be processed in parallel, e.g., as described above. The PPE 307 may include program instructions 309 in its L1 cache. The program instructions 309, 313 may be configured to implement embodiments of the present invention, e.g., as described above with respect to FIG. 1 or FIG. 3. By way of example, and not by way of limitation, the instructions 309, 313 may have features in common with the program 203 described above. The instructions 309, 313 and the data 303 may also be stored in the memory 301 for access by the SPEs 311 and the PPE 307 when needed.
By way of example, and not by way of limitation, the instructions 309, 313 may include instructions for implementing dynamic adjustment of user-determined three-dimensional scene settings as described above with respect to FIG. 1. Alternatively, the instructions 309, 313 may be configured to implement scaling of the pixel depth values of a user-controlled virtual object, e.g., as described above with respect to FIG. 3.
By way of example, the PPE 307 may be a 64-bit PowerPC Processor Unit (PPU) with an associated cache. The PPE 307 may include an optional vector multimedia extension unit. Each SPE 311 includes a Synergistic Processor Unit (SPU) and a local store (LS). In some implementations, the local store may have a capacity of, e.g., about 256 kilobytes of memory for programs and data. The SPUs are less complex computational units than the PPU, in that they typically do not perform system management functions. The SPUs may have single instruction, multiple data (SIMD) capability and typically process data and initiate any required data transfers (subject to access properties set up by the PPE) in order to perform their allocated tasks. The SPUs allow the system to implement applications that require a higher computational unit density and can effectively use the provided instruction set. A significant number of SPUs in a system, managed by the PPE, allows for cost-effective processing over a wide range of applications. By way of example, the Cell processor may be characterized by an architecture known as the Cell Broadband Engine Architecture (CBEA). In CBEA-compliant architecture, multiple PPEs may be combined into a PPE group and multiple SPEs may be combined into an SPE group. For purposes of example, the Cell processor is depicted as having a single SPE group with a single SPE and a single PPE group with a single PPE. Alternatively, a Cell processor can include multiple groups of power processor elements (PPE groups) and multiple groups of synergistic processor elements (SPE groups). CBEA-compliant processors are described in detail, e.g., in Cell Broadband Engine Architecture, which is available online at https://www-306.ibm.com/chips/techlib/techlib.nsf/techdocs/1AEEE1270EA2776387257060006E61BA/$file/CBEA_01_pub.pdf and is incorporated herein by reference.
According to another embodiment, instructions for dynamic adjustment of user-determined three-dimensional scene settings may be stored in a computer-readable storage medium. By way of example, and not by way of limitation, FIG. 6A illustrates an example of a non-transitory computer-readable storage medium 400 in accordance with an embodiment of the present invention. The storage medium 400 contains computer-readable instructions stored in a format that can be retrieved, interpreted, and executed by a computer processing device. By way of example, and not by way of limitation, the computer-readable storage medium may be a computer-readable memory, such as random access memory (RAM) or read-only memory (ROM), a computer-readable storage disk for a fixed disk drive (e.g., a hard disk drive), or a removable disk drive. In addition, the computer-readable storage medium 400 may be a flash memory device, a computer-readable tape, a CD-ROM, a DVD-ROM, a Blu-Ray disc, an HD-DVD, a UMD, or other optical storage medium.
The storage medium 400 contains instructions 401 for dynamic adjustment of user-determined three-dimensional scene settings. The instructions 401 may be configured to implement dynamic adjustment in accordance with the method described above with respect to FIG. 1. In particular, the dynamic adjustment instructions 401 may include determining-scene-characteristics instructions 403 that determine certain characteristics of a given scene relevant to optimizing the three-dimensional viewing settings for that scene. The dynamic adjustment instructions 401 may further include determining-scale-factor instructions 405 that are configured to determine, based on the characteristics of the given scene, one or more scale factors representing certain optimizing adjustments to be made.
The dynamic adjustment instructions 401 may also include adjusting-user-determined-settings instructions 407 that are configured to apply the one or more scale factors to the user-determined three-dimensional scene settings, so that the resulting 3-D projection takes both user preferences and inherent scene characteristics into account. The result is a visual presentation of the scene according to the user's predetermined settings, corrected according to certain characteristics associated with the scene, so that each user's perception of a given scene is uniquely optimized.
The dynamic adjustment instructions 401 may additionally include display instructions 409 configured to present the scene on a visual display according to the dynamically adjusted three-dimensional scene settings obtained above.
According to another embodiment, instructions for scaling the pixel depth values of a user-controlled virtual object in a three-dimensional scene may be stored in a computer-readable storage medium. By way of example, and not by way of limitation, FIG. 6B illustrates an example of a non-transitory computer-readable storage medium 410 in accordance with an embodiment of the present invention. The storage medium 410 contains computer-readable instructions stored in a format that can be retrieved, interpreted, and executed by a computer processing device. By way of example, and not by way of limitation, the computer-readable storage medium may be a computer-readable memory, such as random access memory (RAM) or read-only memory (ROM), a computer-readable storage disk for a fixed disk drive (e.g., a hard disk drive), or a removable disk drive. In addition, the computer-readable storage medium 410 may be a flash memory device, a computer-readable tape, a CD-ROM, a DVD-ROM, a Blu-Ray disc, an HD-DVD, a UMD, or other optical storage medium.
The storage medium 410 contains instructions 411 for scaling the pixel depth values of a user-controlled virtual object in a three-dimensional scene. The instructions 411 may be configured to implement pixel depth scaling in accordance with the method described above with respect to FIG. 3. In particular, the pixel depth scaling instructions 411 may include initial scaling instructions 412 that, when executed, perform the initial scaling of a two-dimensional image of the three-dimensional scene. The instructions 411 may further include determining-minimum-threshold instructions 413 that determine a minimum threshold for the three-dimensional scene, below which the pixel depth values of the user-controlled virtual object may not fall for the particular scene. Similarly, the pixel depth scaling instructions 411 may also include determining-maximum-threshold instructions 415 that determine a maximum threshold for the three-dimensional scene, which the pixel depth values of the user-controlled virtual object may not exceed for the particular scene.
The pixel depth scaling instructions 411 may also include comparing-virtual-object-pixel-depth instructions 417 that compare the pixel depths associated with the user-controlled virtual object to the threshold values determined above. By comparing the pixel depth values of the user-controlled virtual object against the threshold values, the position of the user-controlled virtual object can be tracked continuously to ensure that it does not penetrate other virtual components in the three-dimensional scene.
The pixel depth scaling instructions 411 may further include setting-virtual-object-pixel-depth-to-low-value instructions 419 that restrict any part of the virtual object's depth from falling below the minimum threshold. The low value assigned to an excessively low pixel depth value of the virtual object may be the minimum threshold itself, or a scaled version of the low pixel depth value, as discussed above.
The pixel depth scaling instructions 411 may additionally include setting-virtual-object-pixel-depth-to-high-value instructions 421 that restrict any part of the virtual object's depth from exceeding the maximum threshold. The high value assigned to an excessively high pixel depth value of the virtual object may be the maximum threshold itself, or a scaled version of the high pixel depth value, as discussed above.
The pixel depth scaling instructions 411 may further include re-projection instructions 423 that re-project the two-dimensional image using the resulting set of pixel depth values for the user-controlled virtual object to produce two or more views of the three-dimensional scene. The pixel depth scaling instructions 411 may additionally include display instructions 425 configured to present the scene on a visual display using the resulting set of virtual object pixel depth values.
As noted above, embodiments of the present invention may make use of three-dimensional viewing glasses. An example of three-dimensional viewing glasses 501 according to an aspect of the present invention is shown in FIG. 7. The glasses may include a frame 505 for holding a left LCD lens 510 and a right LCD lens 512. As noted above, each lens 510 and 512 can be rapidly and selectively darkened so as to prevent the wearer from seeing through it. A left earphone 530 and a right earphone 532 are also preferably connected to the frame 505. An antenna 520 for sending and receiving wireless information may also be included in or on the frame 505. The glasses may be tracked by any means to determine whether they are looking at the screen. For example, the front of the glasses may also include one or more photodetectors 540 for detecting the orientation of the glasses toward the monitor.
Alternating display of images from the video feeds can be provided using a variety of known techniques. The visual display 111 of FIG. 1 may be configured to operate in a progressive scan mode for each video feed sharing the screen. However, embodiments of the present invention can also be configured to work with interlaced video, as described herein. For standard television monitors, such as those using interlaced NTSC or PAL formats, images from the two screen feeds can be interlaced so that the lines of an image from one video feed alternate with the lines of an image from the other video feed. For example, the odd-numbered lines taken from an image in the first video feed are displayed, followed by the even-numbered lines taken from an image in the second video feed.
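By way of illustration only, the following C++ sketch interlaces two video feeds line by line as described above. Pixel buffers are assumed to be packed 32-bit values in row-major order; row index 0 corresponds to the first (odd-numbered) display line.

// Sketch: build one interlaced frame from two feeds, alternating rows.
#include <vector>
#include <cstdint>

void interlace(const std::vector<uint32_t>& feedA,   // first video feed
               const std::vector<uint32_t>& feedB,   // second video feed
               int width, int height,
               std::vector<uint32_t>& out) {
    out.resize(static_cast<std::size_t>(width) * height);
    for (int y = 0; y < height; ++y) {
        // Even row indices (display lines 1, 3, ...) come from feed A,
        // odd row indices from feed B.
        const std::vector<uint32_t>& src = (y % 2 == 0) ? feedA : feedB;
        for (int x = 0; x < width; ++x)
            out[y * width + x] = src[y * width + x];
    }
}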
A system-level diagram of glasses that may be used in conjunction with embodiments of the present invention is shown in FIG. 8. The glasses may include a processor 602 that executes instructions from a program 608 stored in a memory 604. The memory 604 may also store data to be provided to the processor 602 and to any other storage/retrieval elements of the glasses, as well as data output from the processor and those elements. The processor 602, the memory 604 and the other elements of the glasses may communicate with each other over a bus 606. Such other elements may include an LCD driver 610 that provides drive signals for selectively shuttering the left LCD lens 612 and the right LCD lens 614. The LCD driver may shutter the left and right LCD lenses individually, at different times and for various durations, or together, at the same time and for the same duration.
The frequency at which the LCD lenses are shuttered may be stored in advance in memory in the glasses (e.g., a frequency based on NTSC). Alternatively, the frequency may be selected by means of a user input 616, e.g., a knob or buttons used to tune or key in the desired frequency. In addition, the desired frequency and the initial shuttering start time, or other information indicating the time periods during which the LCD lenses should or should not be shuttered (regardless of whether such time periods follow a set frequency and duration), can be transmitted to the glasses via a wireless transmitter/receiver 601 or any other input element. The wireless transmitter/receiver 601 may comprise any wireless transmitter, including a Bluetooth transmitter/receiver.
An audio amplifier 620 may also receive information from the wireless transmitter/receiver 601, namely, the left and right channels of audio to be provided to the left speaker 622 or the right speaker 624. The glasses may also include a microphone 630. The microphone 630 may be used in connection with games providing voice communication; the voice signals may be transmitted to a game console or another device via the wireless transmitter/receiver 601.
The glasses may also include one or more photodetectors 634. The photodetectors may be used to determine whether the glasses are oriented toward the monitor. For example, the photodetectors may detect the intensity of the light incident upon them and transmit that information to the processor 602. If the processor detects a substantial drop in light intensity, which may be associated with the user's gaze shifting away from the monitor, the processor may stop shuttering the lenses. Other methods of determining whether the glasses (and the user) are oriented toward the monitor may also be used. For example, one or more cameras may be used in place of the photodetectors, with the images they capture being checked by the processor 602 to determine whether the glasses are oriented toward the monitor. A few possible embodiments of using such cameras include checking contrast levels to detect whether the camera is pointed at the monitor, or attempting to detect a brightness test pattern on the monitor. The device providing the feeds to the monitor can indicate the presence of such a test pattern by transmitting that information to the processor 602 via the wireless transmitter/receiver 601.
It should be noted that certain aspects of embodiments of the present invention may be implemented by the glasses themselves, e.g., by software or firmware running on the processor 602. For example, content-driven, user-scaled/adjusted color contrast or correction settings could be implemented in the glasses, with an extra stream of metadata being sent to them. In addition, as wireless technology and LCDs improve, the processor 113 could broadcast left-eye and right-eye image data directly to the glasses 119, eliminating the need for a separate display 111. Alternatively, a single image and its associated pixel depth values could be presented to the glasses from the display 111 or the processor 113. Both of these alternatives mean that more of the re-projection process would in fact take place on the glasses.
Although implementations have been described in which stereoscopic 3D images are viewed with passive or active 3D viewing glasses, embodiments of the present invention are not limited to such implementations. Specifically, embodiments of the present invention can be applied to stereoscopic 3D video technologies that do not rely on head tracking or on passive or active 3D viewing glasses. Examples of such "glasses-free" stereoscopic 3D video technologies are sometimes referred to as autostereoscopic technologies or auto-stereoscopy. Examples of such technologies include, but are not limited to, technologies based on the use of lenticular lenses. A lenticular lens is an array of magnifying lenses designed so that, when viewed from slightly different angles, different images are magnified. The different images can be chosen to provide a three-dimensional viewing effect as the lenticular screen is viewed at different angles. The number of images generated increases proportionally with the number of views for the screen.
More specifically, in a lenticular lens video system, re-projected images of a scene from slightly different viewing angles can be generated from an original 2D image and depth information for each pixel in the image. Using re-projection techniques, different views of the scene from progressively different viewing angles can be generated from the original 2D image and the depth information. Images representing the different views can be divided into strips and displayed in an interleaved pattern on an autostereoscopic display having a display screen located between a lenticular lens array and the viewing location. The lenses that make up the lenticular lens array can be cylindrical magnifying lenses that are aligned with the strips and are generally twice as wide as the strips. Depending on the angle at which the screen is viewed, viewers perceive different views of the scene. The different views can be selected to provide the illusion of depth in the displayed scene.
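By way of illustration only, the following C++ sketch interleaves N re-projected views into vertical strips for an autostereoscopic display. One-pixel-wide strips are an illustrative simplification of the band layout described above, and at least one view is assumed.

// Sketch: slice N views into alternating vertical strips.
#include <vector>
#include <cstdint>

void interleaveViews(const std::vector<std::vector<uint32_t>>& views,
                     int width, int height,
                     std::vector<uint32_t>& out) {
    int n = static_cast<int>(views.size());  // number of views (n > 0)
    out.resize(static_cast<std::size_t>(width) * height);
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            // Column x comes from view (x mod n), so adjacent strips
            // show the scene from slightly different viewing angles.
            out[y * width + x] = views[x % n][y * width + x];
}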
While the present invention has been described in considerable detail with reference to certain preferred versions thereof, other versions are possible. Therefore, the spirit and scope of the appended claims should not be limited to the description of the preferred versions contained herein. Instead, the scope of the invention should be determined with reference to the appended claims, along with their full scope of equivalents.
All the features disclosed in this specification (including any accompanying claims, abstract, and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise. Thus, unless expressly stated otherwise, each feature disclosed is one example only of a generic series of equivalent or similar features. Any feature, whether preferred or not, may be combined with any other feature, whether preferred or not. In the claims that follow, the indefinite article "a" or "an" refers to a quantity of one or more of the items following the article, except where expressly stated otherwise. Any element in a claim that does not explicitly state "means for" performing a specified function is not to be interpreted as a "means" or "step" clause as specified in 35 USC §112, ¶6. In particular, the use of "step of" in the claims herein is not intended to invoke the provisions of 35 USC §112, ¶6.
The reader's attention is directed to all papers and documents which are filed concurrently with this specification and which are open to public inspection with this specification, and the contents of all such papers and documents are incorporated herein by reference.

Claims (20)

1. A method for scaling one or more pixel depth values of a user-controlled virtual object in a three-dimensional scene, the method comprising:
a) performing an initial depth scaling of a two-dimensional image of the three-dimensional scene;
b) determining a minimum threshold value for the three-dimensional scene;
c) determining a maximum threshold value for the three-dimensional scene;
d) comparing each pixel depth value of the user-controlled virtual object to the minimum threshold value and the maximum threshold value;
e) setting each pixel depth value of the user-controlled virtual object that falls below the minimum threshold value to a corresponding low value;
f) setting each pixel depth value of the user-controlled virtual object that exceeds the maximum threshold value to a corresponding high value;
g) performing a re-projection of the two-dimensional image using the resulting set of pixel depth values of the user-controlled virtual object to generate two or more views of the three-dimensional scene; and
h) displaying the two or more views on a three-dimensional display.
2. the method for claim 1, wherein e) in the described low value corresponding to dropping on the pixel depth below the described Minimum Threshold limit value be described Minimum Threshold limit value.
3. the method for claim 1, wherein f) in the described high value corresponding to the pixel depth that surpasses described terminal threshold value be described terminal threshold value.
4. the method for claim 1 is wherein determined e by multiply by described pixel depth with inverse proportion and smallest offset being added the above product) in corresponding to the described low value that drops on the pixel depth below the described Minimum Threshold limit value.
5. the method for claim 1 is wherein determined e by multiply by described pixel depth with inverse proportion and deducting described product from peak excursion) the described high value corresponding to the pixel depth that surpasses described terminal threshold value.
6. the method for claim 1, wherein said three dimensional display is left-eye view and the right-eye view that three-dimensional display and described two or more views comprise described three-dimensional scenic.
7. the method for claim 1, wherein said three dimensional display are that automatic stereoscopic display device and described two or more views comprise two or more the staggered views from the described three-dimensional scenic of slightly different viewing angle.
8. the method for claim 1, wherein said initial depth scaling is to carry out in the rasterization process of described two dimensional image.
9. method as claimed in claim 8 is wherein at g) before or during carry out b), c), d), e) and f) in one or more.
10. An apparatus for scaling one or more pixel depth values, the apparatus comprising:
a processor;
a memory; and
computer-coded instructions embodied in the memory and executable by the processor, wherein the computer-coded instructions are configured to implement a method for scaling one or more pixel depth values of a user-controlled virtual object in a three-dimensional scene, the method comprising:
a) performing an initial depth scaling of a two-dimensional image of the three-dimensional scene;
b) determining a minimum threshold value for the three-dimensional scene;
c) determining a maximum threshold value for the three-dimensional scene;
d) comparing each pixel depth value of the user-controlled virtual object to the minimum threshold value and the maximum threshold value;
e) setting each pixel depth value of the user-controlled virtual object that falls below the minimum threshold value to a corresponding low value;
f) setting each pixel depth value of the user-controlled virtual object that exceeds the maximum threshold value to a corresponding high value;
g) performing a re-projection of the two-dimensional image using the resulting set of pixel depth values of the user-controlled virtual object to generate two or more views of the three-dimensional scene; and
h) displaying the two or more views on a three-dimensional display.
11. The apparatus of claim 10, further comprising a three-dimensional visual display configured to display the given scene according to the scaled pixel depth values corresponding to the one or more virtual objects.
12. The apparatus of claim 11, wherein the three-dimensional display is a stereoscopic display and the two or more views comprise a left-eye view and a right-eye view of the three-dimensional scene.
13. The apparatus of claim 11, wherein the three-dimensional display is an autostereoscopic display and the two or more views comprise two or more interleaved views of the three-dimensional scene from slightly different viewing angles.
14. The apparatus of claim 10, wherein the initial depth scaling is performed during rasterization of the two-dimensional image.
15. The apparatus of claim 14, wherein one or more of b), c), d), e) and f) are performed before or during g).
16. A computer program product, comprising:
a non-transitory computer-readable storage medium having computer-readable program code embodied in the medium for scaling one or more pixel depth values of a user-controlled virtual object in a three-dimensional scene, the computer program product having:
a) computer-readable program code for performing an initial depth scaling of a two-dimensional image of the three-dimensional scene;
b) computer-readable program code for determining a minimum threshold value for the three-dimensional scene;
c) computer-readable program code for determining a maximum threshold value for the three-dimensional scene;
d) computer-readable program code for comparing each pixel depth value of the user-controlled virtual object to the minimum threshold value and the maximum threshold value;
e) computer-readable program code for setting each pixel depth value of the user-controlled virtual object that falls below the minimum threshold value to a corresponding low value;
f) computer-readable program code for setting each pixel depth value of the user-controlled virtual object that exceeds the maximum threshold value to a corresponding high value;
g) computer-readable program code for performing a re-projection of the two-dimensional image using the resulting set of pixel depth values of the user-controlled virtual object to generate two or more views of the three-dimensional scene; and
h) computer-readable program code for displaying the two or more views on a three-dimensional display.
17. The computer program product of claim 16, wherein the three-dimensional display is a stereoscopic display and the two or more views comprise a left-eye view and a right-eye view of the three-dimensional scene.
18. The computer program product of claim 16, wherein the three-dimensional display is an autostereoscopic display and the two or more views comprise two or more interleaved views of the three-dimensional scene from slightly different viewing angles.
19. The computer program product of claim 16, wherein the initial depth scaling is performed during rasterization of the two-dimensional image.
20. The computer program product of claim 19, wherein one or more of b), c), d), e) and f) are performed before or during g).
CN201180064484.0A 2011-01-07 2011-12-02 The pixel depth value of the virtual objects that the user in scaling three-dimensional scenic controls Active CN103329165B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610191451.7A CN105894567B (en) 2011-01-07 2011-12-02 Scaling pixel depth values of user-controlled virtual objects in a three-dimensional scene

Applications Claiming Priority (9)

Application Number Priority Date Filing Date Title
US12/986,872 2011-01-07
US12/986,854 2011-01-07
US12/986,814 US9041774B2 (en) 2011-01-07 2011-01-07 Dynamic adjustment of predetermined three-dimensional video settings based on scene content
US12/986,854 US8619094B2 (en) 2011-01-07 2011-01-07 Morphological anti-aliasing (MLAA) of a re-projection of a two-dimensional image
US12/986,827 US8514225B2 (en) 2011-01-07 2011-01-07 Scaling pixel depth values of user-controlled virtual object in three-dimensional scene
US12/986,814 2011-01-07
US12/986,827 2011-01-07
US12/986,872 US9183670B2 (en) 2011-01-07 2011-01-07 Multi-sample resolving of re-projection of two-dimensional image
PCT/US2011/063001 WO2012094075A1 (en) 2011-01-07 2011-12-02 Scaling pixel depth values of user-controlled virtual object in three-dimensional scene

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN201610191451.7A Division CN105894567B (en) 2011-01-07 2011-12-02 Scaling pixel depth values of user-controlled virtual objects in a three-dimensional scene

Publications (2)

Publication Number Publication Date
CN103329165A true CN103329165A (en) 2013-09-25
CN103329165B CN103329165B (en) 2016-08-24

Family

ID=46457655

Family Applications (7)

Application Number Title Priority Date Filing Date
CN201610191451.7A Active CN105894567B (en) 2011-01-07 2011-12-02 Scaling pixel depth values of user-controlled virtual objects in a three-dimensional scene
CN201180063720.7A Active CN103947198B (en) 2011-01-07 2011-12-02 Dynamic adjustment of predetermined three-dimensional video settings based on scene content
CN201180063836.0A Active CN103283241B (en) 2011-01-07 2011-12-02 Multi-sample resolving of re-projection of two-dimensional image
CN201610191875.3A Active CN105959664B (en) 2011-01-07 2011-12-02 Dynamic adjustment of predetermined three-dimensional video settings based on scene content
CN201610095198.5A Active CN105898273B (en) 2011-01-07 2011-12-02 Multi-sample resolving of re-projection of two-dimensional image
CN201180063813.XA Active CN103348360B (en) 2011-01-07 2011-12-02 Morphological anti-aliasing (MLAA) of a re-projection of a two-dimensional image
CN201180064484.0A Active CN103329165B (en) 2011-01-07 2011-12-02 Scaling pixel depth values of user-controlled virtual object in three-dimensional scene

Family Applications Before (6)

Application Number Title Priority Date Filing Date
CN201610191451.7A Active CN105894567B (en) 2011-01-07 2011-12-02 Scaling pixel depth values of user-controlled virtual objects in a three-dimensional scene
CN201180063720.7A Active CN103947198B (en) 2011-01-07 2011-12-02 Dynamic adjustment of predetermined three-dimensional video settings based on scene content
CN201180063836.0A Active CN103283241B (en) 2011-01-07 2011-12-02 Multi-sample resolving of re-projection of two-dimensional image
CN201610191875.3A Active CN105959664B (en) 2011-01-07 2011-12-02 Dynamic adjustment of predetermined three-dimensional video settings based on scene content
CN201610095198.5A Active CN105898273B (en) 2011-01-07 2011-12-02 Multi-sample resolving of re-projection of two-dimensional image
CN201180063813.XA Active CN103348360B (en) 2011-01-07 2011-12-02 Morphological anti-aliasing (MLAA) of a re-projection of a two-dimensional image

Country Status (5)

Country Link
KR (2) KR101741468B1 (en)
CN (7) CN105894567B (en)
BR (2) BR112013017321A2 (en)
RU (2) RU2573737C2 (en)
WO (4) WO2012094075A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107329690A (en) * 2017-06-29 2017-11-07 网易(杭州)网络有限公司 Virtual object control method and device, storage medium, electronic equipment
CN109992175A (en) * 2019-04-03 2019-07-09 腾讯科技(深圳)有限公司 For simulating object display method, device and the storage medium of blind person's impression
CN110136191A (en) * 2013-10-02 2019-08-16 基文影像公司 The system and method for size estimation for intrabody objects

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016010246A1 (en) * 2014-07-16 2016-01-21 삼성전자주식회사 3d image display device and method
CN105323573B (en) 2014-07-16 2019-02-05 北京三星通信技术研究有限公司 3-D image display device and method
EP3232406B1 (en) * 2016-04-15 2020-03-11 Ecole Nationale de l'Aviation Civile Selective display in a computer generated environment
CN109398731B (en) * 2017-08-18 2020-09-08 深圳市道通智能航空技术有限公司 Method and device for improving depth information of 3D image and unmanned aerial vehicle
GB2571306A (en) * 2018-02-23 2019-08-28 Sony Interactive Entertainment Europe Ltd Video recording and playback systems and methods
RU2749749C1 (en) * 2020-04-15 2021-06-16 Самсунг Электроникс Ко., Лтд. Method of synthesis of a two-dimensional image of a scene viewed from a required view point and electronic computing apparatus for implementation thereof
CN111275611B (en) * 2020-01-13 2024-02-06 深圳市华橙数字科技有限公司 Method, device, terminal and storage medium for determining object depth in three-dimensional scene
CN112684883A (en) * 2020-12-18 2021-04-20 上海影创信息科技有限公司 Method and system for multi-user object distinguishing processing
US11882295B2 (en) 2022-04-15 2024-01-23 Meta Platforms Technologies, Llc Low-power high throughput hardware decoder with random block access
US20230334736A1 (en) * 2022-04-15 2023-10-19 Meta Platforms Technologies, Llc Rasterization Optimization for Analytic Anti-Aliasing

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1755733A (en) * 2004-05-10 2006-04-05 微软公司 Interactive exploded view from two-dimensional image
US20060088206A1 (en) * 2004-10-21 2006-04-27 Kazunari Era Image processing apparatus, image pickup device and program therefor
US20100215251A1 (en) * 2007-10-11 2010-08-26 Koninklijke Philips Electronics N.V. Method and device for processing a depth-map

Family Cites Families (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2724033B1 (en) * 1994-08-30 1997-01-03 Thomson Broadband Systems SYNTHESIS IMAGE GENERATION METHOD
US5790086A (en) * 1995-01-04 1998-08-04 Visualabs Inc. 3-D imaging system
GB9511519D0 (en) * 1995-06-07 1995-08-02 Richmond Holographic Res Autostereoscopic display with enlargeable image volume
US8369607B2 (en) * 2002-03-27 2013-02-05 Sanyo Electric Co., Ltd. Method and apparatus for processing three-dimensional images
EP2362670B1 (en) * 2002-03-27 2012-10-03 Sanyo Electric Co., Ltd. Method and apparatus for processing three-dimensional images
JP2005528643A (en) * 2002-06-03 2005-09-22 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ Video scaling
EP1437898A1 (en) * 2002-12-30 2004-07-14 Koninklijke Philips Electronics N.V. Video filtering for stereo images
US7663689B2 (en) * 2004-01-16 2010-02-16 Sony Computer Entertainment Inc. Method and apparatus for optimizing capture device settings through depth information
US8094927B2 (en) * 2004-02-27 2012-01-10 Eastman Kodak Company Stereoscopic display system with flexible rendering of disparity map according to the stereoscopic fusing capability of the observer
EP1851727A4 (en) * 2005-02-23 2008-12-03 Craig Summers Automatic scene modeling for the 3d camera and 3d video
JP4555722B2 (en) * 2005-04-13 2010-10-06 株式会社 日立ディスプレイズ 3D image generator
US20070146360A1 (en) * 2005-12-18 2007-06-28 Powerproduction Software System And Method For Generating 3D Scenes
GB0601287D0 (en) * 2006-01-23 2006-03-01 Ocuity Ltd Printed image display apparatus
US8044994B2 (en) * 2006-04-04 2011-10-25 Mitsubishi Electric Research Laboratories, Inc. Method and system for decoding and displaying 3D light fields
US7778491B2 (en) 2006-04-10 2010-08-17 Microsoft Corporation Oblique image stitching
CN100510773C (en) * 2006-04-14 2009-07-08 武汉大学 Single satellite remote sensing image small target super resolution ratio reconstruction method
US20080085040A1 (en) * 2006-10-05 2008-04-10 General Electric Company System and method for iterative reconstruction using mask images
US20080174659A1 (en) * 2007-01-18 2008-07-24 Mcdowall Ian Wide field of view display device and method
GB0716776D0 (en) * 2007-08-29 2007-10-10 Setred As Rendering improvement for 3D display
US8493437B2 (en) * 2007-12-11 2013-07-23 Raytheon Bbn Technologies Corp. Methods and systems for marking stereo pairs of images
KR101419979B1 (en) * 2008-01-29 2014-07-16 톰슨 라이센싱 Method and system for converting 2d image data to stereoscopic image data
JP4695664B2 (en) * 2008-03-26 2011-06-08 富士フイルム株式会社 3D image processing apparatus, method, and program
US9019381B2 (en) * 2008-05-09 2015-04-28 Intuvision Inc. Video tracking systems and methods employing cognitive vision
US8106924B2 (en) 2008-07-31 2012-01-31 Stmicroelectronics S.R.L. Method and system for video rendering, computer program product therefor
US8743114B2 (en) * 2008-09-22 2014-06-03 Intel Corporation Methods and systems to determine conservative view cell occlusion
CN101383046B (en) * 2008-10-17 2011-03-16 北京大学 Three-dimensional reconstruction method on basis of image
TWI527429B (en) * 2008-10-28 2016-03-21 皇家飛利浦電子股份有限公司 A three dimensional display system
US8335425B2 (en) * 2008-11-18 2012-12-18 Panasonic Corporation Playback apparatus, playback method, and program for performing stereoscopic playback
CN101783966A (en) * 2009-01-21 2010-07-21 中国科学院自动化研究所 Real three-dimensional display system and display method
RU2421933C2 (en) * 2009-03-24 2011-06-20 Корпорация "САМСУНГ ЭЛЕКТРОНИКС Ко., Лтд." System and method to generate and reproduce 3d video image
US8289346B2 (en) 2009-05-06 2012-10-16 Christie Digital Systems Usa, Inc. DLP edge blending artefact reduction
US9269184B2 (en) * 2009-05-21 2016-02-23 Sony Computer Entertainment America Llc Method and apparatus for rendering image based projected shadows with multiple depth aware blurs
US8933925B2 (en) * 2009-06-15 2015-01-13 Microsoft Corporation Piecewise planar reconstruction of three-dimensional scenes
CN101937079B (en) * 2010-06-29 2012-07-25 中国农业大学 Remote sensing image variation detection method based on region similarity

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1755733A (en) * 2004-05-10 2006-04-05 微软公司 Interactive exploded view from two-dimensional image
US20060088206A1 (en) * 2004-10-21 2006-04-27 Kazunari Era Image processing apparatus, image pickup device and program therefor
US20100215251A1 (en) * 2007-10-11 2010-08-26 Koninklijke Philips Electronics N.V. Method and device for processing a depth-map

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110136191A (en) * 2013-10-02 2019-08-16 基文影像公司 The system and method for size estimation for intrabody objects
CN110136191B (en) * 2013-10-02 2023-05-09 基文影像公司 System and method for size estimation of in vivo objects
CN107329690A (en) * 2017-06-29 2017-11-07 网易(杭州)网络有限公司 Virtual object control method and device, storage medium, electronic equipment
CN107329690B (en) * 2017-06-29 2020-04-17 网易(杭州)网络有限公司 Virtual object control method and device, storage medium and electronic equipment
CN109992175A (en) * 2019-04-03 2019-07-09 腾讯科技(深圳)有限公司 For simulating object display method, device and the storage medium of blind person's impression

Also Published As

Publication number Publication date
BR112013016887A2 (en) 2020-06-30
RU2573737C2 (en) 2016-01-27
CN105959664B (en) 2018-10-30
WO2012094076A9 (en) 2013-07-25
RU2013136687A (en) 2015-02-20
CN105894567A (en) 2016-08-24
BR112013017321A2 (en) 2019-09-24
WO2012094074A3 (en) 2014-04-10
CN103329165B (en) 2016-08-24
CN105898273B (en) 2018-04-10
KR20140004115A (en) 2014-01-10
CN105898273A (en) 2016-08-24
CN105894567B (en) 2020-06-30
BR112013016887B1 (en) 2021-12-14
KR101741468B1 (en) 2017-05-30
RU2562759C2 (en) 2015-09-10
WO2012094075A1 (en) 2012-07-12
CN103348360A (en) 2013-10-09
WO2012094074A2 (en) 2012-07-12
CN103348360B (en) 2017-06-20
CN103283241B (en) 2016-03-16
CN103283241A (en) 2013-09-04
CN103947198B (en) 2017-02-15
KR101851180B1 (en) 2018-04-24
RU2013129687A (en) 2015-02-20
WO2012094077A1 (en) 2012-07-12
CN103947198A (en) 2014-07-23
WO2012094076A1 (en) 2012-07-12
CN105959664A (en) 2016-09-21
KR20130132922A (en) 2013-12-05

Similar Documents

Publication Publication Date Title
US9723289B2 (en) Dynamic adjustment of predetermined three-dimensional video settings based on scene content
CN103329165B (en) Scaling pixel depth values of user-controlled virtual object in three-dimensional scene
US8514225B2 (en) Scaling pixel depth values of user-controlled virtual object in three-dimensional scene
US8094927B2 (en) Stereoscopic display system with flexible rendering of disparity map according to the stereoscopic fusing capability of the observer
JP2004221700A (en) Stereoscopic image processing method and apparatus
CN109901710A (en) Treating method and apparatus, storage medium and the terminal of media file
US20130044939A1 (en) Method and system for modifying binocular images
US8947512B1 (en) User wearable viewing devices
JP2004007395A (en) Stereoscopic image processing method and device
JP2004007396A (en) Stereoscopic image processing method and device
WO2018010677A1 (en) Information processing method, wearable electric device, processing apparatus, and system
JP2011172172A (en) Stereoscopic video processing device and method, and program
US20170104982A1 (en) Presentation of a virtual reality scene from a series of images
Moreau Visual immersion issues in Virtual Reality: a survey
Celikcan et al. Attention-aware disparity control in interactive environments
US12081722B2 (en) Stereo image generation method and electronic apparatus using the same
Bickerstaff Case study: the introduction of stereoscopic games on the Sony PlayStation 3
Mikšícek Causes of visual fatigue and its improvements in stereoscopy
JP2018504014A (en) Method for reproducing an image having a three-dimensional appearance
Watt et al. 3D media and the human visual system
US9609313B2 (en) Enhanced 3D display method and system
Kim et al. Adaptive interpupillary distance adjustment for stereoscopic 3d visualization
Gurrieri Improvements in the visualization of stereoscopic 3D imagery
Grasnick The hype cycle in 3D displays: inherent limits of autostereoscopy
Banks I3. 1: invited paper: the importance of focus cues in stereo 3d displays

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant