CN107590859A - A kind of mixed reality picture processing method and service equipment - Google Patents
A kind of mixed reality picture processing method and service equipment
- Publication number
- CN107590859A (application CN201710781247.5A)
- Authority
- CN
- China
- Prior art keywords
- scene
- rendered
- resolution
- entity
- virtual scene
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Processing Or Creating Images (AREA)
Abstract
A mixed reality picture processing method and a service device are disclosed. The method includes: obtaining the eye gaze direction of the wearer of an MR head-mounted display (HMD) and the picture displayed on the screen of the MR HMD; identifying the gaze region in the displayed picture that corresponds to the eye gaze direction; and judging whether the target scene shown in the gaze region is an entity (real-world) scene. If it is, the resolution of the virtual scene to be rendered is reduced to obtain a first resolution, and the virtual scene to be rendered is rendered at the first resolution to obtain the rendered virtual scene, where the first resolution is lower than a preset resolution. Implementing the embodiments of the present invention improves rendering efficiency and thus reduces display latency; it also keeps the entity scene in the displayed picture sharp while the virtual scene is blurred, so that the displayed picture better matches the visual behaviour of people in real life.
Description
Technical field
The present invention relates to the field of mixed reality technology, and in particular to a mixed reality picture processing method and a service device.
Background art
Mixed reality technology fuses the real world and a virtual world to produce a new visual environment that contains both real entities and virtual imagery. In a mixed reality setting, the user can see, on the screen of an MR head-mounted display (HMD), entity scenes and virtual scenes that coexist. To blur the boundary between entity scenes and virtual scenes, mixed reality technology can capture the change of the field of view caused by movements of the user's head, such as translation or rotation, and process the virtual scene to be rendered according to that change. However, owing to factors such as limited processor computing power, there is currently a certain latency between the moment the change of the user's field of view is determined and the moment the picture is shown on the screen of the MR HMD. When this latency is too large, the user can clearly perceive a disconnect between the virtual scene and the entity scene in the displayed picture, and may even experience strong dizziness.
Summary of the invention
The embodiments of the present invention disclose a mixed reality picture processing method and a service device, which can reduce the latency of picture display.
A first aspect of the embodiments of the present invention discloses a mixed reality picture processing method. The method includes:
obtaining the eye gaze direction of the wearer of an MR HMD and the picture displayed on the screen of the MR HMD;
identifying the gaze region in the displayed picture that corresponds to the eye gaze direction;
judging whether the target scene shown in the gaze region is an entity scene, and if it is, reducing the resolution of the virtual scene to be rendered to obtain a first resolution, and rendering the virtual scene to be rendered at the first resolution to obtain the rendered virtual scene;
wherein the first resolution is lower than a preset resolution.
As an optional implementation, in the first aspect of the embodiments of the present invention, the method further includes:
if the target scene is not an entity scene, determining the central region in the virtual scene to be rendered that corresponds to the gaze region;
reducing the resolution of the peripheral region of the virtual scene to be rendered outside the central region, to obtain a second resolution;
rendering the central region at the preset resolution and the peripheral region at the second resolution, to obtain the rendered virtual scene;
wherein the second resolution is lower than the preset resolution.
As an optional implementation, in the first aspect of the embodiments of the present invention, the method further includes:
controlling the MR HMD to capture an image of its field of view with a binocular camera that mimics the working of human eyes;
separating the entity scene from the background image in the captured image;
loading the separated entity scene into the rendered virtual scene.
As an optional implementation, in the first aspect of the embodiments of the present invention, after controlling the MR HMD to capture the image of its field of view with the binocular camera that mimics the working of human eyes, the method further includes:
judging whether an entity scene is present in the image, and if so, performing the step of separating the entity scene from the background image in the image.
As an optional implementation, in the first aspect of the embodiments of the present invention, the method further includes:
identifying the content of the target scene and looking up the prompt information bound to that content;
loading the prompt information into the rendered virtual scene.
A second aspect of the embodiments of the present invention discloses a service device, including:
an obtaining unit, configured to obtain the eye gaze direction of the wearer of an MR HMD and the picture displayed on the screen of the MR HMD;
a first identifying unit, configured to identify the gaze region in the displayed picture that corresponds to the eye gaze direction;
a first judging unit, configured to judge whether the target scene shown in the gaze region is an entity scene;
a processing unit, configured to reduce the resolution of the virtual scene to be rendered to obtain a first resolution when the first judging unit judges that the target scene is an entity scene;
a rendering unit, configured to render the virtual scene to be rendered at the first resolution, to obtain the rendered virtual scene;
wherein the first resolution is lower than a preset resolution.
As an optional implementation, the second aspect of the embodiments of the present invention further includes:
a determining unit, configured to determine the central region in the virtual scene to be rendered that corresponds to the gaze region when the first judging unit judges that the target scene is not an entity scene;
the processing unit is further configured to reduce the resolution of the peripheral region of the virtual scene to be rendered outside the central region, to obtain a second resolution;
the rendering unit is further configured to render the central region at the preset resolution and the peripheral region at the second resolution, to obtain the rendered virtual scene;
wherein the second resolution is lower than the preset resolution.
As an optional implementation, the second aspect of the embodiments of the present invention further includes:
a control unit, configured to control the MR HMD to capture an image of its field of view with a binocular camera that mimics the working of human eyes;
a separating unit, configured to separate the entity scene from the background image in the captured image;
a first loading unit, configured to load the separated entity scene into the rendered virtual scene.
As an optional implementation, the second aspect of the embodiments of the present invention further includes:
a second judging unit, configured to judge whether an entity scene is present in the image after the control unit controls the MR HMD to capture the image of its field of view with the binocular camera that mimics the working of human eyes;
the separating unit is specifically configured to separate the entity scene from the background image in the image when the second judging unit judges that an entity scene is present in the image.
As an optional implementation, the second aspect of the embodiments of the present invention further includes:
a second identifying unit, configured to identify the content of the target scene and look up the prompt information bound to that content;
a second loading unit, configured to load the prompt information into the rendered virtual scene.
Compared with the prior art, the embodiments of the present invention have the following beneficial effects:
By obtaining the eye gaze direction of the MR HMD wearer, the service device can determine the gaze region of the wearer within the picture displayed on the screen of the MR HMD, that is, the region of the displayed picture on which the wearer's attention is most concentrated. If the scene shown in that region is an entity scene, the wearer's attention can be assumed not to be on the virtual scene; the resolution of the virtual scene to be rendered is therefore reduced, and the virtual scene to be rendered is rendered at the reduced resolution. This improves rendering efficiency and thus reduces display latency, and also lets the user see a displayed picture in which the entity scene is sharp and the virtual scene is blurred, so that the picture better matches the visual behaviour of people in real life.
Brief description of the drawings
To explain the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings needed for the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a mixed reality picture processing method disclosed in an embodiment of the present invention;
Fig. 2 is a schematic flowchart of another mixed reality picture processing method disclosed in an embodiment of the present invention;
Fig. 3 is a schematic flowchart of yet another mixed reality picture processing method disclosed in an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of a service device disclosed in an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of another service device disclosed in an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
It should be noted that the terms "comprising" and "having" and any variants thereof in the embodiments and drawings of the present invention are intended to cover non-exclusive inclusion. For example, a process, method, system, product or device that contains a series of steps or units is not limited to the listed steps or units, but optionally also includes steps or units that are not listed, or optionally also includes other steps or units inherent to the process, method, product or device.
The embodiments of the present invention disclose a mixed reality picture processing method and a service device, which can reduce the latency of picture display. They are described in detail below.
Embodiment one
Referring to Fig. 1, Fig. 1 is a schematic flowchart of a mixed reality picture processing method disclosed in an embodiment of the present invention. The method described in Fig. 1 applies to a service device connected to an MR HMD; the embodiment of the present invention is not limited in this respect. For example, the service device connected to the MR HMD may be a personal computer, a smartphone, a cloud server, or the like, and the embodiment of the present invention is not limited in this respect. The operating system of the service device connected to the MR HMD may include, but is not limited to, a Windows operating system, a Linux operating system, an Android operating system, iOS, etc.; the embodiment of the present invention is not limited in this respect. As shown in Fig. 1, the mixed reality picture processing method may include the following steps:
101. The service device obtains the eye gaze direction of the MR HMD wearer and the picture displayed on the screen of the MR HMD.
As an optional implementation, in the embodiment of the present invention, the service device may obtain the eye gaze direction of the MR HMD wearer as follows:
the service device photographs an infrared feature image of the wearer's eyes with a high-speed near-infrared camera;
the service device extracts eye feature points from the infrared feature image;
the service device computes the position of the eye fixation point from the eye feature points and a pre-established eyeball rotation model, and uses it as the eye gaze direction of the MR HMD wearer.
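The patent gives no formulas for this gaze computation, so the following Python sketch only illustrates one plausible pipeline under stated assumptions: a crude dark-blob pupil detector stands in for the eye-feature extraction, and an assumed pinhole camera matrix K plus a calibrated eyeball centre and radius stand in for the pre-established eyeball rotation model.

```python
import numpy as np

def estimate_gaze_direction(ir_image, eyeball_center, eyeball_radius):
    """Hypothetical sketch: estimate a gaze direction from one IR eye image.

    ir_image       -- grayscale near-infrared frame of the eye (2D array)
    eyeball_center -- assumed 3D eyeball centre in camera coordinates (from calibration)
    eyeball_radius -- assumed eyeball radius in the same units
    """
    # 1. Extract an eye feature point: here the darkest blob is taken as a
    #    crude pupil-centre estimate (placeholder for a real feature detector).
    threshold = ir_image.min() + 0.1 * (ir_image.max() - ir_image.min())
    ys, xs = np.nonzero(ir_image <= threshold)
    pupil_px = np.array([xs.mean(), ys.mean()])        # pupil centre in pixels

    # 2. Back-project the pupil centre into a viewing ray with an assumed
    #    pinhole intrinsic matrix K (illustrative values).
    K = np.array([[800.0, 0.0, ir_image.shape[1] / 2],
                  [0.0, 800.0, ir_image.shape[0] / 2],
                  [0.0, 0.0, 1.0]])
    ray = np.linalg.inv(K) @ np.array([pupil_px[0], pupil_px[1], 1.0])
    ray /= np.linalg.norm(ray)

    # 3. Intersect the ray with the eyeball sphere; the gaze direction is the
    #    unit vector from the eyeball centre through that intersection point.
    b = ray @ eyeball_center
    disc = b * b - (eyeball_center @ eyeball_center - eyeball_radius ** 2)
    t = b - np.sqrt(max(disc, 0.0))                    # near intersection
    gaze = t * ray - eyeball_center
    return gaze / np.linalg.norm(gaze)
```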
102. The service device identifies the gaze region in the displayed picture that corresponds to the eye gaze direction.
In the embodiment of the present invention, the service device may take a circular region centred on the eye fixation point, with a specified distance as its radius, as the gaze region in the displayed picture that corresponds to the eye gaze direction. In daily life, when we fixate on an object, attention is concentrated on that object and objects on the periphery are rarely noticed. For example, when we watch content played on a screen, we seldom notice the curtains nearby, or even the structure of the screen itself; at that moment the content on the screen appears sharp while the surrounding curtains appear blurred. A technician can therefore determine, by testing and adjustment, a distance that matches this visual behaviour and use it as the specified distance above.
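As a small illustration of how the specified distance could be applied, the sketch below (hypothetical helper names, illustrative clamping) builds the circular gaze region around the fixation point and tests whether a pixel lies inside it.

```python
def gaze_region(fixation_px, radius_px, frame_width, frame_height):
    """Build the circular gaze region around the fixation point.

    radius_px is the 'specified distance' tuned by the engineer to match
    everyday visual behaviour; the clamping to the frame is an assumption.
    """
    cx = min(max(int(fixation_px[0]), 0), frame_width - 1)
    cy = min(max(int(fixation_px[1]), 0), frame_height - 1)
    return {"center": (cx, cy), "radius": int(radius_px)}

def in_gaze_region(region, x, y):
    """True if pixel (x, y) lies inside the circular gaze region."""
    cx, cy = region["center"]
    return (x - cx) ** 2 + (y - cy) ** 2 <= region["radius"] ** 2
```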
103. The service device judges whether the target scene shown in the gaze region is an entity scene; if it is, step 104 is performed; if not, this flow ends.
In the embodiment of the present invention, the picture displayed on the screen of the MR HMD may contain both entity scenes that actually exist in the real world and virtual scenes that are generated by the service device and shown on the screen. For example, in a multi-user mixed reality exhibition, the picture seen by user A (the MR HMD wearer) may contain user B and several exhibits, where user B is an entity figure (what user A sees is user B's real appearance) and the exhibits are virtual exhibits generated by the service device. When user A talks with user B face to face, user A will probably look at user B; the service device then analyses user A's eye features and determines that user A's gaze region is the part of the displayed picture that contains user B. User B may wear a calibration marker used to calibrate user B's position in real space, whereas the virtual exhibits need no position calibration and carry no such marker. The service device can therefore use an image recognition algorithm to search the gaze region for such a calibration marker; if one is found, it can judge that the target scene shown in the gaze region (namely user B) is an entity scene.
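A possible implementation of this marker check, assuming ArUco-style fiducial markers and OpenCV's classic `cv2.aruco.detectMarkers` API (the patent only speaks of a generic calibration marker and an unspecified image recognition algorithm), might look like the following sketch.

```python
import cv2

def gaze_target_is_entity(display_frame, region):
    """Illustrative sketch: decide whether the gazed-at target is an entity
    scene by searching the gaze region for a calibration marker worn by the
    entity. ArUco fiducials are an assumption; integer pixel coordinates are
    assumed for the region centre and radius.
    """
    cx, cy = region["center"]
    r = region["radius"]
    x0, y0 = max(cx - r, 0), max(cy - r, 0)
    roi = display_frame[y0:cy + r, x0:cx + r]          # crop the gaze region

    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    _, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)
    return ids is not None and len(ids) > 0            # marker found => entity scene
```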
104. The service device reduces the resolution of the virtual scene to be rendered to obtain a first resolution, and renders the virtual scene to be rendered at the first resolution to obtain the rendered virtual scene.
In the embodiment of the present invention, the first resolution is lower than the preset resolution. In general, when generating a virtual scene the service device needs to project the three-dimensional model of the virtual scene onto a two-dimensional plane and then shade each pixel. In mixed reality technology the picture is rendered in real time: once the virtual scene of a certain frame has been rendered and shown on the screen of the MR HMD (this is the displayed picture the service device obtains in step 101), the service device must determine the three-dimensional model of the next frame's virtual scene from the current position information of the MR HMD wearer, and then render that next frame. At present, most mixed reality systems render every pixel of the virtual scene to be rendered at high resolution so that all areas of the displayed picture are sharp; rendering therefore consumes a large amount of the service device's computing resources, and when those resources are limited the computation time grows. In a mixed reality setting, however, not every object in the displayed picture needs to be presented with the same sharpness: if the user is fixating on an entity scene, the user is unlikely to notice objects in the virtual scene, so the service device can render the virtual scene to be rendered at the numerically lower first resolution. This reduces the amount of computation per frame, improves rendering efficiency, and thus reduces display latency. In addition, the service device does not need to render the entity scene; it can directly load the entity scene captured by the MR HMD into the virtual scene. When the user fixates on the entity scene in the displayed picture, the entity scene therefore looks sharp while the virtual scene looks blurred, which better matches the visual behaviour of people in real life and avoids the eye fatigue caused by a picture in which every object is sharp and too much information must be processed.
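A minimal sketch of step 104, assuming a hypothetical `renderer.render(scene, resolution)` engine call and an illustrative 50% scale factor (the patent only requires that the first resolution be lower than the preset resolution):

```python
def render_virtual_scene(scene, renderer, preset_resolution, gaze_on_entity,
                         scale_when_unwatched=0.5):
    """Sketch of step 104. 'renderer.render' is a stand-in for the engine's
    render call; the 0.5 scale factor is illustrative only.
    """
    if gaze_on_entity:
        # The wearer is looking at an entity scene, so the whole virtual
        # scene can be rendered at a lower ("first") resolution to save compute.
        first_resolution = (int(preset_resolution[0] * scale_when_unwatched),
                            int(preset_resolution[1] * scale_when_unwatched))
        return renderer.render(scene, first_resolution)
    # Otherwise fall back to the preset (full) resolution.
    return renderer.render(scene, preset_resolution)
```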
It can be seen that, by implementing the mixed reality picture processing method described in Fig. 1, the service device can obtain the eye gaze direction of the MR HMD wearer and determine the wearer's gaze region within the picture displayed on the screen of the MR HMD, that is, the region of the displayed picture on which the wearer's attention is most concentrated. If the scene shown in that region is an entity scene, the wearer's attention can be assumed not to be on the virtual scene, so the resolution of the virtual scene to be rendered is reduced and the virtual scene is rendered at the reduced resolution, which improves rendering efficiency and reduces display latency. Moreover, with the method described in Fig. 1, the picture generated by the service device and presented on the screen of the MR HMD better matches the visual behaviour of people in real life, and the user's eyes are not fatigued by having to process a picture in which every object is sharp.
Embodiment two
Referring to Fig. 2, Fig. 2 is a schematic flowchart of another mixed reality picture processing method disclosed in an embodiment of the present invention. As shown in Fig. 2, the mixed reality picture processing method may include the following steps:
201. The service device obtains the eye gaze direction of the MR HMD wearer and the picture displayed on the screen of the MR HMD.
202. The service device identifies the gaze region in the displayed picture that corresponds to the eye gaze direction.
203. The service device judges whether the target scene shown in the gaze region is an entity scene; if it is, step 204 is performed; if not, steps 205 to 207 are performed.
204. The service device reduces the resolution of the virtual scene to be rendered to obtain a first resolution, and renders the virtual scene to be rendered at the first resolution to obtain the rendered virtual scene.
In the embodiment of the present invention, the first resolution is lower than the preset resolution.
205. The service device determines the central region in the virtual scene to be rendered that corresponds to the gaze region.
In the embodiment of the present invention, when the target scene shown in the gaze region of the MR HMD wearer (namely the user) is a virtual scene, the picture the user sees should likewise follow the visual rule that the scene inside the gaze region is sharp while the scene outside it is blurred; the service device can therefore render different regions of the virtual scene to be rendered at different resolutions. Because the MR HMD (namely the user's head) translates or rotates, there is a certain offset between the position of the target picture shown in the gaze region within the virtual scene to be rendered and its position in the displayed picture. The service device can compute the corresponding central region in the virtual scene to be rendered from the direction and speed of the HMD's motion, measured by the HMD's gyroscope and accelerometer.
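One way to sketch this mapping, assuming a simple linear model driven by the gyroscope's angular velocity and an assumed pixels-per-radian conversion (none of which is specified in the patent):

```python
def predict_central_region(region, angular_velocity, dt, pixels_per_radian):
    """Sketch of step 205: shift the gaze region by the head rotation measured
    by the HMD's gyroscope to find the corresponding central region in the
    next virtual frame. The linear model and 'pixels_per_radian' (screen
    pixels per radian of head yaw/pitch) are illustrative assumptions.
    """
    cx, cy = region["center"]
    yaw_rate, pitch_rate = angular_velocity[:2]       # rad/s from the gyroscope
    # A head rotation to the right moves scene content left in the frame.
    cx_next = cx - yaw_rate * dt * pixels_per_radian
    cy_next = cy + pitch_rate * dt * pixels_per_radian
    return {"center": (int(cx_next), int(cy_next)), "radius": region["radius"]}
```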
206. The service device reduces the resolution of the peripheral region of the virtual scene to be rendered outside the central region, to obtain a second resolution.
207. The service device renders the central region at the preset resolution and the peripheral region at the second resolution, to obtain the rendered virtual scene.
In the embodiment of the present invention, the second resolution is lower than the preset resolution. The service device can treat the peripheral region as a whole and render it at the second resolution, which reduces the amount of rendering computation and the display latency when the target scene is a virtual scene. Because the central region and the peripheral region are rendered at different resolutions, in the rendered virtual scene the content of the central region is sharp while the content of the peripheral region is blurred.
Optionally, the service device may also subdivide the peripheral region. For example, the service device may divide the peripheral region into a first area that surrounds, but does not include, the central region, and a second area consisting of the rest of the peripheral region, and then render the first area at sixty percent of the preset resolution and the second area at twenty percent of the preset resolution, so that the picture the user sees transitions naturally from the sharp central region to the blurred peripheral region, as illustrated by the sketch below. As a further option, the service device may also adjust the contrast between the central region, the first area and the second area during rendering, to reduce the probability that the blurred picture induces a tunnel vision effect in the user.
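The following post-processing sketch approximates steps 206 and 207 together with the optional subdivision. The 60% and 20% factors are the example values from the description; the ring width, the OpenCV resampling, and the idea of degrading a full-resolution frame (rather than rendering each region natively in the engine) are illustrative assumptions.

```python
import cv2
import numpy as np

def foveated_compose(full_res_frame, central, ring_scale=0.6, outer_scale=0.2,
                     ring_width=120):
    """Keep the central region at preset resolution, resample a surrounding
    ring (first area) at 60% and the rest (second area) at 20% of it.
    A real engine would render the regions at those resolutions directly.
    """
    h, w = full_res_frame.shape[:2]
    cx, cy = central["center"]
    r = central["radius"]

    def resample_to_scale(img, scale):
        small = cv2.resize(img, None, fx=scale, fy=scale,
                           interpolation=cv2.INTER_AREA)
        return cv2.resize(small, (w, h), interpolation=cv2.INTER_LINEAR)

    ring = resample_to_scale(full_res_frame, ring_scale)    # first area
    outer = resample_to_scale(full_res_frame, outer_scale)  # second area

    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.sqrt((xx - cx) ** 2 + (yy - cy) ** 2)
    out = outer.copy()
    out[dist <= r + ring_width] = ring[dist <= r + ring_width]
    out[dist <= r] = full_res_frame[dist <= r]               # sharp centre
    return out
```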
It can be seen that, by implementing the method described in Fig. 2, the service device can render the virtual scene to be rendered at a first resolution lower than the preset resolution when the user fixates on an entity scene, reducing the amount of rendering computation and the display latency. Further, when the user fixates on a virtual scene, the service device can divide the virtual scene to be rendered into a central region and a peripheral region, render the central region at the preset resolution and the peripheral region at a second resolution lower than the preset resolution, which further reduces the rendering computation and also makes the presented virtual scene follow real-life visual behaviour, improving the user experience. Further, in the method described in Fig. 2, the service device can subdivide the peripheral region so that the displayed picture transitions naturally from the sharp central region to the blurred peripheral region. In addition, the service device can adjust the contrast between the central region and the peripheral region during rendering, reducing the probability that the blurred picture induces a tunnel vision effect in the user.
Embodiment three
Referring to Fig. 3, Fig. 3 is a schematic flowchart of yet another mixed reality picture processing method disclosed in an embodiment of the present invention. As shown in Fig. 3, the mixed reality picture processing method may include the following steps:
In the embodiment of the present invention, steps 301 to 307 are the same as steps 201 to 207 above and are not repeated here. After the service device obtains the rendered virtual scene in step 304 it directly performs step 308, and after the service device obtains the rendered virtual scene in step 307 it continues with step 308.
308. The service device controls the MR HMD to capture an image of its field of view with a binocular camera that mimics the working of human eyes.
309. The service device judges whether an entity scene is present in the image; if it is, steps 310 to 313 are performed; if not, steps 312 to 313 are performed directly.
310. The service device separates the entity scene from the background image in the image.
311. The service device loads the separated entity scene into the rendered virtual scene.
In the embodiment of the present invention, the service device can load the separated entity scene into the virtual scene at its original size, obtaining a frame of the displayed picture to be output to the screen of the MR HMD, so that the user sees a picture that merges the entity scene and the virtual scene. At the same time, this lets the user perceive the true distance between the user and the entity scene (for example, another user), which reduces the probability of injury accidents such as collisions caused by the user being unable to judge the actual distance to the entity scene.
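A hedged sketch of this compositing step, assuming the separation step yields an RGBA cut-out and that the on-screen position of the entity is known from the binocular camera's registration with the display (which the patent does not detail):

```python
import numpy as np

def load_entity_into_frame(rendered_virtual, entity_rgba, top_left):
    """Sketch of step 311: paste the separated entity scene into the rendered
    virtual scene at its original size, using the alpha channel produced by
    the foreground/background separation as the compositing mask.
    """
    x, y = top_left
    h, w = entity_rgba.shape[:2]
    roi = rendered_virtual[y:y + h, x:x + w].astype(np.float32)

    rgb = entity_rgba[..., :3].astype(np.float32)
    alpha = entity_rgba[..., 3:4].astype(np.float32) / 255.0   # 1 = entity pixel

    blended = alpha * rgb + (1.0 - alpha) * roi
    rendered_virtual[y:y + h, x:x + w] = blended.astype(np.uint8)
    return rendered_virtual
```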
As an optional implementation, the service device may be connected to multiple MR HMDs. When the service device obtains the entity scene separated from the image captured by a first MR HMD, and recognises from the entity scene's calibration marker that the entity scene is the wearer of a second MR HMD, it obtains the eye gaze direction of the wearer of the second MR HMD;
the service device generates an eye motion model of the wearer of the second MR HMD from the facial model of that wearer stored in a database and that wearer's eye gaze direction;
the service device uses the position calibration marker of the second MR HMD to align the eye motion model with the facial remainder of the wearer of the second MR HMD in the separated entity scene, where the facial remainder refers to the part of that wearer's face that is not occluded by the MR HMD;
the service device renders the eye motion model so that it blends with the facial remainder of the wearer of the second MR HMD in the separated image, obtaining a processed entity scene;
the service device loads the processed entity scene into the rendered virtual scene corresponding to the first MR HMD.
With this implementation, the part of the second wearer's face that is occluded by the MR HMD can be reproduced in the picture displayed on the screen of the first MR HMD, so that the wearer of the first MR HMD (for example, user A) can see the facial expression of the wearer of the second MR HMD (for example, user B), improving the realism of communication between different users in a mixed reality scene.
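Because the patent describes this multi-user option only at a functional level, the sketch below is purely structural: every helper called on the hypothetical `service` object is a placeholder for a capability the description names, not a real API.

```python
def reconstruct_second_wearer_face(service, first_hmd_frame, entity_scene,
                                   second_hmd_id):
    """High-level sketch of the multi-user option; all helpers are hypothetical."""
    # The separated entity scene was identified, via its calibration marker,
    # as the wearer of a second MR HMD; fetch that wearer's live gaze.
    gaze = service.get_gaze_direction(second_hmd_id)

    # Combine the stored face model with the live gaze to animate the eyes.
    face_model = service.lookup_face_model(second_hmd_id)
    eye_model = service.build_eye_model(face_model, gaze)

    # Align the animated eye region with the unoccluded part of the face,
    # using the HMD's position calibration marker as the common reference.
    aligned_eyes = service.align_by_marker(eye_model, entity_scene, second_hmd_id)

    # Render and blend so the occluded eye area matches the visible face,
    # then load the result into the first wearer's rendered virtual scene.
    patched_entity = service.blend(aligned_eyes, entity_scene)
    return service.load_into(first_hmd_frame, patched_entity)
```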
312. The service device identifies the content of the target scene and looks up the prompt information bound to that content.
313. The service device loads the prompt information into the rendered virtual scene.
In the embodiment of the present invention, the target scene may be an entity scene or a virtual scene; the embodiment of the present invention is not limited in this respect. For example, if the service device identifies the target scene as a virtual exhibit and finds that the information bound to the virtual exhibit is its introduction, the service device loads the introduction of the virtual exhibit into the rendered virtual scene, so that the user can see the virtual exhibit and its introduction on the screen of the MR HMD. If the service device identifies the target scene as an entity figure (for example, another user) and finds that the information bound to that entity figure is that user's attributes in a game scene, the service device loads the attributes of the entity figure into the rendered virtual scene, so that the user can see the entity figure and its attributes on the screen of the MR HMD.
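A minimal sketch of steps 312 and 313, where `prompt_db` (a mapping from recognised content to its bound prompt information) and `overlay_fn` (a routine that draws text into the frame) are illustrative stand-ins:

```python
def attach_prompt(rendered_virtual, target_scene_id, prompt_db, overlay_fn):
    """Look up the prompt information bound to the recognised content of the
    target scene and draw it into the rendered virtual scene.
    """
    prompt = prompt_db.get(target_scene_id)     # e.g. an exhibit introduction
    if prompt is None:
        return rendered_virtual                 # nothing bound to this content
    return overlay_fn(rendered_virtual, prompt)
```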
As an optional implementation, in a mixed reality game scene, after identifying the content of the target scene the service device may look up the virtual scene bound to that content, treat it as the virtual scene to be rendered, and render it, so that the service device can trigger plot events according to the user's eye gaze direction, improving the user's interactive experience in the game scene.
It can be seen that, in the method described in Fig. 3, the service device can reduce the rendering resolution of the virtual scene to be rendered when the user fixates on an entity scene, and, when the user fixates on a virtual scene, divide the virtual scene to be rendered into a central region and a peripheral region and reduce the rendering resolution of the peripheral region, thereby reducing the rendering computation and the display latency. Further, in the method described in Fig. 3, the service device can load the separated entity scene into the rendered virtual scene at its original size, so that the user can perceive the true distance to the entity scene and the probability of injury accidents such as collisions is reduced; when the entity scene is an entity figure, the service device can also reproduce the facial expression of the entity figure in the displayed picture, improving the realism of communication between different users in the mixed reality scene. In addition, in the method described in Fig. 3, after identifying the content of the target scene the user is fixating on, the service device can load the prompt information bound to that content into the rendered virtual scene, so that the user can see the prompt information directly on the screen of the MR HMD without operations such as clicking, which improves the user's interactive experience. Alternatively, after finding the virtual scene bound to that content, the service device can treat it as the virtual scene to be rendered and render it, so that the service device can trigger the game plot according to the user's eye gaze direction, which also improves the user's interactive experience.
Embodiment four
Referring to Fig. 4, Fig. 4 shows a service device disclosed in an embodiment of the present invention. As shown in Fig. 4, the service device includes:
an obtaining unit 401, configured to obtain the eye gaze direction of the MR HMD wearer and the picture displayed on the screen of the MR HMD;
as an optional implementation, in the embodiment of the present invention, the obtaining unit 401 may obtain the eye gaze direction of the MR HMD wearer as follows: the obtaining unit 401 photographs an infrared feature image of the wearer's eyes with a high-speed near-infrared camera; the obtaining unit 401 extracts eye feature points from the infrared feature image; the obtaining unit 401 computes the position of the eye fixation point from the eye feature points and a pre-established eyeball rotation model and uses it as the eye gaze direction of the MR HMD wearer;
a first identifying unit 402, configured to identify the gaze region in the displayed picture that corresponds to the eye gaze direction obtained by the obtaining unit 401;
a first judging unit 403, configured to judge whether the target scene shown in the gaze region identified by the first identifying unit 402 is an entity scene;
a processing unit 404, configured to reduce the resolution of the virtual scene to be rendered to obtain a first resolution when the first judging unit 403 judges that the target scene is an entity scene;
a rendering unit 405, configured to render the virtual scene to be rendered at the first resolution obtained by the processing unit 404, to obtain the rendered virtual scene.
In the embodiment of the present invention, the first resolution is lower than the preset resolution.
By implementing the service device shown in Fig. 4, the gaze region of the MR HMD wearer within the picture displayed on the screen of the MR HMD can be determined from the wearer's eye gaze direction, that is, the region of the displayed picture on which the wearer's attention is most concentrated. If the scene shown in that region is an entity scene, the wearer's attention can be assumed not to be on the virtual scene, so the resolution of the virtual scene to be rendered is reduced and the virtual scene is rendered at the reduced resolution, which improves rendering efficiency and reduces display latency. In addition, implementing the service device shown in Fig. 4 makes the generated picture, when presented on the screen of the MR HMD, better match the visual behaviour of people in real life, and the user's eyes are not fatigued by having to process a picture in which every object is sharp.
Embodiment five
Referring to Fig. 5, Fig. 5 is a schematic structural diagram of another service device disclosed in an embodiment of the present invention. The service device shown in Fig. 5 is obtained by optimising the service device shown in Fig. 4. Compared with the service device shown in Fig. 4, the service device shown in Fig. 5 may further include:
a determining unit 406, configured to determine the central region in the virtual scene to be rendered that corresponds to the gaze region when the first judging unit 403 judges that the target scene is not an entity scene;
in the embodiment of the present invention, as an optional implementation, the determining unit 406 may compute the central region in the virtual scene to be rendered that corresponds to the gaze region from the direction and speed of the HMD's motion, measured by the HMD's gyroscope and accelerometer;
the processing unit 404 is further configured to reduce the resolution of the peripheral region of the virtual scene to be rendered outside the central region determined by the determining unit 406, to obtain a second resolution;
the rendering unit 405 is further configured to render the central region at the preset resolution and the peripheral region at the second resolution obtained by the processing unit 404, to obtain the rendered virtual scene;
wherein the second resolution is lower than the preset resolution.
Optionally, the service device shown in Fig. 5 may further include:
a control unit 407, configured to control the MR HMD to capture an image of its field of view with a binocular camera that mimics the working of human eyes;
a second judging unit 408, configured to judge whether an entity scene is present in the image obtained by the control unit 407 after the control unit 407 controls the MR HMD to capture the image of its field of view with the binocular camera that mimics the working of human eyes;
a separating unit 409, configured to separate the entity scene from the background image in the image when the second judging unit 408 judges that an entity scene is present in the image;
a first loading unit 410, configured to load the separated entity scene into the rendered virtual scene obtained by the rendering unit 405.
In the embodiment of the present invention, as an optional implementation, the service device may be connected to multiple MR HMDs. Therefore, after obtaining from the separating unit 409 the entity scene separated from the image captured by a first MR HMD, and when recognising from the entity scene's calibration marker that the entity scene is the wearer of a second MR HMD, the first loading unit 410 is further configured to: obtain from the obtaining unit 401 the eye gaze direction of the wearer of the second MR HMD; generate an eye motion model of the wearer of the second MR HMD from the facial model of that wearer stored in a database and that wearer's eye gaze direction; use the position calibration marker of the second MR HMD to align the eye motion model with the facial remainder of the wearer of the second MR HMD in the separated entity scene, where the facial remainder refers to the part of that wearer's face that is not occluded by the MR HMD; trigger the rendering unit 405 to render the eye motion model so that it blends with the facial remainder of the wearer of the second MR HMD in the separated image, and obtain the processed entity scene from the rendering unit 405; and load the processed entity scene into the rendered virtual scene corresponding to the first MR HMD.
With this implementation, the part of the second wearer's face that is occluded by the MR HMD can be reproduced in the picture displayed on the screen of the first MR HMD, so that the wearer of the first MR HMD (for example, user A) can see the facial expression of the wearer of the second MR HMD (for example, user B), improving the realism of communication between different users in a mixed reality scene.
As a further option, the service device shown in Fig. 5 may also include:
a second identifying unit 411, configured to identify the content of the target scene shown in the gaze region identified by the first identifying unit 402, and to look up the prompt information bound to that content;
a second loading unit 412, configured to load the prompt information from the second identifying unit 411 into the rendered virtual scene obtained by the rendering unit 405.
In the embodiment of the present invention, as an optional implementation, after identifying the content of the target scene, the second identifying unit 411 may also look up the virtual scene bound to that content as the virtual scene to be rendered and trigger the rendering unit 405 to render that virtual scene, so that the service device can trigger plot events according to the user's eye gaze direction, improving the user's interactive experience in a mixed reality game scene.
By implementing the service device shown in Fig. 5, when the user fixates on an entity scene the virtual scene to be rendered is rendered at a first resolution lower than the preset resolution, and when the user fixates on a virtual scene the virtual scene to be rendered is divided into a central region and a peripheral region, the central region is rendered at the preset resolution and the peripheral region at a second resolution lower than the preset resolution, which reduces the rendering computation and the display latency. In addition, implementing the service device shown in Fig. 5 keeps the scene the user fixates on sharp while the other scenes in the presented picture are blurred, so that the presented picture follows real-life visual behaviour and the user experience is improved. Further, the service device shown in Fig. 5 can load the separated entity scene into the rendered virtual scene, so that the user can perceive the true distance between the user and the entity scene, reducing the probability of injury accidents such as collisions; when the entity scene is an entity figure, it can also reproduce the facial expression of the entity figure in the displayed picture, improving the realism of communication between different users in the mixed reality scene. Further, after identifying the content of the target scene the user is fixating on, the service device shown in Fig. 5 loads the prompt information bound to the target scene into the rendered virtual scene, so that the user can see the prompt information directly on the screen of the MR HMD without operations such as clicking, which improves the user's interactive experience. Alternatively, after finding the virtual scene bound to that content, the service device can treat it as the virtual scene to be rendered and render it, so that the service device can trigger the game plot according to the user's eye gaze direction, which also improves the user's interactive experience.
Embodiment six
An embodiment of the present invention discloses a service device, including:
a memory storing executable program code, and a processor coupled to the memory;
wherein the processor calls the executable program code stored in the memory to perform any of the mixed reality picture processing methods shown in Figs. 1 to 3.
In addition, an embodiment of the present invention discloses a computer-readable storage medium storing a computer program, wherein the computer program causes a computer to perform any of the mixed reality picture processing methods shown in Figs. 1 to 3.
Those of ordinary skill in the art will understand that all or part of the steps in the various methods of the above embodiments can be completed by a program instructing the relevant hardware, and the program can be stored in a computer-readable storage medium. The storage medium includes read-only memory (Read-Only Memory, ROM), random access memory (Random Access Memory, RAM), programmable read-only memory (Programmable Read-only Memory, PROM), erasable programmable read-only memory (Erasable Programmable Read Only Memory, EPROM), one-time programmable read-only memory (One-Time Programmable Read-Only Memory, OTPROM), electrically erasable programmable read-only memory (Electrically-Erasable Programmable Read-Only Memory, EEPROM), compact disc read-only memory (Compact Disc Read-Only Memory, CD-ROM) or other optical disc storage, magnetic disk storage, magnetic tape storage, or any other computer-readable medium that can be used to carry or store data.
The mixed reality picture processing method and service device disclosed in the embodiments of the present invention have been described in detail above. Specific examples have been used herein to explain the principles and implementations of the present invention; the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, those of ordinary skill in the art may, according to the idea of the present invention, make changes in the specific implementations and the scope of application. In summary, the contents of this specification should not be construed as limiting the present invention.
Claims (10)
1. A mixed reality picture processing method, characterised in that the method comprises:
obtaining the eye gaze direction of the wearer of an MR head-mounted display (MR HMD) and the picture displayed on the screen of the MR HMD;
identifying the gaze region in the displayed picture that corresponds to the eye gaze direction;
judging whether the target scene shown in the gaze region is an entity scene, and if it is, reducing the resolution of the virtual scene to be rendered to obtain a first resolution, and rendering the virtual scene to be rendered at the first resolution to obtain the rendered virtual scene;
wherein the first resolution is lower than a preset resolution.
2. The mixed reality picture processing method according to claim 1, characterised in that the method further comprises:
if the target scene is not an entity scene, determining the central region in the virtual scene to be rendered that corresponds to the gaze region;
reducing the resolution of the peripheral region of the virtual scene to be rendered outside the central region, to obtain a second resolution;
rendering the central region at the preset resolution and the peripheral region at the second resolution, to obtain the rendered virtual scene;
wherein the second resolution is lower than the preset resolution.
3. The mixed reality picture processing method according to claim 1, characterised in that the method further comprises:
controlling the MR HMD to capture an image of its field of view with a binocular camera that mimics the working of human eyes;
separating the entity scene from the background image in the image;
loading the separated entity scene into the rendered virtual scene.
4. The mixed reality picture processing method according to claim 3, characterised in that, after controlling the MR HMD to capture the image of its field of view with the binocular camera that mimics the working of human eyes, the method further comprises:
judging whether an entity scene is present in the image, and if so, performing the step of separating the entity scene from the background image in the image.
5. The mixed reality picture processing method according to any one of claims 1 to 4, characterised in that the method further comprises:
identifying the content of the target scene and looking up the prompt information bound to that content;
loading the prompt information into the rendered virtual scene.
6. A service device, characterised by comprising:
an obtaining unit, configured to obtain the eye gaze direction of the wearer of an MR HMD and the picture displayed on the screen of the MR HMD;
a first identifying unit, configured to identify the gaze region in the displayed picture that corresponds to the eye gaze direction;
a first judging unit, configured to judge whether the target scene shown in the gaze region is an entity scene;
a processing unit, configured to reduce the resolution of the virtual scene to be rendered to obtain a first resolution when the first judging unit judges that the target scene is an entity scene;
a rendering unit, configured to render the virtual scene to be rendered at the first resolution, to obtain the rendered virtual scene;
wherein the first resolution is lower than a preset resolution.
7. The service device according to claim 6, characterised by further comprising:
a determining unit, configured to determine the central region in the virtual scene to be rendered that corresponds to the gaze region when the first judging unit judges that the target scene is not an entity scene;
the processing unit is further configured to reduce the resolution of the peripheral region of the virtual scene to be rendered outside the central region, to obtain a second resolution;
the rendering unit is further configured to render the central region at the preset resolution and the peripheral region at the second resolution, to obtain the rendered virtual scene;
wherein the second resolution is lower than the preset resolution.
8. The service device according to claim 6, characterised by further comprising:
a control unit, configured to control the MR HMD to capture an image of its field of view with a binocular camera that mimics the working of human eyes;
a separating unit, configured to separate the entity scene from the background image in the image;
a first loading unit, configured to load the separated entity scene into the rendered virtual scene.
9. The service device according to claim 8, characterised by further comprising:
a second judging unit, configured to judge whether an entity scene is present in the image after the control unit controls the MR HMD to capture the image of its field of view with the binocular camera that mimics the working of human eyes;
the separating unit is specifically configured to separate the entity scene from the background image in the image when the second judging unit judges that an entity scene is present in the image.
10. The service device according to any one of claims 6 to 9, characterised by further comprising:
a second identifying unit, configured to identify the content of the target scene and look up the prompt information bound to that content;
a second loading unit, configured to load the prompt information into the rendered virtual scene.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710781247.5A CN107590859A (en) | 2017-09-01 | 2017-09-01 | A kind of mixed reality picture processing method and service equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710781247.5A CN107590859A (en) | 2017-09-01 | 2017-09-01 | A kind of mixed reality picture processing method and service equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107590859A true CN107590859A (en) | 2018-01-16 |
Family
ID=61051800
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710781247.5A Pending CN107590859A (en) | 2017-09-01 | 2017-09-01 | A kind of mixed reality picture processing method and service equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107590859A (en) |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109637418A (en) * | 2019-01-09 | 2019-04-16 | 京东方科技集团股份有限公司 | A kind of display panel and its driving method, display device |
CN109712224A (en) * | 2018-12-29 | 2019-05-03 | 青岛海信电器股份有限公司 | Rendering method, device and the smart machine of virtual scene |
CN109801353A (en) * | 2019-01-16 | 2019-05-24 | 北京七鑫易维信息技术有限公司 | A kind of method of image rendering, server and terminal |
CN110413108A (en) * | 2019-06-28 | 2019-11-05 | 广东虚拟现实科技有限公司 | Processing method, device, system, electronic equipment and the storage medium of virtual screen |
CN111047676A (en) * | 2018-10-12 | 2020-04-21 | 中国移动通信集团广西有限公司 | Image rendering method and device and storage medium |
CN112216161A (en) * | 2020-10-23 | 2021-01-12 | 新维畅想数字科技(北京)有限公司 | Digital work teaching method and device |
CN112579029A (en) * | 2020-12-11 | 2021-03-30 | 上海影创信息科技有限公司 | Display control method and system of VR glasses |
CN112633273A (en) * | 2020-12-18 | 2021-04-09 | 上海影创信息科技有限公司 | User preference processing method and system based on afterglow area |
CN113205583A (en) * | 2021-04-28 | 2021-08-03 | 北京字跳网络技术有限公司 | Scene rendering method and device, electronic equipment and readable storage medium |
CN113903210A (en) * | 2021-10-08 | 2022-01-07 | 首都机场集团有限公司 | Virtual reality simulation driving method, device, equipment and storage medium |
CN114374832A (en) * | 2020-10-14 | 2022-04-19 | 中国移动通信有限公司研究院 | Virtual reality experience control method and device, user equipment and network equipment |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101174332A (en) * | 2007-10-29 | 2008-05-07 | 张建中 | Method, device and system for interactively combining real-time scene in real world with virtual reality scene |
CN102568026A (en) * | 2011-12-12 | 2012-07-11 | 浙江大学 | Three-dimensional enhancing realizing method for multi-viewpoint free stereo display |
CN105528083A (en) * | 2016-01-12 | 2016-04-27 | 广州创幻数码科技有限公司 | Mixed reality identification association method and device |
US20160267716A1 (en) * | 2015-03-11 | 2016-09-15 | Oculus Vr, Llc | Eye tracking for display resolution adjustment in a virtual reality system |
CN106096857A (en) * | 2016-06-23 | 2016-11-09 | 中国人民解放军63908部队 | Augmented reality version interactive electronic technical manual, content build and the structure of auxiliary maintaining/auxiliary operation flow process |
CN106228591A (en) * | 2016-07-12 | 2016-12-14 | 江苏奥格视特信息科技有限公司 | Virtual reality ultrahigh speed real-time rendering method |
CN106412563A (en) * | 2016-09-30 | 2017-02-15 | 珠海市魅族科技有限公司 | Image display method and apparatus |
CN106919248A (en) * | 2015-12-26 | 2017-07-04 | 华为技术有限公司 | It is applied to the content transmission method and equipment of virtual reality |
CN107004296A (en) * | 2014-08-04 | 2017-08-01 | 脸谱公司 | For the method and system that face is reconstructed that blocks to reality environment |
CN107018336A (en) * | 2017-04-11 | 2017-08-04 | 腾讯科技(深圳)有限公司 | The method and apparatus of image procossing and the method and apparatus of Video processing |
- 2017-09-01: application CN201710781247.5A filed in China; published as CN107590859A/en, status Pending
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101174332A (en) * | 2007-10-29 | 2008-05-07 | 张建中 | Method, device and system for interactively combining real-time scene in real world with virtual reality scene |
CN102568026A (en) * | 2011-12-12 | 2012-07-11 | 浙江大学 | Three-dimensional enhancing realizing method for multi-viewpoint free stereo display |
CN107004296A (en) * | 2014-08-04 | 2017-08-01 | 脸谱公司 | For the method and system that face is reconstructed that blocks to reality environment |
US20160267716A1 (en) * | 2015-03-11 | 2016-09-15 | Oculus Vr, Llc | Eye tracking for display resolution adjustment in a virtual reality system |
CN106919248A (en) * | 2015-12-26 | 2017-07-04 | 华为技术有限公司 | It is applied to the content transmission method and equipment of virtual reality |
CN105528083A (en) * | 2016-01-12 | 2016-04-27 | 广州创幻数码科技有限公司 | Mixed reality identification association method and device |
CN106096857A (en) * | 2016-06-23 | 2016-11-09 | 中国人民解放军63908部队 | Augmented reality version interactive electronic technical manual, content build and the structure of auxiliary maintaining/auxiliary operation flow process |
CN106228591A (en) * | 2016-07-12 | 2016-12-14 | 江苏奥格视特信息科技有限公司 | Virtual reality ultrahigh speed real-time rendering method |
CN106412563A (en) * | 2016-09-30 | 2017-02-15 | 珠海市魅族科技有限公司 | Image display method and apparatus |
CN107018336A (en) * | 2017-04-11 | 2017-08-04 | 腾讯科技(深圳)有限公司 | The method and apparatus of image procossing and the method and apparatus of Video processing |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111047676A (en) * | 2018-10-12 | 2020-04-21 | 中国移动通信集团广西有限公司 | Image rendering method and device and storage medium |
CN111047676B (en) * | 2018-10-12 | 2023-04-25 | 中国移动通信集团广西有限公司 | Image rendering method, device and storage medium |
CN109712224A (en) * | 2018-12-29 | 2019-05-03 | 青岛海信电器股份有限公司 | Rendering method, device and the smart machine of virtual scene |
CN109637418A (en) * | 2019-01-09 | 2019-04-16 | 京东方科技集团股份有限公司 | A kind of display panel and its driving method, display device |
CN109637418B (en) * | 2019-01-09 | 2022-08-30 | 京东方科技集团股份有限公司 | Display panel, driving method thereof and display device |
CN109801353A (en) * | 2019-01-16 | 2019-05-24 | 北京七鑫易维信息技术有限公司 | A kind of method of image rendering, server and terminal |
CN110413108A (en) * | 2019-06-28 | 2019-11-05 | 广东虚拟现实科技有限公司 | Processing method, device, system, electronic equipment and the storage medium of virtual screen |
CN110413108B (en) * | 2019-06-28 | 2023-09-01 | 广东虚拟现实科技有限公司 | Virtual picture processing method, device and system, electronic equipment and storage medium |
CN114374832A (en) * | 2020-10-14 | 2022-04-19 | 中国移动通信有限公司研究院 | Virtual reality experience control method and device, user equipment and network equipment |
CN112216161A (en) * | 2020-10-23 | 2021-01-12 | 新维畅想数字科技(北京)有限公司 | Digital work teaching method and device |
CN112579029A (en) * | 2020-12-11 | 2021-03-30 | 上海影创信息科技有限公司 | Display control method and system of VR glasses |
CN112633273A (en) * | 2020-12-18 | 2021-04-09 | 上海影创信息科技有限公司 | User preference processing method and system based on afterglow area |
CN113205583A (en) * | 2021-04-28 | 2021-08-03 | 北京字跳网络技术有限公司 | Scene rendering method and device, electronic equipment and readable storage medium |
CN113903210A (en) * | 2021-10-08 | 2022-01-07 | 首都机场集团有限公司 | Virtual reality simulation driving method, device, equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107590859A (en) | A kind of mixed reality picture processing method and service equipment | |
CN106873778B (en) | Application operation control method and device and virtual reality equipment | |
CN102981616B (en) | The recognition methods of object and system and computer in augmented reality | |
CN105117695B (en) | In vivo detection equipment and biopsy method | |
Upenik et al. | A simple method to obtain visual attention data in head mounted virtual reality | |
CN109345556A (en) | Neural network prospect for mixed reality separates | |
CN103140879B (en) | Information presentation device, digital camera, head mounted display, projecting apparatus, information demonstrating method and information are presented program | |
CN104740869B (en) | The exchange method and system that a kind of actual situation for merging true environment combines | |
CN205730297U (en) | Information processor | |
CN109246463B (en) | Method and device for displaying bullet screen | |
CN106445131B (en) | Virtual target operating method and device | |
Papenmeier et al. | DynAOI: A tool for matching eye-movement data with dynamic areas of interest in animations and movies | |
KR20230044401A (en) | Personal control interface for extended reality | |
KR101962578B1 (en) | A fitness exercise service providing system using VR | |
CN116235129A (en) | Confusion control interface for augmented reality | |
Cordeiro et al. | ARZombie: A mobile augmented reality game with multimodal interaction | |
CN108595004A (en) | More people's exchange methods, device and relevant device based on Virtual Reality | |
CN115129164A (en) | Interaction control method and system based on virtual reality and virtual reality equipment | |
JP2017068411A (en) | Device, method, and program for forming images | |
CN103785169A (en) | Mixed reality arena | |
CN111638798A (en) | AR group photo method, AR group photo device, computer equipment and storage medium | |
CN111045587A (en) | Game control method, electronic device, and computer-readable storage medium | |
CN112891940B (en) | Image data processing method and device, storage medium and computer equipment | |
Wen et al. | VR. net: A real-world dataset for virtual reality motion sickness research | |
CN106200973A (en) | A kind of method and device playing virtual reality file based on external image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20180116 |