CN106354251B - Model system and method for fusing a virtual scene with a real scene - Google Patents
Model system and method for fusing a virtual scene with a real scene
- Publication number
- CN106354251B · CN201610681884.0A · CN201610681884A
- Authority
- CN
- China
- Prior art keywords
- real scene
- user
- real
- image
- scene object
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/01—Indexing scheme relating to G06F3/01
- G06F2203/012—Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/08—Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- Software Systems (AREA)
- Processing Or Creating Images (AREA)
Abstract
The present invention provides a model system and method for fusing a virtual scene with a real scene. The invention captures images of the real scene around the user, extracts real-scene objects under the user's viewing angle, and comprehensively acquires the environmental parameters of the real environment. It then judges the influence weight that each real-scene object and each real-environment factor exerts on the user's perception while the user is immersed in virtual reality. Ranked by influence weight, the real-scene objects and real-environment factors with higher weights are selected and fused with the virtual scene, while objects and factors whose weights fall below a threshold are ignored. The size of the influence weight and the spatial position of each real-scene object are jointly considered to coordinate the joint display of real-scene objects with virtual-scene objects. For each real-scene object, its degree of compatibility with the virtual scene is determined, and the object's imaging model is adjusted accordingly.
Description
Technical field
The invention belongs to the field of computer technology, and in particular relates to a model system and method for fusing a virtual scene with a real scene.
Background art
Virtual reality technology, which has become very popular in recent years, builds on the integrated application of computer three-dimensional graphics, sensing technology, human-computer interaction technology and stereoscopic display technology to present users with high-fidelity three-dimensional visual perception, and to achieve highly real-time interaction between the displayed three-dimensional world and the user's real-world behavior, giving the user an experience substantially equivalent to the real world. The ability to produce an immersive effect is the key feature that distinguishes virtual reality from other graphics display and human-computer interaction technologies: through high-fidelity display and natural, seamless human-computer interaction, so-called immersion absorbs the user's attention fully into the environment built by virtual reality, so that the user finds it difficult, consciously or subconsciously, to draw a boundary between the virtual reality world and the real world.
Constructing the scene environment in which the user is placed is key to achieving immersion with virtual reality technology. Whether virtual reality is applied to motion pictures and role-playing games, or to flight, driving and sports training that aims to reproduce real environments, the user is first placed in a specific scene environment and forms a perception and understanding of it; only then can the user project himself or herself into the story, game or training plot as the scene changes and switches, finally reaching an immersive effect.
When a user enters a film, a game or a training session under virtual reality technology, on the one hand the user faces the virtual scene built for that film, game or training. For example, in a role-playing game, virtual reality may present a castle room, a pirate's cabin, a space station or a fighter cockpit designed according to the game plot, that is, scenes divorced from reality and entirely fabricated. On the other hand, the user inevitably also faces and experiences the real scene of the real world in which his or her body is located: for example, the articles within the user's field of view or within arm's reach, such as the furnishings of the room the user is in; the illumination, shadow, temperature, humidity and sound of the real environment; and even other people beside the user.
To achieve a good immersive effect, the direct or latent influence that the real-world scene exerts on the user cannot be ignored. In particular, if the real scene is obviously incompatible with the virtual scene, the user experience is bound to become incongruous, and in the end no immersion can be produced. For example, if the virtual scene shows a gloomy, damp and rainy picture while the user simultaneously and clearly feels bright outdoor sunlight in the real scene, the strong contrast between the virtual and real scenes inevitably acts as a psychological cue, making the user clearly aware that everything seen and heard in the virtual scene is imaginary, so that, consciously or subconsciously, the user separates the virtual scene from the real environment in which he or she is located.
One existing approach to this problem is to isolate the user from perceiving real-environment factors, thereby shielding the influence of the real scene on the user; for example, the user wears a closed helmet so that only the virtual-scene picture can be seen, and audio is used to mask the sound of the external real environment.
In many practical applications, however, we do not want to isolate the user from the real scene completely, but rather to blend the virtual scene smoothly with the real scene, so that the user can perceive real-environment factors and respond to them, while still fully projecting himself or herself into the situation constructed by the virtual scene and reaching immersion. For example, we may want factors such as weather and illumination in the virtual scene to be consistent with the weather and illumination of the user's real environment, so that the user's impression of the environmental conditions in the virtual scene coincides with his or her impression of the conditions in the real scene. As another example, the articles and even people around the user in the real scene can be presented in the virtual scene in a suitable manner as virtual objects, so that the actions the user performs on those articles and people in the real scene are projected into the virtual scene as interactions with the corresponding virtual objects.
The prior art includes technical means for jointly presenting virtual-scene objects and real-scene objects in a virtual reality picture. For example, an image of the real scene can be acquired with a camera device, together with a parameter indicating the shooting angle of that real-scene image; a display image of the virtual scene is generated, the imaging angle of the virtual-scene image generally being the viewing angle of the user watching the virtual reality display; according to the difference between the real-scene shooting angle and the virtual-scene imaging angle, the real-scene image is adjusted, with the virtual-scene imaging angle as reference, so that its imaging angle becomes consistent with the virtual-scene imaging angle; the angle-adjusted real-scene image is then superimposed on the virtual-scene image to form a display image in which the real scene and the virtual scene overlap. Another approach starts from the angle-adjusted real-scene image described above: models are built and rendered for the objects in the real-scene image, generating objects that represent the real-scene objects; the real-scene objects are then placed in the imaging space of the virtual scene according to their positions under the above imaging angle, so as to constitute a display image together with the original objects of the virtual scene.
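As a concrete illustration of this prior-art superposition, the following is a minimal sketch, assuming the view-angle difference between the camera and the user can be approximated by a 3x3 homography `H` and that a foreground mask marking the real-scene pixels is available; the function names are illustrative, not taken from the patent.

```python
import cv2
import numpy as np

def align_to_virtual_view(real_img: np.ndarray, H: np.ndarray) -> np.ndarray:
    """Warp the captured real-scene image so that its imaging angle
    matches the virtual-scene imaging angle (the user's viewpoint)."""
    h, w = real_img.shape[:2]
    return cv2.warpPerspective(real_img, H, (w, h))

def superimpose(virtual_img: np.ndarray, real_aligned: np.ndarray,
                mask: np.ndarray) -> np.ndarray:
    """Overlay the angle-adjusted real-scene pixels onto the
    virtual-scene image to form the combined display image."""
    out = virtual_img.copy()
    out[mask > 0] = real_aligned[mask > 0]
    return out
```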
In practical applications, the main drawbacks of the above technical means were found to be the following:
First, the prior art fuses real-scene objects with virtual-scene objects merely according to their spatial relationship, on the basis of imaging-angle consistency. However, the degree to which the various objects and factors of the real environment influence the user's perception does not always depend on where those objects and factors are located. Some objects in the real scene, although at the center of the user's viewing angle, do not noticeably affect the user's perception while the user is immersed in virtual reality; other objects and factors of the real environment, although occupying a secondary spatial position under the user's viewing angle or even being invisible (such as the temperature and humidity of the real environment, or the weather and illumination mentioned above), nevertheless exert a strong influence on the user's perception. From the point of view of achieving immersion, therefore, when fusing the real scene into the virtual scene, priority should be given to the objects and factors of the real scene that influence the user's perception most, while the priority and weight of objects and factors with relatively small perceptual influence should be reduced.
Second, in virtual reality games, films and various simulated training applications, the plot is still presented and the user's interaction guided mainly by the objects and content of the virtual scene; the main purpose of fusing the real scene is to prevent it from clashing with the virtual scene and destroying immersion. Therefore, when the real scene contains relatively many or relatively complex objects and factors, fusing in too many or overly complex real-scene objects weakens the user's ability to concentrate on the virtual-scene objects; excessive real-scene objects also occupy the display space of the virtual scene and may even cause displayed objects to overlap.
Third, whether the real-scene image is superimposed directly or the real-scene objects are modeled and imaged, in the finally formed display image the objects from the real scene and the original objects of the virtual scene are prone to visual incompatibility. For example, if the virtual scene displays the furnishings of a castle room while the fused real-scene objects are modern articles such as mobile phones, the immersive effect originally pursued is bound to be seriously damaged.
Summary of the invention
In view of the above drawbacks of the prior art, the present invention provides a model system and method for fusing a virtual scene with a real scene. The invention captures images of the real scene around the user, extracts real-scene objects under the user's viewing angle, and comprehensively acquires the environmental parameters of the real environment; it then judges the influence weight that each real-scene object and each real-environment factor exerts on the user's perception while the user is in virtual reality; ranked by influence weight, the real-scene objects and real-environment factors with higher weights are selected and fused with the virtual scene, while objects and factors whose weights fall below a threshold are ignored; the size of the influence weight and the spatial position of each real-scene object are jointly considered to coordinate the joint display of real-scene objects with virtual-scene objects; and for each real-scene object, its degree of compatibility with the virtual scene is determined and its imaging model adjusted accordingly.
The present invention provides a model system for fusing a virtual scene with a real scene, characterized by comprising:
a real-scene shooting unit, for shooting a real-scene image at a shooting angle approximating the user's viewing angle, and providing a parameter indicating the shooting angle of the real-scene image;
a real-scene object extraction unit, for adjusting the real-scene image with the user's viewing angle as reference, so that the imaging angle of the adjusted real-scene image is consistent with the user's viewing angle, and for identifying and extracting real-scene objects from the adjusted real-scene image;
a real-environment parameter acquisition unit, for acquiring real-environment parameters of the real environment in which the user is located;
a user behavior shooting and recognition unit, for shooting real-time pictures of the user, identifying and extracting the user's behavior actions therefrom, and determining the degree of association between the user's behavior actions and the real-scene objects;
a virtual scene generation unit, for generating an initial virtual-scene image with the user's viewing angle as the imaging angle of the virtual-scene image, the virtual-scene image containing at least one virtual image object;
an influence weight computing unit, for calculating the influence weight of each real-scene object on the user's perception according to the spatial position the real-scene object occupies in the real-scene image under the user's viewing angle and according to the degree of association between the user's behavior actions and the real-scene object, and for calculating the influence weight of the real-environment parameters on the user's perception according to predetermined empirical criteria;
a fusion target discrimination unit, for selecting, according to the influence weights, the real-scene objects and real-environment factors with higher influence weights as the fusion targets to be merged with the virtual scene;
a real-scene object modeling unit, for establishing and rendering a real-scene object model for each real-scene object serving as a fusion target, as its initial real-scene object image;
a real-scene object model reconstruction unit, for calculating, according to the spatial position occupied under the user's viewing angle by a real-scene object serving as a fusion target, the minimum distance between that object and the virtual image objects; when the minimum distance is less than a distance threshold, reducing the scale of the real-scene object model according to the influence weight of the real-scene object, and re-rendering the reduced real-scene object model to generate the final real-scene object image;
a virtual scene adjustment unit, for determining, according to a real-environment factor serving as a fusion target, the degree of matching between that real-environment factor and the initial virtual-scene image, and adjusting the virtual-scene image when the degree of matching does not satisfy a predefined matching threshold;
a scene fusion unit, for placing the final real-scene object image into the imaging space of the virtual-scene image according to the spatial position occupied under the user's viewing angle by the real-scene object serving as a fusion target, so as to constitute a display image together with the virtual-scene image.
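Read as a data flow, the units above form a pipeline from capture to composited display. The following is a minimal structural sketch of that flow; the `system` object and all of its attribute and method names are illustrative assumptions standing in for the units described above, not an interface defined by the patent.

```python
def fuse_frame(system, user_view):
    """One display frame of the fusion pipeline, unit by unit."""
    real_img = system.shooting_unit.capture()                       # unit 1001
    objects = system.extraction_unit.extract(real_img, user_view)   # unit 1002
    env = system.env_unit.read_parameters()                         # unit 1003
    assoc = system.behavior_unit.association_values(objects)        # unit 1004
    virtual = system.scene_unit.render(user_view)                   # unit 1005
    weights = system.weight_unit.compute(objects, assoc, env)       # unit 1006
    targets = system.target_unit.select(weights)                    # unit 1007
    models = system.model_unit.build(targets.objects)               # unit 1008
    finals = system.reconfig_unit.rescale(models, virtual)          # unit 1009
    virtual = system.adjust_unit.match(virtual, targets.factors)    # unit 1010
    return system.fusion_unit.compose(virtual, finals)              # unit 1011
```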
Preferably, the user behavior shooting and recognition unit uses eye-tracking technology to determine the user's gaze direction from the captured real-time pictures of the user, and determines the real-scene object the gaze direction points at; a high association value is assigned to the real-scene object pointed at by the user's gaze, and a low association value to real-scene objects not pointed at; a cumulative total association value is counted for each real-scene object in the form of an accumulated count.
Preferably, the user behavior shooting and recognition unit extracts the spatial position of the user's hand from the captured real-time pictures of the user and maps that position into the imaging-space coordinates of the real-scene image; it then determines the spatial distance between the user's hand position and the position of each real-scene object in the real-scene image; when that spatial distance is zero or within a predetermined closeness, a high association value is assigned to the corresponding real-scene object; when it is not within the predetermined closeness, a low association value is assigned; a cumulative total association value is counted for each real-scene object in the form of an accumulated count.
Preferably, the influence weight computing unit judges how close the spatial position of each real-scene object in the real-scene image is to the central region of sight under the user's viewing angle, and defines an initial influence weight for each real-scene object according to that closeness; it also obtains from the user behavior shooting and recognition unit the total association value between the user's behavior actions and each real-scene object, corrects the initial influence weight of each real-scene object using that total association value, and obtains the influence weight of each real-scene object on the user's perception.
Preferably, the real-scene object modeling unit determines the object type of a real-scene object serving as a fusion target, extracts the predetermined model template corresponding to that object type, and substitutes the scale parameters of the real-scene object into the predetermined model template to obtain the real-scene object model.
The present invention provides a method for fusing a virtual scene with a real scene, characterized by comprising:
a real-scene shooting step, of shooting a real-scene image at a shooting angle approximating the user's viewing angle, and providing a parameter indicating the shooting angle of the real-scene image;
a real-scene object extraction step, of adjusting the real-scene image with the user's viewing angle as reference, so that the imaging angle of the adjusted real-scene image is consistent with the user's viewing angle, and identifying and extracting real-scene objects from the adjusted real-scene image;
a real-environment parameter acquisition step, of acquiring real-environment parameters of the real environment in which the user is located;
a user behavior shooting and recognition step, of shooting real-time pictures of the user, identifying and extracting the user's behavior actions therefrom, and determining the degree of association between the user's behavior actions and the real-scene objects;
a virtual scene generation step, of generating an initial virtual-scene image with the user's viewing angle as the imaging angle of the virtual-scene image, the virtual-scene image containing at least one virtual image object;
an influence weight calculation step, of calculating the influence weight of each real-scene object on the user's perception according to the spatial position the real-scene object occupies in the real-scene image under the user's viewing angle and according to the degree of association between the user's behavior actions and the real-scene object, and calculating the influence weight of the real-environment parameters on the user's perception according to predetermined empirical criteria;
a fusion target discrimination step, of selecting, according to the influence weights, the real-scene objects and real-environment factors with higher influence weights as the fusion targets to be merged with the virtual scene;
a real-scene object model establishment step, of establishing and rendering a real-scene object model for each real-scene object serving as a fusion target, as its initial real-scene object image;
a real-scene object model reconstruction step, of calculating, according to the spatial position occupied under the user's viewing angle by a real-scene object serving as a fusion target, the minimum distance between that object and the virtual image objects; when the minimum distance is less than a distance threshold, reducing the scale of the real-scene object model according to the influence weight of the real-scene object, and re-rendering the reduced real-scene object model to generate the final real-scene object image;
a virtual scene adjustment step, of determining, according to a real-environment factor serving as a fusion target, the degree of matching between that real-environment factor and the initial virtual-scene image, and adjusting the virtual-scene image when the degree of matching does not satisfy a predefined matching threshold;
a scene fusion step, of placing the final real-scene object image into the imaging space of the virtual-scene image according to the spatial position occupied under the user's viewing angle by the real-scene object serving as a fusion target, so as to constitute a display image together with the virtual-scene image.
Preferably, the user behavior shooting and recognition step uses eye-tracking technology to determine the user's gaze direction from the captured real-time pictures of the user, and determines the real-scene object the gaze direction points at; a high association value is assigned to the real-scene object pointed at by the user's gaze, and a low association value to real-scene objects not pointed at; a cumulative total association value is counted for each real-scene object in the form of an accumulated count.
Preferably, the user behavior shooting and recognition step extracts the spatial position of the user's hand from the captured real-time pictures of the user and maps that position into the imaging-space coordinates of the real-scene image; it then determines the spatial distance between the user's hand position and the position of each real-scene object in the real-scene image; when that spatial distance is zero or within a predetermined closeness, a high association value is assigned to the corresponding real-scene object; when it is not within the predetermined closeness, a low association value is assigned; a cumulative total association value is counted for each real-scene object in the form of an accumulated count.
Preferably, the weighing factor calculating step judges that each real scene object is in institute in real scene image
The degree of closeness of the spatial position at place and the sighting center region under user perspective, and be each true field according to the degree of closeness
Scape object definition initial effects weighted value;Also, by user behavior shooting and identification step obtain user behavior act with respectively
The total correlation angle value of a real scene object weighs the initial effects that each real scene object has using the total correlation angle value
Weight values are modified, and obtain the weighing factor that each real scene object perceives user.
Preferably, the real-scene object model establishment step determines the object type of a real-scene object serving as a fusion target, extracts the predetermined model template corresponding to that object type, and substitutes the scale parameters of the real-scene object into the predetermined model template to obtain the real-scene object model.
Thus, on the basis of imaging-angle consistency, the present invention selectively fuses a subset of the real-scene objects with the virtual-scene objects: which real-scene objects are fused is decided according to their degree of influence on the user's perception, real-scene objects with high perceptual influence are fused with priority, and real-scene objects whose perceptual influence falls below a certain level are not fused, which prevents the fused real-scene objects from becoming so numerous and miscellaneous that they weaken the user's concentration on the virtual scene. The virtual scene is regulated according to the real-environment parameters, markedly improving the harmony between the virtual scene and the various environmental factors of the real world, and the visual effect of the fused real-scene objects in the virtual reality image is highly compatible with the virtual scene. The invention therefore allows the user to perceive part of the real-environment objects and factors while effectively enhancing the immersion produced when the user watches the virtual reality display.
Detailed description of the invention
A specific embodiment of the present invention is described in detail below in conjunction with the accompanying drawings, in which:
Fig. 1 is a schematic diagram of the frame structure of the model system for fusing a virtual scene with a real scene in a preferred embodiment of the present invention;
Fig. 2 is a schematic diagram of a real-scene image after imaging-angle adjustment in a preferred embodiment of the present invention;
Fig. 3 is a schematic diagram of the spatial positional relationship between real-scene objects and virtual-scene objects in a preferred embodiment of the present invention.
Specific embodiment
The technical solution of the present invention is described in further detail below by way of embodiments.
As shown in Fig. 1, the present invention provides a model system for fusing a virtual scene with a real scene; the specific structure and function of the system are described below.
The real-scene shooting unit 1001 shoots a real-scene image at a shooting angle approximating the user's viewing angle, and provides a parameter indicating the shooting angle of the real-scene image. The real-scene shooting unit 1001 may comprise at least one camera, arranged near the position the user actually occupies in the virtual reality image display system (which may be a 2D or 3D display system), for example near the user's seat, with the shooting angle at which the camera captures pictures set according to the user's viewing angle. The user's viewing angle referred to here is the field of sight of the user's eyes while watching the virtual reality picture; by adjusting the focal length and depth of field of the camera to control the captured field of sight, and by stitching the pictures of multiple cameras, the shooting angle of the real-scene shooting unit 1001 can fully simulate the user's viewing angle, so that the captured scene remains highly consistent with the real scene actually seen by the user. While outputting the captured real-scene image, the real-scene shooting unit 1001 also outputs the parameter indicating the shooting angle as additional information.
The real-scene object extraction unit 1002 adjusts the real-scene image with the user's viewing angle as reference, so that the imaging angle of the adjusted real-scene image is consistent with the user's viewing angle, and identifies and extracts real-scene objects from the adjusted real-scene image. Since the camera position of the real-scene shooting unit 1001 can never coincide with the position of the user's eyes, a certain parallax angle remains between the captured real-scene image and the real scene actually seen by the user, even though the camera's shooting angle is set according to the user's viewing angle as far as possible; moreover, in practice the camera position of the real-scene shooting unit 1001 is fixed, while the position of the user's eyes keeps changing slightly as the head moves. The real-scene object extraction unit 1002 therefore obtains the user's current exact eye position in real time from the user behavior shooting and recognition unit 1004 described below, computes the current real-time user viewing angle from that eye position, and, taking the current real-time user viewing angle as reference, corrects the parallax of the imaging angle of the real-scene image obtained from the real-scene shooting unit 1001, adjusting the real-scene image so that its imaging angle is consistent with the current real-time user viewing angle. The real-scene object extraction unit 1002 then uses spatial target recognition algorithms based on edge detection and/or color-structure feature extraction to decompose the real-scene image and extract each individual target present in the picture as a real-scene object. For example, Fig. 2 shows a real-scene image after imaging-angle adjustment, where 2001 is the virtual reality display watched by the user and 2002-2005 are other targets appearing within the field of sight under the user's viewing angle, such as articles around the virtual reality display; the real-scene object extraction unit 1002 extracts each of these targets from the real-scene image of Fig. 2, excluding the virtual reality display 2001 and recognizing the other targets 2002-2005 as real-scene objects.
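A minimal sketch of the edge-detection-based extraction step is given below, using OpenCV's Canny detector and contour analysis; the thresholds and the minimum-area filter are illustrative assumptions, and masking out the display region (2001) is left to a caller-supplied mask.

```python
import cv2

def extract_scene_objects(adjusted_img, exclude_mask=None, min_area=500):
    """Decompose the angle-adjusted real-scene image into individual
    targets (candidate real-scene objects) via edge detection."""
    gray = cv2.cvtColor(adjusted_img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    if exclude_mask is not None:
        edges[exclude_mask > 0] = 0   # drop the VR display region (2001)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Each sufficiently large contour's bounding box is one real-scene object.
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]
```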
The real-environment parameter acquisition unit 1003 acquires real-environment parameters of the real environment in which the user is located. It can be implemented as a group of sensors of various types that sense and output environmental parameters of the user's real environment such as illumination, ambient brightness, sound, temperature and humidity. The real-environment parameter acquisition unit 1003 may also include a data communication interface to a third-party environmental data platform; for example, weather data for the user's geographical location can be obtained from a weather data platform as an environmental parameter. Force sensors can also be mounted on the user's seat to acquire the forces acting on the user in each direction as environmental parameters.
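A minimal sketch of how the acquired parameters might be gathered into one record for the later weighting stage follows; the sensor-reading interface and the weather-platform client are hypothetical placeholders, not a real API.

```python
from dataclasses import dataclass

@dataclass
class EnvironmentParameters:
    illumination: float   # from a light sensor
    brightness: float     # ambient brightness
    sound_db: float       # ambient sound level, dB
    temperature: float    # degrees Celsius
    humidity: float       # relative humidity, %
    weather: str          # e.g. "sunny", from a weather data platform
    body_force: float     # aggregated seat force-sensor reading

def acquire_parameters(sensors, weather_client, location) -> EnvironmentParameters:
    """Poll the sensor group and the third-party weather platform
    (both hypothetical interfaces) for one snapshot."""
    return EnvironmentParameters(
        illumination=sensors.read("illumination"),
        brightness=sensors.read("brightness"),
        sound_db=sensors.read("sound"),
        temperature=sensors.read("temperature"),
        humidity=sensors.read("humidity"),
        weather=weather_client.current(location),
        body_force=sensors.read("body_force"),
    )
```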
The user behavior shooting and recognition unit 1004 shoots real-time pictures of the user, identifies and extracts the user's behavior actions therefrom, and determines the degree of association between the user's behavior actions and the real-scene objects. The user behavior shooting and recognition unit 1004 is another camera, or group of cameras, independent of the real-scene shooting unit 1001; it faces the position the user actually occupies (for example the user's seat) and shoots pictures of the user, which should include the user's face and upper body, including arms and hands. The user behavior shooting and recognition unit 1004 uses these pictures first for user position recognition: it recognizes the real-time position of the user's eyes and supplies it to the above real-scene object extraction unit 1002 for parallax correction. It also uses the pictures of the user to recognize and extract the user's behavior actions and to determine the degree of association between those actions and the real-scene objects. For example, the user behavior shooting and recognition unit 1004 can use eye-tracking technology to determine the user's gaze direction and, combining it with the parallax-adjusted real-scene image provided by the real-scene object extraction unit 1002, determine the real-scene object the gaze direction points at; it assigns a high association value to the real-scene object pointed at by the user's gaze and a low association value to real-scene objects not pointed at. As another example, the user behavior shooting and recognition unit 1004 extracts the spatial position of the user's hand from the pictures of the user and maps that position into the imaging-space coordinates of the parallax-adjusted real-scene image provided by the real-scene object extraction unit 1002; it then determines the spatial distance between the user's hand position and the position of each real-scene object in the real-scene image, assigning a high association value to the corresponding real-scene object when that distance is zero (indicating contact) or close to a certain extent, and a low association value to real-scene objects far from the user's hand position. Throughout the user's viewing of the virtual reality display, the user behavior shooting and recognition unit 1004 routinely extracts and analyzes the user's behavior actions in real time, and counts a cumulative total association value for each real-scene object in the form of an accumulated count.
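A minimal sketch of this association bookkeeping is given below, assuming per-frame gaze and hand observations have already been mapped into the imaging-space coordinates of the adjusted real-scene image; the increment values and the closeness threshold are illustrative assumptions.

```python
import math

HIGH, LOW = 5, 1          # illustrative per-frame association increments
TOUCH_RADIUS = 0.15       # metres; illustrative "close" threshold

def update_associations(totals, objects, gaze_target_id, hand_pos):
    """Accumulate per-object total association values for one frame.

    totals:  dict object_id -> accumulated association count
    objects: dict object_id -> (x, y, z) position in imaging space
    """
    for oid, pos in objects.items():
        # Gaze: high value for the object the gaze points at, low otherwise.
        totals[oid] = totals.get(oid, 0) + (HIGH if oid == gaze_target_id else LOW)
        # Hand: high value on contact or near-contact, low otherwise.
        dist = math.dist(hand_pos, pos)
        totals[oid] += HIGH if dist <= TOUCH_RADIUS else LOW
    return totals
```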
The virtual scene generation unit 1005 generates the initial virtual-scene image with the user's viewing angle as the imaging angle of the virtual-scene image. The initial virtual-scene image is the original scene imagined according to the needs of the virtual reality application, such as a game, a film or simulated training, and the virtual-scene image is composed of at least one virtual image object. For example, if the virtual-scene image shows a banquet scene in a castle room, it contains virtual image objects such as a long table, candlesticks, dinner plates, ornamental bouquets and wall murals; these are entirely fabricated according to the plot, and the pictures of these objects are generated as imaged video using the user's viewing angle.
The influence weight computing unit 1006 calculates the influence weight of each real-scene object on the user's perception according to the spatial position the object occupies in the real-scene image under the user's viewing angle and according to the total association value between the user's behavior actions and the object; it also calculates the influence weight of the real-environment parameters on the user's perception according to predetermined empirical criteria. Human visual perception tends to concentrate on the main target of attention and to neglect secondary targets in the field of view. While the user watches the virtual reality display from his or her viewing angle, the real-scene objects visible in the field of view, although generally ignored because the user's attention is concentrated on the virtual scene, can still exert a conscious or subconscious influence on the user's perception; the influence weight referred to here reflects the degree of influence each real-scene object in the field of view exerts on the user's perception. In general, the central region of the field of view is the main target of attention, while the peripheral region holds only secondary targets. The influence weight computing unit 1006 therefore obtains and analyzes the real-scene image under the user's viewing angle and judges how close the spatial position of each real-scene object is to the central region under the user's viewing angle. In the real-scene image under the user's viewing angle shown in Fig. 2, the dashed box indicates the central region under the user's viewing angle; apart from the display 2001, which is not treated as a real-scene object, real-scene objects 2002 and 2003 are closer to the central region than real-scene objects 2004 and 2005, so the influence weight computing unit 1006 can define higher initial influence weights for real-scene objects 2002 and 2003 and lower initial influence weights for real-scene objects 2004 and 2005.
However, as stated in the background of this application, the degree to which the various objects of the real environment influence the user's perception does not always depend on their spatial position under the user's viewing angle. While watching a main target, the user's gaze is not directed at the main target continuously, but cyclically switches between pointing at the main target, pointing at secondary targets around it, and pointing back at the main target. The periods during which the gaze points at the main target are markedly longer, and those at surrounding secondary targets markedly shorter; yet during the stage in which the gaze points at surrounding secondary targets, a secondary target pointed at with higher frequency influences the user's perception more than secondary targets pointed at with lower frequency, because although the user focuses attention chiefly on the main target, he or she may subconsciously attend to that secondary target. Therefore, although real-scene objects 2002 and 2003 are close to the central region, if analysis shows that the user's gaze points at real-scene object 2004 more frequently than at real-scene objects 2002 and 2003, then real-scene object 2004 actually influences the user's perception more than real-scene objects 2002 and 2003. In addition, touch is an important factor attracting the user's perception: if the user's limbs (usually the hands) frequently contact or approach a real-scene object, the influence of that object on the user's perception remains markedly greater even if it is far from the central region of the field of view. For these reasons, the influence weight computing unit 1006 obtains from the user behavior shooting and recognition unit 1004 the total association value between the user's behavior actions and each real-scene object, which embodies the degree of association between the user's gaze direction and/or hand and each real-scene object; the influence weight computing unit 1006 corrects the initial influence weight of each real-scene object using the total association value, i.e. it computes a correction factor from the total association value, the correction factor growing with the association value, and multiplies the initial influence weight by the correction factor to obtain the influence weight of each real-scene object on the user's perception. For example, suppose the initial influence weight of real-scene objects 2002 and 2003 in Fig. 2 is 10 and that of real-scene objects 2004 and 2005 is 5; the association value obtained from the user behavior shooting and recognition unit 1004 is highest for real-scene object 2004, intermediate for real-scene object 2002, and low for real-scene objects 2003 and 2005; the correction factors computed from those association values are 2 for real-scene object 2004, 0.8 for real-scene object 2002, and 0.5 for real-scene objects 2003 and 2005. Finally, the influence weight of each real-scene object is determined as: 10 for real-scene object 2004, 8 for real-scene object 2002, 5 for real-scene object 2003, and 2.5 for real-scene object 2005.
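The worked numeric example above can be reproduced with a short sketch; the mapping from total association value to correction factor is an illustrative assumption (the text only requires that the factor grow with the association value).

```python
def influence_weight(initial_weight: float, correction: float) -> float:
    """Influence weight = initial weight (from closeness to the central
    region) x correction factor (from the total association value)."""
    return initial_weight * correction

# Values from the worked example in the text (Fig. 2 objects):
initial = {2002: 10, 2003: 10, 2004: 5, 2005: 5}
correction = {2002: 0.8, 2003: 0.5, 2004: 2.0, 2005: 0.5}

weights = {oid: influence_weight(initial[oid], correction[oid])
           for oid in initial}
# -> {2002: 8.0, 2003: 5.0, 2004: 10.0, 2005: 2.5}
```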
The influence that the various factors of the real environment exert on the user's perception also matters greatly for achieving immersion. The influence weight computing unit 1006 can weigh the influence of the real-environment parameters on the user's perception according to predetermined empirical criteria. For example, the real-environment parameter acquisition unit 1003 can provide the illumination, ambient brightness, sound, temperature, humidity, weather and body forces of the user's real environment as environmental parameters, and the influence weight computing unit 1006 assigns different influence weight values to these parameters according to predetermined empirical criteria. For example, ambient brightness and weather may be given an influence weight of 6, and ambient illumination, temperature and humidity an influence weight of 3; sound is dynamically weighted according to its decibel level, the higher the decibel level of the ambient sound, the larger its influence weight, assumed here to be 4; body force is treated similarly, the slighter the body force, the smaller its influence weight, assumed in this example to also be 4.
The fusion target discrimination unit 1007 selects, according to the influence weights of the real-scene objects and real-environment parameters determined by the influence weight computing unit 1006, the real-scene objects and real-environment factors with higher influence weights as the fusion targets to be merged with the virtual scene. In this example, suppose the fusion target discrimination unit 1007 sets the selection threshold at greater than or equal to 5; then real-scene object 2004, real-scene object 2002, real-scene object 2003, ambient brightness and weather are selected as fusion targets, while the other real-scene objects and real-environment parameters are ignored and not fused into the virtual scene. This also prevents numerous and miscellaneous real-scene objects and factors whose actual influence is small from being fused in and weakening the user's attention to the virtual scene.
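Continuing the example, a minimal sketch of the selection step follows, combining the object weights computed above with the empirically assigned environment-parameter weights; the threshold of 5 follows the text, while the helper mapping decibels to a sound weight is an illustrative assumption.

```python
def sound_weight(decibels: float) -> float:
    """Illustrative monotone mapping: louder ambient sound, larger weight."""
    return min(10.0, decibels / 15.0)

object_weights = {2002: 8.0, 2003: 5.0, 2004: 10.0, 2005: 2.5}
env_weights = {"brightness": 6, "weather": 6, "illumination": 3,
               "temperature": 3, "humidity": 3,
               "sound": sound_weight(60.0),   # 60 dB -> weight 4, per the example
               "body_force": 4}               # empirical values from the text

THRESHOLD = 5
fusion_targets = {key: w
                  for key, w in {**object_weights, **env_weights}.items()
                  if w >= THRESHOLD}
# -> objects 2004, 2002 and 2003, plus ambient brightness and weather
```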
The real-scene object modeling unit 1008 establishes and renders a real-scene object model for each real-scene object serving as a fusion target, as its initial real-scene object image. As a preferred solution, the real-scene object modeling unit 1008 determines the object type of the real-scene object serving as a fusion target, extracts the predetermined model template corresponding to that object type, and substitutes the scale parameters of the real-scene object into the predetermined model template to obtain the real-scene object model. In this example, it has been determined that real-scene objects 2004, 2002 and 2003 are to be fused; suppose real-scene object 2004 is a vase, real-scene object 2002 is a cup and real-scene object 2003 is a mobile phone, while the virtual-scene image still shows the banquet scene in a castle room. If we directly added the real images of real-scene objects 2004, 2002 and 2003 into the virtual scene, or modeled their three-dimensional geometry directly from the real images and fused the models into the virtual scene, the visual effect of these real-scene objects 2004, 2002 and 2003 would clash with the style and appearance of the virtual-scene image; the appearance of the mobile phone in particular would seriously destroy the user's sense of immersion in virtual reality. To avoid this problem, the present invention proposes that the real-scene object modeling unit 1008 first judges, with a target type discrimination algorithm, the object types of the real-scene objects 2004, 2002 and 2003 serving as fusion targets; it can automatically recognize that real-scene object 2004 is a vase, real-scene object 2002 is a cup and real-scene object 2003 is a mobile phone. Further, the real-scene object modeling unit 1008 has a preset article modeling template library containing predetermined model templates corresponding to various common object types; each predetermined model template belongs to the same or an approximate type as the object type of the real-scene object, and the visual effect of the predetermined model template is coordinated with the visual effect of the virtual-scene image. In this example, once the object type of real-scene object 2004 is determined to be a vase, a predetermined model template of the vase type can be extracted from the article modeling template library; the predetermined model template may be an original three-dimensional model of a vase whose form and rendered visual effect are better coordinated with the visual effect of the castle room, for example with a more classical pattern. The scale parameters of real-scene object 2004 are then extracted from the image and substituted into the predetermined model template, that is, the size of the original three-dimensional vase model is adjusted to match real-scene object 2004, so as to obtain the real-scene object model as the initial real-scene object image. The same processing can also be executed for real-scene objects 2002 and 2003. In particular, for real-scene object 2003, whose object type is a mobile phone, the corresponding predetermined model template can take the form of a classic telephone, thereby masking its visual incompatibility with the virtual scene.
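A minimal sketch of this preferred modeling path follows: classify the object, look up a style-compatible template, and scale it to the measured dimensions. The classifier, the template library, and the model interface are all hypothetical stand-ins for the components described above.

```python
def build_object_model(object_image, template_library, classifier):
    """Type -> template -> scale substitution, as described above."""
    obj_type = classifier.predict(object_image)    # e.g. "vase", "cup", "phone"
    template = template_library.lookup(obj_type)   # style-matched 3D template
    scale = measure_scale(object_image)            # scale parameters from image
    model = template.instantiate(scale)            # substitute scale parameters
    return model.render()                          # initial real-scene object image

def measure_scale(object_image):
    """Illustrative placeholder: derive rough dimensions from the object's
    bounding box in the adjusted real-scene image (a numpy array)."""
    h, w = object_image.shape[:2]
    return (w, h, min(w, h))
```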
The real-scene object model reconstruction unit 1009 calculates, according to the spatial position occupied under the user's viewing angle by a real-scene object serving as a fusion target, the minimum distance between that object and the virtual image objects. As shown in Fig. 3, the real-scene object model reconstruction unit 1009 analyzes the real-scene image provided by the real-scene object extraction unit 1002 and, from the spatial positions of real-scene objects 2004, 2002 and 2003 in the real scene under the user's viewing angle, determines the spatial positions to which they map in the virtual scene constructed with the same user viewing angle; the virtual-scene image also contains one or more virtual-scene objects, such as 3001, 3002 and 3003 in Fig. 3. The real-scene object model reconstruction unit 1009 can analyze the mutual distances between real-scene objects 2004, 2002 and 2003 and virtual-scene objects 3001-3003 under the user's viewing angle, and determine for each real-scene object the minimum spatial distance among its distances to the surrounding virtual-scene objects. When that minimum distance is less than a predetermined distance threshold, the real-scene object is so close to a virtual-scene object that it tends to attract the user's attention excessively; therefore, the scale of its real-scene object model is reduced according to the influence weight of the real-scene object, the larger the influence weight, the greater the reduction, and the reduced real-scene object model is re-rendered. During re-rendering, the brightness, color saturation and texture complexity of the real-scene object model can be reduced according to the minimum spatial distance: the smaller the minimum spatial distance and the larger the influence weight of the real-scene object, the greater the reduction of brightness, color saturation and texture complexity, so as to generate the final real-scene object image.
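A minimal sketch of this reconstruction rule follows: find each fused object's minimum distance to the virtual-scene objects and, when it falls below the threshold, shrink and visually attenuate the model more strongly for higher weights and smaller distances. The attenuation formula and the model attributes are illustrative assumptions consistent with the monotone behavior described above.

```python
import math

def reconstruct(model, obj_pos, virtual_positions, weight,
                dist_threshold=1.0, max_weight=10.0):
    """Shrink and attenuate a real-scene object model that sits too close
    to virtual-scene objects, more strongly for higher influence weights."""
    d_min = min(math.dist(obj_pos, v) for v in virtual_positions)
    if d_min >= dist_threshold:
        return model.render()                 # no reconstruction needed
    # Larger weight and smaller distance -> stronger reduction.
    reduction = (weight / max_weight) * (1.0 - d_min / dist_threshold)
    model.scale *= (1.0 - 0.5 * reduction)    # reduce the model's scale
    model.brightness *= (1.0 - reduction)     # dim during re-rendering
    model.saturation *= (1.0 - reduction)     # desaturate
    model.texture_detail *= (1.0 - reduction) # simplify texture
    return model.render()                     # final real-scene object image
```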
The virtual scene adjustment unit 1010 determines, according to a real-environment factor serving as a fusion target, the degree of matching between that real-environment factor and the initial virtual-scene image; when the degree of matching does not satisfy a predefined matching threshold, the display parameters and relevant scenario settings of the virtual-scene image are adjusted. In this example, ambient brightness and weather serve as the real-environment factors that are fusion targets. The virtual scene adjustment unit 1010 takes the difference between the overall mean brightness of the virtual scene and the ambient brightness value as the degree of matching, and when the difference is too large, turns the overall brightness of the virtual scene up or down to reduce the difference. The virtual scene adjustment unit 1010 also judges, according to the weather in the real environment, whether the weather setting in the virtual scene matches it; for example, if the weather in the real environment is sunny while the weather in the current virtual scene is set to rainy, then, where the plot allows, the weather setting in the virtual scene model can be changed to sunny; the virtual scene generation unit 1005 can then call the relevant weather background imaging model according to the modified weather setting and reconstruct the virtual scene.
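A minimal sketch of this matching logic for the two fused environment factors of the example follows; the brightness tolerance, the adjustment step, and the scene interface are illustrative assumptions.

```python
def adjust_virtual_scene(scene, env, brightness_tol=0.2):
    """Match virtual-scene brightness and weather to the real environment."""
    # Brightness: compare the scene's overall mean brightness with the
    # measured ambient brightness and narrow the gap when it is too large.
    diff = scene.mean_brightness() - env.brightness
    if abs(diff) > brightness_tol:
        scene.set_brightness(scene.mean_brightness() - diff / 2)
    # Weather: where the plot allows, rewrite the scene's weather setting
    # and let the scene generator rebuild the weather background model.
    if scene.weather != env.weather and scene.plot_allows_weather_change():
        scene.weather = env.weather
        scene.rebuild_weather_background()
    return scene
```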
The scene fusion unit 1011 projects the final real-scene object image into the imaging space of the virtual-scene image according to the spatial position occupied under the user's viewing angle by the real-scene object serving as a fusion target, so as to constitute a display image together with the virtual-scene image.
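Finally, a minimal sketch of the composition step: each final real-scene object image is placed at its mapped position in the shared imaging space and drawn in depth order together with the virtual-scene objects; the renderable interface is a hypothetical stand-in.

```python
def compose_display(virtual_objects, real_object_images):
    """Merge virtual-scene objects and final real-scene object images
    into one display list, drawn back-to-front by depth."""
    renderables = list(virtual_objects) + list(real_object_images)
    # Each renderable carries the position it occupies under the shared
    # user viewing angle; sort by depth so nearer items draw last.
    renderables.sort(key=lambda r: r.position[2], reverse=True)
    frame = []
    for r in renderables:
        frame.append(r.draw())   # draw into the common imaging space
    return frame
```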
The present invention further provides a method for fusing a virtual scene with a real scene, characterized by comprising:
a real-scene shooting step, of shooting a real-scene image at a shooting angle approximating the user's viewing angle, and providing a parameter indicating the shooting angle of the real-scene image;
a real-scene object extraction step, of adjusting the real-scene image with the user's viewing angle as reference, so that the imaging angle of the adjusted real-scene image is consistent with the user's viewing angle, and identifying and extracting real-scene objects from the adjusted real-scene image;
a real-environment parameter acquisition step, of acquiring real-environment parameters of the real environment in which the user is located;
a user behavior shooting and recognition step, of shooting real-time pictures of the user, identifying and extracting the user's behavior actions therefrom, and determining the degree of association between the user's behavior actions and the real-scene objects;
a virtual scene generation step, of generating an initial virtual-scene image with the user's viewing angle as the imaging angle of the virtual-scene image, the virtual-scene image containing at least one virtual image object;
an influence weight calculation step, of calculating the influence weight of each real-scene object on the user's perception according to the spatial position the real-scene object occupies in the real-scene image under the user's viewing angle and according to the degree of association between the user's behavior actions and the real-scene object, and calculating the influence weight of the real-environment parameters on the user's perception according to predetermined empirical criteria;
a fusion target discrimination step, of selecting, according to the influence weights, the real-scene objects and real-environment factors with higher influence weights as the fusion targets to be merged with the virtual scene;
a real-scene object model establishment step, of establishing and rendering a real-scene object model for each real-scene object serving as a fusion target, as its initial real-scene object image;
a real-scene object model reconstruction step, of calculating, according to the spatial position occupied under the user's viewing angle by a real-scene object serving as a fusion target, the minimum distance between that object and the virtual image objects; when the minimum distance is less than a distance threshold, reducing the scale of the real-scene object model according to the influence weight of the real-scene object, and re-rendering the reduced real-scene object model to generate the final real-scene object image;
a virtual scene adjustment step, of determining, according to a real-environment factor serving as a fusion target, the degree of matching between that real-environment factor and the initial virtual-scene image, and adjusting the virtual-scene image when the degree of matching does not satisfy a predefined matching threshold;
a scene fusion step, of placing the final real-scene object image into the imaging space of the virtual-scene image according to the spatial position occupied under the user's viewing angle by the real-scene object serving as a fusion target, so as to constitute a display image together with the virtual-scene image.
Thus, on the basis of imaging-angle consistency, the present invention selectively fuses a subset of the real-scene objects with the virtual-scene objects: which real-scene objects are fused is decided according to their degree of influence on the user's perception, real-scene objects with high perceptual influence are fused with priority, and real-scene objects whose perceptual influence falls below a certain level are not fused, which prevents the fused real-scene objects from becoming so numerous and miscellaneous that they weaken the user's concentration on the virtual scene. The virtual scene is regulated according to the real-environment parameters, markedly improving the harmony between the virtual scene and the various environmental factors of the real world, and the visual effect of the fused real-scene objects in the virtual reality image is highly compatible with the virtual scene. The invention therefore allows the user to perceive part of the real-environment objects and factors while effectively enhancing the immersion produced when the user watches the virtual reality display.
The above embodiments are intended only to illustrate the present invention, not to limit it. Persons of ordinary skill in the relevant technical field may make various changes and modifications without departing from the spirit and scope of the present invention; all equivalent technical solutions therefore also belong to the scope of the present invention, and the scope of patent protection of the present invention shall be defined by the claims.
Claims (10)
1. A model system for fusing a virtual scene with a real scene, characterized by comprising:
a real-scene shooting unit, for shooting a real-scene image at a shooting angle approximating the user's viewing angle, and providing a parameter indicating the shooting angle of the real-scene image;
a real-scene object extraction unit, for adjusting the real-scene image with the user's viewing angle as reference, so that the imaging angle of the adjusted real-scene image is consistent with the user's viewing angle, and for identifying and extracting real-scene objects from the adjusted real-scene image;
a real-environment parameter acquisition unit, for acquiring real-environment parameters of the real environment in which the user is located;
a user behavior shooting and recognition unit, for shooting real-time pictures of the user, identifying and extracting the user's behavior actions therefrom, and determining the degree of association between the user's behavior actions and the real-scene objects;
a virtual scene generation unit, for generating an initial virtual-scene image with the user's viewing angle as the imaging angle of the virtual-scene image, the virtual-scene image containing at least one virtual image object;
an influence weight computing unit, for calculating the influence weight of each real-scene object on the user's perception according to the spatial position the real-scene object occupies in the real-scene image under the user's viewing angle and according to the degree of association between the user's behavior actions and the real-scene object, and for calculating the influence weight of the real-environment parameters on the user's perception according to predetermined empirical criteria;
a fusion target discrimination unit, for selecting, according to the influence weights, the real-scene objects and real-environment factors whose influence weights are greater than or equal to a selection threshold as the fusion targets to be merged with the virtual scene;
a real-scene object modeling unit, for establishing and rendering a real-scene object model for each real-scene object serving as a fusion target, as its initial real-scene object image;
a real-scene object model reconstruction unit, for calculating, according to the spatial position occupied under the user's viewing angle by a real-scene object serving as a fusion target, the minimum distance between that object and the virtual image objects; when the minimum distance is less than a distance threshold, reducing the scale of the real-scene object model according to the influence weight of the real-scene object, and re-rendering the reduced real-scene object model to generate the final real-scene object image;
a virtual scene adjustment unit, for determining, according to a real-environment factor serving as a fusion target, the degree of matching between that real-environment factor and the initial virtual-scene image, and adjusting the virtual-scene image when the degree of matching does not satisfy a predefined matching threshold;
a scene fusion unit, for placing the final real-scene object image into the imaging space of the virtual-scene image according to the spatial position occupied under the user's viewing angle by the real-scene object serving as a fusion target, so as to constitute a display image together with the virtual-scene image.
2. The model system for fusing a virtual scene with a real scene according to claim 1, characterized in that the user behavior shooting and recognition unit uses eye-tracking technology to determine the user's gaze direction from the captured real-time pictures of the user and to determine the real scene object at which the gaze direction points; a high association value is assigned to the real scene object pointed at by the user's gaze direction, and a low association value is assigned to the real scene objects not pointed at by the gaze direction; a corresponding total association value is accumulated for each real scene object in the form of a stored count value.
3. The model system for fusing a virtual scene with a real scene according to claim 1, characterized in that the user behavior shooting and recognition unit extracts the spatial position of the user's hand from the captured real-time pictures of the user and maps that position into the imaging space coordinates of the real scene image; it then determines the spatial distance between the position of the user's hand and the spatial position of each real scene object in the real scene image; when the spatial distance is zero or within a predetermined proximity range, a high association value is assigned to the corresponding real scene object; when the spatial distance is not within the predetermined proximity range, a low association value is assigned to the corresponding real scene object; a corresponding total association value is accumulated for each real scene object in the form of a stored count value.
4. The model system for fusing a virtual scene with a real scene according to claim 2 or 3, characterized in that the influence weight computing unit judges how close the spatial position occupied by each real scene object in the real scene image lies to the central region of sight under the user's viewing angle, and defines an initial influence weight value for each real scene object according to that degree of closeness; furthermore, it obtains the user's behavior actions and the total association value of each real scene object from the user behavior shooting and recognition unit, and corrects the initial influence weight value of each real scene object using the total association value, thereby obtaining the influence weight of each real scene object on the user's perception.
5. The model system for fusing a virtual scene with a real scene according to claim 4, characterized in that the real scene object modeling unit determines the object type of the real scene object serving as a fusion target, extracts the predetermined model template corresponding to that object type, and substitutes the scale parameters of the real scene object into the predetermined model template to obtain the real scene object model.
6. A method for fusing a virtual scene with a real scene, characterized by comprising:
a real scene shooting step of shooting a real scene image at a shooting angle that approximates the user's viewing angle and providing parameters representing the shooting angle of the real scene image;
a real scene object extraction step of adjusting the real scene image with reference to the user's viewing angle so that the imaging angle of the adjusted real scene image is consistent with the user's viewing angle, and identifying and extracting real scene objects from the adjusted real scene image;
a real environment parameter acquisition step of acquiring the real environment parameters of the real environment in which the user is located;
a user behavior shooting and identification step of shooting real-time pictures of the user, identifying and extracting the user's behavior actions from them, and determining the degree of association between the user's behavior actions and the real scene objects;
a virtual scene generation step of generating an initial virtual scene image whose imaging angle is the user's viewing angle, the virtual scene image containing at least one virtual image object;
an influence weight calculation step of calculating the influence weight of each real scene object on the user's perception according to the spatial position the real scene object occupies in the real scene image under the user's viewing angle and according to the degree of association between the user's behavior actions and the real scene object, and calculating the influence weight of the real environment parameters on the user's perception according to predetermined empirical criteria;
a fusion target discrimination step of selecting, according to the influence weights, those real scene objects and real environment factors whose influence weight is greater than or equal to a selection threshold as the fusion targets to be fused with the virtual scene;
a real scene object model establishment step of establishing and rendering a real scene object model for each real scene object serving as a fusion target, thereby obtaining an initial real scene object image;
a real scene object model reconstruction step of calculating, from the spatial position occupied under the user's viewing angle by a real scene object serving as a fusion target, the minimum distance between that object and the virtual image objects, and, when the minimum distance is less than a distance threshold, reducing the scale of the real scene object model according to the influence weight of the real scene object and re-rendering the reduced model to generate the final real scene object image;
a virtual scene adjustment step of determining, for a real environment factor serving as a fusion target, the degree of matching between the real environment factor and the initial virtual scene image, and adjusting the virtual scene image when the matching degree does not satisfy a predefined matching threshold;
a scene fusion step of placing the final real scene object image into the imaging space of the virtual scene image according to the spatial position occupied, under the user's viewing angle, by the real scene object serving as the fusion target, so that the real scene object image and the virtual scene image jointly constitute the displayed image.
7. The method for fusing a virtual scene with a real scene according to claim 6, characterized in that the user behavior shooting and identification step uses eye-tracking technology to determine the user's gaze direction from the captured real-time pictures of the user and to determine the real scene object at which the gaze direction points; a high association value is assigned to the real scene object pointed at by the user's gaze direction, and a low association value is assigned to the real scene objects not pointed at by the gaze direction; a corresponding total association value is accumulated for each real scene object in the form of a stored count value.
8. The method for fusing a virtual scene with a real scene according to claim 6, characterized in that the user behavior shooting and identification step extracts the spatial position of the user's hand from the captured real-time pictures of the user and maps that position into the imaging space coordinates of the real scene image; it then determines the spatial distance between the position of the user's hand and the spatial position of each real scene object in the real scene image; when the spatial distance is zero or within a predetermined proximity range, a high association value is assigned to the corresponding real scene object; when the spatial distance is not within the predetermined proximity range, a low association value is assigned to the corresponding real scene object; a corresponding total association value is accumulated for each real scene object in the form of a stored count value.
9. The method for fusing a virtual scene with a real scene according to claim 7 or 8, characterized in that the influence weight calculation step judges how close the spatial position occupied by each real scene object in the real scene image lies to the central region of sight under the user's viewing angle, and defines an initial influence weight value for each real scene object according to that degree of closeness; furthermore, it obtains the user's behavior actions and the total association value of each real scene object from the user behavior shooting and identification step, and corrects the initial influence weight value of each real scene object using the total association value, thereby obtaining the influence weight of each real scene object on the user's perception.
10. The method for fusing a virtual scene with a real scene according to claim 9, characterized in that the real scene object model establishment step determines the object type of the real scene object serving as a fusion target, extracts the predetermined model template corresponding to that object type, and substitutes the scale parameters of the real scene object into the predetermined model template to obtain the real scene object model.
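The following Python sketches illustrate, in a non-authoritative way, the mechanisms the claims describe. First, the association-value accumulation of claims 2 and 3 (and their method counterparts, claims 7 and 8); the HIGH/LOW values and the proximity radius are assumptions, since the claims leave the concrete values open:

```python
import math

HIGH, LOW = 1.0, 0.0          # assumed association values per observation
PROXIMITY_RADIUS = 0.15       # assumed "predetermined proximity" in metres

def accumulate_gaze(totals, gazed_object_id, all_object_ids):
    """Claims 2/7: the object the gaze points at gets a high value,
    every other object gets a low value; totals are running counts."""
    for oid in all_object_ids:
        totals[oid] = totals.get(oid, 0.0) + (HIGH if oid == gazed_object_id else LOW)

def accumulate_hand(totals, hand_pos, object_positions):
    """Claims 3/8: objects the hand touches or nearly touches get a high
    value, all others a low value."""
    for oid, pos in object_positions.items():
        close = math.dist(hand_pos, pos) <= PROXIMITY_RADIUS
        totals[oid] = totals.get(oid, 0.0) + (HIGH if close else LOW)

totals = {}
accumulate_gaze(totals, "cup", ["cup", "desk", "poster"])
accumulate_hand(totals, (0.0, 0.0, 0.0), {"cup": (0.05, 0.0, 0.0), "desk": (1.0, 0.0, 0.5)})
print(totals)  # {'cup': 2.0, 'desk': 0.0, 'poster': 0.0}
```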
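Claims 4 and 9 derive an initial influence weight from the object's closeness to the central region of sight and then correct it with the accumulated total association value. A linear falloff in normalized image coordinates and an additive correction are one plausible reading, not the claimed formula:

```python
import math

def initial_weight(obj_pos_2d, view_center=(0.5, 0.5)):
    """Closeness to the sight centre in normalized image coordinates:
    1.0 at the centre, falling to ~0 at the image corners."""
    d = math.dist(obj_pos_2d, view_center)
    return max(0.0, 1.0 - d / 0.71)  # 0.71 ~ centre-to-corner distance

def corrected_weight(w0, total_association, gain=0.1):
    """Correct the initial weight with the object's total association
    value; the additive form and the gain are assumptions."""
    return min(1.0, w0 + gain * total_association)

w0 = initial_weight((0.6, 0.5))                     # slightly off-centre object
print(corrected_weight(w0, total_association=3.0))  # gaze/hand contact boosts it
```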
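Claims 5 and 10 instantiate a real scene object model by filling a predetermined template, selected by object type, with the object's measured scale parameters. The template contents below are invented purely for illustration:

```python
TEMPLATES = {  # hypothetical predetermined model templates per object type
    "cup":  {"mesh": "cylinder", "height": None, "radius": None},
    "desk": {"mesh": "box", "width": None, "depth": None, "height": None},
}

def build_model(object_type, scale_params):
    """Look up the template for the object type and substitute the
    measured scale parameters of the real scene object into it."""
    template = dict(TEMPLATES[object_type])
    template.update(scale_params)
    return template

print(build_model("cup", {"height": 0.12, "radius": 0.04}))
```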
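The model reconstruction unit/step of claims 1 and 6 shrinks a fusion target's model when it comes too close to a virtual image object, with the amount of reduction governed by the object's influence weight. The concrete reduction curve below is an assumption; shrinking high-weight objects less is one reading of the claims:

```python
def reconstruct_scale(min_dist, weight, dist_threshold=0.3):
    """Return a scale factor for the real scene object model. When the
    minimum distance to any virtual image object is below the threshold,
    shrink the model, shrinking less for objects the user cares more
    about (higher influence weight)."""
    if min_dist >= dist_threshold:
        return 1.0                                  # no reduction needed
    crowding = 1.0 - min_dist / dist_threshold      # 0..1, worse when closer
    return max(0.2, 1.0 - crowding * (1.0 - weight))

print(reconstruct_scale(min_dist=0.1, weight=0.5))  # ~0.67: render at 2/3 scale
```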
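Finally, the virtual scene adjustment of claims 1 and 6 checks a fused real environment factor against the initial virtual scene and adjusts the scene when the matching degree falls below the predefined threshold. The brightness parameter and the half-step correction below are illustrative assumptions only:

```python
def adjust_virtual_scene(env_brightness, scene_brightness, matching_threshold=0.8):
    """Compute a matching degree between a real environment factor and the
    virtual scene, and pull the scene toward the factor when the match is
    too poor; values are assumed normalized to [0, 1]."""
    matching_degree = 1.0 - abs(env_brightness - scene_brightness)
    if matching_degree < matching_threshold:
        scene_brightness += 0.5 * (env_brightness - scene_brightness)
    return scene_brightness

print(adjust_virtual_scene(0.2, 0.9))  # dim the scene toward the dark room: 0.55
```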
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610681884.0A CN106354251B (en) | 2016-08-17 | 2016-08-17 | A kind of model system and method that virtual scene is merged with real scene |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106354251A CN106354251A (en) | 2017-01-25 |
CN106354251B true CN106354251B (en) | 2019-04-02 |
Family
ID=57844180
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610681884.0A Expired - Fee Related CN106354251B (en) | 2016-08-17 | 2016-08-17 | A kind of model system and method that virtual scene is merged with real scene |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106354251B (en) |
Families Citing this family (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106980983A (en) | 2017-02-23 | 2017-07-25 | 阿里巴巴集团控股有限公司 | Service authentication method and device based on virtual reality scenario |
CN107066092B (en) * | 2017-03-20 | 2020-04-03 | 上海大学 | VR running space dynamic detection and parameterized virtual scene reconstruction system and method |
US10366540B2 (en) | 2017-03-23 | 2019-07-30 | Htc Corporation | Electronic apparatus and method for virtual reality or augmented reality system |
CN106896925A (en) * | 2017-04-14 | 2017-06-27 | 陈柳华 | The device that a kind of virtual reality is merged with real scene |
CN107320960A (en) * | 2017-06-29 | 2017-11-07 | 厦门游亨世纪科技有限公司 | A kind of game of mobile terminal and virtual reality conversion method |
CN107665133A (en) * | 2017-09-04 | 2018-02-06 | 北京小鸟看看科技有限公司 | Wear the loading method of the Run-time scenario of display device and wear display device |
CN108320333B (en) * | 2017-12-29 | 2022-01-11 | 中国银联股份有限公司 | Scene adaptive virtual reality conversion equipment and virtual reality scene adaptive method |
CN110852770B (en) * | 2018-08-21 | 2023-05-26 | 阿里巴巴集团控股有限公司 | Data processing method and device, computing device and display device |
CN109040619A (en) * | 2018-08-24 | 2018-12-18 | 合肥景彰科技有限公司 | A kind of video fusion method and apparatus |
CN109192111A (en) * | 2018-10-19 | 2019-01-11 | 林辉 | A kind of guide system that the scenic spot based on AR technology is visited |
CN109920063B (en) * | 2019-03-11 | 2023-04-28 | 中船第九设计研究院工程有限公司 | Construction method of ship segmented storage yard guiding system |
CN110188482B (en) * | 2019-05-31 | 2022-06-21 | 魔门塔(苏州)科技有限公司 | Test scene creating method and device based on intelligent driving |
CN110400334A (en) * | 2019-07-10 | 2019-11-01 | 佛山科学技术学院 | A kind of virtual reality fusion emulation experiment collecting method and system based on registration |
CN112242004B (en) * | 2019-07-16 | 2023-09-01 | 华中科技大学 | AR scene virtual engraving method and system based on illumination rendering |
CN110931111A (en) * | 2019-11-27 | 2020-03-27 | 昆山杜克大学 | Autism auxiliary intervention system and method based on virtual reality and multi-mode information |
CN113268014B (en) * | 2020-02-14 | 2024-07-09 | 阿里巴巴集团控股有限公司 | Carrier, facility control method, device, system and storage medium |
CN111862866B (en) * | 2020-07-09 | 2022-06-03 | 北京市商汤科技开发有限公司 | Image display method, device, equipment and computer readable storage medium |
CN112053446B (en) * | 2020-07-11 | 2024-02-02 | 南京国图信息产业有限公司 | Real-time monitoring video and three-dimensional scene fusion method based on three-dimensional GIS |
CN112419509A (en) * | 2020-11-27 | 2021-02-26 | 上海影创信息科技有限公司 | Virtual object generation processing method and system and VR glasses thereof |
CN112738366B (en) * | 2020-12-14 | 2022-08-26 | 上海科致电气自动化股份有限公司 | Test type camera shooting array control system and method |
CN113797525B (en) * | 2020-12-23 | 2024-03-22 | 广州富港生活智能科技有限公司 | Novel game system |
CN112698800B (en) * | 2020-12-29 | 2022-09-30 | 卡莱特云科技股份有限公司 | Method and device for recombining display sub-pictures and computer equipment |
CN112819968B (en) * | 2021-01-22 | 2024-04-02 | 北京智能车联产业创新中心有限公司 | Test method and device for automatic driving vehicle based on mixed reality |
CN113672084B (en) * | 2021-08-03 | 2024-08-16 | 歌尔科技有限公司 | AR display picture adjusting method and system |
CN114047817B (en) * | 2021-10-15 | 2023-04-07 | 中邮通建设咨询有限公司 | Virtual reality VR interactive system of meta universe |
CN117934777B (en) * | 2024-01-26 | 2024-08-30 | 扬州自在岛生态旅游投资发展有限公司 | Space arrangement system and method based on virtual reality |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102227748A (en) * | 2008-10-03 | 2011-10-26 | 3M创新有限公司 | Systems and methods for multi-perspective scene analysis |
CN102667811A (en) * | 2010-03-08 | 2012-09-12 | 英派尔科技开发有限公司 | Alignment of objects in augmented reality |
CN102446192A (en) * | 2010-09-30 | 2012-05-09 | 国际商业机器公司 | Method and device for estimating attention in virtual world |
WO2013119221A1 (en) * | 2012-02-08 | 2013-08-15 | Intel Corporation | Augmented reality creation using a real scene |
CN102945564A (en) * | 2012-10-16 | 2013-02-27 | 上海大学 | True 3D modeling system and method based on video perspective type augmented reality |
CN104599243A (en) * | 2014-12-11 | 2015-05-06 | 北京航空航天大学 | Virtual and actual reality integration method of multiple video streams and three-dimensional scene |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | |
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20190402; Termination date: 20190817 |