CN107957775A - Data object interaction method and device in virtual reality space environment - Google Patents
- Publication number: CN107957775A
- Application number: CN201610909519.0A
- Authority: CN (China)
- Prior art keywords: sight, user, shop, information, interaction area
- Legal status: Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/06—Buying, selling or leasing transactions
- G06Q30/0601—Electronic shopping [e-shopping]
- G06Q30/0641—Shopping interfaces
- G06Q30/0643—Graphical representation of items or shoppers
Abstract
The embodiments of the present application disclose a data object interaction method and device in a virtual reality space environment, including: a client provides a virtual reality space environment inside a shop object, the space environment including at least one interactive data object; determining a user's sight focus position; and, when the sight focus position enters the interaction area of a target data object and the dwell time reaches a preset time threshold, providing the information content associated with the target data object. The embodiments of the present application can improve user experience.
Description
Technical field
The present application relates to the technical field of data object interaction in virtual reality space environments, and in particular to a data object interaction method and device in a virtual reality space environment.
Background

In existing e-commerce transaction systems, interaction with users is typically based on PCs, notebooks, mobile terminal devices and the like. The process of shopping online is essentially one of selecting, from a series of commodity picture lists, the commodities that meet one's own shopping needs, and then placing an order and completing the purchase online. Although online shopping allows users to shop without leaving home, compared with shopping in a real shopping mall, this online shopping process often seems not vivid enough.

Therefore, how to make the online shopping flow closer to real offline shopping is a technical problem that needs to be solved by those skilled in the art.
Summary of the invention

The present application provides a data object interaction method and device in a virtual reality space environment, which can improve user experience.

The present application provides the following solutions:
A data object interaction method in a virtual reality space environment, comprising:

a client providing a virtual reality space environment inside a shop object, the space environment including at least one interactive data object;

determining a user's sight focus position; and

when the sight focus position enters the interaction area of a target data object and the dwell time reaches a preset time threshold, providing the information content associated with the target data object.
A data object interaction method in a virtual reality space environment, comprising:

a server storing virtual reality space environment data of the inside of a shop object, the space environment data including at least one interactive data object, each interactive data object corresponding to an interaction area and being associated with preset information content; and

providing the virtual reality space environment data to a client, so that the client provides the virtual reality space environment inside the shop object, determines the user's sight focus position, and, when the sight focus position enters the interaction area of a target data object and the dwell time reaches a preset time threshold, provides the information content associated with the target data object.
A data object interaction method in an augmented reality environment, comprising:

a client obtaining a three-dimensional space model of the inside of a physical shop, the physical shop containing a plurality of goods items;

after spatially matching the three-dimensional space model with the physical shop through a preset augmented reality (AR) device, providing interaction area information corresponding to the goods items within the user's field of view of the AR device;

determining the user's sight focus; and

when the sight focus position enters the interaction area of a target goods item and the dwell time reaches a preset time threshold, providing, within the user's field of view of the AR device, interaction response information of the target data object associated with the interaction area.
An interaction method in an augmented reality environment, comprising:

a server storing a three-dimensional space model of the inside of a physical shop, the physical shop containing a plurality of goods items, the interaction areas corresponding to the goods items, and the interaction response information of the data object associated with each interaction area; and

providing the three-dimensional space model to a client, so that the client, after spatially matching the three-dimensional space model with the physical shop through a preset augmented reality (AR) device, provides the interaction area information corresponding to the goods items within the user's field of view of the AR device, determines the user's sight focus, and, when the sight focus position enters the interaction area of a target goods item and the dwell time reaches a preset time threshold, provides, within the user's field of view of the AR device, the interaction response information of the target data object associated with the interaction area.
A data object interaction device in a virtual reality space environment, applied to a client, comprising:

a first virtual reality environment providing unit, configured to provide a virtual reality space environment inside a shop object, the space environment including at least one interactive data object;

a first sight focus determination unit, configured to determine the user's sight focus position; and

a first content providing unit, configured to provide the information content associated with a target data object when the sight focus position enters the interaction area of the target data object and the dwell time reaches a preset time threshold.
A data object interaction device in a virtual reality space environment, applied to a server, comprising:

a first virtual reality environment data storage unit, configured to store virtual reality space environment data of the inside of a shop object, the space environment data including at least one interactive data object, each interactive data object corresponding to an interaction area and being associated with preset information content; and

a first virtual reality environment data providing unit, configured to provide the virtual reality space environment data to a client, so that the client provides the virtual reality space environment inside the shop object, determines the user's sight focus position, and, when the sight focus position enters the interaction area of a target data object and the dwell time reaches a preset time threshold, provides the information content associated with the target data object.
A data object interaction device in an augmented reality environment, applied to a client, comprising:

a first model obtaining unit, configured to obtain a three-dimensional space model of the inside of a physical shop, the physical shop containing a plurality of goods items;

a first interaction area information providing unit, configured to provide, after the three-dimensional space model is spatially matched with the physical shop through a preset augmented reality (AR) device, the interaction area information corresponding to the goods items within the user's field of view of the AR device;

a first sight focus determination unit, configured to determine the user's sight focus; and

a first interaction response providing unit, configured to provide, within the user's field of view of the AR device, the interaction response information of the target data object associated with the interaction area when the sight focus position enters the interaction area of a target goods item and the dwell time reaches a preset time threshold.
An interaction device in an augmented reality environment, applied to a server, comprising:

a first model storage unit, configured to store a three-dimensional space model of the inside of a physical shop, the physical shop containing a plurality of goods items, the interaction areas corresponding to the goods items, and the interaction response information of the data object associated with each interaction area; and

a first model providing unit, configured to provide the three-dimensional space model to a client, so that the client, after spatially matching the three-dimensional space model with the physical shop through a preset augmented reality (AR) device, provides the interaction area information corresponding to the goods items within the user's field of view of the AR device, determines the user's sight focus, and, when the sight focus position enters the interaction area of a target goods item and the dwell time reaches a preset time threshold, provides, within the user's field of view of the AR device, the interaction response information of the target data object associated with the interaction area.
According to the specific embodiments provided by the present application, the present application discloses the following technical effects:

Through the embodiments of the present application, a virtual reality space environment inside a shop object can be provided, so that the user can obtain the experience of being placed in a three-dimensional space environment. Moreover, the specific interaction flow can be realized through a "sight fusing" mechanism: the user only needs to turn his or her head to interact, without external input devices such as handles. Therefore, the virtual reality space environment can be made closer to an offline physical shop, improving user experience.

In addition, during the interaction, a series of solutions are also provided for various problems, which can make the interaction flow smoother.

Of course, any product implementing the present application does not necessarily need to achieve all of the above advantages at the same time.
Brief description of the drawings

In order to more clearly illustrate the technical solutions in the embodiments of the present application or in the prior art, the drawings needed in the embodiments are briefly described below. Obviously, the drawings in the following description are only some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a schematic diagram of the system architecture provided by the embodiments of the present application;
Fig. 2 is a flowchart of the first method provided by the embodiments of the present application;
Figs. 3-1 and 3-2 are first interface schematic diagrams provided by the embodiments of the present application;
Figs. 4-1 to 4-3 are second interface schematic diagrams provided by the embodiments of the present application;
Fig. 5 is a third interface schematic diagram provided by the embodiments of the present application;
Figs. 6-1 to 6-8 are fourth interface schematic diagrams provided by the embodiments of the present application;
Fig. 7 is a schematic diagram of a control position provided by the embodiments of the present application;
Fig. 8 is a schematic diagram of a menu position provided by the embodiments of the present application;
Figs. 9 to 27 are flowcharts of other methods provided by the embodiments of the present application;
Figs. 28 to 47 are schematic diagrams of devices provided by the embodiments of the present application.
Detailed description of the embodiments

The technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. Based on the embodiments in the present application, all other embodiments obtained by those of ordinary skill in the art without creative effort belong to the protection scope of the present application.
In the embodiments of the present application, in order to make the user's online shopping flow closer to the experience of shopping in an actual offline physical shop, the space environment inside a shop object can be provided by way of virtual reality. The environment can include multiple interactive data objects (for example, commodity items), and the user can browse and interact with the data objects in this virtual reality space environment. Since the virtual reality space environment has a three-dimensional stereoscopic effect, the data objects the user sees are no longer simple data object lists; instead, as in an offline physical shop, the user can see a three-dimensional space containing shelves and the like, data objects can be placed on the shelves, and the user can take a specific data object off the shelf to check its details, and so on. The experience can therefore be closer to actually shopping in an offline physical shop. In addition, in the embodiments of the present application, in order to further improve user experience, it may not be necessary to use external input devices such as handles during the interaction between the user and the system; instead, the user's sight focus can be captured, and a "sight fusing" mechanism is then used to judge the user's intention and in turn realize the interaction with the user. In this way, without the assistance of external input devices such as handles, the user can obtain an experience closer to shopping in an offline physical shop, and the user's sense of immersion is enhanced.
To realize the above technical solution, in terms of software, it can be implemented with a client-server architecture, where the server is mainly used to provide data support, for example, to store the virtual reality space environment data of multiple shop objects and supply it to the client. The client is mainly used for the front-end display of data and for interacting with the user. The client can be an independent application, or a function module in a larger application, for example, a function module in mobile terminal Apps such as "Mobile Taobao" or "Taobao", etc.
In terms of hardware, a virtual reality device can be used. This device can be an "all-in-one" virtual reality device that integrates storage, computing, screen display and other functions, or it can be an "external" or "portable" virtual reality device that only provides the screen display function. For an all-in-one virtual reality device, since it has its own storage and computing capabilities, the aforementioned client program can be installed or built directly into the device. An "external" virtual reality device depends on external equipment such as a PC (personal computer), so the client program can be installed on the PC and the external virtual reality device connected to it, to realize the virtual reality space environment and the interaction. A "portable" virtual reality device needs to be used together with an intelligent mobile terminal device such as a mobile phone, for example a "VR glasses box": when in use, the mobile phone or other intelligent mobile terminal device is put into the glasses box, the specific storage, computing and other functions are completed by the intelligent mobile terminal device, and the "VR glasses box" is only used to realize the screen display. Therefore, referring to Fig. 1, for this type of virtual reality device, the client program can be installed or built into the intelligent mobile terminal device, the intelligent mobile terminal device is then connected with the portable virtual reality device, and the two cooperate to realize the functions in the embodiments of the present application.
The specific implementation provided by the embodiments of the present application is first introduced in detail below from the perspective of the aforementioned client.
Embodiment one

Embodiment one first provides a data object interaction method in a virtual reality space environment. Referring to Fig. 2, the method may include the following steps:
S201: the client provides a virtual reality space environment inside a shop object, the space environment including at least one interactive data object.
Here, the so-called shop object may refer to a "shop" opened by a merchant or the like on an online sales platform. In the embodiments of the present application, the shop object can also be associated with an offline physical shop, for example a supermarket, a clothing brand shop, and so on. In a specific implementation, the simulated space environment inside the shop can be generated by modeling, according to the layout, commodity display and so on of the offline shop. Alternatively, in order to further improve the authenticity of the environment, the space environment inside the shop object can also be established by recording video in the offline physical shop; when the space environment inside the shop is provided to the user in the virtual reality device, it can be presented by playing this video, so that the user can obtain a shopping experience closer to reality.
Specifically, since the virtual reality space environment presented to the user is provided by the recorded video, and in the recorded video each frame is a single, undivided picture, the recorded video can be further processed so that the user can interact with the data objects in the picture and subsequently check details, make purchases, and so on. The specific processing is to mark the data objects appearing in the video. For example, at the position where a certain commodity appears in the video picture, identification information such as the ID of the commodity can be entered by mouse click or the like; in this way, during subsequent interaction with the user, the detail information and other content corresponding to the commodity ID can be determined through the database kept in the background. Through this marking operation, the data objects appearing in the video pictures become interactive. When the space environment inside the shop object is provided to the user, a mark point can be displayed at the position of a data object, for example a blue dot shown on a commodity. This mark point prompts the user that the corresponding data object is interactive, and also prompts the user that, when he or she needs to interact with a certain data object, the line of sight can be directed to the mark point of that data object, so that the system can recognize the user's intention.
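For illustration only, the annotation produced by this marking step could be stored as simple records mapping a commodity ID to where its mark point sits in the video scene; the field names and the frame-range field below are assumptions, not part of the patent.

```typescript
// Illustrative sketch (not the patent's data format): one annotation record per
// marked commodity, linking a commodity ID to the position of its mark point.
interface MarkPointAnnotation {
  commodityId: string;                           // ID entered by the operator, e.g. via mouse click
  frameRange: [number, number];                  // assumed: frames in which the item is visible
  position: { x: number; y: number; z: number }; // centre of the mark point in the scene's 3D coordinates
  interactionBox: {                              // interaction area around the mark point
    min: { x: number; y: number; z: number };
    max: { x: number; y: number; z: number };
  };
}

// The background database can then resolve a commodityId to its detail
// information once the user's sight focus later "fuses" on the mark point.
const annotations: MarkPointAnnotation[] = [];
```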
In a specific implementation, in order to display the above mark point information, referring to Fig. 3-1, in the embodiments of the present application an operation layer can be provided between the environment layer and the screen layer. That is, the user's field of view actually contains three layers. The bottom layer is the environment layer, in which the video picture of the inside of the shop object is shown. Above the environment layer is the operation layer, in which the mark points added to data objects, the response information of subsequent interactive operations and so on can be displayed. The operation layer is static relative to the environment layer, that is, the relative position relationship between the two does not change, so that the label information on the operation layer can always be accurately displayed on the corresponding data object in the environment layer. Of course, in a specific implementation, except for the parts showing information such as mark points, the other regions of the operation layer can be transparent; that is to say, the user does not need to perceive the existence of the operation layer.
In addition, in the embodiments of the present application, in order to provide a 360-degree stereoscopic image of a data object and give the stereoscopic image a realistic display effect, the early-stage data processing can also include: photographing the goods item corresponding to a specific data object from multiple shooting angles, and saving the photos taken from the multiple angles. In this way, when the user later needs to check the stereoscopic image from multiple angles, it can be provided from the saved photos. For example, in one implementation, 60 photos can be saved for each data object, so that the user can view the picture details of the data object from multiple angles.
S202: determine the user's sight focus position.

After the user enters the space environment inside the shop object, the user's intention can be identified and the interaction with the user realized. When identifying the user's intention, the embodiments of the present application do not need to rely on external input devices such as handles, nor on rays emitted by the VR device; instead, the user's intention can be recognized by identifying the user's sight focus position.
In a specific implementation, an "aiming point" can be set at a preset position of the screen layer. The position of the aiming point is fixed relative to the screen layer and can represent the user's sight focus. For example, as shown in Fig. 3-1, the aiming point can be set at the center of the screen layer. It should be noted that a VR device normally constructs the "depth of field" by simulating the "binocular vision" of the human eye, so as to present a three-dimensional stereoscopic display effect. That is, the reason people can see stereoscopic scenery is that the two human eyes each see things independently: there is a distance between the left and right eyes, which causes a slight difference in their viewing angles, and this difference produces a small displacement between the scenery seen by each eye. The difference between the left-eye and right-eye images is called parallax, and the human brain very skillfully fuses the images of the two eyes, producing a stereoscopic visual effect with a sense of space. Therefore, the screen layer in the embodiments of the present application can also be divided into left and right parts (horizontally dividing the terminal device screen into left and right halves), and an aiming point can be set at the center of each part. In addition, in a specific implementation, the user can be guided to hold the phone horizontally; the picture on the screen is divided into two screens, showing the corresponding pictures for the left and right eyes respectively, which are then projected into the eyes through the convex lenses of the VR glasses box, so that what the user sees appears three-dimensional. Since the screen layer changes position with the rotation of the user's head in all directions (up, down, left, right), the aiming point also tracks this change; the operation layer can capture the displacement of the aiming point to determine the user's sight direction, and then map it to a specific position on the operation layer, which can serve as the user's sight focus.
A specific implementation of determining the user's sight focus is described in detail below.

First, it is still assumed that the virtual reality device is a portable device such as a VR glasses box, used together with a mobile terminal device such as a mobile phone, with the client installed on the mobile terminal device. All graphics are presented on the phone screen, and the graphical content includes: the virtual reality scene (environment layer), the GUI (operation layer) and the client controls (screen layer).

The environment layer lies on a three-dimensional sphere. The operation layer consists of the controls used in the scene to show information and perform operations; it is formed by drawing, superimposed on specified coordinates of the scene, and can be bound to functions to trigger corresponding actions. The screen layer consists of the controls drawn on the topmost layer of the mobile application; it is unrelated to the virtual reality scene and remains stationary relative to the phone screen and the head.

The real-scene video shot in the offline physical shop can be spherically mapped onto the spherical space, and a three-dimensional coordinate system can be established on this sphere, so that each pixel of each frame of the video has fixed three-dimensional coordinates. The operation layer includes information panels, buttons, goods indication points and the like, which are positioned in this three-dimensional coordinate system when they appear. For example, before the mark point of a data object is drawn, the data object in the scene is first obtained, and the coordinates of its center point are used as the coordinates of the data object's mark point, and so on. The screen layer includes the sight aiming point located at the center of the screen, a return button and other setting buttons. The most important of these is the sight aiming point: it is static relative to the screen, and after the phone is inserted into the VR glasses box, it follows the rotation of the head.
In order to determine the sight focus, a terminal device such as a mobile phone can be equipped with a head-tracking module. This module tracks the position of the sight aiming point at the center of the phone screen, and can be implemented using the phone's gyroscope or accelerometer (existing smart phones are generally equipped with the corresponding functions). A gyroscope is effectively a gyro inside the phone whose axis, due to the gyroscopic effect, always stays parallel to its initial direction, so the actual direction can be calculated from the deviation from the initial direction. The reference standard of the gyroscope's measurement is a gyro rotating about a direction perpendicular to the ground, and the result is obtained from the angle between the device and the gyro. Because the acceleration of the phone's movement is not high and there is no great requirement on accuracy, an accelerometer can also be used instead of a gyroscope.

The role of the gyroscope is to perceive, at any time, the positional and angular offset of the user's head in three-dimensional space. These data are then applied to the picture through calculation, and the picture is adjusted correspondingly. In brief, if the user turns the head to the left, this result, based on the rotation angle and the rate curve of the turn, is transmitted to the program, and the program shifts the picture by the corresponding angle and speed.
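As a minimal illustration only (not the patent's implementation, and assuming a browser-based client rather than the native app the patent describes, which would read the gyroscope directly), head tracking could be driven by device orientation events; the `applyHeadRotation` helper is hypothetical.

```typescript
// Illustrative sketch: feed device-orientation angles into the view so the
// picture follows the user's head. The helper below is a hypothetical hook
// into the renderer; only the event wiring is shown.
function applyHeadRotation(yawDeg: number, pitchDeg: number): void {
  // Rotate the spherical environment layer (and hence the aiming point's
  // mapping into it) by the head's yaw and pitch.
}

window.addEventListener("deviceorientation", (e: DeviceOrientationEvent) => {
  const yaw = e.alpha ?? 0;   // rotation about the vertical axis
  const pitch = e.beta ?? 0;  // rotation about the left-right axis
  applyHeadRotation(yaw, pitch);
});
```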
Thus, suppose the distance from the sight aiming point on the phone screen to the environment sphere is r. Initially, the default coordinates of the sight focus point are P0(x0, y0, z0). When the head turns left by an angle α, the coordinates of the current sight focus point P in the environment can be calculated from r and α:

x = x0 − r·sin α
y = y0
z = z0 − r·sin α·tan α

From the coordinates of P, the program can accurately judge whether the point has entered the interaction area of an interactive data object, and if so, trigger the corresponding process. Each interactive data object corresponds, within its interaction area (which can be a rectangular region), to one group of coordinates consisting of four coordinate points; if the focus position coordinates of the sight aiming point fall within one such group of coordinates, the corresponding data object is the selected data object.
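The following sketch is an illustration under the assumptions above, not the patent's code: it computes the gaze point from r and α using the formulas given, and tests it against rectangular interaction areas (here reduced to their min/max corners).

```typescript
// Illustrative sketch of the gaze-point calculation and rectangular hit test
// described above. p0 and r come from the scene setup; alpha is the head's
// leftward rotation in radians.
interface Vec3 { x: number; y: number; z: number; }

function gazePoint(p0: Vec3, r: number, alpha: number): Vec3 {
  return {
    x: p0.x - r * Math.sin(alpha),
    y: p0.y,
    z: p0.z - r * Math.sin(alpha) * Math.tan(alpha),
  };
}

// An interaction area is treated here as an axis-aligned box spanned by its
// corner points; only min/max are needed for the containment test.
interface InteractionArea { objectId: string; min: Vec3; max: Vec3; }

function hitTest(p: Vec3, areas: InteractionArea[]): InteractionArea | undefined {
  return areas.find(a =>
    p.x >= a.min.x && p.x <= a.max.x &&
    p.y >= a.min.y && p.y <= a.max.y &&
    p.z >= a.min.z && p.z <= a.max.z);
}
```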
The stereoscopic rendering can be handled by a dedicated engine; in short, the phone only needs to be placed into the VR glasses box to produce stereoscopic, immersive vision through the box. The lenses in the VR glasses box correct the image seen by each eye through focusing and secondary imaging, changing the angles of the 2D images to imitate the appearance of a stereoscopic 3D image, and finally the image on the phone screen is displayed through the head-mounted device.

In traditional computer graphics technology, the change of the visual field is realized by mouse or keyboard, so the user's visual system and motion perception system are separated. In the embodiments of the present application, head tracking is used to change the viewing angle of the image, so the user's visual system and motion perception system are connected, and the user's experience can be made more lifelike.
Furthermore, the user interface in a virtual reality space environment is very different from the flat user interface of devices such as traditional mobile phones. In a virtual reality space environment, the environment behind the screen is a wide three-dimensional space, and the user can change the displayed part of the environment by moving or rotating the virtual reality device, so the content the user sees may change continuously. However, this may lead to a less positive consequence: after being in the space environment for a while, because the content seen is too rich and cluttered, the user may forget how to operate. To avoid this situation, in the embodiments of the present application, follow-type task guidance information can also be provided on the screen layer. The guidance information is static relative to the aiming point; that is, no matter how the user rotates the VR device, the guidance information always stays within the user's field of view and remains visible. For example, as shown at 301 in Fig. 3-2, "Aim at (the mark point) to check commodity details", etc. This approach is suitable for the virtual environment space because, in the embodiments of the present application, the interaction in the virtual space is completed through the aiming point; guidance information that is static relative to the aiming point therefore does not affect normal interaction, while at the same time it prevents the user from getting lost in the wide environment and lets the user know at any time what to do next.
S203: when the sight focus position enters the interaction area of a target data object, and the dwell time reaches a preset time threshold, provide the information content associated with the target data object.

When it is found that the user's sight focus has entered the interaction area of a certain target data object, the "fusing" mechanism can be triggered: a timer starts counting, and if the dwell time within the interaction area reaches a certain time threshold (for example, 3 s), the object can be determined to be the one the user wishes to interact with, and the information content associated with the target data object, for example the detail information of the target data object, is provided.
It can be seen that, through the embodiments of the present application, the space environment inside a shop object can be provided by way of virtual reality, containing multiple data objects that can be interacted with, and the user's interaction intention can be determined by capturing the sight focus together with the time-fusing mechanism, without resorting to external input devices such as handles, so that the user obtains an experience closer to shopping in a physical shop.
It should be noted that, in practical applications, some preparatory work can also be carried out before the aforementioned step S201. For example, taking a "portable" virtual reality device (for example, a VR glasses box) as an example, the client provided by the embodiments of the present application can be installed on a mobile terminal device such as a mobile phone, and the phone is then put into the VR glasses box. In addition, there may be preliminary configuration operations such as logging in and entering a shipping address. In the embodiments of the present application, to ensure that the user can shop smoothly after putting on the VR glasses box, the complicated configuration operations can be moved up front, to before the user puts on the glasses, and can include: logging in, adding a shipping address, and so on. For example, when the user browses the "Double 11 venue" page through the "Mobile Taobao" client on the phone, finds the relevant virtual reality entrance and enters through it, it can be judged at that point whether the user has logged in and whether a shipping address has been added; if not, the user can be guided to log in and add a shipping address. Furthermore, the user can be reminded to place the phone horizontally, and a preset short animation can be shown. It should be noted that, in practical applications, besides judging whether the user has logged in, it can also be judged whether the type of mobile terminal device (such as the phone) used by the user is supported, and if not, a prompt can be given. It can likewise be judged whether the current client version is supported, and if not, the user can be guided to upgrade the client version, and so on. In the final step of this flow, the phone can be put into the VR glasses box.
Since there may be multiple shop objects displayed in the virtual reality manner, after the phone is inserted into the VR glasses box, a main navigation space, that is, a space environment for selecting a shop object, can be entered first. It is equivalent to the homepage of a website; the difference is that in this system the main navigation space is a three-dimensional space, which the user can explore by turning the head up, down, left and right. In one specific implementation, as shown in Fig. 4-1, the initial sight position in the main navigation space can be a photo wall, where each dynamic photo on the wall represents a shop on a street in a certain city of a certain country in the world. The content shown in a dynamic photo can likewise be a truly recorded storefront video, including, for example, the street scene outside the shop front.
In order to allow the user to select a target shop object in the main navigation space, an operation layer can likewise be added between the environment layer and the screen layer of this main navigation space. As shown in Fig. 4-2, in this operation layer, "bounding boxes" corresponding to the position and size of the dynamic picture of each shop object can be set, and each "bounding box" serves as the interaction area of the corresponding shop object. In this way, the user can trigger entry into the space environment inside a shop object by placing the sight focus within the corresponding interaction area of that shop object and keeping it there for a period of time.
It should be noted that, because the time-fusing mechanism is adopted, a period of preset length (for example, 3 seconds) must elapse between the moment the user's sight focus enters the interaction area and the moment entry into the target shop object is actually triggered. During this period, in order to let the user perceive that the target shop he or she is paying attention to has been captured by the system, a preset progress-state change animation can start playing within the interaction area when the sight focus position enters the interaction area of the target shop object, and when the preset time threshold is reached, the progress state changes to the completed state. For example, as shown in Fig. 4-3, when the user's sight focus enters the interaction area of a certain target shop object, a "water filling animation" can start to be shown within the interaction area of that shop object; the injected "water" rises over time until the time threshold set in the fusing mechanism is reached, at which point the "water" is full and entry into the inner space of that target shop object is triggered. In this way, the user can directly see which target shop object is about to be entered, and will not move the sight away before the time threshold is reached.
When the time threshold is reached and entry into a certain target shop object is triggered, a prompt message about going to the shop object can be provided on the operation layer of the main navigation space environment, at the position corresponding to the target shop object, for example, "Going to Japan - Akihabara", etc. Of course, in practical applications, more text can also be shown, for example the actual geographical location information of the corresponding offline physical shop, and so on.
After entry into the target shop object is triggered, the client can request the virtual reality space environment data of the target shop object from the server, and perform operations such as parsing and rendering. A period of time may therefore pass from triggering entry into the target shop to actually displaying the interface of the space environment inside the target shop. In the embodiments of the present application, during this time, the main navigation space environment can also be changed into an interlude video associated with the selected shop object, where the interlude video is used to show the scenery that could be watched on the way to the offline physical shop corresponding to the selected shop object. For example, a "space-time shuttle" effect can be shown: the user feels as if leaving the current navigation room, getting onto a vehicle, arriving at the street outside the specific shop, and finally entering the shop door, etc. Since the geographical locations of the offline physical shops corresponding to different shop objects differ, each shop object can correspond to a different interlude video, so that the user can obtain a more realistic experience.
After the rendering of the space environment inside the shop object is completed, the user can view the multiple interactive data objects displayed in the three-dimensional space. To facilitate user interaction, a mark point can be displayed at the position of each interactive object, such as the dots shown on the operation layer in Fig. 3. When the user's sight focus enters the region of such a mark point and the dwell time reaches the preset time threshold, the information content associated with the target data object can be provided.
It should be noted that, in practical applications, in order to avoid occluding the data object, the area of this mark point is usually small; especially when the data object is far from the user's visual starting point, the area of the mark point may be even smaller. Therefore, in order to let the user perceive that the data object at the sight focus has actually been sensed by the system, and that it only remains to wait for the dwell time of the fusing mechanism to be reached before the interaction is triggered, the mark point can be enlarged when the sight focus position is aimed at the mark point of the target data object, a progress-state change animation can be provided within the region of the enlarged mark point, and when the preset time threshold is reached, the progress state changes to the completed state.
In a specific implementation, as shown in Fig. 5, each mark point can carry a rectangular bounding box (the bounding box is larger than the mark point, but can be invisible, that is, transparent, to the user). In Fig. 5, the leftmost image is the initial state; when the user's sight focus enters the bounding box (as shown in the second image from the left), the mark point is enlarged to fill the bounding box, for example as shown in the third image from the left, and the fusing process is triggered. After fusing for a certain time (for example, 1 s), the system judges that the user intends to check the details of the data object, and the detail information of the data object can be shown. During the fusing process, a progress-state change animation can likewise be provided within the region of the enlarged mark point (for example, as shown in the rightmost image of Fig. 5, a water filling animation can be played), and when the preset time threshold is reached, the progress state changes to the completed state.
The relevant information provided for a target data object can be the detail information of the target data object. For example, in one specific implementation, it can include stereoscopic image information of the target data object and textual description information. As shown in Fig. 6-1, the detail information can be shown through a process of expanding "divergently" from small to large from the position of the data object's mark point. Moreover, the detail information can be shown with the stereoscopic image information at the center and the textual description information displayed around the stereoscopic image. That is, the stereoscopic image information and the text information can be shown separately, and first operation controls for rotating the stereoscopic image can be provided. When the user's sight focus enters the region of a first operation control, the stereoscopic image can be rotated, so that stereoscopic images of the data object from multiple viewing angles can be shown. The first operation controls can include multiple controls for rotating in different directions, for example two rotation directions, clockwise and counter-clockwise, etc.
For example, as shown in Fig. 6-2, after the user's sight focus is aimed at a certain alarm clock, the detail information of the alarm clock is shown (divided into left and right pictures, corresponding to the binocular vision of the human eye). It can be seen that the detail information includes the stereoscopic image of the alarm clock located at the center, at 601, with multiple items of detail information such as the title and price of the data object shown around it, at 602. Moreover, below the stereoscopic image in the center, two rotation controls 603 are additionally provided, used respectively to rotate the stereoscopic image clockwise and counter-clockwise. It should be noted that the embodiments of the present application do not limit the specific detail information content shown in Fig. 6-2; the legibility of the text or picture information in the figure therefore does not affect the protection scope of the embodiments of the present application.
As mentioned above, the stereoscopic image of the data object shown in the detail information can be generated from photos obtained by photographing the actual goods item; the photos can be taken from multiple viewing angles and synthesized to obtain a three-dimensional stereoscopic display effect, and the viewing angle can be changed by rotating the stereoscopic image, so that the details of the data object can be understood comprehensively from multiple angles.
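As an illustrative sketch only (assuming the 60-photo capture mentioned earlier, with the photos ordered by shooting angle), rotating the stereoscopic image can amount to stepping through the pre-captured views; the class and helper names are assumptions.

```typescript
// Illustrative sketch: rotate the displayed item by stepping through the
// pre-captured multi-angle photos (assumed here to be 60 views, 6 degrees apart).
class RotatableItemView {
  private index = 0;

  constructor(private photos: string[],                 // URLs of the saved multi-angle photos
              private show: (url: string) => void) {}   // hypothetical display helper

  rotate(clockwise: boolean): void {
    const n = this.photos.length;                       // e.g. 60
    this.index = (this.index + (clockwise ? 1 : -1) + n) % n;
    this.show(this.photos[this.index]);
  }
}

// While the gaze fuse reports the sight focus resting on one of the rotation
// controls, rotate() can be called repeatedly to spin the item in that direction.
```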
It should be noted that, in practical applications, when the above detail information about the target data object is shown, a first information panel can be generated first, and the detail information is then shown in the first information panel. The area of the information panel can be smaller than the user's field of view. To make it easy to exit the display of the detail information of the current data object, the embodiments of the present application also provide a corresponding implementation. Specifically, the user's sight focus can likewise be tracked; when the user's sight focus position moves away from the information panel, the information panel can be closed, and correspondingly the detail information of the current data object is no longer shown, so that the user can continue to browse other data objects. That is, in this manner, there is no need to set operation controls such as an "X button" in the information panel; the user only needs to move the sight focus out of the region where the information panel is located to trigger the closing of the information panel, which makes the user's operation more convenient.
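A small illustrative sketch of this dismiss-on-gaze-exit behavior is shown below; the names are assumptions, and the panel bounds would come from the operation layer.

```typescript
// Illustrative sketch: close the detail panel as soon as the sight focus
// leaves its region (no explicit close button needed).
interface Rect { left: number; top: number; right: number; bottom: number; }

function updateDetailPanel(panel: { bounds: Rect; close: () => void },
                           focus: { x: number; y: number }): void {
  const inside =
    focus.x >= panel.bounds.left && focus.x <= panel.bounds.right &&
    focus.y >= panel.bounds.top  && focus.y <= panel.bounds.bottom;
  if (!inside) panel.close();   // gaze moved away: dismiss and resume browsing
}
```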
In addition, when the detail information of the data object is provided, a second operation control for performing a purchase operation on the target data object can also be provided, such as the "Buy now" button shown at 604 in Fig. 6-2. When the user's sight focus enters the region where the second operation control is located and the dwell time reaches the preset time threshold, information content related to order confirmation can be provided. That is, if the user, after checking the details of the target data object, thinks that the data object meets his or her needs, an intention to buy it can arise; the user can then aim the sight focus at the region where the second operation control is located, and after the sight focus has stayed there for a certain time, the system can recognize this intention and provide the relevant information for order confirmation.
Since the above second operation control belongs to the main flow, a special design can also be made for it in the embodiments of the present application, so that it is more prominent in the interaction. Specifically, the second operation control can be a multi-layer structure arranged along the depth-of-field direction; when the user's sight focus enters the region where the second operation control is located, the multi-layer structure moves along the depth-of-field direction toward the visual starting point (that is, toward the human eye), so that the second operation control is visually magnified, the control blends better with the environment, and a sense of spatial three-dimensionality is created.
For example, as shown in Fig. 6-3, the operation control is composed of multiple UI (user interface) layers arranged in sequence along the depth-of-field direction. Suppose that, along the depth-of-field direction, toward the visual starting point is the positive direction and away from the visual starting point is the negative direction; then the topmost layer in the positive direction can be used to display specific text, for example "Buy now", etc. When the system detects that the user's sight focus has entered the control region, as shown in Fig. 6-4, the top UI layer immediately moves a distance of k1 millimeters in the positive direction, and each layer beneath it moves in the positive direction in turn; the effect presented is that the control enlarges immediately, the text leaps out vividly, and the layering of the whole control is distinct. When the UI layers move, the displacement of the topmost layer can be the largest, and the displacements of the remaining layers decrease step by step, that is, k1 > k2 > k3 > k4. While this stereoscopic control gives feedback to the sight focus, if the sight is not moved away, the fusing process starts; after a certain time (for example, 3 s), "Buy now" is triggered and the "Place order" page is entered.
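For illustration (a sketch under the layout described above, not the patent's code), the per-layer offsets can simply decrease with layer depth so the top layer pops forward the most:

```typescript
// Illustrative sketch: when the gaze enters the "Buy now" control, push each
// UI layer toward the viewer, with the top layer moving the most (k1 > k2 > ...).
function layerOffsetsOnGaze(layerCount: number, k1: number): number[] {
  const offsets: number[] = [];
  for (let i = 0; i < layerCount; i++) {
    // Top layer (i = 0) gets the full displacement; deeper layers get less.
    offsets.push(k1 * (layerCount - i) / layerCount);
  }
  return offsets;   // e.g. layerCount = 4, k1 = 8 gives [8, 6, 4, 2] millimetres
}
```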
The so-called "Place order" page is the page carrying the order confirmation information, and it can be shown through a second information panel. When confirming an order, many items of information may need to be determined, including the shipping address, the quantity of the data object to be purchased, whether to use virtual resources such as a "coupon", and so on. Operation controls for confirming each item of information can therefore be provided in this page.

These can include a third operation control for modifying order-related information. For example, suppose the user needs to modify the current default shipping address; this can be triggered through the third operation control. Specifically, in the embodiments of the present application, it can again be triggered by sight fusing: when the user's sight focus enters the region where the third operation control is located and the dwell time reaches the preset time threshold, a third information panel is provided, and the content for modifying the order-related information is provided in the third information panel. If the user selects another shipping address, the third information panel is automatically closed; if, after browsing the shipping addresses, the user does not need to modify anything, the third information panel can be closed through an operation control such as an "X button" in the third information panel, and at this point the display of the second information panel continues. Given these characteristics, in the embodiments of the present application, the relationship between the second information panel and the third information panel can also be defined as that between a "parent panel" and a "child panel"; that is to say, the display of the "child panel" needs to be triggered from the "parent panel", and after the "child panel" exits, the display returns to the "parent panel".
For this case, the panel display effects can also be designed accordingly. Specifically, when the third information panel is provided, the second information panel is moved away from the visual starting point along the depth-of-field direction and the third information panel is displayed on top; after the information modification is completed or the third information panel is closed, the second information panel is restored to its actual state. That is, referring to Fig. 6-5, when the child panel is brought up by sight fusing, the parent panel recedes in the depth direction and can gradually blur, while the child panel appears on top, so that the area of the child panel is larger than that of the parent panel and the parent panel may be partially occluded. After the child panel is exited, the parent panel moves forward in the depth direction again and its display is restored.
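An illustrative sketch of this parent/child panel behavior follows; the depth offset and blur amount are arbitrary example values, not taken from the patent.

```typescript
// Illustrative sketch: opening a child panel pushes the parent panel back in
// depth and blurs it; closing the child panel restores the parent.
interface Panel { depth: number; blur: number; visible: boolean; }

function openChildPanel(parent: Panel, child: Panel): void {
  parent.depth -= 0.5;   // recede away from the visual starting point (example value)
  parent.blur = 0.6;     // gradually blur the parent while it is occluded
  child.visible = true;  // child appears on top, larger than the parent
}

function closeChildPanel(parent: Panel, child: Panel): void {
  child.visible = false;
  parent.depth += 0.5;   // move forward again
  parent.blur = 0;       // restore the parent's normal display
}
```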
The order page can also include operation controls for determining the quantity of the data object, that is, fourth operation controls for increasing/decreasing the quantity of the target data object, as shown in Fig. 6-6. When the sight focus position enters a fourth operation control and the dwell time reaches the preset time threshold, a plus-one/minus-one operation is performed on the quantity of the target data object on the basis of the initial value. In practical applications, the user may need to purchase several pieces and therefore has to operate this fourth operation control several times in succession. Without special treatment, the user would need to aim the sight focus at the fourth operation control, wait 3 s for the quantity to increase by 1, move the sight focus away from the control, re-enter the control, and so on, repeating this many times. Obviously this takes more time and may also make the user feel it is not convenient enough.
For this reason, in the embodiments of the present application, the following special treatment can also be carried out: if the sight focus position enters the fourth operation control and still does not leave after the dwell time reaches the preset time threshold, the plus-one/minus-one operation on the quantity of the target data object continues to be performed, and when it is performed repeatedly, the interval between executions is shorter than the preset time threshold. That is, when the user's sight focuses on this fourth operation control (for example, the control performing the "increase" operation), the first fusing process is triggered; after the preset time threshold has elapsed, the quantity of the data object is increased by 1 on the original basis. After that, if the sight does not leave, a second fusing is triggered; after each fusing, the quantity is again increased by 1, and the duration of each subsequent fusing process can be shortened, for example 3 s the first time, 2 s the second time, 1 s the third time, and so on. Of course, a shortest fusing time can also be set, for example no shorter than 1 s, which means the fourth and each subsequent fusing process lasts 1 s.
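A minimal sketch of this accelerating repeat follows, using the 3 s / 2 s / 1 s example above; the structure and names are assumptions.

```typescript
// Illustrative sketch: while the gaze stays on the "+" control, repeat the
// increment with a shrinking fuse interval (3 s, 2 s, 1 s, then 1 s minimum).
class RepeatingQuantityFuse {
  private elapsed = 0;
  private fuseCount = 0;

  constructor(private increment: () => void,      // e.g. quantity += 1
              private initialMs = 3000,
              private stepMs = 1000,
              private minMs = 1000) {}

  private currentThreshold(): number {
    return Math.max(this.initialMs - this.fuseCount * this.stepMs, this.minMs);
  }

  // Call once per frame; gazeOnControl tells whether the focus is still on the control.
  update(gazeOnControl: boolean, dtMs: number): void {
    if (!gazeOnControl) { this.elapsed = 0; this.fuseCount = 0; return; }
    this.elapsed += dtMs;
    if (this.elapsed >= this.currentThreshold()) {
      this.increment();          // one more piece added to the order
      this.elapsed = 0;
      this.fuseCount += 1;       // the next fuse is shorter
    }
  }
}
```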
In addition, the order page may also include a fifth operational control for performing an either-or operation. For example, the user's account may hold virtual resources such as available "coupons", and in general the user can choose to use or not to use such a coupon in the current order, which requires the user to make an either-or selection. In the prior art, a "slider"-style selection is usually provided: the user drags a slider with the mouse or on a touch screen to choose between the two options. In a virtual space environment, however, the travel of the slider cannot be made very long (otherwise it would occlude other information), so it is difficult to simulate the dragging of a slider through the continuous change of the sight focus. For this reason, a corresponding solution is also provided in the embodiment of the present application. Specifically, this fifth operational control for the either-or selection may include a first interaction area range and a second interaction area range, corresponding to the two selectable options respectively, with visual connectivity between the first interaction area range and the second interaction area range; by default, the first interaction area range is in the selected state. Visually, the fifth operational control resembles a horizontally placed "hourglass", as shown in Fig. 6-7; in the initial state one side is selected. When the sight focal position enters the second interaction area range, as shown in Fig. 6-8, a progress-state change animation can be provided in which the selected state gradually transfers from the first interaction area range to the second interaction area range, and when the preset time threshold is reached, the second interaction area range becomes completely selected. In this way, the fifth operational control and its indicator are perfectly combined: the selected option can be identified intuitively, and the animation also conveys how long is still needed to trigger the operation, thereby improving the user experience.
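The sketch below illustrates one way such an either-or ("hourglass") control could be driven: while the gaze dwells in the non-selected half, a progress value animates from 0 to 1, and at 1 the selection flips. The class and field names are assumptions for illustration only.

```typescript
type Option = "A" | "B";

class EitherOrControl {
  selected: Option = "A";        // by default the first area is selected
  progress = 0;                  // 0 = fully on `selected`, 1 = transfer complete

  constructor(private thresholdMs = 3000) {}

  // `gazeOn` is the half of the control the sight focus currently covers,
  // or null if the focus is outside the control.
  onTick(deltaMs: number, gazeOn: Option | null): void {
    const target: Option = this.selected === "A" ? "B" : "A";
    if (gazeOn === target) {
      this.progress = Math.min(1, this.progress + deltaMs / this.thresholdMs);
      if (this.progress >= 1) {  // fusing complete: flip the selection
        this.selected = target;
        this.progress = 0;
      }
    } else {
      this.progress = 0;         // gaze left: the transfer animation falls back
    }
    // `this.progress` can drive the visual transfer between the two halves.
  }
}
```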
After the confirmation of each item of information in the order page is completed, the order can be submitted and subsequent operation flows such as payment can be entered. The payment flow is not described in detail here.
It should be noted that, while browsing in the shop object internal space environment, the user can also "move" within the space environment, that is, the effect of walking inside the shop can be presented. To achieve this, a sixth operational control for moving within the shop object internal space environment can be provided. When the sight focal position enters the interaction area range of the sixth operational control, the interactable data objects shown in the shop object internal space environment can be updated according to a preset travelling track. Specifically, if the internal space environment is realized by means of recorded video, the user's forward-movement operation can trigger the video to play forward, thereby realizing the user's "walking" along the set video shooting route. The sixth operational controls can appear in pairs with opposite directions, used respectively for forward and reverse movement within the shop object internal space environment. When the sight leaves the sixth operational control, the playback of the video can be paused, so that a static picture is shown in the virtual space environment; at this time the user can focus the sight on an interactable data object appearing in the picture and perform operations such as viewing its details.
So that the user sees only one sixth operational control within the normal field of view, and sees the other one only after turning to a scene in which reverse movement is possible, the display positions of the sixth operational controls can be set accordingly. Specifically, since the three-dimensional space environment is generally spherical, the sixth operational control can be displayed in the southern hemisphere of the spherical environment, at a position forming a first preset angle (for example, 25 degrees) with the equatorial plane, as shown in Fig. 7.
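As a hedged illustration of the video-based "walking" just described, the sketch below plays the recorded shop video forward or backward while the sight focus rests on one of the paired movement controls and pauses it as soon as the focus leaves; the interface and function names are assumptions.

```typescript
interface StoreVideo {
  play(direction: "forward" | "reverse"): void;
  pause(): void;
}

function updateWalk(
  video: StoreVideo,
  gazeTarget: "moveForward" | "moveBackward" | null
): void {
  if (gazeTarget === "moveForward") {
    video.play("forward");        // walk along the shooting route
  } else if (gazeTarget === "moveBackward") {
    video.play("reverse");        // walk back along the same route
  } else {
    video.pause();                // static frame: the data objects in the picture
  }                               // can now be gazed at and inspected
}
```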
It should further be noted that, after the user enters the internal space environment of a certain target shop object, there may be a need in various states to return to the main interface and the like, so a corresponding function can be provided to the user. However, in a virtual reality space environment, occlusion should be avoided as far as possible and user interface interference should be reduced; otherwise the user may not be fully immersed in the virtual environment. The menus or navigation modes commonly used in conventional applications or websites therefore do not apply in a virtual space environment. For this reason, the embodiment of the present application can also provide the following special processing: referring to Fig. 8, when the sight focal position enters the southern hemisphere of the spherical environment and its angle with the equatorial plane exceeds a second preset angle (for example, 45 degrees), a main menu is provided, and multiple function options can be provided in the main menu, including returning to the main interface, and so on. That is, since the virtual reality space environment is actually a sphere, the picture near the equator has the highest utilization rate, and the band of roughly 30 degrees of north and south latitude is the region most easily seen by the user. Based on this characteristic, the embodiment of the present application adopts the above manner: when the sight is in the southern hemisphere and the angle with the equator exceeds 45 degrees, the main menu appears at the 45-degree position of the southern hemisphere. If the sight continues to move downward, the latitude of the menu remains unchanged; if the sight moves back up so that the angle with the equator falls below 45 degrees, the menu disappears. Since, when the angle with the equatorial plane exceeds 45 degrees in the southern hemisphere, the user's sight basically falls on the "ground", this menu can be called a "ground menu". In this way, the main menu can be called up quickly while ensuring that nothing is occluded, which further improves the user experience and strengthens the user's sense of immersion.
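A minimal sketch of the "ground menu" trigger follows: it computes the elevation of the gaze direction relative to the equatorial plane and shows the menu only while the gaze points more than 45 degrees below the equator. The vector convention (y is "up") and all names are assumptions.

```typescript
interface Vec3 { x: number; y: number; z: number; } // y is assumed to be "up"

function gazeElevationDeg(gazeDir: Vec3): number {
  const len = Math.hypot(gazeDir.x, gazeDir.y, gazeDir.z);
  // Positive = above the equator (northern hemisphere), negative = below.
  return (Math.asin(gazeDir.y / len) * 180) / Math.PI;
}

function updateGroundMenu(gazeDir: Vec3, menu: { visible: boolean }): void {
  const elevation = gazeElevationDeg(gazeDir);
  if (elevation <= -45) {
    menu.visible = true;   // gaze falls on the "ground": show the main menu
  } else if (menu.visible) {
    menu.visible = false;  // gaze moved back toward the equator: hide it
  }
}
```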
In short, through the embodiment of the present application, a virtual reality shop object internal space environment can be provided, so that the user obtains the experience of being placed in a three-dimensional space environment. Moreover, the specific interaction flow can be realized through the "sight fusing mechanism": the user only needs to turn the head to interact, without relying on external input devices such as handles, which makes the virtual reality space environment closer to an offline physical shop and improves the user experience.
In addition, a series of solutions are also provided for various problems arising in the interaction, which makes the interaction flow smoother.
Embodiment two
Embodiment two corresponds to embodiment one and introduces the specific scheme provided by the embodiments of the present application from the angle of the server.
Specifically, referring to Fig. 9, this embodiment provides a data object interaction method in a virtual reality space environment, which may comprise the following steps:
S901: a server saves virtual reality shop object internal space environment data, the shop object internal space environment data including at least one interactable data object, each interactable data object corresponding to an interaction area range and being associated with preset information content;
In specific implementation, the server can collect relevant video data in a physical shop, perform processing operations such as annotation, generate the virtual reality shop object internal space environment data, and save it.
S902: the virtual reality shop object internal space environment data is provided to a client, so that the client provides the virtual reality shop object internal space environment and determines the user's sight focal position; when the sight focal position enters the interaction area range of a target data object and the residence time reaches a preset time threshold, the information content associated with the target data object is provided.
The server can supply the generated virtual reality shop object internal space environment data directly to the client, or it can supply the relevant data upon receiving a request from the client. After obtaining the corresponding data, the client can provide the virtual reality space environment and, based on this environment, realize the interaction with the user, so that a complete shopping process can be achieved.
For other specific implementations in embodiment two, reference may be made to the introduction in the foregoing embodiment one, which is not repeated here.
Embodiment three
The foregoing embodiment one and embodiment two mainly describe, in combination with the "online shopping" application scenario, flows such as viewing details of and purchasing data objects in a virtual reality space environment. Some of the specific solutions used in that interaction can actually also be applied to other application scenarios of virtual reality space environments, for example games and the like.
For existing virtual reality technology, certain input devices are usually required in order to realize the interaction between the user and the system. For example, a touch-type handle may be used; through such a handle, the interaction between the user and the system is similar to operating an ordinary flat page with a mouse, but this weakens the user's sense of immersion. In another manner, an instrument capable of emitting a ray can be installed in the virtual reality device, the user's sight being represented by this ray so as to determine the user's intention; such an additional instrument, however, obviously increases the cost. In addition, there are existing technologies that track the user's eye movement through devices such as eye trackers to obtain the user's sight focus and trigger interactive operations, which also raises the equipment cost.
The interactive mode based on the "sight fusing mechanism" provided in the embodiment of the present application does not need to rely on external devices such as handles, ray-emitting instruments or eye trackers, and can therefore solve the above problems existing in the prior art.
Specifically, embodiment three of the present application further provides an interaction method in a virtual reality space environment. Referring to Fig. 10, this method may comprise the following steps:
S1001: a client provides a virtual reality space environment, the virtual reality space environment including at least one interactable object;
S1002: the user's sight focal position is determined;
S1003: if the sight focal position enters the interaction area range of a target interactable object and the residence time reaches a preset time threshold, the response content associated with the target interactable object is provided.
That is, in embodiment three, for the various application scenarios of virtual reality space environments, the "sight fusing mechanism" can be used to realize the interaction between the user and the system.
In order to determine the user's sight focus, when the virtual reality space environment is provided, an operation layer is provided above the environment layer and a screen layer is provided above the operation layer. The operation layer is used to mark the interaction area range information and is relatively stationary with respect to the environment layer. A preset position of the screen layer is provided with an aiming point whose position relative to the screen layer is fixed, and the screen layer moves following the screen of the virtual reality device. When the user's sight focus is specifically determined, the displacement of the aiming point relative to the operation layer can be determined, the user's sight direction is determined according to this displacement, the sight direction is mapped to the corresponding position of the operation layer, and that position is determined as the user's sight focal position. With this implementation, following-type task guidance information can also be provided in the environment layer until the task is completed, so that the user knows how to operate and will not get lost in the virtual reality space environment.
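The following is an illustrative sketch, under a simplified 2D model and assumed names, of how the sight focal position could be derived from the aiming point's displacement relative to the operation layer as described above.

```typescript
interface Point2D { x: number; y: number; }

// The aiming point is fixed on the screen layer (e.g. the screen centre); when the
// headset turns, the screen layer moves while the operation layer stays put, so the
// aiming point is displaced relative to the operation layer.
function sightFocusOnOperationLayer(
  screenLayerOffset: Point2D,   // current offset of the screen layer vs. the operation layer
  aimingPointOnScreen: Point2D  // fixed position of the aiming point within the screen layer
): Point2D {
  return {
    x: screenLayerOffset.x + aimingPointOnScreen.x,
    y: screenLayerOffset.y + aimingPointOnScreen.y,
  };
}

// A hit test against the interaction area ranges marked on the operation layer then
// tells which interactable object, if any, the sight focus currently falls on.
function hitTest(
  focus: Point2D,
  areas: { id: string; x: number; y: number; w: number; h: number }[]
): string | null {
  for (const a of areas) {
    if (focus.x >= a.x && focus.x <= a.x + a.w &&
        focus.y >= a.y && focus.y <= a.y + a.h) {
      return a.id;
    }
  }
  return null;
}
```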
Depending on the specific application scenario, the interactable object can take a variety of forms. For example, in an online shopping scenario, the interactable object can include a selectable shop object; in this case, if the sight focal position enters the interaction area range of a target shop object and the residence time reaches the preset time threshold, the internal space environment of the target shop object is provided, the target shop object internal space environment including at least one interactable data object.
Alternatively, the interactable object includes a selectable data object; in this case, if the sight focal position enters the interaction area range of a target data object and the residence time reaches the preset time threshold, the detail information of the target data object is provided.
In addition, the interactable object can include an operational control for performing a preset type of operation; in this case, if the sight focal position enters the interaction area range of a target operational control and the residence time reaches the preset time threshold, the response information associated with the target operational control is provided.
The operational controls may include an operational control for performing a main-flow operation, which can be a multilayer structure arranged along the depth-of-field direction; when the user's sight focus enters the regional range where such an operational control is located, the multilayer structure moves along the depth-of-field direction toward the visual starting point, so that the operational control is visually magnified.
Alternatively, the operational controls include an operational control with a demand for multiple consecutive repeated operations, for example a control for increasing or decreasing the data object quantity. In this case, if the sight focal position enters such an operational control and still does not leave after the residence time reaches the preset time threshold, the response information associated with the operational control continues to be provided, and while the responses continue, the time interval between responses is shorter than the preset time threshold.
The operational controls can also include an operational control for performing an either-or operation. In the embodiment of the present application, such an operational control can include a first interaction area range and a second interaction area range with visual connectivity between them; by default, the first interaction area range is in the selected state. In this way, when the sight focal position enters the second interaction area range, a progress-state change animation can be provided in which the selected state gradually transfers from the first interaction area range to the second interaction area range, and when the preset time threshold is reached, the second interaction area range becomes completely selected.
Furthermore the operational controls can also include:For the behaviour moved in the virtual reality space environment
Make control;At this time, can be according to preset when the sight focal position enters in the range of the interaction area of the operational controls
Traveling track updates can interactive object shown in the virtual reality space environment.
Wherein, the operational controls for being moved in the virtual reality space environment can include direction phase
Two anti-operational controls, are respectively used in the virtual reality space environment positive or reverse movement, wherein, in user
One of operational controls are only shown in scope of sight, when detect turn round operation when, show another operational controls.
In order to achieve the above object, it is described to be used in the case where the virtual reality space environment is spherical environment
The operational controls moved in the virtual reality space environment may be displayed on the spherical environment Southern Hemisphere, with the equatorial plane
Form the position of the first presetting angle.
In addition, when sight focal position enter spherical environment the Southern Hemisphere, with the equatorial plane formed be more than the second presetting angle
Position, and when residence time reaches preset time threshold value, preset function menu can be aroused, in this way, can avoid
Scope of sight is caused on the premise of blocking, quickly arouses a certain function.
Embodiment four
Embodiment four corresponds to embodiment three and is introduced from the server side.
Specifically, referring to Fig. 11, this embodiment provides an interaction method in a virtual reality space environment, which may comprise the following steps:
S1101: a server saves virtual reality space environment data, the virtual reality space environment data including at least one interactable object, each interactable object corresponding to an interaction area range and being associated with preset response content;
S1102: the virtual reality space environment data is provided to a client, so that the client provides the virtual reality space environment and determines the user's sight focal position; when the sight focal position enters the interaction area range of a target interactable object and the residence time reaches a preset time threshold, the response content associated with the target interactable object is provided.
Since embodiment four corresponds to embodiment three, reference may be made to the introduction in embodiment three for the relevant specific implementations, which are not repeated here.
The interaction processing flows mentioned in the foregoing embodiments involve some specific processing details that are significantly improved relative to the prior art, and these improvements can also be used in other virtual reality space environments. That is, even if the scenario is not data object interaction in a virtual shop, or does not use the "sight fusing" mechanism to identify the user's intention, the above improvements can still be used to realize the interaction with the user. They are introduced separately below.
Embodiment five
First, embodiment five provides a sight focus determination method in a virtual reality space environment. Referring to Fig. 12, this method may comprise the following steps:
S1201: a virtual reality space environment is provided, the virtual reality space environment including an environment layer, an operation layer and a screen layer, wherein the environment layer includes at least one interactable object; the operation layer is located above the environment layer, is relatively stationary with respect to the environment layer, and is used to mark the interaction area ranges of the interactable objects; the screen layer is located above the operation layer, a preset position of the screen layer is provided with an aiming point whose position relative to the screen layer is fixed, and the screen layer moves following the screen of the virtual reality device;
S1202: when the virtual reality device moves following the user's head, the displacement of the aiming point relative to the operation layer is determined;
S1203: the user's sight direction is determined according to the displacement, the sight direction is mapped to the corresponding position of the operation layer, and that position is determined as the user's sight focal position;
S1204: the interactable object the user is paying attention to is determined according to the user's sight focal position, so as to provide the associated response content.
That is, after the sight focal position is determined, the user's intention can be determined from it, including determining which interactable object the user wants to pay attention to, and mechanisms such as sight fusing can then be used to trigger the provision of the corresponding response content. For example, as described in the foregoing embodiment one, when the user's sight focus enters the interaction area range where a target data object is located in the virtual reality space environment and stays there beyond a certain time threshold, the detail information of that target data object can be provided.
In specific implementation, following-type task guidance information can also be provided in the environment layer until the task is completed, the task guidance information being relatively stationary with respect to the environment layer.
Embodiment six
Embodiment six corresponds to embodiment five and provides, from the server side, a sight focus determination method in a virtual reality space environment. Referring to Fig. 13, this method can include:
S1301: a server saves virtual reality space environment data, the virtual reality space environment data including an environment layer, an operation layer and a screen layer, wherein the environment layer includes at least one interactable object; the operation layer is located above the environment layer, is relatively stationary with respect to the environment layer, and is used to mark the interaction area ranges of the interactable objects; the screen layer is located above the operation layer, a preset position of the screen layer is provided with an aiming point whose position relative to the screen layer is fixed, and the screen layer moves following the screen of the virtual reality device;
S1302: the virtual reality space environment data is provided to a client, so that, when the virtual reality device moves following the user's head, the client determines the displacement of the aiming point relative to the operation layer, determines the user's sight direction according to the displacement, maps the sight direction to the corresponding position of the operation layer, determines that position as the user's sight focal position, and determines, according to the user's sight focal position, the interactable object the user is paying attention to, so as to provide the associated response content.
Embodiment seven
Embodiment seven introduces the improvement concerning the "ground menu" mentioned in the foregoing embodiments. That is, when the application scenarios of the embodiment of the present application are extended to other specific virtual display space environments, and even without using the method for determining the user's sight focus described in the embodiment of the present application, this improvement can still be used, achieving quick calling-up of a preset function menu without interfering with the main content in the space environment and while keeping it unobscured.
Specifically, referring to Fig. 14, this embodiment provides a function menu calling-up method in a virtual reality space environment, which may comprise the following steps:
S1401: a client provides a virtual reality space environment, the virtual reality space environment being a spherical environment that includes at least one interactable object;
S1402: in the process of interacting with the interactable object, the user's sight focal position is determined;
S1403: when the sight focal position enters the southern hemisphere of the spherical environment and the angle with the equatorial plane exceeds a preset angle, a preset function menu is called up.
In specific implementation, when the preset function menu is called up, it can be provided in the southern hemisphere of the spherical environment at the position whose angle with the equatorial plane equals the preset angle (for example, 45 degrees). This achieves the effect of the user looking toward the "ground" from where he or she stands in the virtual reality space. Since the "ground" region usually does not show main interactable objects to the user, when the user's sight focus enters this region it can be considered that the user needs to call up the preset function menu. In this way, under the normal interaction state, neither the function menu nor any visual object such as an operational control for calling it up needs to be shown; it is only necessary to identify the user's sight direction to quickly call up the preset function menu. The preset function menu can therefore be called up quickly without interfering with the main content in the space environment and while keeping it unobscured.
In specific implementation, after the preset function menu is called up, the user's sight focus can continue to be tracked. If the user's sight focal position enters the regional range where the function menu is located and continues to move southward, it indicates that the user may need to perform further operations through the function menu, so the position of the function menu can be kept unchanged. Further, if the "sight fusing" mechanism is used, the corresponding function, for example returning to the main interface, can be executed when the residence time of the user's sight focus within the regional range of the function menu reaches a preset threshold. In addition, after the preset function menu has been called up, if the user's sight focal position moves northward, the function menu can be moved out of the user's field of view, that is, the function menu disappears, and the user can continue to interact with the interactable objects in the virtual reality space environment.
Embodiment eight
Embodiment eight corresponds to embodiment seven and provides a function menu calling-up method in a virtual reality space environment from the angle of the server. Specifically, referring to Fig. 15, this method may comprise the following steps:
S1501: a server provides virtual reality space environment data, the virtual reality space environment being a spherical environment that includes at least one interactable object;
S1502: the virtual reality space environment data is provided to a client, so that, in the process of interacting with the interactable object, the client determines the user's sight focal position and, when the sight focal position enters the southern hemisphere of the spherical environment and the angle with the equatorial plane exceeds a preset angle, calls up a preset function menu.
Embodiment nine
Embodiment nine mainly introduces the improvement provided for situations where an "either-or operational control" exists. Embodiment nine likewise does not limit the specific application scenario or how the sight focus is determined; whenever an "either-or operational control" is used in any scenario, it can be realized in the manner provided by the embodiment of the present application.
Specifically, referring to Fig. 16, this embodiment provides an operational control processing method in a virtual reality space environment, which may comprise the following steps:
S1601: a client provides a virtual reality space environment including at least one interactable object, the interactable object including an operational control for performing an either-or operation, the operational control including a first interaction area range and a second interaction area range, the option corresponding to the first interaction area range being in the selected state by default;
S1602: in the process of interacting with the interactable object, the user's sight focal position is determined;
S1603: when the sight focal position enters the second interaction area range and stays there for a preset time threshold, the option corresponding to the second interaction area range is determined as the selected state.
In specific implementation, there can be visual connectivity between the first interaction area range and the second interaction area range. Then, so that the user can intuitively feel how much time the "fusing" still requires, when the sight focal position enters the second interaction area range, a progress-state change animation can be provided in which the selected state gradually transfers from the first interaction area range to the second interaction area range, and when the preset time threshold is reached, the progress state of the second interaction area range changes to completely selected.
Embodiment ten
Embodiment ten corresponds to embodiment nine and provides an operational control processing method in a virtual reality space environment from the angle of the server. Referring to Fig. 17, this method may comprise the following steps:
S1701: a server saves virtual reality space environment data including at least one interactable object, the interactable object including an operational control for performing an either-or operation, the operational control including a first interaction area range and a second interaction area range, the option corresponding to the first interaction area range being in the selected state by default;
S1702: the virtual reality space environment data is provided to a client, so that, in the process of interacting with the interactable object, the client determines the user's sight focal position and, when the sight focal position enters the second interaction area range and stays there for a preset time threshold, determines the option corresponding to the second interaction area range as the selected state.
Embodiment eleven
Embodiment eleven introduces the improvement concerning "operational controls with a demand for multiple consecutive repeated operations". That is, under various application scenarios, regardless of which method is used to determine the user's sight focal position, the processing can be done in the manner provided by embodiment eleven of the present application, so that operations of a continuous nature, for example setting the data object quantity, can be realized quickly while saving time and shortening the user's operation path.
Specifically, referring to Fig. 18, this embodiment provides an operational control processing method in a virtual reality space environment, including:
S1801: a client provides a virtual reality space environment including at least one interactable object, the interactable object including an operational control with a demand for multiple consecutive repeated operations;
S1802: in the process of interacting with the interactable object, the user's sight focal position is determined;
S1803: if the sight focal position enters the operation area corresponding to the operational control and still does not leave after the residence time reaches a preset time threshold, the response information associated with the operational control continues to be provided.
In specific implementation, while the above responses continue to be performed, the time interval between responses is shorter than the preset time threshold.
Embodiment twelve
Embodiment twelve corresponds to embodiment eleven and provides an operational control processing method in a virtual reality space environment from the angle of the server. Referring to Fig. 19, this method can include:
S1901: a server saves virtual reality space environment data including at least one interactable object, the interactable object including an operational control with a demand for multiple consecutive repeated operations;
S1902: the virtual reality space environment data is provided to a client, so that, in the process of interacting with the interactable object, the client determines the user's sight focal position and, if the sight focal position enters the operation area corresponding to the operational control and still does not leave after the residence time reaches a preset time threshold, continues to provide the response information associated with the operational control.
Embodiment thirteen
In embodiment thirteen, certain operational controls existing in a virtual reality space environment can be realized in the manner of a "multilayer UI", and after the user's sight focus is captured, the multilayer UI can be moved. This improvement can likewise be applied in a variety of application scenarios: as long as certain operational controls, usually main-flow controls, exist in a virtual reality space environment, they can be realized in the manner provided by the embodiment of the present application.
Specifically, the embodiment of the present application provides an operational control processing method in a virtual reality space environment. Referring to Fig. 20, this method may comprise the following steps:
S2001: a client provides a virtual reality space environment including at least one interactable object, the interactable object including an operational control, the operational control being a multilayer user interface (UI) structure arranged along the depth-of-field direction;
S2002: in the process of interacting with the interactable object, the user's sight focal position is determined;
S2003: if the sight focal position enters the operation area corresponding to the operational control, the multilayer UI structure of the operational control is moved along the depth-of-field direction toward the visual starting point, so that the operational control is visually magnified.
During the movement of the multilayer UI structure, the displacement of the topmost UI layer is the largest, and the displacements of the remaining layers decrease layer by layer.
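A hedged sketch of this "magnify on gaze" effect follows: each layer of the control slides toward the viewer along the depth axis, with the topmost layer moving the most and deeper layers progressively less. The names and numeric values are assumptions.

```typescript
interface UILayer {
  baseDepth: number;   // resting z position (larger = farther from the viewer)
  depth: number;       // current z position
}

function updateControlLayers(
  layers: UILayer[],        // ordered from topmost (index 0) to deepest
  gazeOnControl: boolean,
  maxShift = 0.3            // how far the topmost layer moves toward the viewer
): void {
  layers.forEach((layer, i) => {
    // Displacement decreases layer by layer, e.g. 0.3, 0.2, 0.1 for three layers.
    const shift = gazeOnControl ? maxShift * (1 - i / layers.length) : 0;
    layer.depth = layer.baseDepth - shift; // smaller depth = closer, so it looks bigger
  });
}
```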
Embodiment fourteen
Embodiment fourteen corresponds to embodiment thirteen and is introduced from the server side. Specifically, referring to Fig. 21, embodiment fourteen provides an operational control processing method in a virtual reality space environment, including:
S2101: a server provides virtual reality space environment data including at least one interactable object, the interactable object including an operational control, the operational control being a multilayer user interface (UI) structure arranged along the depth-of-field direction;
S2102: the virtual reality space environment data is provided to a client, so that, in the process of interacting with the interactable object, the client determines the user's sight focal position and, if the sight focal position enters the operation area corresponding to the operational control, moves the multilayer UI structure of the operational control along the depth-of-field direction toward the visual starting point, so that the operational control is visually magnified.
Embodiment fifteen
In embodiment fifteen, the virtual reality space environment may contain operation area range information for some interactable objects, for example mark points that are usually identified by patterns of small area such as "dots". While interacting with the system, the user can trigger an operation on the corresponding interactable object by placing the sight focus within the regional range of such a "dot". However, in order to avoid occluding the interactable object itself and to keep the field of view open, the area of the "dot" is usually kept small, so the user needs to direct the sight focus into a fairly small regional range; where a mechanism such as sight fusing is used, the user must keep the sight focus within that regional range for a period of time. For the user this may cause visual fatigue, and the user may also not know whether the sight focus has been captured by the system; for the system, judgment errors are also likely. For this reason, in the embodiment of the present application, two operation area ranges can be set for the same interactable object: in the initial state, first area range information with a smaller area is provided in the virtual reality space environment, and when the user's sight focus is found to have entered the first area range, second area range information with a larger area is provided. In this way the above problems can be solved. The concrete implementation is introduced below.
Referring to Fig. 22, this embodiment provides an operation area processing method in a virtual reality space environment, which can include:
S2201: a client provides a virtual reality space environment including at least one interactable object, the interactable object corresponding to operation area range information, the operation area range information including a first area range and a second area range, wherein the first area range is smaller than the second area range, and in the initial state the first area range information is provided in the virtual reality space environment;
S2202: in the process of interacting with the interactable object, the user's sight focal position is determined;
S2203: if the sight focal position enters the first area range corresponding to a target interactable object, the second area range corresponding to the target interactable object is provided in the virtual reality space environment.
In specific implementation, the first area range and the second area range can be circular areas. When the second area range corresponding to the target interactable object is provided in the virtual reality space environment, it can be provided with its centre at the centre of the first area range, so as to present a display effect in which the first area range is magnified.
Furthermore, a progress-state change animation can also be provided within the second area range, and when the residence time of the user's sight focus within the second area range reaches the preset time threshold, the progress state changes to the completed state.
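A minimal sketch of this two-stage operation area follows: a small marker dot is shown initially; once the gaze enters it, a larger concentric circle replaces it and the fusing progress then runs inside the larger area. All names and radii are illustrative assumptions.

```typescript
interface Circle { cx: number; cy: number; r: number; }

class TwoStageArea {
  active: Circle;
  progress = 0;

  constructor(
    private small: Circle,            // initial small "dot", e.g. r = 0.02
    private largeRadius: number,      // enlarged radius, e.g. 0.08
    private thresholdMs = 3000
  ) {
    this.active = small;
  }

  // Returns true when the fusing completes and the object should be triggered.
  onTick(deltaMs: number, focus: { x: number; y: number }): boolean {
    const dx = focus.x - this.active.cx;
    const dy = focus.y - this.active.cy;
    const inside = dx * dx + dy * dy <= this.active.r * this.active.r;

    if (inside && this.active === this.small) {
      // Gaze entered the small dot: switch to the larger, concentric area.
      this.active = { cx: this.small.cx, cy: this.small.cy, r: this.largeRadius };
    } else if (!inside) {
      this.active = this.small;       // gaze left: shrink back and reset progress
      this.progress = 0;
    } else {
      this.progress += deltaMs / this.thresholdMs;
      if (this.progress >= 1) return true;
    }
    return false;
  }
}
```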
Embodiment sixteen
Embodiment sixteen corresponds to embodiment fifteen and provides, from the angle of the server, an operation area processing method in a virtual reality space environment. Referring to Fig. 23, this embodiment can specifically include:
S2301: a server saves virtual reality space environment data including at least one interactable object, the interactable object corresponding to operation area range information, the operation area range information including a first area range and a second area range, wherein the first area range is smaller than the second area range, and in the initial state the first area range information is provided in the virtual reality space environment;
S2302: the virtual reality space environment data is provided to a client, so that, in the process of interacting with the interactable object, the client determines the user's sight focal position and, if the sight focal position enters the first area range corresponding to a target interactable object, provides the second area range corresponding to the target interactable object in the virtual reality space environment.
It should be noted that, for embodiments five to sixteen, the concrete implementations and the relevant technical effects can be found in the description of the foregoing embodiment one and are not repeated here.
Embodiment seventeen
The foregoing embodiments all provide specific solutions based on virtual reality technology; in implementation, they can also be realized based on augmented reality (AR) technology. The difference between virtual reality and augmented reality can be simply explained as follows. In a virtual reality environment, the content displayed by the "environment layer" is a virtual environment provided by simulation or by shooting video of a physical space in advance, and the information of the "operation layer", including the marking of the operation ranges of the interactable objects and the display of the interaction response information, is realized on the basis of the content shown in this virtual environment layer. In an augmented reality environment, the content displayed by the "environment layer" is the actual content of the physical space, and the information of the "operation layer", including the marking information of the operation area ranges of the goods, can be annotated on the basis of a pre-established three-dimensional space model. At the time of actual display, the physical space is first spatially matched with the three-dimensional space model; then, through the computer vision capability of the AR device, the positioning and marking information saved in the three-dimensional space model is displayed within the field of view of the AR device. What the user actually sees through the AR device includes the environment of the physical space together with the "augmented" information superimposed on it.
Therefore, in the embodiment of the present application, if the corresponding technical solution is provided based on augmented reality technology, a three-dimensional space model can be established in advance for the offline physical shop, and the information interaction with the user is then realized on the basis of this three-dimensional space model.
There can be multiple concrete ways of establishing the three-dimensional space model. In one of them, for example, a member of staff can wear an AR device such as AR glasses, enter the internal space of the physical shop and walk around inside it. Since the AR device carries sensing equipment such as cameras, the shop and its interior layout can be scanned with this sensing equipment during the walk. After the scanning result is obtained, it can be imported into a development environment. Such a development environment usually supports annotating the scanning result, so the staff can annotate it. For example, if the shop includes multiple shelves, each of which is used to place specific storage objects (for example "goods"), then the numbering of the shelves, the data object identifiers corresponding to the goods (such as commodity IDs) and other information can be annotated. After the annotation is completed, the system can save information such as the number of each shelf and the corresponding goods information; meanwhile, information such as the position coordinates of the goods on each shelf can be automatically generated and saved by the system, thereby generating the three-dimensional space model corresponding to the shop. In addition, for information such as data object details that needs to be provided during the interaction with the user, a data object detail information database can be established in advance, and during annotation the identifier of the data object corresponding to the specific goods in this database can also be annotated; in this way, an association can be established between the goods in the augmented reality space and the detail information of the data objects saved in the database.
In specific implementation, in order to realize the interactive functions provided in the embodiment of the present application, a corresponding client (including an application program or relevant function modules, etc.) can be implemented in advance at the software level. The client can cooperate with the AR device in a variety of forms. For an integrated AR device (that is, an AR device that independently undertakes tasks such as screen display, computing and storage), the client can be installed directly in the AR device, so that the AR device has the interactive functions described in the embodiment of the present application. Alternatively, for a portable AR device, which usually only undertakes the screen display task, a mobile terminal device such as a mobile phone generally needs to be connected to the AR device when it is used; in this case the client can be installed in the mobile terminal device, so that after the mobile terminal device on which the client is installed is placed into the AR device, the AR device has the interactive functions described in the embodiment of the present application. In addition, after the three-dimensional space model is generated, it can be saved directly in the terminal device where the client is located, or it can be stored in the server, with the client downloading the three-dimensional space model corresponding to the current shop from the server when it needs to interact. In short, whether the client is installed directly in the AR device or in a mobile terminal device, the AR device can realize the specific interaction based on this client in combination with the three-dimensional space model generated as described above.
Specifically, referring to Fig. 24, this embodiment provides a data object interaction method in an augmented reality environment, which may comprise the following steps:
S2401: a client obtains a three-dimensional space model of the internal space environment of a physical shop, the interior of the physical shop including multiple pieces of goods;
S2402: after the three-dimensional space model is spatially matched with the physical shop through a preset augmented reality (AR) device, the interaction area range information corresponding to the goods is provided within the user's field of view of the AR device;
When a consumer, as the user, needs to shop in the physical shop, he or she can wear the relevant AR device (either an integrated AR device or a portable AR device connected to a mobile terminal device), enter the physical shop and then start the relevant client. After the client is started, initialization processing can be carried out first, which can specifically include spatially matching the three-dimensional space model with the physical shop. So-called spatial matching means that the three-dimensional space model is matched with the actual physical shop, so that positions, directions and so on in the three-dimensional space correspond to positions in the physical shop; in this way, the information to be "augmented" can be displayed accurately at the positions where the goods are located within the field of view.
There can be multiple specific methods of performing the spatial matching. In one manner, for example, some feature points, such as the four corner positions of the space, can be saved in the three-dimensional space model in advance. After the consumer-user wearing an AR device such as AR glasses enters the physical shop and starts the application program, he or she can look around the storage space for a moment while wearing the AR glasses; the sensor devices of the AR glasses scan the storage space, after which the feature points can be matched, using the scanning result, with the points at the corresponding actual positions in the storage space, so as to determine the position, direction and so on of the three-dimensional space model and thereby complete the spatial matching. In the end, the position of each point in the three-dimensional space model is consistent with the corresponding actual position and direction in the physical shop. Of course, in specific implementation, the spatial matching can also be realized in other ways, for example automatic matching, which are not described in detail one by one here.
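As a simplified, hedged illustration of this feature-point matching (assumed names, ground plane only, and the model assumed to be at the same scale as the shop), two matched corner points are enough to solve a translation plus a rotation about the vertical axis; a real implementation would use more points and full 3D registration.

```typescript
interface P2 { x: number; z: number; } // ground-plane coordinates

function alignModelToShop(modelPts: [P2, P2], scannedPts: [P2, P2]) {
  // Rotation: angle between the corner-to-corner vectors in model vs. scan.
  const mv = { x: modelPts[1].x - modelPts[0].x, z: modelPts[1].z - modelPts[0].z };
  const sv = { x: scannedPts[1].x - scannedPts[0].x, z: scannedPts[1].z - scannedPts[0].z };
  const theta = Math.atan2(sv.z, sv.x) - Math.atan2(mv.z, mv.x);

  // Translation: map the first model point onto the first scanned point.
  const cos = Math.cos(theta), sin = Math.sin(theta);
  const rotated = { x: modelPts[0].x * cos - modelPts[0].z * sin,
                    z: modelPts[0].x * sin + modelPts[0].z * cos };
  const t = { x: scannedPts[0].x - rotated.x, z: scannedPts[0].z - rotated.z };

  // Returns a function that maps any annotated position (e.g. a goods marker)
  // from model coordinates into the matched shop coordinates.
  return (p: P2): P2 => ({
    x: p.x * cos - p.z * sin + t.x,
    z: p.x * sin + p.z * cos + t.z,
  });
}
```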
After the spatial matching is completed, the interaction area information corresponding to the goods can be provided within the user's field of view of the AR device, for example by adding an "operation layer" within the field of view so that the operable areas of the goods are displayed, for instance a "blue dot" shown at the position where each piece of goods is located, and so on.
S2403: the user's sight focus is determined;
S2404: when the sight focal position enters the interaction area range of a target piece of goods and the residence time reaches a preset time threshold, the interaction response information of the target data object associated with that interaction area range is provided within the user's field of view of the AR device.
After the interaction area information corresponding to the goods has been provided, the subsequent interaction, including the determination of the user's sight focus, the provision of the various kinds of interaction response information, the provision of the various types of operational controls and the improvements of their interaction modes relative to the prior art, and the manner of calling up the function menu, can all be similar to the implementations in the virtual reality space and are not repeated here.
Embodiment eighteen
Embodiment eighteen corresponds to embodiment seventeen and is introduced from the server side. Specifically, referring to Fig. 25, this embodiment provides a data object interaction method in an augmented reality environment, which can include:
S2501: a server saves a three-dimensional space model of the internal space environment of a physical shop, the interior of the physical shop including multiple pieces of goods, the corresponding interaction area range information, and the interaction response information of the data object associated with each interaction area range;
S2502: the three-dimensional space model is provided to a client, so that, after the three-dimensional space model is spatially matched with the physical shop through a preset augmented reality (AR) device, the client provides the interaction areas corresponding to the goods within the user's field of view of the AR device and determines the user's sight focus; when the sight focal position enters the interaction area range of a target piece of goods and the residence time reaches a preset time threshold, the interaction response information of the target data object associated with that interaction area range is provided within the user's field of view of the AR device.
Embodiment nineteen
Embodiment nineteen also provides, based on augmented reality (AR) technology, an interaction method in an augmented reality environment; this method is no longer limited to goods interaction in a physical shop, and can realize information interaction with storage objects in any storage space.
Specifically, referring to Fig. 26, this embodiment provides an interaction method in an augmented reality environment, which can include:
S2601: a client obtains a three-dimensional space model of the internal space environment of a physical storage space, the interior of the physical storage space including multiple storage objects;
S2602: after the three-dimensional space model is spatially matched with the physical storage space through a preset augmented reality (AR) device, the interaction area range information corresponding to the storage objects is provided within the user's field of view of the AR device;
S2603: the user's sight focus is determined;
S2604: when the sight focal position enters the interaction area range of a target storage object and the residence time reaches a preset time threshold, the interaction response information of the target interactable object associated with that interaction area range is provided within the user's field of view of the AR device.
Embodiment twenty
Embodiment twenty corresponds to embodiment nineteen and provides an interaction method in an augmented reality environment from the angle of the server. Referring to Fig. 27, this method may comprise the following steps:
S2701: a server saves a three-dimensional space model of the internal space environment of a physical storage space, the interior of the physical storage space including multiple storage objects, the corresponding interaction area range information, and the interaction response information of the interactable object associated with each interaction area range;
S2702: the three-dimensional space model is provided to a client, so that, after the three-dimensional space model is spatially matched with the physical storage space through a preset augmented reality (AR) device, the client provides the interaction area range information corresponding to the storage objects within the user's field of view of the AR device and determines the user's sight focus; when the sight focal position enters the interaction area range of a target storage object and the residence time reaches a preset time threshold, the interaction response information of the target interactable object associated with that interaction area range is provided within the user's field of view of the AR device.
For embodiments seventeen to twenty, the implementation details involved in the specific information interaction process, including the improvements relative to the prior art, can be found in the descriptions of the foregoing individual embodiments and are not repeated here.
Corresponding to embodiment one, the embodiment of the present application further provides a data object interaction device in a virtual reality space environment. Referring to Fig. 28, the device is applied to a client and includes:
a first virtual reality environment providing unit 2801, configured to provide a virtual reality shop object internal space environment, the shop object internal space environment including at least one interactable data object;
a first sight focus determination unit 2802, configured to determine the user's sight focal position;
a first content providing unit 2803, configured to provide, when the sight focal position enters the interaction area range of a target data object and the residence time reaches a preset time threshold, the information content associated with the target data object.
In specific implementation, the first virtual reality environment providing unit 2801 is further configured to:
before the virtual reality shop object internal space environment is provided, provide a space environment for selecting a shop object;
the first content providing unit 2803 is further configured to:
when the sight focal position enters the interaction area range of a target shop object and the residence time reaches the preset time threshold, determine the target shop object as the selected shop object.
In the space environment for selecting a shop object, a dynamic picture about the corresponding shop object is shown within the interaction area range of the shop object.
The device can also include:
an animation providing unit, configured to, when the sight focal position enters the interaction area range of a target shop object, start playing a preset progress-state change animation within the interaction area range, until, when the preset time threshold is reached, the progress state changes to the completed state.
Where the shop object corresponds to an offline physical shop, after the selected shop object is determined, the device further includes:
a video providing unit, configured to switch from the space environment for selecting a shop object to an interlude video associated with the selected shop object, the interlude video being used to show the scenery that can be seen on the way to the offline physical shop corresponding to the selected shop object.
Wherein, shown in the shop object internal measurements can interaction data object be corresponding with mark point, it is described
First content provides unit and specifically can be used for:
When the mark point of the sight focal position alignment target data object, and residence time reaches preset time threshold
During value, there is provided with the associated information content of the target data objects.
In a specific implementation, the apparatus may further include:
a mark point magnification processing unit, configured to, when the sight focus position is aligned with the mark point of a target data object, enlarge the mark point and provide a progress-state change animation within the region of the enlarged mark point, the progress state changing to a completed state when the preset time threshold is reached.
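A minimal sketch of how the mark-point magnification and dwell-progress animation could be driven, assuming a 2D focus position and a circular mark point; the class name, scale factor and fields are hypothetical.

```typescript
// Sketch of mark-point magnification with a dwell progress animation.
// MarkPoint and its fields are illustrative assumptions.
class MarkPoint {
  scale = 1.0;
  progress = 0;   // 0..1, rendered e.g. as a fill ring inside the enlarged mark point

  constructor(private x: number, private y: number,
              private radius: number,
              private thresholdMs: number,
              private onComplete: () => void) {}

  update(focusX: number, focusY: number, deltaMs: number): void {
    const dx = focusX - this.x;
    const dy = focusY - this.y;
    const aligned = dx * dx + dy * dy <= this.radius * this.radius;
    if (aligned) {
      this.scale = 1.5;   // enlarge while the focus is on the mark point
      this.progress = Math.min(1, this.progress + deltaMs / this.thresholdMs);
      if (this.progress >= 1) this.onComplete();   // e.g. open the detailed information
    } else {
      this.scale = 1.0;
      this.progress = 0;   // reset when the focus moves away before completion
    }
  }
}
```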
In a specific implementation, the first content providing unit may specifically be configured to:
provide the detailed information of the target data object.
The detailed information of the target data object includes stereoscopic image information of the target data object and textual description information.
Specifically, the first content providing unit may be configured to:
display the textual description information around the stereoscopic image, with the stereoscopic image information at the center.
In addition, the apparatus may further include:
a first operation control providing unit, configured to provide a first operation control for rotating the stereoscopic image;
a rotation operation unit, configured to rotate the stereoscopic image when the user's sight focus enters the region where the first operation control is located.
The first operation control may include multiple operation controls for rotation in different directions.
The stereoscopic image may be generated from photos obtained by real-scene shooting of the corresponding physical object.
In addition, the first content providing unit may specifically be configured to:
provide a first information panel and provide the detailed information of the target data object in the first information panel.
In this case, the apparatus may further include:
a panel closing unit, configured to close the information panel when the user's sight focus position moves out of the information panel.
In addition, the apparatus may further include:
a second operation control providing unit, configured to provide a second operation control for performing a purchase operation on the target data object;
an order content providing unit, configured to provide information content related to order confirmation when the user's sight focus enters the region where the second operation control is located and the dwell time reaches the preset time threshold.
The second operation control is a multi-layer user interface (UI) structure arranged along the depth direction; when the user's sight focus enters the region where the second operation control is located, the multi-layer UI structure moves along the depth direction toward the viewpoint, so that the second operation control appears visually magnified.
When the multi-layer UI structure moves, the displacement of the topmost UI layer is the largest, and the displacement of each remaining layer decreases layer by layer.
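The depth-direction movement with per-layer decreasing displacement could be driven as in the following sketch; the falloff factor and the UiLayer model are assumptions made only for illustration.

```typescript
// Sketch of the depth-direction multilayer UI movement: the top layer moves the
// most toward the viewpoint, deeper layers progressively less.
interface UiLayer { zOffset: number; }   // offset toward the viewer, in scene units

function focusLayers(layers: UiLayer[], maxShift: number, falloff = 0.6): void {
  // layers[0] is the top (closest) layer.
  layers.forEach((layer, i) => {
    layer.zOffset = maxShift * Math.pow(falloff, i);   // displacement shrinks layer by layer
  });
}

function unfocusLayers(layers: UiLayer[]): void {
  layers.forEach(layer => { layer.zOffset = 0; });     // restore when the gaze leaves the control
}
```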
The order content providing unit may specifically be configured to:
provide a second information panel and provide the information content related to order confirmation in the second information panel.
The information content related to order confirmation includes a third operation control for modifying order-related information;
the apparatus further includes:
a third panel providing unit, configured to provide a third information panel when the user's sight focus enters the region where the third operation control is located and the dwell time reaches the preset time threshold, and to provide, in the third information panel, content for modifying the order-related information.
In addition, the apparatus further includes:
a panel moving unit, configured to, when the third information panel is provided, move the second information panel along the depth direction away from the viewpoint and display the third information panel on top;
a restoring unit, configured to restore the display state of the second information panel after the information modification is completed or the third information panel is closed.
The information content related to order confirmation may also include a fourth operation control for increasing/decreasing the quantity of the target data object, and the apparatus further includes:
an increment/decrement operation unit, configured to, when the sight focus position enters the fourth operation control and the dwell time reaches the preset time threshold, increment or decrement the quantity of the target data object by one on the basis of its initial value.
The apparatus further includes:
a continuous increment/decrement operation unit, configured to, if the sight focus position enters the fourth operation control and does not leave after the dwell time reaches the preset time threshold, continue incrementing or decrementing the quantity of the target data object by one, the time interval between successive operations being shorter than the preset time threshold.
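One way to realise this "first dwell, then faster repeats" behaviour of the quantity control is sketched below; the interval values and class name are illustrative, and the only property taken from the description is that the repeat interval is shorter than the initial dwell threshold.

```typescript
// Sketch of the repeat behaviour for the quantity +/- control: the first step
// requires the full dwell threshold, later steps repeat at a shorter interval
// while the gaze stays on the control.
class RepeatingGazeButton {
  private dwellMs = 0;
  private stepsFired = 0;

  constructor(private onStep: () => void,        // e.g. quantity++ or quantity--
              private firstDelayMs = 2000,
              private repeatIntervalMs = 500) {} // shorter than the initial threshold

  update(focusInside: boolean, deltaMs: number): void {
    if (!focusInside) { this.dwellMs = 0; this.stepsFired = 0; return; }
    this.dwellMs += deltaMs;
    // Time at which the next step becomes due.
    const due = this.stepsFired === 0
      ? this.firstDelayMs
      : this.firstDelayMs + this.stepsFired * this.repeatIntervalMs;
    if (this.dwellMs >= due) {
      this.onStep();
      this.stepsFired++;
    }
  }
}
```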
The information content related to order confirmation may also include a fifth operation control for performing an either/or selection operation. The fifth operation control includes a first interaction area range and a second interaction area range, the two ranges are visually connected, and by default the first interaction area range is in the selected state;
the apparatus further includes:
an animation providing unit, configured to, when the sight focus position enters the second interaction area range, provide a progress-state change animation in which the selected state gradually transfers from the first interaction area range to the second interaction area range, the second interaction area range becoming fully selected when the preset time threshold is reached.
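The gradual transfer of the selected state could be modelled as a 0-to-1 value that drives the animation, as in this hedged sketch (class name and state encoding are assumptions):

```typescript
// Sketch of the either/or control: while the gaze dwells on the second option,
// the "selected" state visibly transfers from option A toward option B; at the
// threshold, B becomes fully selected.
class EitherOrControl {
  transfer = 0;                  // 0 = option A selected, 1 = option B selected
  selected: 'A' | 'B' = 'A';

  constructor(private thresholdMs: number) {}

  update(focusOnB: boolean, deltaMs: number): void {
    if (focusOnB && this.selected === 'A') {
      this.transfer = Math.min(1, this.transfer + deltaMs / this.thresholdMs);
      if (this.transfer >= 1) this.selected = 'B';
    } else if (!focusOnB && this.selected === 'A') {
      this.transfer = 0;         // gaze left before completion: fall back to A
    }
  }
}
```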
In addition, the apparatus may further include:
a sixth operation control providing unit, configured to provide a sixth operation control for moving within the shop object internal space environment;
an updating unit, configured to, when the sight focus position enters the interaction area range of the sixth operation control, update the interactable data objects shown in the shop object internal space environment according to a preset travel track.
The sixth operation control includes two operation controls with opposite directions, used respectively for forward and backward movement in the shop object internal space environment; only one of the operation controls is shown within the user's field of view, and the other is shown when the field of view switches to a scene in which backward movement is possible.
Where the shop object internal space environment is a spherical environment, the sixth operation control is displayed in the southern hemisphere of the spherical environment, at a position forming a first preset angle with the equatorial plane.
Where the shop object internal space environment is a spherical environment, the apparatus further includes:
a main menu providing unit, configured to provide a main menu when the sight focus position enters the southern hemisphere of the spherical environment and the angle with the equatorial plane exceeds a second preset angle.
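The angle test behind this main-menu behaviour could look like the following sketch, which measures how far the gaze direction points below the equatorial plane; the 45° default is purely illustrative.

```typescript
// Sketch of the angle test that could invoke the main menu: if the gaze
// direction points into the lower (southern) hemisphere of the spherical
// environment by more than a preset angle below the equatorial plane,
// the menu is shown.
type Vec3 = { x: number; y: number; z: number };

function gazePitchBelowEquatorDeg(gazeDir: Vec3): number {
  const len = Math.hypot(gazeDir.x, gazeDir.y, gazeDir.z);
  // Angle below the horizontal (equatorial) plane; positive means looking down.
  return (Math.asin(-gazeDir.y / len) * 180) / Math.PI;
}

function shouldShowMainMenu(gazeDir: Vec3, secondPresetAngleDeg = 45): boolean {
  return gazePitchBelowEquatorDeg(gazeDir) > secondPresetAngleDeg;
}
```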
The shop object internal space environment may be generated from a video recorded in advance in the corresponding offline physical shop.
When the virtual reality shop object internal space environment is provided, an operation layer is provided above the environment layer, and a screen layer is provided above the operation layer. The operation layer is used to mark the interaction area range information, and the operation layer and the environment layer are relatively static; a preset position of the screen layer is provided with an aiming point, the aiming point is fixed relative to the screen layer, and the screen layer moves along with the screen of the virtual reality device;
the first sight focus determination unit may include:
a displacement determination subunit, configured to determine the displacement of the aiming point relative to the operation layer;
a position determination subunit, configured to determine the user's sight direction according to the displacement, map the user's sight direction to the corresponding position on the operation layer, and determine that position as the user's sight focus position.
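A possible way to turn the aiming point's displacement into a sight focus position on the operation layer is sketched below; the cylindrical yaw/pitch mapping is one assumed choice among many and is not prescribed by the description.

```typescript
// Sketch of mapping the fixed aiming point on the screen layer to a focus
// position on the (static) operation layer using the headset's yaw and pitch.
interface HeadPose { yawRad: number; pitchRad: number; }

function sightFocusOnOperationLayer(
  pose: HeadPose,
  layerWidth: number,    // operation-layer extent covered by a full 360° turn
  layerHeight: number    // extent covered by +/-90° of pitch
): { x: number; y: number } {
  const x = (pose.yawRad / (2 * Math.PI)) * layerWidth;
  const y = (pose.pitchRad / Math.PI) * layerHeight;
  // The result is compared against the interaction areas marked on the operation layer.
  return { x, y };
}
```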
In addition, the apparatus may further include:
a guidance information providing unit, configured to provide follow-along task guidance information in the environment layer until the task is completed, the task guidance information and the environment layer being relatively static.
Corresponding to embodiment two, an embodiment of the present application further provides a data object interaction apparatus in a virtual reality space environment. Referring to Figure 29, the apparatus is applied to a server and includes:
a first virtual reality environment data storage unit 2901, configured to store virtual reality shop object internal space environment data, the shop object internal space environment data containing at least one interactable data object, the interactable data object corresponding to an interaction area range and being associated with preset information content;
a first virtual reality environment data providing unit 2902, configured to provide the virtual reality shop object internal space environment data to a client, so that the client provides the virtual reality shop object internal space environment, determines the user's sight focus position, and, when the sight focus position enters the interaction area range of a target data object and the dwell time reaches the preset time threshold, provides the information content associated with the target data object.
Corresponding to embodiment three, an embodiment of the present application further provides a data object interaction apparatus in an augmented reality environment. Referring to Figure 30, the apparatus is applied to a client and includes:
a first model obtaining unit 3001, configured to obtain a three-dimensional space model of a physical shop's internal space environment, the physical shop containing a plurality of goods items;
a first interaction area information providing unit 3002, configured to, after the three-dimensional space model has been spatially matched with the physical shop by a preset augmented reality (AR) device, provide interaction area range information corresponding to the goods items within the user's field of view of the AR device;
a first sight focus determination unit 3003, configured to determine the user's sight focus;
a first interaction response providing unit 3004, configured to, when the sight focus position enters the interaction area range of a target goods item and the dwell time reaches the preset time threshold, provide, within the user's field of view of the AR device, interaction response information of the target data object associated with that interaction area range.
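Once the model and the physical shop are aligned, deciding which goods item's interaction region the gaze falls into can be reduced to a ray test, as in this illustrative sketch (the data layout and the sphere-shaped regions are assumptions, not the patent's method):

```typescript
// Sketch of a gaze-ray hit test against goods-item interaction regions in the
// aligned 3D model; regions are approximated as spheres for illustration.
interface GoodsRegion { id: string; center: [number, number, number]; radius: number; }

function hitGoodsRegion(
  regions: GoodsRegion[],
  gazeOrigin: [number, number, number],
  gazeDir: [number, number, number]       // unit vector from the AR device pose
): GoodsRegion | null {
  for (const r of regions) {
    const vx = r.center[0] - gazeOrigin[0];
    const vy = r.center[1] - gazeOrigin[1];
    const vz = r.center[2] - gazeOrigin[2];
    // Projection of the centre offset onto the gaze ray.
    const t = vx * gazeDir[0] + vy * gazeDir[1] + vz * gazeDir[2];
    if (t <= 0) continue;                 // region is behind the user
    const cx = vx - t * gazeDir[0];
    const cy = vy - t * gazeDir[1];
    const cz = vz - t * gazeDir[2];
    const dist2 = cx * cx + cy * cy + cz * cz;   // squared ray-to-centre distance
    if (dist2 <= r.radius * r.radius) return r;
  }
  return null;
}
```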
Corresponding to embodiment four, an embodiment of the present application further provides an interaction apparatus in an augmented reality environment. Referring to Figure 31, the apparatus is applied to a server and includes:
a first model storage unit 3101, configured to store a three-dimensional space model of a physical shop's internal space environment, the physical shop containing a plurality of goods items, the corresponding interaction area range information, and the data object interaction response information associated with each interaction area range;
a first model providing unit 3102, configured to provide the three-dimensional space model to a client, so that, after the client has spatially matched the three-dimensional space model with the physical shop by a preset augmented reality (AR) device, the client provides the interaction area range information corresponding to the goods items within the user's field of view of the AR device, determines the user's sight focus, and, when the sight focus position enters the interaction area range of a target goods item and the dwell time reaches the preset time threshold, provides, within the user's field of view of the AR device, the interaction response information of the target data object associated with that interaction area range.
Corresponding to embodiment five, an embodiment of the present application further provides an interaction apparatus in a virtual reality space environment. Referring to Figure 32, the apparatus is applied to a client and includes:
a second virtual reality environment providing unit 3201, configured to provide a virtual reality space environment, the virtual reality space environment containing at least one interactable object;
a second sight focus determination unit 3202, configured to determine the user's sight focus position;
a second content providing unit 3203, configured to provide the response content associated with a target interactable object when the sight focus position enters the interaction area range of the target interactable object and the dwell time reaches the preset time threshold.
When the virtual reality space environment is provided, an operation layer is provided above the environment layer, and a screen layer is provided above the operation layer, where the operation layer is used to mark the interaction area range information, the operation layer and the environment layer are relatively static, a preset position of the screen layer is provided with an aiming point, the aiming point is fixed relative to the screen layer, and the screen layer moves along with the screen of the virtual reality device;
the second sight focus determination unit includes:
a displacement determination subunit, configured to determine the displacement of the aiming point relative to the operation layer;
a position determination subunit, configured to determine the user's sight direction according to the displacement, map the user's sight direction to the corresponding position on the operation layer, and determine that position as the user's sight focus position.
In a specific implementation, the apparatus may further include:
a guidance information providing unit, configured to provide follow-along task guidance information in the environment layer until the task is completed, the task guidance information and the environment layer being relatively static.
Where the interactable object includes a selectable shop object, the second content providing unit is specifically configured to:
if the sight focus position enters the interaction area range of a target shop object and the dwell time reaches the preset time threshold, provide the target shop object's internal space environment, the internal space environment containing at least one interactable data object.
Alternatively, where the interactable object includes a selectable data object, the second content providing unit is specifically configured to:
if the sight focus position enters the interaction area range of a target data object and the dwell time reaches the preset time threshold, provide the detailed information of the target data object.
Alternatively, where the interactable object includes an operation control for performing a preset type of operation, the second content providing unit is specifically configured to:
if the sight focus position enters the interaction area range of a target operation control and the dwell time reaches the preset time threshold, provide the response information associated with the target operation control.
The operation controls may include an operation control for performing a main-flow operation; such an operation control is a multi-layer structure arranged along the depth direction, and when the user's sight focus enters the region where the operation control is located, the multi-layer structure moves along the depth direction toward the viewpoint, so that the operation control appears visually magnified.
The operation controls may also include an operation control with a demand for multiple consecutive repeated operations, and the apparatus further includes:
a continuous response unit, configured to, if the sight focus position enters the operating area corresponding to such an operation control and does not leave after the dwell time reaches the preset time threshold, continue providing the response information associated with the operation control, the time interval between successive responses being shorter than the preset time threshold.
The operation controls may further include an operation control for performing an either/or selection operation, which includes a first interaction area range and a second interaction area range that are visually connected, the first interaction area range being in the selected state by default;
the second content providing unit is specifically configured to:
when the sight focus position enters the second interaction area range, provide a progress-state change animation in which the selected state gradually transfers from the first interaction area range to the second interaction area range, the second interaction area range becoming fully selected when the preset time threshold is reached.
The operation controls may further include an operation control for moving within the virtual reality space environment;
the second content providing unit is specifically configured to:
when the sight focus position enters the interaction area range of this operation control, update the interactable objects shown in the virtual reality space environment according to a preset travel track.
The operation control for moving within the virtual reality space environment includes two operation controls with opposite directions, used respectively for forward and backward movement in the virtual reality space environment, where only one of them is shown within the user's field of view and the other is shown when a turn-around operation is detected.
In a specific implementation, where the virtual reality space environment is a spherical environment, the operation control for moving within the virtual reality space environment is displayed in the southern hemisphere of the spherical environment, at a position forming a first preset angle with the equatorial plane.
When the shop object internal space environment is a spherical environment, the apparatus further includes:
a menu invoking unit, configured to invoke a preset function menu when the sight focus position enters the southern hemisphere of the spherical environment and the angle with the equatorial plane exceeds a second preset angle.
Corresponding to embodiment six, an embodiment of the present application further provides an interaction apparatus in a virtual reality space environment. Referring to Figure 33, the apparatus is applied to a server and includes:
a second virtual reality environment data storage unit 3301, configured to store virtual reality space environment data, the virtual reality space environment data containing at least one interactable object, the interactable object corresponding to an interaction area range and being associated with preset response content;
a second virtual reality environment data providing unit 3302, configured to provide the virtual reality space environment data to a client, so that the client provides the virtual reality space environment, determines the user's sight focus position, and, when the sight focus position enters the interaction area range of a target interactable object and the dwell time reaches the preset time threshold, provides the response content associated with the target interactable object.
Corresponding to embodiment seven, an embodiment of the present application further provides a sight focus determination apparatus in a virtual reality space environment. Referring to Figure 34, the apparatus is applied to a client and includes:
a third virtual reality environment providing unit 3401, configured to provide a virtual reality space environment, the virtual reality space environment including an environment layer, an operation layer and a screen layer, where the environment layer contains at least one interactable object, the operation layer is located above the environment layer, is relatively static with respect to the environment layer, and is used to mark the interaction area ranges of the interactable objects, the screen layer is located above the operation layer, a preset position of the screen layer is provided with an aiming point, the aiming point is fixed relative to the screen layer, and the screen layer moves along with the screen of the virtual reality device;
a displacement determination unit 3402, configured to determine the displacement of the aiming point relative to the operation layer when the virtual reality device moves with the user's head;
a third sight focus determination unit 3403, configured to determine the user's sight direction according to the displacement, map the user's sight direction to the corresponding position on the operation layer, and determine that position as the user's sight focus position;
an interactable object determination unit 3404, configured to determine, according to the user's sight focus position, the interactable object that the user is paying attention to, so that the associated response content can be provided.
The apparatus may further include:
a task guidance information providing unit, configured to provide follow-along task guidance information in the environment layer until the task is completed, the task guidance information and the environment layer being relatively static.
Corresponding to embodiment eight, an embodiment of the present application further provides a sight focus determination apparatus in a virtual reality space environment. Referring to Figure 35, the apparatus is applied to a server and includes:
a third virtual reality environment data storage unit 3501, configured to store virtual reality space environment data, the virtual reality space environment data including an environment layer, an operation layer and a screen layer, where the environment layer contains at least one interactable object, the operation layer is located above the environment layer, is relatively static with respect to the environment layer, and is used to mark the interaction area ranges of the interactable objects, the screen layer is located above the operation layer, a preset position of the screen layer is provided with an aiming point, the aiming point is fixed relative to the screen layer, and the screen layer moves along with the screen of the virtual reality device;
a third virtual reality environment data providing unit 3502, configured to provide the virtual reality space environment data to a client, so that, when the virtual reality device moves with the user's head, the client determines the displacement of the aiming point relative to the operation layer, determines the user's sight direction according to the displacement, maps the user's sight direction to the corresponding position on the operation layer, determines that position as the user's sight focus position, and determines, according to the user's sight focus position, the interactable object that the user is paying attention to, so that the associated response content can be provided.
Corresponding to embodiment nine, an embodiment of the present application further provides a function menu invoking apparatus in a virtual reality space environment. Referring to Figure 36, the apparatus is applied to a client and includes:
a fourth virtual reality environment providing unit 3601, configured to provide a virtual reality space environment, the virtual reality space environment being a spherical environment containing at least one interactable object;
a fourth sight focus determination unit 3602, configured to determine the user's sight focus position during interaction with the interactable objects;
a menu invoking unit 3603, configured to invoke a preset function menu when the sight focus position enters the southern hemisphere of the spherical environment and the angle with the equatorial plane exceeds a preset angle.
The menu invoking unit 3603 may specifically be configured to:
provide the function menu in the southern hemisphere of the spherical environment, at the position whose angle with the equatorial plane equals the preset angle.
After the preset function menu is invoked, the apparatus may further include:
a position holding unit, configured to keep the position of the function menu unchanged if the user's sight focus position enters the region where the function menu is located and then continues to move southward;
an execution unit, configured to perform the corresponding function if the dwell time of the user's sight focus within the region of the function menu reaches a preset threshold;
a menu hiding unit, configured to move the function menu out of the user's field of view if the user's sight focus position moves northward.
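The three behaviours of the invoked menu (hold position, execute on dwell, hide when the gaze moves back up) could be combined in a small state holder such as the following sketch; names and thresholds are assumptions.

```typescript
// Sketch of the invoked-menu behaviour: invoke below a preset angle, keep the
// menu in place while the gaze stays on it, execute after the dwell threshold,
// and hide the menu when the gaze moves back above the equatorial plane.
class HemisphereMenu {
  visible = false;
  private dwellMs = 0;

  constructor(private thresholdMs: number, private execute: () => void) {}

  update(pitchBelowEquatorDeg: number, focusOnMenu: boolean,
         invokeAngleDeg: number, deltaMs: number): void {
    if (!this.visible) {
      if (pitchBelowEquatorDeg > invokeAngleDeg) this.visible = true;   // invoke the menu
      return;
    }
    if (focusOnMenu) {
      // Keep the menu where it is, even if the user keeps looking further down.
      this.dwellMs += deltaMs;
      if (this.dwellMs >= this.thresholdMs) { this.execute(); this.dwellMs = 0; }
    } else if (pitchBelowEquatorDeg < 0) {
      // Gaze moved back above the equatorial plane: move the menu out of view.
      this.visible = false;
      this.dwellMs = 0;
    }
  }
}
```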
Corresponding to embodiment ten, an embodiment of the present application further provides a function menu invoking apparatus in a virtual reality space environment. Referring to Figure 37, the apparatus is applied to a server and includes:
a fourth virtual reality environment data storage unit 3701, configured to store virtual reality space environment data, the virtual reality space environment being a spherical environment containing at least one interactable object;
a fourth virtual reality environment data providing unit 3702, configured to provide the virtual reality space environment data to a client, so that the client determines the user's sight focus position during interaction with the interactable objects and invokes a preset function menu when the sight focus position enters the southern hemisphere of the spherical environment and the angle with the equatorial plane exceeds a preset angle.
Corresponding to embodiment eleven, an embodiment of the present application further provides an operation control processing apparatus in a virtual reality space environment. Referring to Figure 38, the apparatus is applied to a client and includes:
a fifth virtual reality environment providing unit 3801, configured to provide a virtual reality space environment containing at least one interactable object, the interactable object including an operation control for performing an either/or selection operation, the operation control including a first interaction area range and a second interaction area range, the option corresponding to the first interaction area range being in the selected state by default;
a fifth sight focus determination unit 3802, configured to determine the user's sight focus position during interaction with the interactable objects;
a selected state determination unit 3803, configured to determine the option corresponding to the second interaction area range as the selected state when the sight focus position enters the second interaction area range and dwells for the preset time threshold.
Where the first interaction area range and the second interaction area range are visually connected, the apparatus further includes:
an animation providing unit, configured to, when the sight focus position enters the second interaction area range, provide a progress-state change animation in which the selected state gradually transfers from the first interaction area range to the second interaction area range, the progress state of the second interaction area range changing to fully selected when the preset time threshold is reached.
Corresponding to embodiment twelve, an embodiment of the present application further provides an operation control processing apparatus in a virtual reality space environment. Referring to Figure 39, the apparatus is applied to a server and includes:
a fifth virtual reality environment data storage unit 3901, configured to store virtual reality space environment data containing at least one interactable object, the interactable object including an operation control for performing an either/or selection operation, the operation control including a first interaction area range and a second interaction area range, the option corresponding to the first interaction area range being in the selected state by default;
a fifth virtual reality environment data providing unit 3902, configured to provide the virtual reality space environment data to a client, so that the client determines the user's sight focus position during interaction with the interactable objects and, when the sight focus position enters the second interaction area range and dwells for the preset time threshold, determines the option corresponding to the second interaction area range as the selected state.
Corresponding to embodiment thirteen, an embodiment of the present application further provides an operation control processing apparatus in a virtual reality space environment. Referring to Figure 40, the apparatus is applied to a client and includes:
a sixth virtual reality environment providing unit 4001, configured to provide a virtual reality space environment containing at least one interactable object, the interactable object including an operation control with a demand for multiple consecutive repeated operations;
a sixth sight focus determination unit 4002, configured to determine the user's sight focus position during interaction with the interactable objects;
a continuous response unit 4003, configured to continue providing the response information associated with the operation control if the sight focus position enters the operating area corresponding to the operation control and does not leave after the dwell time reaches the preset time threshold.
Corresponding to embodiment fourteen, an embodiment of the present application further provides an operation control processing apparatus in a virtual reality space environment. Referring to Figure 41, the apparatus is applied to a server and includes:
a sixth virtual reality environment data storage unit 4101, configured to store virtual reality space environment data containing at least one interactable object, the interactable object including an operation control with a demand for multiple consecutive repeated operations;
a sixth virtual reality environment data providing unit 4102, configured to provide the virtual reality space environment data to a client, so that the client determines the user's sight focus position during interaction with the interactable objects and, if the sight focus position enters the operating area corresponding to the operation control and does not leave after the dwell time reaches the preset time threshold, continues providing the response information associated with the operation control.
Corresponding to embodiment fifteen, an embodiment of the present application further provides an operation control processing apparatus in a virtual reality space environment. Referring to Figure 42, the apparatus is applied to a client and includes:
a seventh virtual reality environment providing unit 4201, configured to provide a virtual reality space environment containing at least one interactable object, the interactable object including an operation control that is a multi-layer user interface (UI) structure arranged along the depth direction;
a seventh sight focus determination unit 4202, configured to determine the user's sight focus position during interaction with the interactable objects;
a UI moving unit 4203, configured to, if the sight focus position enters the operating area corresponding to the operation control, move the multi-layer UI structure of the operation control along the depth direction toward the viewpoint, so that the operation control appears visually magnified.
Corresponding to embodiment sixteen, an embodiment of the present application further provides an operation control processing apparatus in a virtual reality space environment. Referring to Figure 43, the apparatus is applied to a server and includes:
a seventh virtual reality environment data storage unit 4301, configured to store virtual reality space environment data containing at least one interactable object, the interactable object including an operation control that is a multi-layer user interface (UI) structure arranged along the depth direction;
a seventh virtual reality environment data providing unit 4302, configured to provide the virtual reality space environment data to a client, so that the client determines the user's sight focus position during interaction with the interactable objects and, if the sight focus position enters the operating area corresponding to the operation control, moves the multi-layer UI structure of the operation control along the depth direction toward the viewpoint, so that the operation control appears visually magnified.
Corresponding to embodiment seventeen, an embodiment of the present application further provides an operating area processing apparatus in a virtual reality space environment. Referring to Figure 44, the apparatus is applied to a client and includes:
an eighth virtual reality environment providing unit 4401, configured to provide a virtual reality space environment containing at least one interactable object, the interactable object corresponding to operating area range information that includes a first area range and a second area range, where the first area range is smaller than the second area range, and in the initial state the first area range information is provided in the virtual reality space environment;
an eighth sight focus determination unit 4402, configured to determine the user's sight focus position during interaction with the interactable objects;
an interaction area range processing unit 4403, configured to, if the sight focus position enters the first area range corresponding to a target interactable object, provide the second area range corresponding to the target interactable object in the virtual reality space environment.
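A hedged sketch of this two-stage operating area: the smaller first region is active by default and is swapped for the larger second region once the focus enters it, which makes it harder for the gaze to drift out accidentally (rectangular regions are assumed for simplicity).

```typescript
// Sketch of the two-stage interaction area: small first region by default,
// expanded second region while the sight focus is inside.
interface Rect { x: number; y: number; w: number; h: number; }

class ExpandableRegion {
  active: Rect;

  constructor(private first: Rect, private second: Rect) {
    this.active = first;                 // initial state: the smaller first region
  }

  update(focusX: number, focusY: number): void {
    const r = this.active;
    const inside = focusX >= r.x && focusX <= r.x + r.w &&
                   focusY >= r.y && focusY <= r.y + r.h;
    this.active = inside ? this.second : this.first;   // expand on entry, shrink on exit
  }
}
```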
In addition, when the multi-layer UI structure mentioned above moves, the displacement of the topmost UI layer is the largest, and the displacement of each remaining layer decreases layer by layer.
Corresponding to embodiment eighteen, an embodiment of the present application further provides an operating area processing apparatus in a virtual reality space environment. Referring to Figure 45, the apparatus is applied to a server and includes:
an eighth virtual reality environment data storage unit 4501, configured to store virtual reality space environment data containing at least one interactable object, the interactable object corresponding to operating area range information that includes a first area range and a second area range, where the first area range is smaller than the second area range, and in the initial state the first area range information is provided in the virtual reality space environment;
an eighth virtual reality environment data providing unit 4502, configured to provide the virtual reality space environment data to a client, so that the client determines the user's sight focus position during interaction with the interactable objects and, if the sight focus position enters the first area range corresponding to a target interactable object, provides the second area range corresponding to the target interactable object in the virtual reality space environment.
Corresponding to embodiment nineteen, an embodiment of the present application further provides an interaction apparatus in an augmented reality environment. Referring to Figure 46, the apparatus is applied to a client and includes:
a second model obtaining unit 4601, configured to obtain a three-dimensional space model of a physical storage space's internal space environment, the physical storage space containing a plurality of storage objects;
a second interaction area information providing unit 4602, configured to, after the three-dimensional space model has been spatially matched with the physical storage space by a preset augmented reality (AR) device, provide interaction area range information corresponding to the storage objects within the user's field of view of the AR device;
a ninth sight focus determination unit 4603, configured to determine the user's sight focus;
a second interaction response providing unit 4604, configured to, when the sight focus position enters the interaction area range of a target storage object and the dwell time reaches the preset time threshold, provide, within the user's field of view of the AR device, the interaction response information of the target interactable object associated with that interaction area range.
Corresponding to embodiment twenty, an embodiment of the present application further provides an interaction apparatus in an augmented reality environment. Referring to Figure 47, the apparatus is applied to a server and includes:
a second model storage unit 4701, configured to store a three-dimensional space model of a physical storage space's internal space environment, the physical storage space containing a plurality of storage objects, the corresponding interaction area range information, and the interaction response information of the interactable object associated with each interaction area range;
a second model providing unit 4702, configured to provide the three-dimensional space model to a client, so that, after the client has spatially matched the three-dimensional space model with the physical storage space by a preset augmented reality (AR) device, the client provides the interaction area range information corresponding to the storage objects within the user's field of view of the AR device, determines the user's sight focus, and, when the sight focus position enters the interaction area range of a target storage object and the dwell time reaches the preset time threshold, provides, within the user's field of view of the AR device, the interaction response information of the target interactable object associated with that interaction area range.
As can be seen from the description of the above embodiments, those skilled in the art will clearly understand that the present application can be implemented by software plus a necessary general-purpose hardware platform. Based on such an understanding, the essence of the technical solution of the present application, or the part that contributes to the prior art, can be embodied in the form of a software product. The computer software product can be stored in a storage medium such as a ROM/RAM, a magnetic disk or an optical disc, and includes a number of instructions that cause a computer device (which may be a personal computer, a server, a network device or the like) to perform the methods described in the embodiments of the present application or in parts of the embodiments.
The embodiments in this specification are described in a progressive manner; identical or similar parts of the embodiments may be referred to each other, and each embodiment focuses on its differences from the other embodiments. In particular, since the system and apparatus embodiments are substantially similar to the method embodiments, their description is relatively brief, and reference may be made to the corresponding description of the method embodiments. The system and apparatus embodiments described above are merely illustrative: the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment. Those of ordinary skill in the art can understand and implement this without creative effort.
The data object interaction method and apparatus in a virtual reality space environment provided by the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method of the present application and its core idea. Meanwhile, those of ordinary skill in the art may make changes to the specific implementations and the application scope according to the idea of the present application. In conclusion, the content of this specification should not be construed as limiting the present application.
Claims (38)
- 1. A data object interaction method in a virtual reality space environment, characterized by including: a client providing a virtual reality shop object internal space environment, the shop object internal space environment containing at least one interactable data object; determining a user's sight focus position; and, when the sight focus position enters the interaction area range of a target data object and the dwell time reaches a preset time threshold, providing the information content associated with the target data object.
- 2. The method according to claim 1, characterized in that, before the virtual reality shop object internal space environment is provided, the method further includes: providing a space environment for selecting a shop object; determining the user's sight focus position; and, when the sight focus position enters the interaction area range of a target shop object and the dwell time reaches the preset time threshold, determining the target shop object as the selected shop object.
- 3. The method according to claim 2, characterized in that, in the space environment for selecting a shop object, a dynamic picture related to the corresponding shop object is displayed within the interaction area range of the shop object.
- 4. The method according to claim 2, characterized by further including: when the sight focus position enters the interaction area range of a target shop object, starting to play a preset progress-state change animation within the interaction area range, the progress state changing to a completed state when the preset time threshold is reached.
- 5. The method according to claim 2, characterized in that the shop object corresponds to an offline physical shop, and after the selected shop object is determined, the method further includes: switching from the space environment for selecting a shop object to a transition video associated with the selected shop object, the transition video showing the scenery that could be seen on the way to the offline physical shop corresponding to the selected shop object.
- 6. The method according to claim 1, characterized in that the interactable data objects shown in the shop object internal space environment correspond to mark points, and providing the information content associated with the target data object when the sight focus position enters the interaction area range of the target data object and the dwell time reaches the preset time threshold includes: when the sight focus position is aligned with the mark point of the target data object and the dwell time reaches the preset time threshold, providing the information content associated with the target data object.
- 7. The method according to claim 6, characterized by further including: when the sight focus position is aligned with the mark point of the target data object, enlarging the mark point and providing a progress-state change animation within the region of the enlarged mark point, the progress state changing to a completed state when the preset time threshold is reached.
- 8. The method according to claim 1, characterized in that providing the information content associated with the target data object includes: providing the detailed information of the target data object.
- 9. The method according to claim 8, characterized in that the detailed information of the target data object includes stereoscopic image information of the target data object and textual description information.
- 10. The method according to claim 9, characterized in that providing the detailed information of the target data object includes: displaying the textual description information around the stereoscopic image, with the stereoscopic image information at the center.
- 11. The method according to claim 9, characterized by further including: providing a first operation control for rotating the stereoscopic image; and rotating the stereoscopic image when the user's sight focus enters the region where the first operation control is located.
- 12. The method according to claim 11, characterized in that the first operation control includes multiple operation controls for rotation in different directions.
- 13. The method according to claim 11, characterized in that the stereoscopic image is generated from photos obtained by real-scene shooting of the corresponding physical object.
- 14. The method according to claim 8, characterized in that providing the detailed information of the target data object includes: providing a first information panel, and providing the detailed information of the target data object in the first information panel.
- 15. The method according to claim 14, characterized by further including: closing the information panel when the user's sight focus position moves out of the information panel.
- 16. The method according to claim 8, characterized in that, when the detailed information of the target data object is provided, the method further includes: providing a second operation control for performing a purchase operation on the target data object; and, when the user's sight focus enters the region where the second operation control is located and the dwell time reaches the preset time threshold, providing information content related to order confirmation.
- 17. The method according to claim 16, characterized in that the second operation control is a multi-layer user interface (UI) structure arranged along the depth direction, and when the user's sight focus enters the region where the second operation control is located, the multi-layer UI structure moves along the depth direction toward the viewpoint, so that the second operation control appears visually magnified.
- 18. The method according to claim 17, characterized in that, when the multi-layer UI structure moves, the displacement of the topmost UI layer is the largest and the displacement of each remaining layer decreases layer by layer.
- 19. The method according to claim 16, characterized in that providing the information content related to order confirmation includes: providing a second information panel, and providing the information content related to order confirmation in the second information panel.
- 20. The method according to claim 19, characterized in that the information content related to order confirmation includes a third operation control for modifying order-related information; and the method further includes: when the user's sight focus enters the region where the third operation control is located and the dwell time reaches the preset time threshold, providing a third information panel, and providing, in the third information panel, content for modifying the order-related information.
- 21. The method according to claim 20, characterized by further including: when the third information panel is provided, moving the second information panel along the depth direction away from the viewpoint and displaying the third information panel on top; and, after the information modification is completed or the third information panel is closed, restoring the display state of the second information panel.
- 22. The method according to claim 19, characterized in that the information content related to order confirmation includes a fourth operation control for increasing/decreasing the quantity of the target data object, and the method further includes: when the sight focus position enters the fourth operation control and the dwell time reaches the preset time threshold, incrementing or decrementing the quantity of the target data object by one on the basis of its initial value.
- 23. The method according to claim 22, characterized by further including: if the sight focus position enters the fourth operation control and does not leave after the dwell time reaches the preset time threshold, continuing to increment or decrement the quantity of the target data object by one, the time interval between successive operations being shorter than the preset time threshold.
- 24. The method according to claim 19, characterized in that the information content related to order confirmation includes a fifth operation control for performing an either/or selection operation, the fifth operation control including a first interaction area range and a second interaction area range that are visually connected, the first interaction area range being in the selected state by default; and the method further includes: when the sight focus position enters the second interaction area range, providing a progress-state change animation in which the selected state gradually transfers from the first interaction area range to the second interaction area range, the second interaction area range becoming fully selected when the preset time threshold is reached.
- 25. The method according to claim 1, characterized by further including: providing a sixth operation control for moving within the shop object internal space environment; and, when the sight focus position enters the interaction area range of the sixth operation control, updating the interactable data objects shown in the shop object internal space environment according to a preset travel track.
- 26. The method according to claim 25, characterized in that the sixth operation control includes two operation controls with opposite directions, used respectively for forward and backward movement in the shop object internal space environment, only one of the operation controls is shown within the user's field of view, and the other operation control is shown when the field of view switches to a scene in which backward movement is possible.
- 27. The method according to claim 25, characterized in that the shop object internal space environment is a spherical environment, and the sixth operation control is displayed in the southern hemisphere of the spherical environment, at a position forming a first preset angle with the equatorial plane.
- 28. The method according to any one of claims 1 to 27, characterized in that the shop object internal space environment is a spherical environment, and the method further includes: providing a main menu when the sight focus position enters the southern hemisphere of the spherical environment and the angle with the equatorial plane exceeds a second preset angle.
- 29. The method according to any one of claims 1 to 27, characterized in that the shop object internal space environment is generated from a video recorded in advance in the corresponding offline physical shop.
- 30. The method according to any one of claims 1 to 27, characterized in that, when the virtual reality shop object internal space environment is provided, an operation layer is provided above the environment layer and a screen layer is provided above the operation layer, the operation layer is used to mark the interaction area range information, the operation layer and the environment layer are relatively static, a preset position of the screen layer is provided with an aiming point, the aiming point is fixed relative to the screen layer, and the screen layer moves along with the screen of the virtual reality device; and determining the user's sight focus position includes: determining the displacement of the aiming point relative to the operation layer; and determining the user's sight direction according to the displacement, mapping the user's sight direction to the corresponding position on the operation layer, and determining that position as the user's sight focus position.
- 31. The method according to claim 30, characterized by further including: providing follow-along task guidance information in the environment layer until the task is completed, the task guidance information and the environment layer being relatively static.
- A kind of 32. data object exchange method in virtual reality space environment, it is characterised in that including:Server preserves virtual reality shop object internal measurements data, in the shop object internal measurements data Including it is at least one can interaction data object, it is described can interaction data object be corresponding with interaction area scope, and be associated with preset The information content;The virtual reality shop object internal measurements data are supplied to client, so that the client provides virtually Real shop object internal measurements, and determine user's sight focal position, when the sight focal position enters number of targets In the range of the interaction area of object, and when residence time reaches preset time threshold value, there is provided closed with the target data objects The information content of connection.
- A kind of 33. data object exchange method in augmented reality environment, it is characterised in that including:Client obtains the three-dimensional space model of solid shop internal measurements;Include more than one piece goods inside the solid shop Product;After the three-dimensional space model and the solid shop are carried out spatial match by preset augmented reality AR equipment, There is provided the kinds of goods corresponding interaction area range information in user's scope of sight of the AR equipment;Determine user's sight focus;When the sight focal position enters in the range of the interaction area of target kinds of goods, and residence time reaches preset time threshold During value, provide in user's scope of sight of the AR equipment and interacted with the associated target data objects of interaction area scope Response message.
- A kind of 34. exchange method in augmented reality environment, it is characterised in that including:Server preserves the three-dimensional space model of solid shop internal measurements, includes more than one piece goods inside the solid shop Product, corresponding interaction area range information, and believe with the interaction response of each associated data object of interaction area scope Breath;The three-dimensional space model is supplied to client, so that the client is incited somebody to action by preset augmented reality AR equipment After the three-dimensional space model carries out spatial match with the solid shop, provided in user's scope of sight of the AR equipment The corresponding interaction area range information of the kinds of goods, and determine user's sight focus, when the sight focal position enters target In the range of the interaction area of kinds of goods, and when residence time reaches preset time threshold value, in user's ken model of the AR equipment Interior offer and the interaction response information of the associated target data objects of interaction area scope are provided.
- 35. A data object interaction device in a virtual reality space environment, characterized in that it is applied to a client and comprises: a first virtual reality environment providing unit, configured to provide a virtual reality shop object internal space environment, the shop object internal space environment including at least one interactable data object; a first sight focus determination unit, configured to determine the user's sight focal position; and a first content providing unit, configured to provide the information content associated with a target data object when the sight focal position enters the interaction area range of the target data object and the dwell time reaches a preset time threshold.
- 36. A data object interaction device in a virtual reality space environment, characterized in that it is applied to a server and comprises: a first virtual reality environment data storage unit, configured to store virtual reality shop object internal space environment data, the shop object internal space environment data including at least one interactable data object, wherein the interactable data object corresponds to an interaction area range and is associated with preset information content; and a first virtual reality environment data providing unit, configured to provide the virtual reality shop object internal space environment data to a client, so that the client provides the virtual reality shop object internal space environment, determines the user's sight focal position, and, when the sight focal position enters the interaction area range of a target data object and the dwell time reaches a preset time threshold, provides the information content associated with the target data object.
- 37. A data object interaction device in an augmented reality environment, characterized in that it is applied to a client and comprises: a first model obtaining unit, configured to obtain a three-dimensional space model of the internal space environment of a physical shop, the physical shop containing a plurality of goods items; a first interaction area information providing unit, configured to provide, after the three-dimensional space model is spatially matched with the physical shop through a preset augmented reality (AR) device, the interaction area range information corresponding to the goods items within the user's field of view of the AR device; a first sight focus determination unit, configured to determine the user's sight focus; and a first interaction response providing unit, configured to provide, within the user's field of view of the AR device, interaction response information of the target data object associated with the interaction area range when the sight focal position enters the interaction area range of a target goods item and the dwell time reaches a preset time threshold.
- 38. An interaction device in an augmented reality environment, characterized in that it is applied to a server and comprises: a first model storage unit, configured to store a three-dimensional space model of the internal space environment of a physical shop, the physical shop containing a plurality of goods items, the interaction area range information corresponding to the goods items, and interaction response information of the data object associated with each interaction area range; and a first model providing unit, configured to provide the three-dimensional space model to a client, so that the client, after spatially matching the three-dimensional space model with the physical shop through a preset augmented reality (AR) device, provides the interaction area range information corresponding to the goods items within the user's field of view of the AR device, determines the user's sight focus, and, when the sight focal position enters the interaction area range of a target goods item and the dwell time reaches a preset time threshold, provides, within the user's field of view of the AR device, the interaction response information of the target data object associated with the interaction area range.
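Claim 30 above determines the sight focal position purely from the displacement of a screen-fixed aiming point relative to the static operation layer. The Python sketch below shows one possible reading of that mapping; the flat-plane geometry, the `layer_distance` parameter, and the function name are assumptions made for the example, not details taken from the patent.

```python
import math

def sight_focal_position(yaw_deg: float, pitch_deg: float, layer_distance: float = 1.0):
    """Map the aiming point's angular displacement onto operation-layer coordinates.

    The screen layer follows the HMD, so head yaw/pitch IS the aiming point's
    displacement relative to the static operation layer. Sketch only: the
    operation layer is modeled as a flat plane placed `layer_distance` metres
    in front of the initial viewing direction; the claims do not fix the geometry.
    """
    x = layer_distance * math.tan(math.radians(yaw_deg))    # horizontal offset on the layer
    y = layer_distance * math.tan(math.radians(pitch_deg))  # vertical offset on the layer
    return x, y

# Example: the user turns the headset 10 degrees right and 5 degrees up; the
# aiming point, fixed at the screen centre, maps to this point on the operation layer.
print(sight_focal_position(10.0, 5.0))   # approximately (0.176, 0.087)
```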
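The dwell-time trigger that runs through claims 32 to 38 (the sight focal position enters an interaction area range, stays there for a preset time threshold, and only then is the associated content or interaction response provided) can likewise be sketched as follows. The class names, the rectangular area shape, and the 2-second threshold are illustrative assumptions; the claims do not prescribe any particular data structure.

```python
import time
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class InteractionArea:
    """Interaction area range marked for one interactable data object.
    Modelling it as a rectangle on the operation layer is an assumption of this sketch."""
    object_id: str
    x_min: float
    y_min: float
    x_max: float
    y_max: float
    info_content: str

    def contains(self, x: float, y: float) -> bool:
        return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

class GazeDwellTrigger:
    """Fires the associated information content once the sight focal position has
    stayed inside one object's interaction area for at least `dwell_threshold_s`."""

    def __init__(self, areas: List[InteractionArea], dwell_threshold_s: float = 2.0):
        self.areas = areas
        self.dwell_threshold_s = dwell_threshold_s   # the "preset time threshold"
        self._current_id: Optional[str] = None
        self._dwell_start: Optional[float] = None
        self._fired = False

    def update(self, x: float, y: float, now: Optional[float] = None) -> Optional[str]:
        """Feed one sight-focal-position sample (e.g. from the mapping sketched above)."""
        now = time.monotonic() if now is None else now
        hit = next((a for a in self.areas if a.contains(x, y)), None)

        if hit is None or hit.object_id != self._current_id:
            # Focus left the area or moved to another object: restart the dwell timer.
            self._current_id = hit.object_id if hit else None
            self._dwell_start = now if hit else None
            self._fired = False
            return None

        if not self._fired and now - self._dwell_start >= self.dwell_threshold_s:
            self._fired = True               # trigger only once per continuous dwell
            return hit.info_content          # provide the associated information content
        return None

# Usage: one target data object; two samples 2.5 s apart inside its interaction area.
trigger = GazeDwellTrigger(
    [InteractionArea("sku-001", 0.1, 0.0, 0.3, 0.2, "Sneaker detail panel + buy option")],
    dwell_threshold_s=2.0,
)
print(trigger.update(0.18, 0.09, now=0.0))   # None - dwell has only just started
print(trigger.update(0.17, 0.10, now=2.5))   # 'Sneaker detail panel + buy option'
```

Restarting the timer whenever the focal position leaves the target's area reflects one natural reading of the claimed dwell time as a continuous interval rather than cumulative attention.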
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610909519.0A CN107957775B (en) | 2016-10-18 | 2016-10-18 | Data object interaction method and device in virtual reality space environment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107957775A true CN107957775A (en) | 2018-04-24 |
CN107957775B CN107957775B (en) | 2021-09-21 |
Family ID=61954296
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610909519.0A Active CN107957775B (en) | 2016-10-18 | 2016-10-18 | Data object interaction method and device in virtual reality space environment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107957775B (en) |
- 2016-10-18 CN CN201610909519.0A patent/CN107957775B/en active Active
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101739633A (en) * | 2008-11-18 | 2010-06-16 | 上海旺城网络科技有限公司 | Method for realizing interactive three-dimensional virtual city e-commerce platform |
US20130293530A1 (en) * | 2012-05-04 | 2013-11-07 | Kathryn Stone Perez | Product augmentation and advertising in see through displays |
CN103049081A (en) * | 2012-12-05 | 2013-04-17 | 上海量明科技发展有限公司 | Method, client and system for visually triggering opening object |
CN103942696A (en) * | 2013-11-20 | 2014-07-23 | 中网一号电子商务有限公司 | 3D virtual shop and physical store correlation method |
CN103761667A (en) * | 2014-01-09 | 2014-04-30 | 贵州宝森科技有限公司 | Virtual reality e-commerce platform system and application method thereof |
CN105718046A (en) * | 2014-12-23 | 2016-06-29 | 联发科技股份有限公司 | Head-Mount Display for Eye Tracking based on Mobile Device |
CN104616190A (en) * | 2015-03-05 | 2015-05-13 | 广州新节奏智能科技有限公司 | Multi-terminal 3D somatosensory shopping method and system based on internet and mobile internet |
CN105913299A (en) * | 2015-06-15 | 2016-08-31 | 金荣德 | Travel destination one stop shopping system based on 3D panoramic image and control method thereof |
CN105955471A (en) * | 2016-04-26 | 2016-09-21 | 乐视控股(北京)有限公司 | Virtual reality interaction method and device |
Non-Patent Citations (2)
Title |
---|
徐婧澜: "Alibaba Rushes into the VR Market: Virtual Reality Shopping Is Just Around the Corner", 《中国传媒科技》 (China Media Technology) *
沈朝魁 et al.: "Research on the Application of Virtual Reality (VR) Technology in Online Shopping", 《科技视界》 (Science & Technology Vision) *
Cited By (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108628565A (en) * | 2018-05-07 | 2018-10-09 | 维沃移动通信有限公司 | Mobile terminal operation method and mobile terminal |
CN108563395A (en) * | 2018-05-07 | 2018-09-21 | 北京知道创宇信息技术有限公司 | 3D visual angle interaction method and device |
CN108628565B (en) * | 2018-05-07 | 2020-01-10 | 维沃移动通信有限公司 | Mobile terminal operation method and mobile terminal |
CN110658907A (en) * | 2018-06-28 | 2020-01-07 | 阿里健康信息技术有限公司 | Method and device for acquiring user behavior data |
WO2020014798A1 (en) * | 2018-07-18 | 2020-01-23 | 朱恩辛 | Augmented reality interactive transaction system and method therefor |
CN108922085A (en) * | 2018-07-18 | 2018-11-30 | 北京七鑫易维信息技术有限公司 | Monitoring method, device, monitoring equipment and storage medium |
CN108922085B (en) * | 2018-07-18 | 2020-12-18 | 北京七鑫易维信息技术有限公司 | Monitoring method, device, monitoring equipment and storage medium |
CN109726954A (en) * | 2018-12-11 | 2019-05-07 | 维沃移动通信有限公司 | Information processing method, device and mobile terminal |
CN109726954B (en) * | 2018-12-11 | 2021-01-08 | 维沃移动通信有限公司 | Information processing method and device and mobile terminal |
CN111414074A (en) * | 2019-01-08 | 2020-07-14 | 北京京东尚科信息技术有限公司 | Screen browsing data processing method, device, medium and electronic equipment |
WO2020143641A1 (en) * | 2019-01-08 | 2020-07-16 | 北京京东尚科信息技术有限公司 | Screen browsing data processing method and apparatus, medium and electronic device |
CN110070614A (en) * | 2019-04-30 | 2019-07-30 | 深圳微新创世科技有限公司 | Method for embedding animation in a 3D virtual scene |
CN111949113B (en) * | 2019-05-15 | 2024-10-29 | 阿里巴巴集团控股有限公司 | Image interaction method and device applied to Virtual Reality (VR) scene |
CN111949113A (en) * | 2019-05-15 | 2020-11-17 | 阿里巴巴集团控股有限公司 | Image interaction method and device applied to virtual reality VR scene |
CN111127627B (en) * | 2019-11-20 | 2020-10-27 | 贝壳找房(北京)科技有限公司 | Model display method and device in three-dimensional house model |
US10853992B1 (en) | 2019-11-20 | 2020-12-01 | Ke.Com (Beijing) Technology Co., Ltd. | Systems and methods for displaying a virtual reality model |
CN111127627A (en) * | 2019-11-20 | 2020-05-08 | 贝壳技术有限公司 | Model display method and device in three-dimensional house model |
CN111127029A (en) * | 2019-12-27 | 2020-05-08 | 上海诺亚投资管理有限公司 | VR video-based payment method and system |
CN111242682A (en) * | 2020-01-10 | 2020-06-05 | 腾讯科技(深圳)有限公司 | Article display method |
WO2021139353A1 (en) * | 2020-01-10 | 2021-07-15 | 腾讯科技(深圳)有限公司 | Item display method and apparatus, computer device, and storage medium |
US11954710B2 (en) | 2020-01-10 | 2024-04-09 | Tencent Technology (Shenzhen) Company Limited | Item display method and apparatus, computer device, and storage medium |
CN111242682B (en) * | 2020-01-10 | 2023-10-17 | 腾讯科技(深圳)有限公司 | Article display method |
CN111880662A (en) * | 2020-02-06 | 2020-11-03 | 北京师范大学 | Eye movement control system applied to interactive map |
CN111522442A (en) * | 2020-04-09 | 2020-08-11 | 中国电子科技集团公司第三十八研究所 | Interaction method and device for ARKit augmented reality environment on iOS device |
CN111932272B (en) * | 2020-08-14 | 2024-02-27 | 中国工商银行股份有限公司 | VR (virtual reality) -device-based payment method and device |
CN111932272A (en) * | 2020-08-14 | 2020-11-13 | 中国工商银行股份有限公司 | Payment method and device based on VR equipment |
CN113298598A (en) * | 2020-09-15 | 2021-08-24 | 阿里巴巴集团控股有限公司 | Method and device for providing shop object information and electronic equipment |
CN113534959A (en) * | 2021-07-27 | 2021-10-22 | 咪咕音乐有限公司 | Screen display method, screen display device, virtual reality equipment and program product |
CN113641246A (en) * | 2021-08-25 | 2021-11-12 | 兰州乐智教育科技有限责任公司 | Method and device for determining user concentration degree, VR equipment and storage medium |
WO2023165362A1 (en) * | 2022-03-04 | 2023-09-07 | 北京字跳网络技术有限公司 | Information display method and apparatus, and head-mounted display device and storage medium |
CN117111734A (en) * | 2023-07-04 | 2023-11-24 | 深圳云天励飞技术股份有限公司 | VR display method and device for road diseases, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN107957775B (en) | 2021-09-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107957774A (en) | Exchange method and device in virtual reality space environment | |
CN107957775A (en) | Data object exchange method and device in virtual reality space environment | |
Nayyar et al. | Virtual Reality (VR) & Augmented Reality (AR) technologies for tourism and hospitality industry | |
US9390563B2 (en) | Augmented reality device | |
CN105393284B (en) | Space engraving based on human body data | |
JP7133470B2 (en) | System and method for network augmented reality representation | |
CN103186922B (en) | Method and personal audiovisual (A/V) apparatus for representing a time and place of an earlier time period using an augmented reality display | |
KR102185804B1 (en) | Mixed reality filtering | |
EP3258671B1 (en) | System and method for augmented and virtual reality | |
CN105518574B (en) | Method and system for the delivering of mixed reality rating information | |
CN103493106B (en) | Method and apparatus for selectively overlaying a hand onto a virtual projection on a physical surface using skeletal tracking | |
US20100259610A1 (en) | Two-Dimensional Display Synced with Real World Object Movement | |
US20130178257A1 (en) | System and method for interacting with virtual objects in augmented realities | |
CN106683197A (en) | Building exhibition system and method fusing VR (virtual reality) and AR (augmented reality) technologies | |
CN103472909A (en) | Realistic occlusion for a head mounted augmented reality display | |
CN105212418A (en) | Augmented reality smart helmet based on infrared night-vision function | |
CN109696961A (en) | System, method and medium for realizing guided appreciation of historical relic machinery and equipment based on VR technology | |
CN103760972B (en) | Cross-platform augmented reality experience | |
CN106600333A (en) | Method and device for monitoring content exposure of virtual reality advertising slots | |
Adhikarla et al. | Freehand interaction with large-scale 3D map data | |
US10846901B2 (en) | Conversion of 2D diagrams to 3D rich immersive content | |
CN107945270A (en) | Three-dimensional digital sand table system | |
Kanade et al. | Mobile and location based service using augmented reality: a review | |
Zarzycki | Teaching and Designing for Augmented Reality | |
Predescu et al. | ARMAX: A Mobile geospatial augmented reality platform for serious gaming |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||