CN110009714A - Method and device for adjusting a virtual character's gaze in a smart device - Google Patents
Method and device for adjusting a virtual character's gaze in a smart device
- Publication number
- CN110009714A (application number CN201910164747.3A)
- Authority
- CN
- China
- Prior art keywords
- virtual character
- gaze
- target virtual character
- target object
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/20—3D [Three Dimensional] animation
- G06T13/40—3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The present invention provides a method and device for adjusting a virtual character's gaze. The method comprises: detecting whether one or more target objects are present within the visual range of a target virtual character; and making the gaze of the target virtual character follow the one or more target objects within that range. Because the character's gaze follows other target objects in the surrounding scene, the gaze behavior of the target virtual character in the rendered scene appears natural rather than wooden, improving the user experience.
Description
Technical field
The present disclosure relates to the field of electronic technology, and in particular to a method and device for adjusting a virtual character's gaze in a smart device.
Background technique
In typical online social or game applications on smart devices, when multiple virtual characters share the same scene, each character's gaze is either fixed or performs only canned motions, which looks lifeless. How to determine a virtual character's gaze from the current scene, so as to enhance the user's sense of presence, is therefore a problem worth studying.
Summary of the invention
The object of the present invention is to provide a method and device for adjusting a virtual character's gaze that can adapt the gaze to the character's current scene, making the rendered scene more lifelike and enhancing the user's sense of presence.
According to an embodiment of the first aspect of the invention, a method for adjusting a virtual character's gaze in a smart device is provided, wherein the smart device comprises a display device, and the method comprises:
a. detecting whether one or more target objects are present within the visual range of a target virtual character in the display scene presented on the display device;
b. when one or more target objects are present within the visual range of the target virtual character, adjusting the gaze of the target virtual character to follow the one or more target objects within the visual range.
According to a preferred embodiment of the first aspect of the invention, step b further comprises: when no target object is present within the visual range of the target virtual character, adjusting the gaze of the target virtual character to a straight-ahead state or moving it according to predetermined rules.
According to another preferred embodiment of the first aspect of the invention, step b comprises: when exactly one target object is detected within the visual range of the target virtual character, adjusting the gaze of the target virtual character to follow that detected target object.
According to another preferred embodiment of the first aspect of the invention, step b comprises: when multiple target objects are present within the visual range of the target virtual character, adjusting the gaze of the target virtual character to follow an interacting target object among them, where an interacting target object is a target object that interacts with the target virtual character or with another interacting target object.
According to another preferred embodiment of the first aspect of the invention, the step of adjusting the gaze of the target virtual character to follow an interacting target object among multiple target objects comprises: when multiple interacting target objects are present within the visual range of the target virtual character, adjusting the gaze of the target virtual character to follow the interacting target object nearest to it, or to move among the several interacting target objects that are equally nearest to it.
According to another preferred embodiment of the first aspect of the invention, the method further comprises:
performing UV texture coordinate mapping on the image of the eyes of the target virtual character; calculating moving UV coordinates from the current UV coordinate and the goal UV coordinate of the target virtual character's gaze using a preset interpolation algorithm, where the moving UV coordinates are the UV coordinates of the eye image during the movement; and
adjusting the gaze of the target virtual character according to the calculated moving UV coordinates.
According to an embodiment of the second aspect of the invention, a device for adjusting a virtual character's gaze in a smart device is provided, wherein the smart device comprises a display device, and the device comprises:
a detection module for detecting whether one or more target objects are present within the visual range of a target virtual character in the display scene presented on the display device; and
a following module for adjusting, when one or more target objects are present within the visual range of the target virtual character, the gaze of the target virtual character to follow the one or more target objects within the visual range.
According to a preferred embodiment of the second aspect of the invention, the following module further comprises: a first adjustment module for adjusting, when no target object is present within the visual range of the target virtual character, the gaze of the target virtual character to a straight-ahead state or moving it according to predetermined rules.
According to another preferred embodiment of the second aspect of the invention, the following module comprises: a second adjustment module for adjusting, when exactly one target object is detected within the visual range of the target virtual character, the gaze of the target virtual character to follow that detected target object.
According to another preferred embodiment of the second aspect of the invention, the following module comprises: a third adjustment module for adjusting, when multiple target objects are present within the visual range of the target virtual character, the gaze of the target virtual character to follow an interacting target object among them, where an interacting target object is a target object that interacts with the target virtual character or with another interacting target object.
According to another preferred embodiment of the second aspect of the invention, the third adjustment module comprises: an adjustment unit for adjusting, when multiple interacting target objects are present within the visual range of the target virtual character, the gaze of the target virtual character to follow the interacting target object nearest to it, or to move among the several interacting target objects that are equally nearest to it.
According to another preferred embodiment of the second aspect of the invention, the device for adjusting a virtual character's gaze further comprises:
a mapping module for performing UV texture coordinate mapping on the image of the eyes of the target virtual character, and for calculating moving UV coordinates from the current UV coordinate and the goal UV coordinate of the gaze using a preset interpolation algorithm, where the moving UV coordinates are the UV coordinates of the eye image during the movement; and
a movement adjustment module for adjusting the gaze of the target virtual character according to the calculated moving UV coordinates.
According to an embodiment of the third aspect of the present invention, a VR (Virtual Reality) device is provided, the VR device comprising the aforementioned device for adjusting a virtual character's gaze.
Compared with the prior art, the embodiments of the present invention have the following advantage: the provided method makes the gaze of the target virtual character follow other target objects within the visual range of the surrounding scene, so the gaze behavior of the target virtual character in the rendered scene appears natural rather than wooden, improving the user experience.
Brief description of the drawings
Other features, objects, and advantages of the present invention will become more apparent upon reading the following detailed description of non-limiting embodiments with reference to the accompanying drawings:
Fig. 1 shows a block diagram of an exemplary computer system/server suitable for implementing embodiments of the present invention;
Fig. 2 is a flow diagram of a method for adjusting a virtual character's gaze according to an embodiment of the present invention;
Fig. 3 is a flow diagram of a method for adjusting a virtual character's gaze according to one embodiment of the present invention;
Fig. 4 is a schematic diagram of a device for adjusting a virtual character's gaze according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of a device for adjusting a virtual character's gaze according to one embodiment of the present invention.
The same or similar reference numerals in the drawings denote the same or similar components.
Detailed description of the embodiments
It should be noted that, before the exemplary embodiments are discussed in greater detail, some of them are described as processes or methods depicted as flow charts. Although a flow chart describes operations as a sequential process, many of the operations may be performed in parallel, concurrently, or simultaneously. In addition, the order of the operations may be rearranged. A process may be terminated when its operations are completed, but may also have additional steps not included in the figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, and so on.
The methods discussed below (some of which are illustrated by flow charts) may be implemented by hardware, software, firmware, middleware, microcode, a hardware description language, or any combination thereof. When implemented in software, firmware, middleware, or microcode, the program code or code segments to perform the necessary tasks may be stored in a machine- or computer-readable medium (such as a storage medium). One or more processors may perform the necessary tasks.
The specific structural and functional details disclosed herein are merely representative and serve the purpose of describing exemplary embodiments of the present invention. The present invention may, however, be embodied in many alternative forms and should not be construed as limited to the embodiments set forth herein.
It should be understood that when a unit is referred to as being "connected" or "coupled" to another unit, it can be directly connected or coupled to the other unit, or intervening units may be present. In contrast, when a unit is referred to as being "directly connected" or "directly coupled" to another unit, no intervening units are present. Other words used to describe relationships between units should be interpreted in a like fashion (e.g. "between" versus "directly between", "adjacent" versus "directly adjacent", etc.).
It should be understood that, although the terms "first", "second", etc. may be used herein to describe various units, these units should not be limited by these terms. These terms are only used to distinguish one unit from another. For example, a first unit could be termed a second unit, and similarly a second unit could be termed a first unit, without departing from the scope of the exemplary embodiments. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the exemplary embodiments. As used herein, the singular forms "a" and "an" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should further be understood that the terms "comprises" and/or "comprising" specify the presence of stated features, integers, steps, operations, units, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, units, components, and/or combinations thereof.
It should also be noted that, in some alternative implementations, the functions/actions noted may occur out of the order indicated in the figures. For example, two figures shown in succession may in fact be executed substantially concurrently, or may sometimes be executed in the reverse order, depending upon the functions/actions involved.
The present invention is described in further detail below with reference to the accompanying drawings.
Fig. 1 shows a block diagram of an exemplary computer system/server suitable for implementing embodiments of the present invention. The computer system/server 12 shown in Fig. 1 is only an example and should not impose any limitation on the functionality or scope of use of embodiments of the present invention.
As shown in Fig. 1, the computer system/server 12 takes the form of a general-purpose computing device. The components of the computer system/server 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that connects the various system components (including the system memory 28 and the processing unit 16).
The bus 18 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
The computer system/server 12 typically includes a variety of computer-system-readable media. Such media may be any available media accessible by the computer system/server 12, including volatile and non-volatile media, and removable and non-removable media.
The memory 28 may include computer-system-readable media in the form of volatile memory, such as random access memory (RAM) 30 and/or cache memory 32. The computer system/server 12 may further include other removable/non-removable, volatile/non-volatile computer-system storage media. By way of example only, the storage system 34 may be used for reading from and writing to a non-removable, non-volatile magnetic medium (not shown in Fig. 1 and commonly referred to as a "hard disk drive"). Although not shown in Fig. 1, a magnetic disk drive for reading from and writing to a removable non-volatile magnetic disk (e.g. a "floppy disk"), and an optical disk drive for reading from and writing to a removable non-volatile optical disk (e.g. a CD-ROM, DVD-ROM, or other optical media), may also be provided. In such cases, each drive may be connected to the bus 18 by one or more data-media interfaces. The memory 28 may include at least one program product having a set (e.g. at least one) of program modules configured to carry out the functions of embodiments of the present invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in the memory 28. Such program modules 42 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may include an implementation of a network environment. The program modules 42 generally perform the functions and/or methods of the embodiments described in the present invention.
The computer system/server 12 may also communicate with one or more external devices 14 (such as a keyboard, a pointing device, a display 24, etc.), with one or more devices that enable a user to interact with the computer system/server 12, and/or with any device (such as a network card, a modem, etc.) that enables the computer system/server 12 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 22. Moreover, the computer system/server 12 may communicate with one or more networks (such as a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through a network adapter 20. As shown, the network adapter 20 communicates with the other modules of the computer system/server 12 via the bus 18. It should be understood that, although not shown in Fig. 1, other hardware and/or software modules may be used in conjunction with the computer system/server 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
The processing unit 16 runs the programs stored in the memory 28, thereby executing various functional applications and data processing.
For example, the memory 28 stores computer programs for executing the various functions and processing of the present invention; when the processing unit 16 executes the corresponding computer program, the processing of the present invention is carried out.
The specific functions/steps of the device/method for adjusting a virtual character's gaze according to the present invention are described in detail below.
Fig. 2 is a flow diagram of a method for adjusting a virtual character's gaze according to an embodiment of the present invention. The method may be implemented in a smart device. Smart devices include, but are not limited to, electronic devices such as computer devices and VR devices. A computer device is an electronic device that can perform predetermined processing, such as numerical calculation and/or logical calculation, by running preset programs or instructions; it may include a processor and a memory, with the processor executing instructions prestored in the memory to carry out the predetermined processing, or the predetermined processing may be carried out by hardware such as an ASIC, FPGA, or DSP, or by a combination of the two. Computer devices include, but are not limited to, servers, personal computers, notebook computers, tablet computers, smartphones, and the like. A server includes, but is not limited to, a single server, a group of servers, or a cloud-computing-based cloud composed of a large number of computers or servers, where cloud computing is a kind of distributed computing in which a group of loosely coupled computers forms one super virtual computer. VR devices include, but are not limited to, VR electronic devices such as VR headsets, VR glasses, all-in-one VR devices, VR game devices, VR terminal devices, and VR user equipment. The smart device includes a display device, which may be any of various displays or a VR- or AR-based display device. It should be noted that smart devices such as computer devices and VR devices are given only as examples; other existing or future smart devices capable of carrying out the method for adjusting a virtual character's gaze are likewise applicable to the present invention, should also fall within its scope of protection, and are incorporated herein by reference.
As shown in Fig. 2, the method for adjusting a virtual character's gaze according to an embodiment of the present invention includes step S1 and step S2. In step S1, it is detected whether one or more target objects are present within the visual range of a target virtual character in the display scene presented on the display device. In embodiments of the present invention, a virtual character is a character generated, in an application such as an online social or game application provided on the smart device, from virtual-character parameters selected by the user; or a character generated by the application from a character template; or a character that the user places into the scene provided by the application to represent themself. The gaze of a virtual character includes parameter information related to the character's eyes, such as the direction or angle of the line of sight and the distance of the gaze focus. The target virtual character is the virtual character whose gaze is to be adjusted in the display scene presented on the display device of the smart device. A target object is any virtual character other than the target virtual character in the display scene, or any other static or moving object; there may be one target object or several. The smart device may check at a preset period, such as 0.1 s or 0.2 s, whether other target objects are present within the visual range of the target virtual character. The smart device may also detect in an event-triggered manner, for example when the scene containing the target virtual character changes, or when the target virtual character's current position is more than a predetermined threshold away from its position at the previous detection. The detection determines whether any target object other than the target virtual character is present within the field of view of the target virtual character.
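The patent gives no code, but the visibility check of step S1 can be sketched in Python as follows; the `Entity` type, the circular visual range, and all names here are assumptions of this illustration, not part of the claims.

```python
import math
from dataclasses import dataclass

@dataclass
class Entity:
    """A minimal stand-in for a character or object with a 2D position."""
    x: float
    y: float

def in_visual_range(viewer: Entity, obj: Entity, radius: float = 10.0) -> bool:
    """An object counts as visible if it lies within the viewer's visual radius."""
    return math.hypot(obj.x - viewer.x, obj.y - viewer.y) <= radius

def detect_targets(viewer: Entity, scene_objects: list, radius: float = 10.0) -> list:
    """Step S1: collect all target objects inside the viewer's visual range,
    excluding the viewer itself."""
    return [o for o in scene_objects
            if o is not viewer and in_visual_range(viewer, o, radius)]
```

An event-triggered variant would call `detect_targets` only when the scene changes or when the character has moved more than the threshold distance since the previous check, rather than on a fixed period.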
In step S2, when one or more target objects are present within the visual range of the target virtual character, the gaze of the target virtual character is adjusted to follow the one or more target objects within the visual range. When a target object moves within the visual range of the target virtual character, the gaze of the target virtual character follows its movement; when a target object leaves the visual range, the gaze no longer follows it. When there are multiple target objects, the smart device may have the gaze of the target virtual character randomly follow one of them, switch periodically among them so that the gaze follows each in turn, or switch the gaze among them upon event triggers.
In a preferred embodiment, step S2 includes step S21 (not shown): when no target object is present within the visual range of the target virtual character, the gaze of the target virtual character is adjusted to a straight-ahead state or is moved according to predetermined rules. When it is detected that there is no target object within the field of view of the target virtual character, the character's eyes are made to look straight ahead in the direction its face is pointing. Alternatively, various regular gaze motions may be preset in the smart device, for example changing the gaze direction by a fixed or random angle after regular or irregular intervals; when no target object is present within the visual range of the target virtual character, the gaze of the target virtual character is adjusted according to such predetermined rules.
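A minimal sketch of the idle-gaze rule of step S21, assuming the gaze is reduced to a single yaw angle; the interval and jitter parameters are illustrative, not taken from the patent.

```python
import random

def idle_gaze(time_since_last_change: float, current_angle: float,
              interval: float = 3.0, jitter_deg: float = 15.0) -> float:
    """Step S21 sketch: with no target in range, hold the current gaze angle
    until the (possibly irregular) interval elapses, then shift the gaze by
    a random angle within +/- jitter_deg. All parameter names are hypothetical."""
    if time_since_last_change < interval:
        return current_angle  # keep looking in the current direction
    return current_angle + random.uniform(-jitter_deg, jitter_deg)
```

A straight-ahead-only variant would simply return a fixed angle of 0.0 regardless of elapsed time.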
In another preferred embodiment, step S2 includes step S22 (not shown): when exactly one target object is detected within the visual range of the target virtual character, the gaze of the target virtual character is adjusted to follow that detected target object. If step S1 detects one target object within the field of view of the target virtual character, the smart device makes the gaze of the target virtual character follow that object in step S22. When the target object moves, the gaze of the target virtual character moves with it; when the target object stops, the gaze rests on it.
In another preferred embodiment, step S2 includes step S23 (not shown): when multiple target objects are present within the visual range of the target virtual character, the gaze of the target virtual character is adjusted to follow an interacting target object among them, where an interacting target object is a target object that interacts with the target virtual character or with another interacting target object. When there is only one interacting target object, the smart device makes the gaze of the target virtual character follow only that object; when there are several, the smart device may adjust the gaze to follow one of them at random, to switch periodically among them so as to follow each in turn, or to switch among them upon event triggers. In a preferred embodiment, step S23 includes: when multiple interacting target objects are present within the visual range of the target virtual character, adjusting the gaze of the target virtual character to follow the interacting target object nearest to it, or to move among the several interacting target objects that are equally nearest. When multiple target objects all interact with the target virtual character, and all of these interacting target objects are within its field of view, the gaze of the target virtual character follows the interacting target object nearest to it. If several interacting target objects are at the same distance from the target virtual character, and that distance is the smallest among all interacting target objects, the smart device adjusts the gaze of the target virtual character to move among those equally nearest interacting target objects. The movement may switch periodically, follow each in turn, switch at random, or switch upon event triggers.
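The nearest-interacting-target selection of step S23, including the tie case where several interacting targets are equally near, might look like this sketch; the 2D tuple positions and the round-robin tie-break driven by a `tick` counter are assumptions of the example.

```python
import math

def pick_interacting_target(character_pos, interacting, tick: int = 0,
                            eps: float = 1e-6):
    """Step S23 sketch: follow the nearest interacting target object; when
    several are equally near (within eps), cycle among them round-robin
    using the caller-supplied tick counter."""
    if not interacting:
        return None
    def dist(p):
        return math.hypot(p[0] - character_pos[0], p[1] - character_pos[1])
    d_min = min(dist(p) for p in interacting)
    nearest = [p for p in interacting if dist(p) - d_min <= eps]
    return nearest[tick % len(nearest)]
```

Random or event-triggered movement among the equally nearest targets would replace the `tick % len(nearest)` index with a random or event-driven choice.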
Fig. 3 is a flow diagram of a method for adjusting a virtual character's gaze according to an embodiment of the present invention. As shown in Fig. 3, this embodiment includes steps S1, S2, S3, and S4. Steps S1 and S2 shown in the figure are identical to steps S1 and S2 described above with reference to Fig. 2. In the preferred embodiment shown in Fig. 3, step S3 includes: performing UV texture coordinate mapping on the image of the eyes of the target virtual character; and calculating moving UV coordinates from the current UV coordinates and the goal UV coordinates of the target virtual character's gaze using a preset interpolation algorithm, wherein the moving UV coordinates comprise the UV coordinates of the image of the eyes of the target virtual character during the movement.
UV texture coordinates define the position of each point on an image; these points are linked to the 3D model to determine the placement of the surface texture mapping. In an embodiment of the present invention, when determining the gaze of the target virtual character, the smart device first performs UV texture coordinate mapping on the image of the eyes of the target virtual character, determining the corresponding UV coordinate value for each point of the eye image. Since in step S2 the smart device has already determined which target object in the visual range the gaze of the target virtual character follows, the goal UV coordinates of the gaze are also determined. The smart device can therefore obtain each UV coordinate value during the movement of the target virtual character's gaze from the current UV coordinates and the goal UV coordinates, using the preset interpolation algorithm.
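The patent does not fix a particular interpolation algorithm; a minimal sketch of the step above is plain linear interpolation between the current and goal UV coordinates, producing one intermediate coordinate per frame of the gaze movement (easing functions could be substituted):

```python
def lerp_uv(current, goal, t):
    """Linearly interpolate between two (u, v) pairs; t runs from 0 to 1."""
    return (current[0] + (goal[0] - current[0]) * t,
            current[1] + (goal[1] - current[1]) * t)

def moving_uvs(current, goal, steps):
    """The moving UV coordinates: one intermediate (u, v) per frame of the
    gaze movement, ending exactly at the goal coordinates."""
    return [lerp_uv(current, goal, i / steps) for i in range(1, steps + 1)]
```

Each element of the returned list would then be applied to the eye texture on successive frames, producing the gaze movement.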
The embodiment of Fig. 3 further includes step S4: adjusting the gaze of the target virtual character according to the calculated moving UV coordinates. The moving UV coordinate values calculated in step S3 represent the values of the UV texture coordinate mapping of the eye image during the gaze movement of the target virtual character. Therefore, in step S4, the smart device maps the moving UV coordinate values to the UV texture coordinates of the eye image of the target virtual character during the movement, thereby determining the gaze of the target virtual character throughout the movement.
In the method for adjusting a virtual character's gaze according to embodiments of the present invention, the gaze of the target virtual character is made to follow other target objects within its visual range in the scene. The gaze behavior of the target virtual character in the depicted scene therefore appears natural rather than dull, improving the user experience.
Fig. 4 is a schematic diagram of an apparatus for adjusting a virtual character's gaze according to an embodiment of the present invention. The apparatus can be implemented in a smart device. Smart devices include, but are not limited to, electronic devices such as computer devices and VR devices. A computer device is an intelligent electronic device capable of executing predetermined processes, such as numerical calculation and/or logical calculation, by running preset programs or instructions; it may include a processor and a memory, with the processor executing instructions prestored in the memory to carry out the predetermined processes, or the predetermined processes may be carried out by hardware such as an ASIC, FPGA, or DSP, or by a combination of the two. Computer devices include, but are not limited to, servers, personal computers, laptops, tablet computers, and smartphones. A server includes, but is not limited to, a single server, a cluster of multiple servers, or a cloud-computing-based cloud composed of a large number of computers or servers, where cloud computing is a form of distributed computing in which a group of loosely coupled computers forms one super virtual computer. VR devices include, but are not limited to, VR electronic devices such as VR helmets, VR glasses, VR all-in-one machines, VR game devices, VR terminal devices, and VR user equipment. The smart device includes a display apparatus, which comprises various displays or VR- or AR-based display equipment. It should be noted that the smart devices mentioned here, including computer devices and VR devices, are only examples; other existing or future smart devices having an apparatus for adjusting a virtual character's gaze, if applicable to the present invention, should also fall within the protection scope of the present invention and are incorporated herein by reference.
As shown in Fig. 4, the apparatus for adjusting a virtual character's gaze according to an embodiment of the present invention includes a detection device 101 and a following device 102. The detection device 101 detects whether one or more target objects exist within the visual range of the target virtual character in the display scene presented on the display apparatus. In embodiments of the present invention, a virtual character is a character generated, in applications such as online social or gaming applications provided by the smart device, from virtual character parameters selected by the user; or a character generated by the application from a character template; or a character that the user places in the scene provided by the application. The gaze of a virtual character includes parameter information related to the character's eyes, such as the direction or angle of the line of sight and the focal distance of attention. The target virtual character is the virtual character whose gaze is to be adjusted in the display scene presented on the display apparatus of the smart device. Target objects include other virtual characters in the display scene besides the target virtual character, or other static or moving objects. There may be one target object or multiple target objects. The detection device 101 may detect, at a preset period such as 0.1 s or 0.2 s, whether other target objects exist within the visual range of the target virtual character. The detection device 101 may also detect in an event-triggered manner, for example when the scene in which the target virtual character is located changes, or when the distance between the current position of the target virtual character and the position of the last detection point exceeds a predetermined threshold. The detection determines whether target objects other than the target virtual character exist within the field of view of the target virtual character.
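The patent does not specify how the visual-range test itself is computed; one common sketch checks both distance and viewing angle against the character's forward direction. The 2D positions, `fov_deg`, and `max_dist` below are assumed parameters for illustration:

```python
import math

def in_visual_range(char_pos, char_forward, target_pos,
                    fov_deg=120.0, max_dist=10.0):
    """Return True if the target lies inside the character's visual range:
    within max_dist and within half the field-of-view angle of the
    character's forward vector."""
    dx, dy = target_pos[0] - char_pos[0], target_pos[1] - char_pos[1]
    dist = math.hypot(dx, dy)
    if dist == 0 or dist > max_dist:
        return dist == 0          # coincident point counts as visible
    # angle between the forward vector and the direction to the target
    fx, fy = char_forward
    cos_a = (dx * fx + dy * fy) / (dist * math.hypot(fx, fy))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_a)))) <= fov_deg / 2
```

The detection device would run this check for every candidate target object, either on the periodic schedule or when an event trigger fires.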
The following device 102 adjusts the gaze of the target virtual character, when one or more target objects exist within its visual range, to follow one or more target objects within that range. When a target object moves within the visual range of the target virtual character, the following device 102 makes the gaze of the target virtual character follow the movement of the target object; when the target object moves beyond the visual range of the target virtual character, the following device 102 makes the gaze stop following that target object. When there are multiple target objects, the following device 102 may make the gaze of the target virtual character randomly follow one of them, switch periodically among them and follow them in turn, or switch among them upon an event trigger.
In a preferred embodiment, the following device 102 includes a first adjustment module 1021 (not shown). The first adjustment module 1021 adjusts the gaze of the target virtual character to a direct-view state, or to move according to a predetermined rule, when no target object exists within the visual range of the target virtual character. When it is detected that no target object is within the visual range of the target virtual character, the first adjustment module 1021 directs the character's eyes straight ahead of its face. Alternatively, various regular gaze movements may be preset in the first adjustment module 1021, for example changing the direction of the character's gaze by a fixed or random angle after periodic or irregular intervals; when no target object exists within the visual range of the target virtual character, the first adjustment module 1021 adjusts the gaze of the target virtual character according to the predetermined rule.
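The idle behavior above (direct view by default, with a fixed or random deviation at intervals) can be sketched as follows; the class and parameter names are illustrative assumptions, not from the patent:

```python
import random

class IdleGaze:
    """Gaze rule when no target is in visual range: hold a direct view,
    and every `interval` seconds briefly deviate by a fixed or random
    angle (in degrees)."""

    def __init__(self, interval=3.0, fixed_delta=None, max_random=10.0):
        self.interval = interval
        self.fixed_delta = fixed_delta      # None -> use a random delta
        self.max_random = max_random
        self.next_change = interval
        self.yaw = 0.0                      # 0.0 = direct view, straight ahead

    def update(self, now):
        if now >= self.next_change:
            delta = (self.fixed_delta if self.fixed_delta is not None
                     else random.uniform(-self.max_random, self.max_random))
            self.yaw = delta                # deviate briefly from the direct view
            self.next_change = now + self.interval
        else:
            self.yaw = 0.0                  # otherwise keep the direct view
        return self.yaw
```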
In a further preferred embodiment, the following device 102 includes a second adjustment module 1022 (not shown). The second adjustment module 1022 adjusts the gaze of the target virtual character to follow the one detected target object when exactly one target object is detected within the visual range of the target virtual character. If the detection device 101 detects one target object within the visual range of the target virtual character, the second adjustment module 1022 makes the gaze of the target virtual character follow that detected target object. When the target object moves, the gaze of the target virtual character moves with it; when the target object stops, the gaze of the target virtual character rests on the stationary target object.
In a further preferred embodiment, the following device 102 includes a third adjustment module 1023 (not shown). The third adjustment module 1023 adjusts the gaze of the target virtual character, when multiple target objects exist within its visual range, to follow the interaction target objects among them; an interaction target object is a target object that interacts with the target virtual character or with other interaction target objects. That is, an interaction target object is a target object engaged in interaction with the target virtual character, or a target object interacting with other interaction target objects. When there is only one interaction target object, the third adjustment module 1023 makes the gaze of the target virtual character follow only that interaction target object; when there are multiple, the third adjustment module 1023 may adjust the gaze of the target virtual character to randomly follow one of them, to switch periodically among them and follow them in turn, or to switch among them upon an event trigger. In a preferred embodiment, the third adjustment module 1023 includes an adjustment unit 10231. The adjustment unit 10231 adjusts the gaze of the target virtual character, when multiple interaction target objects exist within its visual range, to follow the interaction target object nearest to it, or to move among the multiple nearest interaction target objects. When multiple target objects all interact with the target virtual character and all of these interaction target objects are within its visual range, the adjustment unit 10231 makes the gaze of the target virtual character follow the nearest interaction target object. If multiple interaction target objects are at the same distance from the target virtual character, and that distance is the smallest among all interaction target objects, the adjustment unit 10231 adjusts the gaze of the target virtual character to move among these equally nearest interaction target objects. The movement may be periodic switching, following in turn, random switching, event-triggered switching, or the like.
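The nearest-target rule of the adjustment unit, including the equidistant tie case, can be sketched as follows (representing positions as coordinate tuples is an assumption for illustration):

```python
import math

def nearest_interaction_targets(char_pos, targets):
    """Return every interaction target tied for the minimum distance to the
    character. One element: the gaze follows it directly; several: the gaze
    moves among them (periodic, in-turn, random, or event-triggered)."""
    dists = [(math.dist(char_pos, t), t) for t in targets]
    nearest = min(d for d, _ in dists)
    return [t for d, t in dists if math.isclose(d, nearest)]
```

The result feeds the switching logic: a single-element list is followed directly, while a multi-element list is handed to whichever switching mode is configured.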
Fig. 5 is a schematic diagram of an apparatus for adjusting a virtual character's gaze according to an embodiment of the present invention. As shown in Fig. 5, this embodiment includes a detection device 101, a following device 102, a mapping device 103, and a movement adjustment device 104. The detection device 101 and following device 102 shown in the figure are identical to the detection device 101 and following device 102 described above with reference to Fig. 4. In the preferred embodiment shown in Fig. 5, the mapping device 103 performs UV texture coordinate mapping on the image of the eyes of the target virtual character, and calculates moving UV coordinates from the current UV coordinates and the goal UV coordinates of the target virtual character's gaze using a preset interpolation algorithm, wherein the moving UV coordinates comprise the UV coordinates of the image of the eyes of the target virtual character during the movement.
UV texture coordinates define the position of each point on an image; these points are linked to the 3D model to determine the placement of the surface texture mapping. In an embodiment of the present invention, when determining the gaze of the target virtual character, the mapping device 103 first performs UV texture coordinate mapping on the image of the eyes of the target virtual character, determining the corresponding UV coordinate value for each point of the eye image. Since the following device 102 has already determined which target object in the visual range the gaze of the target virtual character follows, the goal UV coordinates of the gaze are also determined. The mapping device 103 can therefore obtain each UV coordinate value during the movement of the target virtual character's gaze from the current UV coordinates and the goal UV coordinates, using the preset interpolation algorithm.
Fig. 5 further includes the movement adjustment device 104, which adjusts the gaze of the target virtual character according to the calculated moving UV coordinates. The moving UV coordinate values calculated by the mapping device 103 represent the values of the UV texture coordinate mapping of the eye image during the gaze movement of the target virtual character. Therefore, the movement adjustment device 104 maps the moving UV coordinate values to the UV texture coordinates of the eye image of the target virtual character during the movement, thereby determining the gaze of the target virtual character throughout the movement.
In the apparatus for adjusting a virtual character's gaze according to embodiments of the present invention, the gaze of the target virtual character is made to follow other target objects within its visual range in the scene. The gaze behavior of the target virtual character in the depicted scene therefore appears natural rather than dull, improving the user experience.
It should be noted that the present invention may be implemented in software and/or a combination of software and hardware; for example, each device of the present invention may be realized using an application-specific integrated circuit (ASIC) or any other similar hardware device. It is obvious to those skilled in the art that the invention is not limited to the details of the above exemplary embodiments, and that the present invention can be realized in other specific forms without departing from its spirit or essential attributes. The embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description; all changes that fall within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claims concerned. Moreover, the word "comprising" does not exclude other units or steps, and the singular does not exclude the plural. Multiple units or devices stated in a system claim may also be implemented by a single unit or device through software or hardware. Words such as "first" and "second" denote names and do not indicate any particular order.
Claims (16)
1. A method for adjusting a virtual character's gaze in a smart device, wherein the smart device includes a display apparatus, and wherein the method comprises:
a. detecting whether one or more target objects exist within the visual range of a target virtual character in a display scene presented on the display apparatus;
b. when one or more of the target objects exist within the visual range of the target virtual character, adjusting the gaze of the target virtual character to follow one or more of the target objects within the visual range.
2. The method according to claim 1, wherein step b further comprises:
when no target object exists within the visual range of the target virtual character, adjusting the gaze of the target virtual character to a direct-view state or to move according to a predetermined rule.
3. The method according to claim 1, wherein step b comprises:
when one target object is detected within the visual range of the target virtual character, adjusting the gaze of the target virtual character to follow the one detected target object.
4. The method according to claim 1, wherein step b comprises:
when multiple target objects exist within the visual range of the target virtual character, adjusting the gaze of the target virtual character to follow the interaction target objects among the multiple target objects; wherein an interaction target object comprises a target object that interacts with the target virtual character or with other interaction target objects.
5. The method according to claim 4, wherein the step of adjusting the gaze of the target virtual character, when multiple target objects exist within its visual range, to follow the interaction target objects among the multiple target objects comprises:
when multiple interaction target objects exist within the visual range of the target virtual character, adjusting the gaze of the target virtual character to follow the interaction target object nearest to it, or to move among the multiple nearest interaction target objects.
6. The method according to any one of claims 1 to 5, further comprising:
performing UV texture coordinate mapping on the image of the eyes of the target virtual character; calculating moving UV coordinates from the current UV coordinates and the goal UV coordinates of the gaze of the target virtual character using a preset interpolation algorithm, wherein the moving UV coordinates comprise the UV coordinates of the image of the eyes of the target virtual character during the movement;
adjusting the gaze of the target virtual character according to the calculated moving UV coordinates.
7. An apparatus for adjusting a virtual character's gaze in a smart device, wherein the smart device includes a display apparatus, and wherein the apparatus comprises:
a detection device for detecting whether one or more target objects exist within the visual range of a target virtual character in a display scene presented on the display apparatus;
a following device for adjusting, when one or more of the target objects exist within the visual range of the target virtual character, the gaze of the target virtual character to follow one or more of the target objects within the visual range.
8. The apparatus according to claim 7, wherein the following device further comprises:
a first adjustment module for adjusting the gaze of the target virtual character, when no target object exists within its visual range, to a direct-view state or to move according to a predetermined rule.
9. The apparatus according to claim 7, wherein the following device comprises:
a second adjustment module for adjusting the gaze of the target virtual character, when one target object is detected within its visual range, to follow the one detected target object.
10. The apparatus according to claim 7, wherein the following device comprises:
a third adjustment module for adjusting the gaze of the target virtual character, when multiple target objects exist within its visual range, to follow the interaction target objects among the multiple target objects; wherein an interaction target object comprises a target object that interacts with the target virtual character or with other interaction target objects.
11. The apparatus according to claim 10, wherein the third adjustment module comprises:
an adjustment unit for adjusting the gaze of the target virtual character, when multiple interaction target objects exist within its visual range, to follow the interaction target object nearest to it, or to move among the multiple nearest interaction target objects.
12. The apparatus according to any one of claims 7 to 11, further comprising:
a mapping device for performing UV texture coordinate mapping on the image of the eyes of the target virtual character, and for calculating moving UV coordinates from the current UV coordinates and the goal UV coordinates of the gaze of the target virtual character using a preset interpolation algorithm, wherein the moving UV coordinates comprise the UV coordinates of the image of the eyes of the target virtual character during the movement;
a movement adjustment device for adjusting the gaze of the target virtual character according to the calculated moving UV coordinates.
13. A VR device comprising the apparatus for adjusting a virtual character's gaze according to any one of claims 7 to 12.
14. A computer device comprising:
one or more processors;
a memory for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the method according to any one of claims 1 to 6.
15. A computer-readable storage medium storing computer code which, when executed, performs the method according to any one of claims 1 to 6.
16. A computer program product which, when executed by a computer device, performs the method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910164747.3A CN110009714A (en) | 2019-03-05 | 2019-03-05 | The method and device of virtual role expression in the eyes is adjusted in smart machine |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110009714A true CN110009714A (en) | 2019-07-12 |
Family
ID=67166411
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910164747.3A Withdrawn CN110009714A (en) | 2019-03-05 | 2019-03-05 | The method and device of virtual role expression in the eyes is adjusted in smart machine |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110009714A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111401921A (en) * | 2020-03-05 | 2020-07-10 | 成都威爱新经济技术研究院有限公司 | Remote customer service method based on virtual human |
CN111722712A (en) * | 2020-06-09 | 2020-09-29 | 三星电子(中国)研发中心 | Method and device for controlling interaction in augmented reality |
Citations (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000339489A (en) * | 1999-05-27 | 2000-12-08 | Toshiba Corp | Device and method for texture mapping |
US20010040575A1 (en) * | 1997-02-18 | 2001-11-15 | Norio Haga | Image processing device and image processing method |
FR2828572A1 (en) * | 2001-08-13 | 2003-02-14 | Olivier Cordoleani | Method for creating a virtual three-dimensional person representing a real person in which a database of geometries, textures, expression, etc. is created with a motor then used to manage movement and expressions of the 3-D person |
US20080309671A1 (en) * | 2007-06-18 | 2008-12-18 | Brian Mark Shuster | Avatar eye control in a multi-user animation environment |
CN102184562A (en) * | 2011-05-10 | 2011-09-14 | 深圳大学 | Method and system for automatically constructing three-dimensional face animation model |
CN102693091A (en) * | 2012-05-22 | 2012-09-26 | 深圳市环球数码创意科技有限公司 | Method for realizing three dimensional virtual characters and system thereof |
CN103034330A (en) * | 2012-12-06 | 2013-04-10 | 中国科学院计算技术研究所 | Eye interaction method and system for video conference |
CN103198508A (en) * | 2013-04-07 | 2013-07-10 | 河北工业大学 | Human face expression animation generation method |
JP2014166564A (en) * | 2014-04-25 | 2014-09-11 | Konami Digital Entertainment Co Ltd | Game device, control method of the same, and program |
CN105812709A (en) * | 2016-03-18 | 2016-07-27 | 合肥联宝信息技术有限公司 | Method for realizing virtual camera by using cameras |
WO2016161553A1 (en) * | 2015-04-07 | 2016-10-13 | Intel Corporation | Avatar generation and animations |
US20170285737A1 (en) * | 2016-03-31 | 2017-10-05 | Verizon Patent And Licensing Inc. | Methods and Systems for Gaze-Based Control of Virtual Reality Media Content |
JP2017191377A (en) * | 2016-04-11 | 2017-10-19 | 株式会社バンダイナムコエンターテインメント | Simulation control apparatus and simulation control program |
CN107393017A (en) * | 2017-08-11 | 2017-11-24 | 北京铂石空间科技有限公司 | Image processing method, device, electronic equipment and storage medium |
US20180061116A1 (en) * | 2016-08-24 | 2018-03-01 | Disney Enterprises, Inc. | System and method of gaze predictive rendering of a focal area of an animation |
CN107854840A (en) * | 2017-12-06 | 2018-03-30 | 北京像素软件科技股份有限公司 | Eyes analogy method and device |
CN108334832A (en) * | 2018-01-26 | 2018-07-27 | 深圳市唯特视科技有限公司 | A kind of gaze estimation method based on generation confrontation network |
WO2018137456A1 (en) * | 2017-01-25 | 2018-08-02 | 迈吉客科技(北京)有限公司 | Visual tracking method and device |
JP2018140176A (en) * | 2018-03-08 | 2018-09-13 | 株式会社バンダイナムコエンターテインメント | Program and game system |
CN108671539A (en) * | 2018-05-04 | 2018-10-19 | 网易(杭州)网络有限公司 | Target object exchange method and device, electronic equipment, storage medium |
CN109410297A (en) * | 2018-09-14 | 2019-03-01 | 重庆爱奇艺智能科技有限公司 | It is a kind of for generating the method and apparatus of avatar image |
2019-03-05: application CN201910164747.3A filed in China; published as CN110009714A; legal status: withdrawn.
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111401921A (en) * | 2020-03-05 | 2020-07-10 | 成都威爱新经济技术研究院有限公司 | Remote customer service method based on a virtual human |
CN111722712A (en) * | 2020-06-09 | 2020-09-29 | 三星电子(中国)研发中心 | Method and device for controlling interaction in augmented reality |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11430192B2 (en) | Placement and manipulation of objects in augmented reality environment | |
US9342921B2 (en) | Control apparatus, electronic device, control method, and program | |
CN107771309A (en) | Three-dimensional user input
US10607403B2 (en) | Shadows for inserted content | |
US9286713B2 (en) | 3D design and collaboration over a network | |
JP2022545851A (en) | VIRTUAL OBJECT CONTROL METHOD AND APPARATUS, DEVICE, COMPUTER-READABLE STORAGE MEDIUM | |
CN104899563A (en) | Two-dimensional face key feature point positioning method and system | |
CN108427595B (en) | Method and device for determining display position of user interface control in virtual reality | |
CN106774821B (en) | Display method and system based on virtual reality technology | |
CN102968180A (en) | User interface control based on head direction | |
Henze et al. | Evaluation of an off-screen visualization for magic lens and dynamic peephole interfaces | |
CN110473293A (en) | Virtual object processing method and device, storage medium, and electronic device
CN106066688B (en) | Virtual reality interaction method and device based on wearable gloves
Debarba et al. | Disambiguation canvas: A precise selection technique for virtual environments | |
EP3991142A1 (en) | Fast hand meshing for dynamic occlusion | |
CN110009714A (en) | Method and device for adjusting virtual character gaze in a smart device |
Schemali et al. | Design and evaluation of mouse cursors in a stereoscopic desktop environment | |
CN109559370A (en) | Three-dimensional modeling method and device
CN106227482A (en) | Game screen refresh control method and related device
CN106681506B (en) | Interaction method for non-VR applications in a terminal device, and terminal device
CN111651069A (en) | Virtual sand table display method and device, electronic equipment and storage medium | |
CN110191316A (en) | Information processing method and device, equipment, and storage medium
CN106657976B (en) | Visual range extension method and device, and virtual reality glasses
Stastny et al. | Augmented reality usage for prototyping speed up | |
CN109218252A (en) | Virtual reality display method, device, and equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication | | Application publication date: 20190712