CN114327059B - Gesture processing method, device, equipment and storage medium - Google Patents
- Publication number
- CN114327059B CN114327059B CN202111601419.9A CN202111601419A CN114327059B CN 114327059 B CN114327059 B CN 114327059B CN 202111601419 A CN202111601419 A CN 202111601419A CN 114327059 B CN114327059 B CN 114327059B
- Authority
- CN
- China
- Prior art keywords
- gesture
- virtual scene
- visual
- entity
- display
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- User Interface Of Digital Computer (AREA)
Abstract
The disclosure provides a gesture processing method, apparatus, device, and storage medium, relating to the technical field of data processing and in particular to virtual reality and computer vision technology. The specific implementation scheme is as follows: identify a gesture directed toward an entity in a virtual scene and determine its gesture type; determine the visual effect corresponding to that gesture type; and display the visual effect in the virtual scene. The disclosed technique thus avoids the problem that the interaction mode in a virtual scene is single and rigid.
Description
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular, to the field of virtual reality and computer vision technologies.
Background
Virtual Reality (VR) is a computer technology that developed alongside multimedia technology. It uses three-dimensional graphics generation, multi-sensor interaction, and high-resolution display technologies to generate a realistic three-dimensional virtual environment in which a user can perceive and interact with three-dimensional entities.
Disclosure of Invention
The disclosure provides a gesture processing method, apparatus, device, and storage medium.
According to an aspect of the present disclosure, there is provided a gesture processing method, including:
identifying a gesture directed toward an entity in a virtual scene, and determining its gesture type;
determining, based on the gesture type, a visual effect corresponding to the gesture type; and
displaying the visual effect in the virtual scene.
According to another aspect of the present disclosure, there is provided a gesture processing apparatus including:
a recognition module, configured to recognize a gesture directed toward an entity in the virtual scene and determine its gesture type;
a processing module, configured to determine, based on the gesture type, the visual effect corresponding to the gesture type; and
a first display module, configured to display the visual effect in the virtual scene.
According to another aspect of the present disclosure, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform any of the gesture processing methods of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform any one of the gesture processing methods of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements any of the gesture processing methods of the embodiments of the present disclosure.
By utilizing the scheme of the embodiment of the disclosure, the gesture interaction effect in the virtual scene can be improved.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are provided for a better understanding of the present solution and are not to be construed as limiting the present disclosure. In the drawings:
FIG. 1 is a flow chart of a gesture processing method according to an embodiment of the present disclosure;
FIG. 2 is a flow chart of a gesture processing method according to an embodiment of the present disclosure;
FIG. 3 is a flow chart of a gesture processing method according to an embodiment of the present disclosure;
FIG. 4 is a block diagram of a gesture processing apparatus according to an embodiment of the present disclosure;
FIG. 5 is a block diagram of a gesture processing apparatus according to an embodiment of the present disclosure;
FIG. 6 is a block diagram of an electronic device for implementing a gesture processing method of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
FIG. 1 is a flow chart of a gesture processing method according to an embodiment of the present disclosure. As shown in fig. 1, the method may include:
S101, recognizing a gesture directed toward an entity in the virtual scene, and determining its gesture type;
S102, determining, based on the gesture type, the visual effect corresponding to the gesture type;
S103, displaying the visual effect in the virtual scene.
By way of example, an entity in the virtual scene may be a person, an animal, or the like displayed in the virtual scene. Gesture types may include a heart gesture, a praise (thumbs-up) gesture, a hand-waving gesture, a hand-raising gesture, a clapping gesture, and so on. In this embodiment, while the user is in the virtual reality scene, the user's gesture may be captured and displayed in the virtual scene. When the user's gesture is directed toward an entity in the virtual scene, the user intends to show the gesture to that entity, so the gesture type of the user's gesture is identified. It should be noted that, by moving the gesture within the virtual scene, the user can change the entity toward which the gesture is directed.
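As a minimal sketch of this targeting step (all names and the angular-tolerance heuristic here are illustrative assumptions, not taken from the disclosure), the entity a gesture faces can be chosen as the one lying closest to the gesture's pointing direction:

```python
import math
from dataclasses import dataclass

@dataclass
class Entity:
    name: str
    x: float
    y: float

def target_entity(hand_xy, direction_xy, entities, max_angle_deg=30.0):
    """Return the entity the gesture faces: the one whose bearing from the
    hand deviates least from the pointing direction, within a tolerance."""
    dx, dy = direction_xy
    norm = math.hypot(dx, dy)
    best, best_angle = None, max_angle_deg
    for e in entities:
        ex, ey = e.x - hand_xy[0], e.y - hand_xy[1]
        dist = math.hypot(ex, ey)
        if dist == 0 or norm == 0:
            continue
        cos_a = (dx * ex + dy * ey) / (norm * dist)
        angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
        if angle < best_angle:  # strictly closer to the pointing direction
            best, best_angle = e, angle
    return best
```

Moving the hand or rotating the direction vector re-runs this selection, which models how moving the gesture changes the targeted entity.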
Illustratively, the visual effect may be a static emoticon or an animated emoticon. A gesture type may correspond to a single visual effect; the visual effects corresponding to different gesture types may be the same or different. A gesture type may also correspond to several different visual effects, in which case the candidate effects are displayed in the virtual scene and the applied effect is determined by the user's selection. For example, a mapping between gesture types and visual effects may be preset, so that the visual effect corresponding to a gesture type can be looked up directly from the mapping.
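Such a preset mapping might be sketched as follows (the gesture-type keys and effect names are hypothetical placeholders, not identifiers from the disclosure); a type mapped to several effects models the case where the user picks among candidates:

```python
# Hypothetical preset mapping from gesture type to visual effect(s).
GESTURE_EFFECTS = {
    "heart": ["heart_symbol", "heart_animation"],  # several candidates: user picks one
    "praise": ["praise_animation"],
    "wave": ["waving_palm_animation"],
}

def visual_effect_for(gesture_type, choice=0):
    """Look up the preset effect(s) for a gesture type; when a type maps to
    several effects, `choice` selects the user's pick. None if unmapped."""
    effects = GESTURE_EFFECTS.get(gesture_type)
    if not effects:
        return None
    return effects[choice]
```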
For example, when the gesture type is a heart gesture, the visual effect may be a heart symbol, an animated expression with a heart pattern, or the like; when the gesture type is a praise gesture, the visual effect may be an animated expression representing praise; and when the gesture type is a hand-waving gesture, the visual effect may be an animated expression of a waving palm.
In the technical scheme of the disclosure, a gesture directed toward an entity in the virtual scene is identified, its gesture type is determined, and the corresponding visual effect is determined based on that type. Because the visual effect corresponding to the gesture type is displayed in the virtual scene, the display modes in the virtual scene are richer and the interaction modes more diverse. This makes interaction more engaging and avoids the prior-art problem that interaction is limited to clicking emoticons in the virtual scene, a mode that is single and rigid.
In one embodiment, as shown in fig. 2, displaying visual effects in a virtual scene includes:
S201, determining an enhancement level of a display attribute of the visual effect based on the duration of the gesture;
S202, displaying the visual effect in the virtual scene based on the enhancement level of the display attribute.
For example, a longer gesture duration indicates that the emotion the user expresses is stronger. The enhancement level of the visual effect's display attribute is therefore adjusted according to the duration of the gesture, which strengthens the visual effect of the gesture in the virtual scene, enriches the display modes, and expresses the user's emotion more faithfully. It should be noted that the enhancement level of a display attribute may be determined by setting a parameter of that attribute; the larger the parameter, the higher the enhancement level.
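One way to derive a discrete enhancement level from the gesture's duration, as a sketch (the one-second step and the level cap are assumed values, not from the disclosure):

```python
def enhancement_level(duration_s: float, step_s: float = 1.0, max_level: int = 5) -> int:
    """Map a gesture's duration to an enhancement level: longer gestures
    yield a higher level, rising one step per `step_s` seconds, capped
    at `max_level` so the effect cannot grow without bound."""
    return min(max_level, 1 + int(duration_s // step_s))
```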
In one embodiment, the display attribute includes at least one of a number, a moving speed, and a size of the visual effect.
By way of example, display attributes of the visual effects can be set differently for different gesture types, making interaction in the virtual scene more varied and improving the user experience. For example, when the gesture type is a heart gesture, the display attributes of the visual effect may be its number and moving speed: as the duration of the heart gesture increases, the number of heart symbols sent increases and their moving speed rises (i.e., the enhancement level of the display attributes is increased).
For another example, when the gesture type is a hand-waving gesture, the display attribute of the visual effect may be its size: as the duration of the gesture increases, the waving-palm animation gradually grows (i.e., the enhancement level of the display attribute is increased).
For another example, when the gesture type is a praise gesture, the display attributes of the visual effect may be its size and number: as the duration of the gesture increases, the number of animated expressions representing praise increases (i.e., the enhancement level of the display attributes is increased).
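The three attribute behaviors above can be sketched as per-gesture base attributes scaled by the enhancement level (the base values and key names are illustrative assumptions, not from the disclosure):

```python
# Illustrative base display attributes per gesture type (assumed values).
BASE_ATTRIBUTES = {
    "heart":  {"count": 3, "speed": 1.0},   # number and moving speed grow
    "wave":   {"size": 1.0},                # size grows
    "praise": {"count": 1, "size": 1.0},    # number and size grow
}

def display_attributes(gesture_type: str, level: int) -> dict:
    """Scale each base attribute of the gesture's visual effect linearly
    with the enhancement level (a larger parameter = a higher level)."""
    return {name: value * level
            for name, value in BASE_ATTRIBUTES[gesture_type].items()}
```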
In one embodiment, displaying visual effects in a virtual scene includes:
S301, displaying the visual effect in a first preset area of the virtual scene; and/or
S302, displaying the visual effect toward the entity in the virtual scene.
In step S301, the first preset area may be a gesture area of the user in the virtual scene, and may also be other areas in the virtual scene.
For example, when the gesture type is a heart gesture and its corresponding visual effect is determined to be a heart symbol, the heart symbol is displayed in the gesture area. When the gesture type is a hand-waving gesture and the corresponding visual effect is determined to be a waving-palm animation, that animation is displayed in the gesture area. When the gesture type is a praise gesture and the corresponding visual effect is determined to be an animated expression representing praise, that expression is displayed in the gesture area. Displaying the visual effect in the gesture area once it has been determined thus makes interaction more engaging.
In step S302, illustratively, when the gesture type is a heart gesture and the corresponding visual effect is determined to be a heart symbol, the heart symbol is sent toward the entity. When the gesture type is a praise gesture and the corresponding visual effect is determined to be an animated expression representing praise, that expression is sent toward the entity. Displaying the visual effect toward the entity after it has been determined can draw the entity's attention, making interaction more engaging and improving the user experience.
In one embodiment, the method further comprises:
displaying the visual effect in a second preset area associated with the entity when the display area of the visual effect reaches the display area of the entity.
The display area of an entity in the virtual scene may be set around the entity, or may be another area of the virtual scene. The second preset area may coincide with the entity's display area or may be a part of it.
In this embodiment, a gesture interaction distance between entities is preset; its size may be set according to social distances in real life, for example 5 meters. This makes interaction in the virtual scene closer to reality and thereby improves the user experience. When the distance between the user's position in the virtual scene and the entity's position is not greater than this social distance, the display area of the visual effect can reach the display area of the entity, and the visual effect is displayed in a second preset area associated with the entity. This enriches the display modes of the virtual scene while letting the entity feel the user's emotion more strongly, drawing the entities in the virtual scene closer together.
For example, when the gesture type is a heart gesture and the distance between the user's position in the virtual scene and the entity's position is not greater than the social distance, the heart symbol may be displayed moving from the first preset area (e.g., the user's gesture area in the virtual scene) to the second preset area (e.g., the area around the entity), or it may be displayed directly in the second preset area.
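The distance check that gates where the effect is shown might look like this sketch (the 5-meter value follows the example above; the area labels are placeholders, not names from the disclosure):

```python
import math

SOCIAL_DISTANCE = 5.0  # meters, per the example above

def display_region(user_pos, entity_pos, social_distance=SOCIAL_DISTANCE):
    """Show the effect in the second preset area (around the entity) when the
    user is within the gesture interaction distance; otherwise keep it in the
    first preset area (the user's gesture area)."""
    dx = user_pos[0] - entity_pos[0]
    dy = user_pos[1] - entity_pos[1]
    if math.hypot(dx, dy) <= social_distance:
        return "second_preset_area"
    return "first_preset_area"
```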
For ease of understanding, the following is illustrative:
In a virtual scene presented based on virtual reality technology, when the user enters the gesture interaction mode of the virtual scene, the user's gesture captured by an acquisition device is received, the gesture type corresponding to it is identified, the target person the gesture faces in the virtual scene is determined, and the duration of the gesture is determined.
In example one, when the gesture type of the user's gesture is a heart gesture, the visual effect is a heart symbol, so a heart symbol appears in the gesture area; in addition, heart symbols can be sent toward the target person. As the duration of the gesture increases, the number of heart symbols sent increases and their moving speed rises. If the target person is within the gesture interaction distance, the second preset area associated with the target person displays the heart symbols, enhancing the happy atmosphere. The target person may be changed by moving the gesture.
In example two, when the gesture type of the user's gesture is a hand-waving gesture, the visual effect is a waving-palm animation, so that animation appears in the gesture area. Because a hand-waving gesture is used to call the target person, and might be overlooked if the distance is too great or the scene too busy, the waving-palm animation is enlarged as the duration of the gesture increases. This draws the target person's attention and improves the interaction effect of the gesture.
In example three, when the gesture type of the user's gesture is a praise gesture, the visual effect is an animated expression representing praise, so such an expression appears in the gesture area. As the duration of the gesture increases, the number of these expressions increases, letting the target person feel the approval being expressed and making gesture interaction more varied.
FIG. 4 is a block diagram of a gesture processing device according to an embodiment of the present disclosure. As shown in fig. 4, the apparatus may include:
a recognition module 401, configured to recognize a gesture directed toward an entity in the virtual scene and determine its gesture type;
a processing module 402, configured to determine, based on the gesture type, the visual effect corresponding to the gesture type; and
a first display module 403, configured to display the visual effect in the virtual scene.
FIG. 5 is a block diagram of a gesture processing device according to an embodiment of the present disclosure. As shown in fig. 5, the apparatus may include:
a recognition module 501, configured to recognize a gesture directed toward an entity in the virtual scene and determine its gesture type;
a processing module 502, configured to determine, based on the gesture type, the visual effect corresponding to the gesture type; and
a first display module 503, which includes:
a determining unit 504, configured to determine an enhancement level of a display attribute of the visual effect based on the duration of the gesture; and
a display unit 505, configured to display the visual effect in the virtual scene based on the enhancement level of the display attribute.
In one embodiment, the display attribute includes at least one of a number, a moving speed, and a size of the visual effect.
In one embodiment, the first display module 503 is further configured to:
display the visual effect in a first preset area of the virtual scene; and/or
display the visual effect toward the entity in the virtual scene.
In one embodiment, the apparatus further comprises:
a second display module 506, configured to display the visual effect in a second preset area associated with the entity when the display area of the visual effect reaches the display area of the entity.
In this way, the apparatus of the embodiments of the disclosure identifies the gesture type of a gesture directed toward an entity in the virtual scene and determines the corresponding visual effect based on that type. Because the visual effect corresponding to the gesture type is displayed in the virtual scene, the display modes in the virtual scene are richer and the interaction modes more diverse, which makes interaction more engaging and avoids the prior-art problem that interaction is limited to clicking emoticons in the virtual scene, a mode that is single and rigid.
In the technical scheme of the disclosure, the collection, storage, and use of the personal information of the users involved all comply with the relevant laws and regulations and do not violate public order and good customs.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Fig. 6 illustrates a schematic block diagram of an example electronic device 600 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown here, their connections and relationships, and their functions are meant to be exemplary only and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 6, the device 600 includes a computing unit 601 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 602 or loaded from a storage unit 608 into a Random Access Memory (RAM) 603. The RAM 603 may also store various programs and data required for the operation of the device 600. The computing unit 601, the ROM 602, and the RAM 603 are connected to each other by a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
Various components in the device 600 are connected to the I/O interface 605, including: an input unit 606 such as a keyboard, mouse, etc.; an output unit 607 such as various types of displays, speakers, and the like; a storage unit 608, such as a magnetic disk, optical disk, or the like; and a communication unit 609 such as a network card, modem, wireless communication transceiver, etc. The communication unit 609 allows the device 600 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The computing unit 601 may be any of various general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the computing unit 601 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, or microcontroller. The computing unit 601 performs the various methods and processes described above, such as the gesture processing method. For example, in some embodiments, the gesture processing method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into the RAM 603 and executed by the computing unit 601, one or more steps of the gesture processing method described above may be performed. Alternatively, in other embodiments, the computing unit 601 may be configured to perform the gesture processing method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include Local Area Networks (LANs), Wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel or sequentially or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.
Claims (6)
1. A gesture processing method, comprising:
identifying a gesture directed at an entity in a virtual scene, and determining a gesture type; wherein the virtual scene is presented based on virtual reality technology, and the entity at which the gesture is directed can be changed by moving the gesture within the virtual scene;
determining, based on the gesture type, a visual effect corresponding to the gesture type;
displaying the visual effect in the virtual scene;
wherein said displaying the visual effect in the virtual scene comprises:
determining an enhancement level of a display attribute of the visual effect based on a duration of the gesture; wherein the enhancement level of the display attribute increases as the duration of the gesture increases;
displaying the visual effect in the virtual scene based on the enhancement level of the display attribute;
displaying the visual effect in a first preset area in the virtual scene; and/or,
displaying the visual effect towards the entity in the virtual scene;
displaying the visual effect in a second preset area associated with the entity in a case that a display area of the visual effect reaches a display area of the entity; wherein a gesture interaction distance to the entity is preset according to social distance in real life, and when the distance between the position of the user in the virtual scene and the position of the entity is not greater than the social distance, the display area of the visual effect is deemed to reach the display area of the entity, and the visual effect is displayed in the second preset area associated with the entity.
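As a rough illustration of the flow recited in claim 1 (not the patented implementation), the gesture-to-effect mapping and the social-distance check might be sketched as follows; the effect names, the gesture table, and the distance value are assumptions:

```python
import math

# Hypothetical gesture-to-effect table; entries are illustrative, not from the patent.
GESTURE_EFFECTS = {
    "heart": "floating_hearts",
    "thumbs_up": "sparkles",
    "wave": "ripples",
}

# Preset gesture-interaction distance, modeled on a real-life social
# distance; the value is an assumption.
SOCIAL_DISTANCE = 1.2


def process_gesture(gesture_type, user_pos, entity_pos):
    """Return (effect, show_in_entity_area): map the gesture type to a
    visual effect, then decide whether the effect's display area "reaches"
    the entity, i.e. the user is within the preset social distance."""
    effect = GESTURE_EFFECTS.get(gesture_type)
    if effect is None:
        return None, False
    distance = math.dist(user_pos, entity_pos)
    # Within the social distance, the effect moves to the second preset
    # area associated with the entity.
    return effect, distance <= SOCIAL_DISTANCE
```

For example, a "heart" gesture made one metre from an entity would yield the heart effect displayed in the entity-associated area, while the same gesture made three metres away would stay in the first preset area.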
2. The method of claim 1, wherein the display attribute comprises at least one of a quantity, a movement speed, and a size of the visual effect.
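A minimal sketch of how an enhancement level derived from the gesture's duration could scale the display attributes named in claim 2 (quantity, movement speed, size); the step size, level cap, and scaling factors are illustrative assumptions, not values from the patent:

```python
def enhancement_level(duration_s, step_s=1.0, max_level=5):
    """Map gesture duration to an enhancement level; the level rises
    monotonically the longer the gesture is held (step and cap assumed)."""
    return min(max_level, int(duration_s // step_s) + 1)


def display_attributes(level):
    """Scale the claimed display attributes with the enhancement level;
    the base values and scaling factors are illustrative."""
    return {
        "count": 5 * level,                   # quantity of effect particles
        "speed": 1.0 + 0.5 * (level - 1),     # movement speed multiplier
        "size": 1.0 + 0.25 * (level - 1),     # size multiplier
    }
```

Holding the gesture for 2.5 seconds would, under these assumptions, reach level 3: fifteen particles at twice the base speed and 1.5x the base size.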
3. A gesture processing apparatus, comprising:
a recognition module configured to recognize a gesture directed at an entity in a virtual scene and determine a gesture type; wherein the virtual scene is presented based on virtual reality technology, and the entity at which the gesture is directed can be changed by moving the gesture within the virtual scene;
a processing module configured to determine, based on the gesture type, a visual effect corresponding to the gesture type;
a first display module configured to display the visual effect in the virtual scene;
wherein the first display module is further configured to display the visual effect in a first preset area in the virtual scene, and/or to display the visual effect towards the entity in the virtual scene;
wherein the first display module comprises:
a determining unit configured to determine an enhancement level of a display attribute of the visual effect based on a duration of the gesture; wherein the enhancement level of the display attribute increases as the duration of the gesture increases;
a display unit configured to display the visual effect in the virtual scene based on the enhancement level of the display attribute;
wherein the gesture processing apparatus further comprises:
a second display module configured to display the visual effect in a second preset area associated with the entity in a case that a display area of the visual effect reaches a display area of the entity; wherein a gesture interaction distance to the entity is preset according to social distance in real life, and when the distance between the position of the user in the virtual scene and the position of the entity is not greater than the social distance, the display area of the visual effect is deemed to reach the display area of the entity, and the visual effect is displayed in the second preset area associated with the entity.
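The modular decomposition in claim 3 can be sketched as cooperating components wired in the recited order; all class names, method names, and the effect mapping below are illustrative assumptions, not the patented apparatus:

```python
class RecognitionModule:
    """Recognizes a gesture directed at an entity and returns its type."""
    def recognize(self, frame):
        # A real implementation would run a hand-pose model here;
        # this stub assumes the frame already carries a label.
        return frame.get("gesture_type")


class ProcessingModule:
    """Maps a gesture type to its corresponding visual effect."""
    EFFECTS = {"heart": "floating_hearts", "wave": "ripples"}  # assumed mapping

    def to_effect(self, gesture_type):
        return self.EFFECTS.get(gesture_type)


class FirstDisplayModule:
    """Displays the effect in the scene, scaled by the enhancement level."""
    def render(self, effect, level):
        return f"{effect}@level{level}"


class GestureApparatus:
    """Wires the modules together: recognize, map to effect, display."""
    def __init__(self):
        self.recognition = RecognitionModule()
        self.processing = ProcessingModule()
        self.display = FirstDisplayModule()

    def handle(self, frame, duration_s):
        gesture_type = self.recognition.recognize(frame)
        effect = self.processing.to_effect(gesture_type)
        if effect is None:
            return None
        # Enhancement level grows with the gesture's duration (cap assumed).
        level = min(5, int(duration_s) + 1)
        return self.display.render(effect, level)
```

The separation mirrors the claim's structure: recognition, processing, and display responsibilities live in distinct modules, so each can be replaced (e.g. swapping the recognition stub for a real hand-tracking model) without touching the others.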
4. The apparatus of claim 3, wherein the display attribute comprises at least one of a quantity, a movement speed, and a size of the visual effect.
5. An electronic device, comprising:
At least one processor; and
A memory communicatively coupled to the at least one processor; wherein,
The memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-2.
6. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1-2.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111601419.9A CN114327059B (en) | 2021-12-24 | 2021-12-24 | Gesture processing method, device, equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114327059A CN114327059A (en) | 2022-04-12 |
CN114327059B true CN114327059B (en) | 2024-08-09 |
Family
ID=81013990
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111601419.9A Active CN114327059B (en) | 2021-12-24 | 2021-12-24 | Gesture processing method, device, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114327059B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111640202A (en) * | 2020-06-11 | 2020-09-08 | 浙江商汤科技开发有限公司 | AR scene special effect generation method and device |
WO2021114710A1 (en) * | 2019-12-09 | 2021-06-17 | 上海幻电信息科技有限公司 | Live streaming video interaction method and apparatus, and computer device |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9354702B2 (en) * | 2013-06-03 | 2016-05-31 | Daqri, Llc | Manipulation of virtual object in augmented reality via thought |
CN207722354U (en) * | 2014-01-28 | 2018-08-14 | 马卡里 | A kind of on-line off-line real-time interactive games system |
CN107817893A (en) * | 2016-09-13 | 2018-03-20 | 南京美卡数字科技有限公司 | Three-dimensional interactive virtual reality system |
CN106792214B (en) * | 2016-12-12 | 2021-06-18 | 福建凯米网络科技有限公司 | Live broadcast interaction method and system based on digital audio-visual place |
US10445935B2 (en) * | 2017-05-26 | 2019-10-15 | Microsoft Technology Licensing, Llc | Using tracking to simulate direct tablet interaction in mixed reality |
CN109976519B (en) * | 2019-03-14 | 2022-05-03 | 浙江工业大学 | Interactive display device based on augmented reality and interactive display method thereof |
CN111104021B (en) * | 2019-12-19 | 2022-11-08 | 腾讯科技(深圳)有限公司 | Control method and device of virtual prop, storage medium and electronic device |
CN112148188A (en) * | 2020-09-23 | 2020-12-29 | 北京市商汤科技开发有限公司 | Interaction method and device in augmented reality scene, electronic equipment and storage medium |
CN112121434B (en) * | 2020-09-30 | 2022-05-10 | 腾讯科技(深圳)有限公司 | Interaction method and device of special effect prop, electronic equipment and storage medium |
CN112348968B (en) * | 2020-11-06 | 2023-04-25 | 北京市商汤科技开发有限公司 | Display method and device in augmented reality scene, electronic equipment and storage medium |
CN112866562B (en) * | 2020-12-31 | 2023-04-18 | 上海米哈游天命科技有限公司 | Picture processing method and device, electronic equipment and storage medium |
CN113325954B (en) * | 2021-05-27 | 2022-08-26 | 百度在线网络技术(北京)有限公司 | Method, apparatus, device and medium for processing virtual object |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021114710A1 (en) * | 2019-12-09 | 2021-06-17 | 上海幻电信息科技有限公司 | Live streaming video interaction method and apparatus, and computer device |
CN111640202A (en) * | 2020-06-11 | 2020-09-08 | 浙江商汤科技开发有限公司 | AR scene special effect generation method and device |
Also Published As
Publication number | Publication date |
---|---|
CN114327059A (en) | 2022-04-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114549710A (en) | Virtual image generation method and device, electronic equipment and storage medium | |
CN115879469B (en) | Text data processing method, model training method, device and medium | |
CN115393488B (en) | Method and device for driving virtual character expression, electronic equipment and storage medium | |
CN112862934B (en) | Method, apparatus, device, medium, and product for processing animation | |
CN112947916A (en) | Method, device, equipment and storage medium for realizing online canvas | |
CN114327059B (en) | Gesture processing method, device, equipment and storage medium | |
CN117033587A (en) | Man-machine interaction method and device, electronic equipment and medium | |
CN113327311B (en) | Virtual character-based display method, device, equipment and storage medium | |
CN112817463A (en) | Method, equipment and storage medium for acquiring audio data by input method | |
CN114581586A (en) | Method and device for generating model substrate, electronic equipment and storage medium | |
CN114549785A (en) | Method and device for generating model substrate, electronic equipment and storage medium | |
CN114638919A (en) | Virtual image generation method, electronic device, program product and user terminal | |
CN114629800A (en) | Visual generation method, device, terminal and storage medium for industrial control network target range | |
CN113344620A (en) | Method, device and storage medium for issuing welfare information | |
CN112861504A (en) | Text interaction method, device, equipment, storage medium and program product | |
CN112667196B (en) | Information display method and device, electronic equipment and medium | |
CN112528000B (en) | Virtual robot generation method and device and electronic equipment | |
CN113408300B (en) | Model training method, brand word recognition device and electronic equipment | |
CN114416233B (en) | Weather interface display method and device, electronic equipment and storage medium | |
CN116842156B (en) | Data generation method, device, equipment and medium | |
CN116385829B (en) | Gesture description information generation method, model training method and device | |
CN115145672A (en) | Information presentation method, information presentation apparatus, information presentation device, storage medium, and program product | |
CN117076619A (en) | Robot dialogue method and device, storage medium and electronic equipment | |
KR20230044153A (en) | Text display methods and devices, electronic devices, storage media and computer programs | |
CN117421005A (en) | Object interaction method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||