CN114327059A - Gesture processing method, device, equipment and storage medium - Google Patents

Gesture processing method, device, equipment and storage medium Download PDF

Info

Publication number
CN114327059A
Authority
CN
China
Prior art keywords
gesture
special effect
virtual scene
display
visual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111601419.9A
Other languages
Chinese (zh)
Inventor
廖加威
谢佳晟
张毅
张艺媛
刘瑜
黄熙
任晓华
黄晓琳
赵慧斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202111601419.9A priority Critical patent/CN114327059A/en
Publication of CN114327059A publication Critical patent/CN114327059A/en
Pending legal-status Critical Current

Links

Images

Abstract

The disclosure provides a gesture processing method, apparatus, device, and storage medium, relating to the field of data processing technology, and in particular to virtual reality and computer vision technologies. The specific implementation scheme is as follows: recognizing a gesture directed toward an entity in a virtual scene, determining the gesture type, determining a visual special effect corresponding to the gesture type, and displaying the visual special effect in the virtual scene. The disclosed technique avoids the problem that the interaction mode in a virtual scene is monotonous and rigid.

Description

Gesture processing method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of data processing technology, and in particular, to the field of virtual reality and computer vision technology.
Background
Virtual Reality (VR) is a computer technology that developed alongside multimedia technology. It uses three-dimensional graphics generation, multi-sensor interaction, and high-resolution display technologies to generate a realistic three-dimensional virtual environment in which a user can interact with and perceive three-dimensional entities.
Disclosure of Invention
The disclosure provides a gesture processing method, apparatus, device, and storage medium.
According to an aspect of the present disclosure, there is provided a gesture processing method including:
recognizing a gesture towards an entity in the virtual scene, and determining a gesture type;
determining a visual special effect corresponding to the gesture type based on the gesture type;
and displaying the visual special effect in the virtual scene.
According to another aspect of the present disclosure, there is provided a gesture processing apparatus including:
the recognition module is used for recognizing gestures facing to entities in the virtual scene and determining gesture types;
the processing module is used for determining a visual special effect corresponding to the gesture type based on the gesture type;
the first display module is used for displaying the visual special effect in the virtual scene.
According to another aspect of the present disclosure, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform any one of the gesture processing methods of the disclosed embodiments.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium storing computer instructions for causing a computer to perform any one of the gesture processing methods in the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements any one of the gesture processing methods in the embodiments of the present disclosure.
By means of the scheme of the embodiment of the disclosure, gesture interaction effects in the virtual scene can be improved.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a flow diagram of a gesture processing method according to an embodiment of the present disclosure;
FIG. 2 is a flow diagram of a gesture processing method according to an embodiment of the present disclosure;
FIG. 3 is a flow diagram of a gesture processing method according to an embodiment of the present disclosure;
FIG. 4 is a block diagram of a gesture processing apparatus according to an embodiment of the present disclosure;
FIG. 5 is a block diagram of a gesture processing apparatus according to an embodiment of the present disclosure;
FIG. 6 is a block diagram of an electronic device for implementing a gesture processing method of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Fig. 1 is a flowchart of a gesture processing method according to an embodiment of the present disclosure. As shown in fig. 1, the method may include:
s101, recognizing gestures facing to entities in a virtual scene, and determining gesture types;
s102, determining a visual special effect corresponding to the gesture type based on the gesture type;
and S103, displaying the visual special effect in the virtual scene.
Illustratively, the entities in the virtual scene may be people, animals, and the like displayed in the virtual scene. The gesture types may include: a heart gesture, a like gesture, a hand-waving gesture, a hand-raising gesture, a clapping gesture, and so on. In this embodiment, while the user is in the virtual reality scene, the user's gesture may be captured and displayed in the virtual scene. When the user's gesture is directed toward an entity in the virtual scene, this indicates that the user intends to show the gesture to that entity, so the gesture type of the user's gesture is identified. It should be noted that the entity toward which the user's gesture is directed may be changed by moving the gesture within the virtual scene.
Illustratively, the visual special effect may be a static emoticon or a dynamic animated emoticon. One gesture type may correspond to one visual special effect, and the visual special effects corresponding to different gesture types may be the same or different. A gesture type may also correspond to several different visual special effects; in that case, the candidate effects are displayed in the virtual scene, and the effect actually used is determined by the user's selection. For example, a mapping between gesture types and visual special effects may be preset, so that the visual special effect corresponding to a gesture type can be determined directly from the mapping.
For example, when the gesture type is a heart gesture, the visual special effect may be a love symbol, an animated expression with a heart-shaped pattern, or the like; when the gesture type is a like gesture, the visual special effect may be an animated expression representing a like; when the gesture type is a hand-waving gesture, the visual special effect may be an animated expression of a waving palm, or the like.
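The preset mapping between gesture types and visual special effects described above can be sketched as a simple lookup. The gesture names and effect identifiers below are illustrative placeholders, not values from the patent:

```python
# Sketch of the preset gesture-type -> visual-special-effect mapping.
# Names are illustrative placeholders, not values from the patent.
GESTURE_EFFECTS = {
    "heart": "love_symbol",           # heart gesture -> love symbol / heart animation
    "like": "like_animation",         # like gesture  -> animated "like" expression
    "wave": "waving_palm_animation",  # waving gesture -> waving-palm animation
}

def effect_for_gesture(gesture_type):
    """Return the preset visual special effect for a gesture type, or None if unmapped."""
    return GESTURE_EFFECTS.get(gesture_type)
```

A dictionary lookup with a `None` default mirrors the text's "directly determined through the mapping relationship" while leaving room for unrecognized gestures.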
According to this technical scheme, a gesture directed toward an entity in the virtual scene is recognized, the gesture type is determined, the corresponding visual special effect is determined based on the gesture type, and that effect is displayed in the virtual scene. The display modes in the virtual scene thus become richer and the interaction modes more diverse, increasing the interest of the interaction and avoiding the prior-art problem that interaction is performed only by clicking emoticons in the virtual scene, which makes the interaction mode monotonous and rigid.
In one embodiment, as shown in FIG. 2, displaying a visual effect in a virtual scene includes:
s201, determining the enhancement level of the display attribute of the visual special effect based on the duration of the gesture;
and S202, displaying the visual special effect in the virtual scene based on the enhancement level of the display attribute.
Illustratively, a longer gesture duration indicates a more intense emotion from the user, so the enhancement level of the display attribute of the visual special effect is adjusted according to the duration of the gesture. This strengthens the visual effect of the gesture in the virtual scene, enriches the display modes, and expresses the user's emotion more faithfully. It should be noted that the enhancement level of a display attribute may be determined by setting the attribute's parameter: the larger the parameter, the higher the enhancement level.
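One way to realize the duration-to-level rule above is a stepped function. The step size and level cap below are assumed values; the text only states that a longer duration yields a higher enhancement level:

```python
# Illustrative rule mapping gesture hold duration to an enhancement level:
# one extra level per step_s seconds held, capped at max_level.
# The step size and cap are assumptions, not values from the patent.
def enhancement_level(duration_s, step_s=1.0, max_level=5):
    """Levels start at 1 and grow with duration, capped at max_level."""
    return min(max_level, 1 + int(duration_s // step_s))
```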
In one embodiment, the display attribute comprises at least one of a number, a movement speed, and a size of the visual special effect.
Illustratively, the display attributes of the visual special effects can be set differently for different gesture types, making the interaction modes in the virtual scene more diverse and improving the user experience. For example, when the gesture type is a heart gesture, the display attributes of the visual special effect may be number and movement speed. Thus, as the duration of the heart gesture increases, the number of love symbols sent increases and their movement speed increases (i.e., the enhancement level of the display attributes increases).
For another example, when the gesture type is a hand-waving gesture, the display attribute of the visual special effect may be size. Thus, as the duration of the hand-waving gesture increases, the animated expression of the waving palm gradually grows larger (i.e., the enhancement level of the display attribute increases).
For another example, when the gesture type is a like gesture, the display attributes of the visual special effect may be size and number. Thus, as the duration of the like gesture increases, the number and size of the animated expressions representing a like increase (i.e., the enhancement level of the display attributes increases).
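The per-gesture examples above can be sketched as a table of base attributes scaled by the enhancement level. Which attributes each gesture scales follows the examples in the text; the base values themselves are assumptions:

```python
# Sketch of scaling the display attributes named above (number, movement
# speed, size) with the enhancement level. Base values are assumptions.
BASE_ATTRIBUTES = {
    "heart": {"count": 3, "speed": 1.0},  # more love symbols, moving faster
    "wave":  {"size": 1.0},               # larger waving-palm animation
    "like":  {"count": 1, "size": 1.0},   # more and larger "like" expressions
}

def scaled_attributes(gesture_type, level):
    """Multiply each of the gesture's base display attributes by the enhancement level."""
    return {name: value * level
            for name, value in BASE_ATTRIBUTES[gesture_type].items()}
```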
In one embodiment, as shown in FIG. 3, displaying the visual special effect in the virtual scene includes:
s301, displaying a visual special effect in a first preset area in a virtual scene; and/or the presence of a gas in the gas,
s302, displaying the visual special effect towards the entity in the virtual scene.
In step S301, the first preset area may be, for example, a gesture area of the user in the virtual scene, and may also be other areas in the virtual scene.
For example, when the gesture type is a heart gesture and the visual special effect corresponding to the heart gesture is determined to be a love symbol, the love symbol is displayed in the gesture area. When the gesture type is a hand-waving gesture, the corresponding visual special effect is determined to be an animated expression of a waving palm, which is displayed in the gesture area. When the gesture type is a like gesture, the corresponding visual special effect is determined to be an animated expression representing a like, which is displayed in the gesture area. Displaying the visual special effect in the gesture area after it is determined thus increases the interest of the interaction.
In step S302, for example, when the gesture type is a heart gesture and the corresponding visual special effect is determined to be a love symbol, the love symbol is sent toward the entity. When the gesture type is a like gesture and the corresponding visual special effect is determined to be an animated expression representing a like, that expression is sent toward the entity. Displaying the visual special effect toward the entity after it is determined can attract the entity's attention, increase the interest of the interaction, and improve the user experience.
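"Sending" the effect toward the entity can be sketched as moving it a fraction of the remaining distance toward the entity's position on each rendered frame. The positions and the speed fraction below are illustrative, not specified by the patent:

```python
# Minimal sketch of displaying a visual special effect toward the entity:
# per frame, advance the effect from the user's gesture area toward the
# entity's position by a fraction of the remaining distance.
# The fraction parameter is an assumed value.
def step_toward(effect_pos, entity_pos, fraction=0.25):
    """Advance the effect toward the entity by a fraction of the remaining distance."""
    return tuple(p + fraction * (q - p) for p, q in zip(effect_pos, entity_pos))
```

Calling this repeatedly converges the effect on the entity, which pairs naturally with the reach test described later for the entity's display area.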
In one embodiment, the method further includes:
in the event that the display area of the visual special effect reaches the display area of the entity, the visual special effect is displayed in a second preset area associated with the entity.
For example, the display area of the entity in the virtual scene may be the area around the entity, or another area in the virtual scene. The second preset area may be the same as the entity's display area, or a partial area within the entity's display area.
In this embodiment, a gesture interaction distance between entities is preset; its size can be set according to the social distance in real life, for example 5 meters, making interaction in the virtual scene closer to reality and thereby improving the user experience. When the distance between the user's position in the virtual scene and the entity's position is not greater than the social distance, the display area of the visual special effect can reach the display area of the entity, and the visual special effect can be displayed in the second preset area associated with the entity. This enriches the display modes of the virtual scene while letting the entity feel the user's emotion more strongly, drawing the entities in the virtual scene closer together.
For example, when the gesture type is a heart gesture and the distance between the user's position in the virtual scene and the entity's position is not greater than the social distance, the love symbol may be moved from the first preset area (for example, the user's gesture area in the virtual scene) to the second preset area (for example, the area around the entity) for display, or the love symbol may be displayed directly in the second preset area.
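The region-selection rule above reduces to a distance check against the preset gesture interaction distance (the text suggests a social distance such as 5 meters). The 2D scene coordinates below are illustrative:

```python
# Sketch of the region-selection rule: if the user is within the preset
# gesture interaction distance of the entity, the effect reaches the
# entity's display area and is shown in the second preset region;
# otherwise it stays in the user's gesture area.
import math

def display_region(user_pos, entity_pos, interaction_distance=5.0):
    """Choose where to display the effect based on user-entity distance."""
    if math.dist(user_pos, entity_pos) <= interaction_distance:
        return "second_region_near_entity"
    return "user_gesture_area"
```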
For ease of understanding, the following is illustrative:
In a virtual scene presented based on virtual reality technology, when the user enters the gesture interaction mode of the virtual scene, the user gesture collected by a capture device is received, the gesture type corresponding to the user gesture is identified, the target character toward which the user gesture is directed in the virtual scene is determined, and the duration of the gesture is determined.
Example one: when the gesture type of the user gesture is a heart gesture, the visual special effect is a love symbol, so the love symbol appears in the gesture area; in addition, the love symbol can be sent toward the target character. As the duration of the gesture increases, the number of love symbols sent increases and their movement speed also increases. If the target character is within the gesture interaction distance, the love symbol is displayed in the second preset area associated with the target character, strengthening the pleasant atmosphere. The target character may be changed by moving the gesture.
Example two: when the gesture type of the user gesture is a hand-waving gesture, the visual special effect is an animated expression of a waving palm, so that expression appears in the gesture area. Because a hand-waving gesture is used to call the target character, and might be overlooked if the distance is too great or the scene too busy, the animated expression of the waving palm is enlarged as the duration of the gesture increases, attracting the target character's attention and producing a better gesture interaction effect.
Example three: when the gesture type of the user gesture is a like gesture, the visual special effect is an animated expression representing a like, so that expression appears in the gesture area. As the duration of the gesture increases, the number of animated expressions representing a like increases, so that the target character can perceive the expressed emotion, making gesture interaction more diverse.
Fig. 4 is a block diagram of a gesture processing apparatus according to an embodiment of the present disclosure. As shown in fig. 4, the apparatus may include:
the recognition module 401 is configured to recognize a gesture towards an entity in the virtual scene, and determine a gesture type;
a processing module 402, configured to determine, based on the gesture type, a visual special effect corresponding to the gesture type;
a first display module 403, configured to display the visual special effect in the virtual scene.
Fig. 5 is a block diagram of a gesture processing apparatus according to an embodiment of the present disclosure. As shown in fig. 5, the apparatus may include:
the recognition module 501 is configured to recognize a gesture towards an entity in a virtual scene, and determine a gesture type;
a processing module 502, configured to determine, based on the gesture type, a visual special effect corresponding to the gesture type;
a first display module 503, comprising:
a determining unit 504 for determining an enhancement level of a display property of the visual special effect based on a duration of the gesture;
and a display unit 505 for displaying the visual special effect in the virtual scene based on the enhancement level of the display attribute.
In one embodiment, the display attribute comprises at least one of a number, a movement speed, and a size of the visual special effect.
In one embodiment, the first display module 503 is further configured to:
displaying a visual special effect in a first preset area in the virtual scene; and/or
a visual special effect is displayed towards the entity in the virtual scene.
In one embodiment, the above apparatus further comprises:
a second display module 506, configured to display the visual special effect in a second preset area associated with the entity if the display area of the visual special effect reaches the display area of the entity.
Therefore, by recognizing the gesture directed toward an entity in the virtual scene, determining the corresponding visual special effect based on the gesture type, and displaying that effect in the virtual scene, the display modes in the virtual scene become richer and the interaction modes more diverse, increasing the interest of the interaction and avoiding the prior-art problem that interaction is performed only by clicking emoticons in the virtual scene, which makes the interaction mode monotonous and rigid.
In the technical scheme of the disclosure, the acquisition, storage, application and the like of the personal information of the related user all accord with the regulations of related laws and regulations, and do not violate the good customs of the public order.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 6 illustrates a schematic block diagram of an example electronic device 600 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 6, the device 600 includes a computing unit 601, which can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 602 or a computer program loaded from a storage unit 608 into a random access memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the device 600 can also be stored. The computing unit 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
A number of components in the device 600 are connected to the I/O interface 605, including: an input unit 606 such as a keyboard, a mouse, or the like; an output unit 607 such as various types of displays, speakers, and the like; a storage unit 608, such as a magnetic disk, optical disk, or the like; and a communication unit 609 such as a network card, modem, wireless communication transceiver, etc. The communication unit 609 allows the device 600 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 601 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 601 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 601 performs the various methods and processes described above, such as the gesture processing method. For example, in some embodiments, the gesture processing method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into the RAM 603 and executed by the computing unit 601, one or more steps of the gesture processing method described above may be performed. Alternatively, in other embodiments, the computing unit 601 may be configured to perform the gesture processing method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (13)

1. A gesture processing method, comprising:
recognizing a gesture towards an entity in the virtual scene, and determining a gesture type;
determining a visual special effect corresponding to the gesture type based on the gesture type;
displaying the visual special effect in the virtual scene.
2. The method of claim 1, wherein said displaying the visual effect in the virtual scene comprises:
determining an enhancement level for a display attribute of the visual special effect based on a duration of the gesture;
displaying the visual special effect in the virtual scene based on the level of enhancement of the display attribute.
3. The method of claim 2, wherein the display attributes include at least one of a number, a movement speed, and a size of visual effects.
4. The method of any of claims 1-3, the displaying the visual effect in the virtual scene, comprising:
displaying the visual special effect in a first preset area in the virtual scene; and/or
displaying the visual special effect in the virtual scene towards the entity.
5. The method of claim 4, further comprising:
displaying the visual special effect in a second preset area associated with the entity if the display area of the visual special effect reaches the display area of the entity.
6. A gesture processing apparatus comprising:
the recognition module is used for recognizing gestures facing to entities in the virtual scene and determining gesture types;
the processing module is used for determining a visual special effect corresponding to the gesture type based on the gesture type;
a first display module to display the visual special effect in the virtual scene.
7. The apparatus of claim 6, wherein the first display module comprises:
a determination unit to determine an enhancement level of a display attribute of the visual special effect based on a duration of the gesture;
a display unit to display the visual special effect in the virtual scene based on an enhancement level of the display attribute.
8. The apparatus of claim 7, wherein the display attributes comprise at least one of a number, a movement speed, and a size of visual effects.
9. The apparatus of any of claims 6-8, wherein the first display module is further configured to:
display the visual special effect in a first preset area in the virtual scene; and/or
display the visual special effect in the virtual scene towards the entity.
10. The apparatus of claim 9, further comprising:
a second display module to display the visual special effect in a second preset area associated with the entity if the display area of the visual special effect reaches the display area of the entity.
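The apparatus of claims 6-10 decomposes into a recognition module, a processing module, and display modules. A hypothetical sketch of that pipeline; the gesture-to-effect mapping and method names are assumptions, and the recognition step stands in for an actual computer-vision classifier:

```python
from typing import Optional

# Assumed gesture-type -> visual-special-effect mapping (illustrative only).
GESTURE_EFFECTS = {"heart": "floating_hearts", "thumbs_up": "sparkles"}


class GestureApparatus:
    def recognize(self, raw_gesture: str) -> str:
        # Stand-in for a vision model classifying the gesture facing the entity.
        return raw_gesture if raw_gesture in GESTURE_EFFECTS else "unknown"

    def process(self, gesture_type: str) -> Optional[str]:
        # Processing module: gesture type -> corresponding visual special effect.
        return GESTURE_EFFECTS.get(gesture_type)

    def display(self, effect: Optional[str]) -> str:
        # Display module: render the effect in the virtual scene (stubbed).
        return f"render:{effect}" if effect else "render:none"
```

Chaining `recognize -> process -> display` mirrors the claimed module order: the recognition module feeds a gesture type to the processing module, whose effect is handed to the display module.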
11. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-5.
12. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-5.
13. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-5.
CN202111601419.9A 2021-12-24 2021-12-24 Gesture processing method, device, equipment and storage medium Pending CN114327059A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111601419.9A CN114327059A (en) 2021-12-24 2021-12-24 Gesture processing method, device, equipment and storage medium


Publications (1)

Publication Number Publication Date
CN114327059A true CN114327059A (en) 2022-04-12

Family

ID=81013990

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111601419.9A Pending CN114327059A (en) 2021-12-24 2021-12-24 Gesture processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114327059A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111104021A (en) * 2019-12-19 2020-05-05 腾讯科技(深圳)有限公司 Control method and device of virtual prop, storage medium and electronic device
CN111640202A (en) * 2020-06-11 2020-09-08 浙江商汤科技开发有限公司 AR scene special effect generation method and device
WO2021114710A1 (en) * 2019-12-09 2021-06-17 上海幻电信息科技有限公司 Live streaming video interaction method and apparatus, and computer device
CN113325954A (en) * 2021-05-27 2021-08-31 百度在线网络技术(北京)有限公司 Method, apparatus, device, medium and product for processing virtual objects

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination