CN110308793B - Augmented reality AR expression generation method and device and storage medium - Google Patents

Augmented reality AR expression generation method and device and storage medium

Info

Publication number
CN110308793B
CN110308793B
Authority
CN
China
Prior art keywords
expression
sensor
user
special effect
virtual character
Prior art date
Legal status
Active
Application number
CN201910597494.9A
Other languages
Chinese (zh)
Other versions
CN110308793A (en)
Inventor
郝冀宣
顾星宇
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201910597494.9A
Publication of CN110308793A
Application granted
Publication of CN110308793B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/012 - Head tracking input arrangements
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 - 3D [Three Dimensional] image rendering
    • G06T 15/04 - Texture mapping
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 - Manipulating 3D models or images for computer graphics
    • G06T 19/006 - Mixed reality
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00 - Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F 2203/01 - Indexing scheme relating to G06F3/01
    • G06F 2203/011 - Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30196 - Human being; Person
    • G06T 2207/30201 - Face

Abstract

The invention provides an augmented reality (AR) expression generation method, an AR expression generation device and a storage medium. The method comprises the following steps: acquiring a first AR expression to be processed, and acquiring sensor information collected by a sensor on the terminal device. The sensor comprises at least one of a gyroscope, a temperature sensor, a light sensor and a sound sensor, and the sensor information correspondingly comprises at least one of a movement parameter, a temperature parameter, an illumination parameter and a sound parameter. The first AR expression is adjusted according to the sensor information to obtain a second AR expression, which is displayed on a display interface of the terminal device. The second AR expression is either the first AR expression with a new special effect map superimposed on it, or an AR expression in which the virtual character has been switched. The method simplifies the operations a user performs to make an AR expression.

Description

Augmented reality AR expression generation method and device and storage medium
Technical Field
Embodiments of the invention relate to the technical field of information processing, and in particular to a method, a device and a storage medium for generating Augmented Reality (AR) expressions.
Background
With the development of the mobile internet, more and more users send and receive information using internet instant-messaging software. Instant messaging is no longer a simple chat tool; it integrates multiple functions such as image capture, email, music, video, gaming and search. Expressions in chat tools are an important way for users to convey emotion. To make chat more entertaining, making AR (augmented reality) expressions is favored by more and more users, who can create personalized AR expressions to convey a mood or message.
An AR expression is usually based on an image of a person: the person's facial expression or gesture is recognized from the image and triggers a change of the AR expression material, so the user interaction mode is limited. In addition, when selecting AR expression material, the user has to search the material-selection interface and then pick a suitable item, so the operation steps are cumbersome, and the user cannot process the AR expression material in a personalized way.
Disclosure of Invention
The invention provides a method, a device and a storage medium for generating Augmented Reality (AR) expressions, which simplify the operations a user performs to make an AR expression.
A first aspect of the present invention provides an AR expression generating method, including:
acquiring a first AR expression to be processed;
acquiring sensor information acquired by a sensor on terminal equipment;
and adjusting the first AR expression according to the sensor information to obtain a second AR expression, and displaying the second AR expression on a display interface of the terminal equipment.
Optionally, the sensor includes at least one of a gyroscope, a temperature sensor, a light sensor, and a sound sensor.
In one implementation manner, the sensor is a gyroscope, and the sensor information includes a movement parameter of the terminal device; adjusting the first AR expression according to the sensor information to obtain a second AR expression, including:
determining the movement degree and the movement direction of the first AR expression according to the movement parameters;
and adjusting the first AR expression according to the moving degree and the moving direction to obtain the second AR expression.
In one implementation manner, the sensor is a temperature sensor, and the sensor information includes a temperature parameter of the terminal device; adjusting the first AR expression according to the sensor information to obtain a second AR expression, including:
determining a first special effect map of the AR expression corresponding to the temperature parameter according to the temperature parameter;
and superposing the first special effect map in the first AR expression to obtain a second AR expression.
In one implementation manner, the sensor is a light sensor, and the sensor information includes an illumination parameter of an environment where the terminal device is located;
adjusting the first AR expression according to the sensor information to obtain a second AR expression, including:
determining shooting modes according to the illumination parameters, wherein the shooting modes comprise a day mode and a night mode;
determining a second special effect map of the AR expression corresponding to the shooting mode according to the shooting mode;
and superposing the second special effect map in the first AR expression to obtain a second AR expression.
In one implementation, the sensor is a sound sensor, and the sensor information includes a first sound parameter indicating a third special effect map required by a user;
adjusting the first AR expression according to the sensor information to obtain a second AR expression, including:
identifying the first sound parameter to obtain a third special effect map required by a user;
and superposing the third special effect map in the first AR expression to obtain a second AR expression.
In one implementation, the superimposing the third special effect map on the first AR expression to obtain a second AR expression includes:
acquiring a second sound parameter and mouth shape information of the user in the first AR expression;
adjusting the third special effect map according to the second sound parameter and the mouth shape information;
and superposing the adjusted third special effect map in the first AR expression to obtain a second AR expression.
In one implementation manner, the sensor is a sound sensor, the sensor information includes a third sound parameter, and the third sound parameter is used to indicate a second virtual character corresponding to a second AR expression required by the user; the second virtual character and the first virtual character corresponding to the first AR expression are different virtual characters;
adjusting the first AR expression according to the sensor information to obtain a second AR expression, including:
recognizing the third sound parameters to obtain a face image of the second virtual character;
and switching the first virtual character to the second virtual character to obtain the second AR expression containing the facial image of the second virtual character.
Optionally, the first AR expression includes a facial image of the photographed user, or a facial image of a virtual character; the facial image of the virtual character and the facial image of the photographed user have the same expressive features.
A second aspect of the present invention provides an augmented reality AR expression generating device, including:
the acquiring module is used for acquiring a first AR expression to be processed;
the acquisition module is also used for acquiring sensor information acquired by a sensor on the terminal equipment;
the processing module is used for adjusting the first AR expression according to the sensor information to obtain a second AR expression;
and the display module is used for displaying the second AR expression on a display interface of the terminal equipment.
A third aspect of the present invention provides an augmented reality AR expression generating device, including:
a memory;
a processor; and
a computer program;
wherein the computer program is stored in the memory and configured to be executed by the processor to implement the augmented reality AR expression generation method according to any one of the first aspect of the invention.
A fourth aspect of the present invention provides a computer-readable storage medium having stored thereon a computer program for execution by a processor to implement the augmented reality AR expression generation method according to any one of the first aspects of the present invention.
The embodiment of the invention provides an augmented reality (AR) expression generation method, an AR expression generation device and a storage medium. The method comprises the following steps: acquiring a first AR expression to be processed, and acquiring sensor information collected by a sensor on the terminal device. The sensor comprises at least one of a gyroscope, a temperature sensor, a light sensor and a sound sensor, and the sensor information correspondingly comprises at least one of a movement parameter, a temperature parameter, an illumination parameter and a sound parameter. The first AR expression is adjusted according to the sensor information to obtain a second AR expression, which is displayed on a display interface of the terminal device. The second AR expression is either the first AR expression with a new special effect map superimposed on it, or an AR expression in which the virtual character has been switched. The method simplifies the operations a user performs to make an AR expression.
Drawings
Fig. 1 is a schematic flow chart of an AR expression generating method according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of an AR expression generating method according to another embodiment of the present invention;
fig. 3 is a schematic flow chart of an AR expression generating method according to another embodiment of the present invention;
fig. 4 is a schematic flowchart of an AR expression generating method according to yet another embodiment of the present invention;
fig. 5 is a schematic flowchart of an AR expression generating method according to yet another embodiment of the present invention;
fig. 6 is a schematic flow chart illustrating superimposing a third special effect map on a first AR expression according to an embodiment of the present invention;
fig. 7 is a flowchart illustrating an AR expression generating method according to yet another embodiment of the present invention;
fig. 8 is a functional structure diagram of an AR expression generating device according to an embodiment of the present invention;
fig. 9 is a schematic diagram of a hardware structure of an AR expression generating apparatus according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the drawings, the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present invention; rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as detailed in the appended claims.
The terms "comprising" and "having," and any variations thereof, in the description and claims of this invention are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
The terms "first," "second," and the like in the description and in the claims, and in the drawings described above, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein.
Reference throughout this specification to "one embodiment" or "another embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in some embodiments" or "in this embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict.
A traditional expression is a pre-drawn expression, which can be either static or dynamic. A user can download expression packs of different styles from a network server according to personal preference and add suitable expressions while typing, making the input process more entertaining. Such expressions are mainly used in scenarios such as chat tools (e.g. instant-messaging software), web-page comments and video bullet-screen (danmaku) comments.
An AR expression differs from the traditional expression described above and may contain a facial image of the photographed user or a facial image of a virtual character. If the AR expression contains the facial image of the photographed user, the user can obtain an expression image carrying a special effect map through gestures or changes of facial expression, or through selection operations on the interface. If the AR expression contains the facial image of a virtual character, the user obtains the facial image of the virtual character through an interface selection operation; face recognition technology then extracts the facial expression features of the photographed user and uses them to drive the facial expression of the virtual character, generating an AR expression in which the virtual character conveys the user's emotion. An AR expression may be a static image or a dynamic image, and the dynamic image may be in different formats, such as GIF, live 2D or live 3D.
Whichever type of AR expression is used, the interaction process is limited: interaction is based only on the user's facial expressions, gestures, or interface selection operations. The user cannot quickly obtain AR expression material; the material to be added (including virtual characters or maps for the AR expression) has to be searched for on a selection interface and the suitable item picked from a search list before the selection is complete, which makes the operation process cumbersome. In addition, the user cannot process the AR expression material in a personalized way: the existing material in the material library cannot be changed, the AR expression material is fixed, and the diverse requirements users have for AR expressions cannot be met.
To solve the above problems, an embodiment of the present invention provides an AR expression generation method. A first AR expression to be processed is acquired; it may include a facial image of the photographed user or a facial image of a virtual character. Sensor information collected by the sensors on the terminal device is acquired, and the current first AR expression is adjusted according to the collected sensor information to obtain a second AR expression. The second AR expression may be the first AR expression with a special effect map superimposed on it, or an AR expression of a second virtual character different from the first virtual character corresponding to the first AR expression. The obtained second AR expression is displayed on a display interface of the terminal device.
In this embodiment, the user does not need to manually select the map to be added or the image of the virtual character to be used on the display interface of the terminal device. By collecting sensor information, the terminal device superimposes a corresponding special effect map on the current AR expression, or switches to another AR expression, according to that information, which greatly simplifies the operation steps of making an AR expression. The superimposed special effect map can also be adjusted in a personalized way according to the sensor information, making the use of AR expressions more entertaining.
The technical solution of the present invention will be described in detail below with specific examples. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.
Fig. 1 is a schematic flowchart of an AR expression generating method according to an embodiment of the present invention, where the method may be executed by any device that executes the method, and the device may be implemented by software and/or hardware.
As shown in fig. 1, the AR expression generating method provided in this embodiment includes the following steps:
s101, obtaining a first AR expression to be processed.
In this embodiment, the first AR expression to be processed may include a facial image of the photographed user or a facial image of a virtual character; this is not particularly limited. If the first AR expression to be processed includes a facial image of a virtual character, only the facial image of the virtual character is displayed on the display interface of the terminal device. The facial image of the virtual character has the same expression features as the facial image of the user captured by the camera; that is, the terminal device migrates the facial expression features of the photographed user onto the virtual character.
Alternatively, the first AR expression to be processed may include a complete image of the photographed user, i.e. an image that also captures parts of the body other than the face, such as a bust image of the photographed user. Correspondingly, the first AR expression to be processed may include a partial image of the photographed user and a partial image of the virtual character; that is, the head image of the photographed user may be replaced with the head image of the virtual character while the images of the user's other body parts are kept, so that the facial expression of the photographed user is transferred to the virtual character. The first AR expression to be processed may also include a complete image of the virtual character, i.e. the complete image of the photographed user is entirely replaced with that of the virtual character, and the facial expression of the photographed user and the motions of each part of the body are migrated to the virtual character at the same time.
The virtual character in this embodiment may be a virtual animal character or a virtual human character, such as a virtual cartoon character.
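By way of a hedged illustration of the composition just described, the first AR expression could be modeled by a small data structure such as the one below; every class and field name is an assumption introduced for illustration only, and the later sketches in this description reuse these names.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class SpecialEffectMap:
    """A background or foreground map overlaid on an AR expression (hypothetical structure)."""
    name: str                                   # e.g. "snowflake", "sun", "bubble"
    layer: str = "background"                   # "background" or "foreground"
    position: Tuple[float, float] = (0.0, 0.0)  # normalized coordinates on the display interface
    rotation_deg: float = 0.0                   # rotation of the map, in degrees
    scale: float = 1.0                          # >1.0 models deformation (e.g. a stretched bubble)

@dataclass
class ARExpression:
    """First/second AR expression: the photographed user, a virtual character, or a mix of both."""
    user_face: Optional[object] = None          # facial image of the photographed user, if kept
    character_face: Optional[str] = None        # virtual character whose face is shown, e.g. "dog"
    expression_features: dict = field(default_factory=dict)  # features migrated from the user
    effect_maps: List[SpecialEffectMap] = field(default_factory=list)  # superimposed maps
```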
S102, acquiring sensor information acquired by a sensor on the terminal equipment.
In this embodiment, the sensors on the terminal device include one or more of the following: a gyroscope, a temperature sensor, a light sensor and a sound sensor. Specifically:
A gyroscope is an angular-motion detection device that uses the moment of momentum of a high-speed rotor to sense rotation, relative to inertial space, about one or two axes orthogonal to the spin axis. The gyroscope in this embodiment is used to measure the movement parameters of the terminal device, where the movement parameters include a translation vector and/or a rotation vector.
A temperature sensor is a sensor that senses temperature and converts it into a usable output signal. The temperature sensor in this embodiment is used to measure the temperature parameter of the environment around the terminal device.
The light sensor, also called an ambient-light sensor, is a sensor that allows the terminal device to adjust its screen brightness automatically according to the surrounding environment. The light sensor in this embodiment is used to measure the illumination parameter of the environment in which the terminal device is located.
The sound sensor functions as a microphone and receives sound waves. It contains a capacitive electret microphone sensitive to sound: sound waves make the electret film in the microphone vibrate, causing a change in capacitance and thereby generating a small, correspondingly varying voltage. The sound sensor in this embodiment is used to detect sound parameters; the content indicated by a sound parameter differs between application scenarios, as detailed in the following embodiments.
Of course, the sensor of the present embodiment is not limited to the above sensor, and may be a sensor having other detection functions, and may be arranged according to actual needs, and this embodiment is not particularly limited to this embodiment.
S103, adjusting the first AR expression according to the sensor information to obtain a second AR expression.
Based on the sensors, the sensor information of different sensors can be correspondingly acquired. If the sensor is a gyroscope, the sensor information includes the movement parameters of the terminal device. If the sensor is a temperature sensor, the sensor information includes temperature parameters of the environment where the terminal device is located. If the sensor is a light sensor, the sensor information includes an illumination parameter of an environment where the terminal device is located. If the sensor is a sound sensor, the sensor information includes sound parameters.
The second AR expression is either the first AR expression with a new special effect map superimposed on it, or an AR expression in which the virtual character has been switched. Specifically:
In this embodiment, the terminal device adjusts the first AR expression according to the sensor information. In one possible implementation, the terminal device superimposes a special effect map on the first AR expression according to the sensor information; the special effect map is either determined directly from the sensor information, or candidate special effect maps are displayed based on the sensor information and the user picks the map to be superimposed from them. In another possible implementation, the terminal device switches the first AR expression to the second AR expression according to the sensor information, where the virtual characters of the first AR expression and the second AR expression are different. In a further possible implementation, the terminal device adjusts the special effect map of the first AR expression according to the sensor information, for example by adjusting its position or angle. Specific schemes for these implementations are given in the following embodiments.
And S104, displaying the second AR expression on a display interface of the terminal equipment.
In the AR expression generation method provided by this embodiment of the invention, the terminal device adjusts the first AR expression using the sensor information it collects, so the user does not need to perform many interface operations and the steps of making an AR expression are simplified. In addition, the terminal device can adjust the AR expression in a personalized way according to the sensor information, making the use of AR expressions more entertaining.
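A minimal sketch of S101 to S103, assuming the hypothetical ARExpression structure above and per-sensor helper functions elaborated in the later embodiments, could look as follows; S104 then displays the returned expression.

```python
def generate_second_ar_expression(first_expr, sensor_readings):
    """Adjust the first AR expression according to the collected sensor information.

    `sensor_readings` is assumed to map a sensor type to its latest reading; the
    per-sensor helpers are hypothetical and are sketched in the following embodiments.
    """
    second_expr = first_expr
    for sensor_type, reading in sensor_readings.items():
        if sensor_type == "gyroscope":          # movement parameters (translation/rotation)
            second_expr = adjust_by_movement(second_expr, reading)
        elif sensor_type == "temperature":      # ambient temperature parameter
            second_expr = overlay_temperature_map(second_expr, reading)
        elif sensor_type == "light":            # illumination parameter of the environment
            second_expr = overlay_shooting_mode_map(second_expr, reading)
        elif sensor_type == "sound":            # first/second/third sound parameter
            second_expr = adjust_by_sound(second_expr, reading)
    return second_expr                          # S104: display on the terminal's interface
```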
On the basis of the foregoing embodiments, in the AR expression generation method provided in this embodiment, the sensor includes a gyroscope and the sensor information includes a movement parameter. This embodiment discloses a technical scheme for adjusting the first AR expression according to the movement parameters of the terminal device, which achieves personalized adjustment of the AR expression and makes using AR expressions more entertaining. The AR expression generation method provided in this embodiment is described in detail below with reference to fig. 2.
Fig. 2 is a schematic flow chart of an AR expression generating method according to another embodiment of the present invention. As shown in fig. 2, the AR expression generating method provided in this embodiment specifically includes the following steps:
s201, acquiring a first AR expression to be processed;
s201 of this embodiment is the same as S101 of the above embodiment, and reference may be made to the above embodiment for details, which are not described herein again.
S202, acquiring sensor information acquired by a sensor on the terminal equipment, wherein the sensor information comprises a mobile parameter.
In this embodiment, the sensor includes a gyroscope. The gyroscope is used for measuring the movement parameters of the terminal equipment, wherein the movement parameters comprise translation vectors and/or rotation vectors.
And S203, determining the moving degree and the moving direction of the first AR expression according to the moving parameters.
And S204, adjusting the first AR expression according to the moving degree and the moving direction to obtain a second AR expression.
In this embodiment, the movement parameters may include a translation vector. A translation vector includes a translation distance and a translation direction; the user may translate the terminal device to the left, right, up, or down.
In one implementation, determining the movement degree and movement direction of the first AR expression according to the movement parameters may include: determining the deformation degree and deformation direction of the facial image of the photographed user, or of the virtual character, in the first AR expression according to the translation distance and translation direction of the terminal device. Correspondingly, adjusting the first AR expression according to the movement degree and movement direction to obtain the second AR expression includes: adjusting the facial image of the photographed user or the facial image of the virtual character in the first AR expression according to the deformation degree and deformation direction to obtain the second AR expression, in which that facial image is a deformed image.
For example, if the user translates the terminal device a certain distance to the left, the facial image of the photographed user or of the virtual character in the first AR expression is correspondingly deformed to the left, and the degree of deformation is proportional to the translation distance. If the user translates the terminal device upwards, the facial image is deformed upwards, again with a degree of deformation proportional to the translation distance. This makes making AR expressions more entertaining for the user.
In another implementation, determining the movement degree and movement direction of the first AR expression according to the movement parameters may include: determining the translation distance and translation direction of the special effect map in the first AR expression according to the translation distance and translation direction of the terminal device. Correspondingly, adjusting the first AR expression according to the movement degree and movement direction to obtain the second AR expression includes: adjusting the position of the special effect map added to the first AR expression according to the translation distance and translation direction to obtain the second AR expression.
For example, if the user translates the terminal device a certain distance to the left, the special effect map added to the first AR expression is correspondingly translated to the left; if the user translates the terminal device upwards, the map is translated upwards. The ratio between the movement distance of the terminal in space and the movement distance of the special effect map on the display interface is preset, and the movement distance of the map on the display interface is adjusted according to this ratio. With this interaction, the user does not need to adjust the position of the map on the display interface, which simplifies the steps of making an AR expression.
In this embodiment, the movement parameter may include a rotation vector. The rotation vector includes a rotation angle and a rotation direction. The rotation direction of the terminal device may be a clockwise direction or a counterclockwise direction.
Determining the movement degree and movement direction of the first AR expression according to the movement parameters may include: determining the rotation angle and rotation direction of the special effect map added to the first AR expression according to the rotation angle and rotation direction of the terminal device. Correspondingly, adjusting the first AR expression according to the movement degree and movement direction to obtain the second AR expression includes: adjusting the position of the special effect map added to the first AR expression according to the rotation angle and rotation direction to obtain the second AR expression.
For example, if the user rotates the terminal device clockwise by 15°, the special effect map added to the first AR expression is correspondingly rotated clockwise by 15°. With this interaction, the user does not need to rotate the map on the display interface of the terminal device, which simplifies the steps of making an AR expression.
And S205, displaying the second AR expression on a display interface of the terminal device.
In the AR expression generation method provided by this embodiment of the invention, the terminal device obtains the movement parameters acquired by the gyroscope, where the movement parameters include a translation vector and/or a rotation vector, determines the movement degree and movement direction of the first AR expression according to the movement parameters, and adjusts the facial image of the photographed user, the facial image of the virtual character, or the added special effect map in the first AR expression according to that movement degree and direction, so as to obtain the second AR expression. The second AR expression, which is an AR expression customized by the user, is displayed on the display interface of the terminal device. This implementation makes using AR expressions more entertaining and meets the individual requirements different users have for AR expressions.
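Under the same assumptions, the gyroscope-driven adjustment of an added special effect map could be sketched as follows; the preset space-to-screen ratio and the shape of the movement parameter are illustrative assumptions rather than values taken from the description.

```python
def adjust_by_movement(expr, movement, space_to_screen_ratio=2.0):
    """Translate/rotate the special effect maps added to the expression.

    `movement.translation` is assumed to be an optional (dx, dy) pair describing how far
    the terminal was moved in space, and `movement.rotation_deg` an optional signed
    rotation angle (positive = clockwise), both taken from the gyroscope reading.
    """
    for effect_map in expr.effect_maps:
        if getattr(movement, "translation", None) is not None:
            dx, dy = movement.translation
            x, y = effect_map.position
            # the map moves by the preset ratio of terminal movement to on-screen movement
            effect_map.position = (x + dx * space_to_screen_ratio,
                                   y + dy * space_to_screen_ratio)
        if getattr(movement, "rotation_deg", None) is not None:
            # e.g. rotating the terminal clockwise by 15 degrees rotates the map by 15 degrees
            effect_map.rotation_deg += movement.rotation_deg
    return expr
```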
On the basis of the foregoing embodiments, in the AR expression generating method provided in this embodiment, the sensor includes a temperature sensor, and the sensor information includes a temperature parameter of an environment where the terminal device is located. The embodiment discloses a technical scheme for adjusting the first AR expression according to the temperature parameter of the environment where the terminal device is located, and the scheme simplifies the operation steps of making the AR expression by a user. The AR expression generation method provided in this embodiment is described in detail below with reference to fig. 3.
Fig. 3 is a flowchart illustrating an AR expression generating method according to another embodiment of the present invention. As shown in fig. 3, the AR expression generating method provided in this embodiment includes the following steps:
s301, obtaining a first AR expression to be processed.
S301 of this embodiment is the same as S101 of the above embodiment, and reference may be made to the above embodiment for details, which are not described herein again.
S302, acquiring sensor information acquired by a sensor on the terminal equipment, wherein the sensor information comprises temperature parameters.
In this embodiment, the sensor comprises a temperature sensor. The temperature sensor is used for measuring the temperature parameter of the environment where the terminal equipment is located.
S303, determining a first special effect map of the AR expression corresponding to the temperature parameter according to the temperature parameter.
S304, superposing the first special effect map in the first AR expression to obtain a second AR expression.
In this embodiment, first special effect maps of the AR expression corresponding to different temperature parameters are pre-stored in a resource library of AR expression material. The first special effect map can be a background map or a foreground map of the AR expression, and the map can be static or dynamic.
For example, if the temperature parameter collected by the temperature sensor indicates that the environment of the terminal device is at minus 5 degrees, the terminal device obtains, from the resource library, the first special effect map of the AR expression corresponding to that temperature parameter; the map may be a background or foreground map containing snowmen or snowflakes. The user can download background or foreground maps related to temperature parameters from the server in advance according to their own needs, and when the automatic map-adding function is enabled, the pre-stored map related to the current temperature is added to the AR expression automatically, which spares the user from searching for or clicking to select the first special effect map related to the ambient temperature.
S305, displaying the second AR expression on a display interface of the terminal device.
According to the AR expression generation method provided by the embodiment of the invention, the terminal equipment acquires the temperature parameter acquired by the temperature sensor, and the temperature parameter is used for indicating the temperature value of the environment where the terminal equipment is located. And acquiring a first special effect map of the AR expression corresponding to the temperature parameter from the resource library according to the temperature parameter, and superposing the first special effect map in the first AR expression to obtain a second AR expression. And displaying the second AR expression on a display interface of the terminal equipment. The second AR expression of this embodiment includes a special effect map related to the ambient temperature. The implementation process simplifies the operation steps of making the AR expression by the user.
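A hedged sketch of this temperature-driven selection, with purely illustrative temperature ranges and map names standing in for the pre-stored resource library, might read:

```python
def overlay_temperature_map(expr, temperature):
    """Superimpose the first special effect map that corresponds to the ambient temperature.

    The ranges and map names below are illustrative assumptions; in the description the
    maps are pre-stored per temperature parameter in the AR-material resource library.
    """
    temperature_maps = [
        (float("-inf"), 0.0,  "snowman_background"),   # e.g. minus 5 degrees -> snowmen/snowflakes
        (0.0, 25.0,           "plain_background"),
        (25.0, float("inf"),  "sunshine_background"),
    ]
    for low, high, map_name in temperature_maps:
        if low <= temperature < high:
            expr.effect_maps.append(SpecialEffectMap(name=map_name))
            break
    return expr
```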
On the basis of the foregoing embodiments, in the AR expression generating method provided in this embodiment, the sensor includes a light sensor, and the sensor information includes an illumination parameter of an environment where the terminal device is located. The embodiment discloses a technical scheme for adjusting the first AR expression according to the illumination parameter of the environment where the terminal device is located, and the scheme simplifies the operation steps of making the AR expression by a user. The AR expression generation method provided in this embodiment is described in detail below with reference to fig. 4.
Fig. 4 is a flowchart illustrating an AR expression generating method according to still another embodiment of the present invention. As shown in fig. 4, the AR expression generating method provided in this embodiment includes the following steps:
s401, obtaining a first AR expression to be processed.
S401 of this embodiment is the same as S101 of the above embodiment, and reference may be made to the above embodiment for details, which are not described herein again.
S402, acquiring sensor information acquired by a sensor on the terminal equipment, wherein the sensor information comprises an illumination parameter.
In this embodiment, the sensor comprises a light sensor. The light sensor is used for measuring the illumination parameters of the environment where the terminal equipment is located.
And S403, determining shooting modes according to the illumination parameters, wherein the shooting modes comprise a day mode and a night mode.
And S404, determining a second special effect map of the AR expression corresponding to the shooting mode according to the shooting mode.
S405, overlapping the second special effect map in the first AR expression to obtain a second AR expression.
Specifically, the terminal device acquires the illumination parameters of the environment where the terminal device is located, which are acquired by the light sensor, and when the illumination parameters are less than or equal to the preset illumination parameters, the shooting mode of the terminal device is determined to be the night mode. And when the illumination parameter is larger than the preset illumination parameter, determining that the shooting mode of the terminal equipment is the daytime mode.
In this embodiment, second special effect maps of the AR expression corresponding to the different shooting modes are pre-stored in the resource library of AR expression material. The second special effect map can be a background map or a foreground map of the AR expression, and the map can be static or dynamic.
The user can download background or foreground maps related to illumination parameters from the server in advance according to their own needs, and when the automatic map-adding function is enabled, the pre-stored map related to the current shooting mode is added to the AR expression automatically, which spares the user from searching for or clicking to select the second special effect map related to the illumination parameter.
And S406, displaying the second AR expression on a display interface of the terminal equipment.
According to the AR expression generation method provided by the embodiment of the invention, the terminal equipment acquires the illumination parameters of the light sensor, and the illumination parameters are used for indicating the illumination of the environment where the terminal equipment is located. And determining a shooting mode according to the illumination parameters, wherein the shooting mode comprises a day mode and a night mode. And determining a second special effect map of the AR expression corresponding to the shooting mode according to the shooting mode, and superposing the second special effect map in the first AR expression to obtain a second AR expression. And displaying the second AR expression on a display interface of the terminal equipment. The second AR expression of this embodiment includes a special effect map related to ambient lighting. The implementation process simplifies the operation steps of making the AR expression by the user.
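The light-sensor embodiment reduces to a threshold test followed by a lookup. In the sketch below the threshold value and the map names are assumptions, as the description only states that readings at or below a preset illumination parameter select the night mode.

```python
def overlay_shooting_mode_map(expr, illumination, preset_illumination=50.0):
    """Determine the shooting mode from the illumination parameter and superimpose its map."""
    shooting_mode = "night" if illumination <= preset_illumination else "day"
    mode_maps = {"day": "daylight_foreground", "night": "starry_background"}  # illustrative names
    expr.effect_maps.append(SpecialEffectMap(name=mode_maps[shooting_mode]))
    return expr
```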
On the basis of the foregoing embodiments, in the AR expression generating method provided in this embodiment, the sensor includes a sound sensor, the sensor information includes sound parameters, and the contents indicated by the sound parameters are different in different application scenes. The following embodiments disclose technical schemes for adjusting the first AR expression according to the sound parameters, which not only simplify the operation steps of making the AR expression by the user, but also increase the interestingness of using the AR expression by the user. The AR expression generation method will be described in detail below with reference to fig. 5, 6, and 7, respectively.
Fig. 5 is a flowchart illustrating an AR expression generating method according to still another embodiment of the present invention. As shown in fig. 5, the AR expression generating method provided in this embodiment includes the following steps:
s501, obtaining a first AR expression to be processed.
S501 of this embodiment is the same as S101 of the above embodiment, and reference may be made to the above embodiment for details, which are not repeated herein.
S502, acquiring sensor information acquired by a sensor on the terminal equipment, wherein the sensor information comprises a first sound parameter.
In this embodiment, the sensor includes a sound sensor, which is used to detect a first sound parameter. The first sound parameter indicates a third special effect map required by the user.
S503, identifying the first sound parameter to obtain a third special effect map required by the user.
And S504, superposing the third special effect map in the first AR expression to obtain a second AR expression.
In this embodiment, the terminal device recognizes the first sound parameter collected by the sound sensor using speech recognition technology, determines the third special effect map required by the user, and obtains that map from the resource library or the server. The third special effect map can be a background map or a foreground map of the AR expression, and the map can be static or dynamic.
For example, the user speaks the name of the required special effect map, "sun", which is picked up by the sound sensor. The terminal device first queries the local resource library for a pre-stored "sun" special effect map; if no such map exists in the resource library, the "sun" special effect map is downloaded online through the server. When the user enables the speech recognition function, the required special effect map can be added to the AR expression automatically, which spares the user from searching and clicking while making the AR expression.
And S505, displaying the second AR expression on a display interface of the terminal equipment.
In the AR expression generating method provided in this embodiment, the terminal device acquires the first sound parameter using the sound sensor, where the first sound parameter indicates the third special effect map required by the user. The first sound parameter is recognized to determine the third special effect map, which is superimposed on the first AR expression to obtain the second AR expression. The second AR expression including the third special effect map is displayed on the terminal device. This implementation simplifies the operation steps of making an AR expression.
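A sketch of the voice-named map selection follows; the speech-recognition, local resource library and server-download helpers are hypothetical stand-ins for the facilities described above.

```python
def overlay_voice_requested_map(expr, first_sound_parameter):
    """Recognize the spoken map name and superimpose the third special effect map."""
    map_name = recognize_speech(first_sound_parameter)    # e.g. the user says "sun"
    effect_map = local_library.find(map_name)             # query the pre-stored local maps first
    if effect_map is None:
        effect_map = download_from_server(map_name)       # otherwise download it online
    expr.effect_maps.append(effect_map)
    return expr
```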
Optionally, the embodiment shown in fig. 6 shows how to adjust a special effect map already added to the AR expression according to the sound parameters collected by the sound sensor. In a specific example, whether the user is blowing is judged through the sound sensor together with mouth-shape recognition, which triggers movement or deformation of the special effect map in the AR expression.
Fig. 6 is a schematic flow chart of superimposing the third special effect map on the first AR expression according to an embodiment of the present invention, and based on the embodiment shown in fig. 5, as shown in fig. 6, the step S503 may include the following steps:
s5031, acquiring a second sound parameter and mouth shape information of the user in the first AR expression;
s5032, adjusting a third special effect map according to the second sound parameter and the mouth shape information;
and S5033, superposing the adjusted third special effect map in the first AR expression to obtain a second AR expression.
In this embodiment, the third special effect map superimposed in the first AR expression may be a movable map material such as a bubble, a paper plane, and a balloon. The second sound parameter is used for indicating the blowing strength corresponding to the blowing action of the user.
Specifically, the terminal device acquires the mouth shape information of the user by adopting an image recognition technology, and judges whether the user performs the air blowing operation or not according to the second sound parameter acquired by the sound sensor and the mouth shape information of the user. And if the fact that the user is performing blowing operation is determined, determining the blowing strength of the user according to the second sound parameter and the mouth shape information of the user, and triggering the movement or deformation of the third special effect map added in the first AR expression according to the blowing strength. Illustratively, when the third special effect map is a bubble, the greater the blowing force of the user is, the faster the moving speed of the bubble is, and the bubble has a certain deformation.
In the method for generating the AR expression provided by this embodiment, the terminal device acquires the mouth shape information of the photographed user by using an image recognition technology, and simultaneously acquires a second sound parameter acquired by the sound sensor, where the second sound parameter is used to indicate the blowing strength corresponding to the user's blowing action. And moving or deforming the third special effect map added in the first AR expression according to the second sound parameter and the mouth shape information of the user to obtain a second AR expression. The second AR expression of the present embodiment is a dynamic expression. The implementation process increases the interestingness of using the AR expression by the user, and realizes dynamic adjustment of the added special effect map.
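The blowing interaction can be sketched as below; `is_blowing` and `blow_strength` are hypothetical helpers combining the second sound parameter with the image-recognized mouth-shape information, and the movement and deformation factors are illustrative.

```python
MOVABLE_MAPS = {"bubble", "paper_plane", "balloon"}

def adjust_map_by_blowing(expr, second_sound_parameter, mouth_shape):
    """Move and deform movable maps in proportion to the user's blowing strength."""
    if not is_blowing(second_sound_parameter, mouth_shape):
        return expr
    strength = blow_strength(second_sound_parameter, mouth_shape)   # assumed range 0.0 .. 1.0
    for effect_map in expr.effect_maps:
        if effect_map.name in MOVABLE_MAPS:
            x, y = effect_map.position
            effect_map.position = (x + 0.1 * strength, y)  # stronger blowing -> faster movement
            effect_map.scale = 1.0 + 0.2 * strength        # and a certain deformation
    return expr
```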
Fig. 7 is a flowchart illustrating an AR expression generating method according to still another embodiment of the present invention. As shown in fig. 7, the AR expression generating method provided in this embodiment includes the following steps:
s701, obtaining a first AR expression to be processed.
S701 of this embodiment is the same as S101 of the above embodiment, and reference may be made to the above embodiment for details, which are not described herein again.
S702, acquiring sensor information acquired by a sensor on the terminal equipment, wherein the sensor information comprises a third sound parameter.
In this embodiment, the sensor includes a sound sensor, which is used to detect a third sound parameter. The third sound parameter indicates the second virtual character corresponding to the second AR expression required by the user.
When the first AR expression includes a facial image of the first virtual character, the second virtual character is a different virtual character from the first virtual character corresponding to the first AR expression.
And S703, acquiring the image of the second virtual character indicated by the third sound parameter.
S704, the first virtual character is switched to a second virtual character, and a second AR expression including a facial image of the second virtual character is obtained.
Specifically, the terminal device recognizes the third sound parameter collected by the sound sensor using speech recognition technology, obtains the facial image of the second virtual character indicated by the third sound parameter from the resource library or the server, and migrates the facial expression features of the photographed user onto the second virtual character.
For example, assume that the first AR expression includes a facial image of the animal character "dog", i.e. the first virtual character is a "dog". The user states by voice, through the sound sensor, that the required second virtual character is a "cat". The terminal device obtains the facial image of the second virtual character, replaces the facial image of the first virtual character with it, and at the same time migrates the facial expression features of the currently photographed user onto the second virtual character. This implementation automatically switches the virtual character in the AR expression and simplifies the operation steps of making an AR expression.
S705, displaying the second AR expression on a display interface of the terminal device.
Optionally, the first AR expression of this embodiment may include a facial image of the photographed user; in that case the above process replaces the photographed user with a virtual character, i.e. the facial image of the photographed user is automatically replaced with the facial image of the virtual character by means of speech recognition, while the facial expression features of the photographed user are migrated onto the virtual character. With this implementation, the user does not need to search or click on a virtual-character selection interface, which simplifies the operation steps of making an AR expression.
In this embodiment, an AR expression generating method is provided, where a terminal device acquires a third sound parameter through a sound sensor, where the third sound parameter is used to indicate a second virtual character corresponding to a second AR expression required by a user. And acquiring a face image of the second virtual character by identifying the third sound parameter, and switching the first virtual character into the second virtual character to obtain a second AR expression containing the face image of the second virtual character. And displaying the second AR expression on a display interface of the terminal equipment. The implementation process simplifies the operation steps of AR expression making.
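A sketch of the character switch follows, again with hypothetical helpers for speech recognition, resource-library lookup and expression-feature migration.

```python
def switch_virtual_character(expr, third_sound_parameter):
    """Switch the first virtual character to the one named by the user's voice."""
    character_name = recognize_speech(third_sound_parameter)   # e.g. the user says "cat"
    new_face = fetch_character_face(character_name)            # from the resource library or server
    expr.character_face = new_face                             # replace the "dog" face with "cat"
    migrate_expression(expr.expression_features, new_face)     # keep the photographed user's expression
    return expr
```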
Fig. 8 is a schematic functional structure diagram of an AR expression generating device according to an embodiment of the present invention, and as shown in fig. 8, the AR expression generating device 800 according to the embodiment includes:
an obtaining module 801, configured to obtain a first AR expression to be processed;
the obtaining module 801 is further configured to obtain sensor information acquired by a sensor on the terminal device;
the processing module 802 is configured to adjust the first AR expression according to the sensor information to obtain a second AR expression;
and a display module 803, configured to display the second AR expression on a display interface of the terminal device.
Optionally, the sensor includes at least one of a gyroscope, a temperature sensor, a light sensor, and a sound sensor.
Optionally, the sensor is a gyroscope, and the sensor information includes a movement parameter of the terminal device; the processing module 802 is specifically configured to:
determining the moving degree and the moving direction of the first AR expression according to the moving parameters;
and adjusting the first AR expression according to the moving degree and the moving direction to obtain the second AR expression.
Optionally, the sensor is a temperature sensor, and the sensor information includes a temperature parameter of the terminal device; the processing module 802 is specifically configured to:
determining a first special effect map of the AR expression corresponding to the temperature parameter according to the temperature parameter;
and superposing the first special effect map in the first AR expression to obtain a second AR expression.
Optionally, the sensor is a light sensor, and the sensor information includes an illumination parameter of an environment where the terminal device is located; the processing module 802 is specifically configured to:
determining shooting modes according to the illumination parameters, wherein the shooting modes comprise a day mode and a night mode;
determining a second special effect map of the AR expression corresponding to the shooting mode according to the shooting mode;
and superposing the second special effect map in the first AR expression to obtain a second AR expression.
Optionally, the sensor is a sound sensor, the sensor information includes a first sound parameter, and the first sound parameter is used to indicate a third special effect map required by the user; the processing module 802 is specifically configured to:
identifying the first sound parameter to obtain a third special effect map required by a user;
and superposing the third special effect map in the first AR expression to obtain a second AR expression.
Optionally, the obtaining module 801 is further configured to obtain a second sound parameter and mouth shape information of the user in the first AR expression;
the processing module 802 is further configured to:
adjusting the third special effect map according to the second sound parameter and the mouth shape information;
and superposing the adjusted third special effect map in the first AR expression to obtain a second AR expression.
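One way to picture this adjustment, assuming the second sound parameter is a loudness value in [0, 1] and the mouth shape information gives the mouth centre and openness (both encodings are assumptions for the example):

def adjust_third_effect_map(effect_map, second_sound_parameter, mouth_shape):
    # Louder voice and a wider-open mouth enlarge the effect, which is anchored
    # at the user's mouth before being superimposed on the first AR expression.
    openness = mouth_shape["openness"]  # 0.0 (closed) .. 1.0 (wide open)
    centre = mouth_shape["centre"]      # (x, y) position of the mouth
    scale = 1.0 + second_sound_parameter + openness
    return {"map": effect_map, "scale": scale, "anchor": centre}

adjusted = adjust_third_effect_map("fireworks_effect.png", 0.5,
                                   {"openness": 0.75, "centre": (120, 340)})
print(adjusted)  # {'map': 'fireworks_effect.png', 'scale': 2.25, 'anchor': (120, 340)}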
Optionally, the sensor is a sound sensor, the sensor information includes a third sound parameter, and the third sound parameter is used to indicate a second virtual character corresponding to a second AR expression required by the user; the second virtual character and the first virtual character corresponding to the first AR expression are different virtual characters; the processing module 802 is specifically configured to:
recognizing the third sound parameter to obtain a facial image of the second virtual character;
and switching the first virtual character to the second virtual character to obtain the second AR expression containing the facial image of the second virtual character.
Optionally, the first AR expression includes a facial image of the photographed user, or a facial image of a virtual character; the facial image of the virtual character and the facial image of the photographed user have the same expressive features.
The AR expression generating device provided in this embodiment may implement the technical solutions of the above method embodiments, and the implementation principles and technical effects are similar, which are not described herein again.
Fig. 9 is a schematic diagram of a hardware structure of an AR expression generating apparatus according to an embodiment of the present invention, and as shown in fig. 9, an AR expression generating apparatus 900 according to the embodiment includes:
a memory 901;
a processor 902; and
a computer program;
wherein, the computer program is stored in the memory 901 and configured to be executed by the processor 902 to implement the technical solution of any one of the foregoing method embodiments, and the implementation principle and technical effect thereof are similar, and are not described herein again.
Alternatively, the memory 901 may be separate or integrated with the processor 902.
When the memory 901 is a device separate from the processor 902, the AR expression generation apparatus 900 further includes:
a bus 903 for connecting the memory 901 and the processor 902.
Embodiments of the present invention also provide a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by the processor 902 to implement the steps performed by the AR expression generating apparatus 900 in the above method embodiments.
It should be understood that the processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of a method disclosed in connection with the present invention may be executed directly by a hardware processor, or by a combination of hardware and software modules within the processor.
The memory may comprise a high-speed RAM, and may further comprise a non-volatile memory (NVM) such as at least one magnetic disk memory; it may also be a USB flash drive, a removable hard disk, a read-only memory, a magnetic disk, an optical disc, or the like.
The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, the bus in the figures of the present application is not limited to only one bus or one type of bus.
The storage medium may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an application-specific integrated circuit (ASIC). Alternatively, the processor and the storage medium may reside as discrete components in the AR expression generating apparatus.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and these modifications or substitutions do not depart from the spirit of the corresponding technical solutions of the embodiments of the present invention.

Claims (11)

1. An Augmented Reality (AR) expression generation method is characterized by comprising the following steps:
acquiring a first AR expression to be processed;
acquiring sensor information acquired by a sensor on terminal equipment;
adjusting the first AR expression according to the sensor information to obtain a second AR expression, and displaying the second AR expression on a display interface of the terminal equipment;
the sensor is a sound sensor, the sensor information comprises a third sound parameter, and the third sound parameter is used for indicating a second virtual character corresponding to a second AR expression required by the user; the second virtual character and the first virtual character corresponding to the first AR expression are different virtual characters;
adjusting the first AR expression according to the sensor information to obtain a second AR expression, including:
recognizing the third sound parameter to obtain a facial image of the second virtual character;
and switching the first virtual character to the second virtual character to obtain the second AR expression containing the facial image of the second virtual character.
2. The method of claim 1, wherein the sensor comprises at least one of a gyroscope, a temperature sensor, a light sensor, and a sound sensor.
3. The method of claim 2, wherein the sensor is a gyroscope, and the sensor information includes a movement parameter of the terminal device; adjusting the first AR expression according to the sensor information to obtain a second AR expression, including:
determining the moving degree and the moving direction of the first AR expression according to the moving parameters;
and adjusting the first AR expression according to the moving degree and the moving direction to obtain the second AR expression.
4. The method of claim 2, wherein the sensor is a temperature sensor, and the sensor information includes a temperature parameter of the terminal device; adjusting the first AR expression according to the sensor information to obtain a second AR expression, including:
determining a first special effect map of the AR expression corresponding to the temperature parameter according to the temperature parameter;
and superposing the first special effect map in the first AR expression to obtain a second AR expression.
5. The method according to claim 2, wherein the sensor is a light sensor, and the sensor information includes an illumination parameter of an environment in which the terminal device is located;
adjusting the first AR expression according to the sensor information to obtain a second AR expression, including:
determining a shooting mode according to the illumination parameter, wherein the shooting mode is a day mode or a night mode;
determining a second special effect map of the AR expression corresponding to the shooting mode according to the shooting mode;
and superposing the second special effect map in the first AR expression to obtain a second AR expression.
6. The method of claim 2, wherein the sensor is a sound sensor, the sensor information comprises a first sound parameter, and the first sound parameter is used to indicate a third special effect map required by the user;
adjusting the first AR expression according to the sensor information to obtain a second AR expression, including:
identifying the first sound parameter to obtain a third special effect map required by a user;
and superposing the third special effect map in the first AR expression to obtain a second AR expression.
7. The method of claim 6, wherein superimposing the third special effect map on the first AR expression to obtain a second AR expression comprises:
acquiring a second sound parameter and mouth shape information of the user in the first AR expression;
adjusting the third special effect map according to the second sound parameter and the mouth shape information;
and superposing the adjusted third special effect map in the first AR expression to obtain a second AR expression.
8. The method of any of claims 1 to 7, wherein the first AR expression comprises a facial image of the user being photographed, or a facial image of a virtual character; the facial image of the virtual character and the facial image of the photographed user have the same expressive features.
9. An Augmented Reality (AR) expression generation device, comprising:
the acquiring module is used for acquiring a first AR expression to be processed;
the acquisition module is also used for acquiring sensor information acquired by a sensor on the terminal equipment;
the processing module is used for adjusting the first AR expression according to the sensor information to obtain a second AR expression;
the display module is used for displaying the second AR expression on a display interface of the terminal equipment;
the sensor is a sound sensor, the sensor information comprises a third sound parameter, and the third sound parameter is used for indicating a second virtual character corresponding to a second AR expression required by the user; the second virtual character and the first virtual character corresponding to the first AR expression are different virtual characters;
the processing module is specifically configured to:
recognizing the third sound parameter to obtain a facial image of the second virtual character;
and switching the first virtual character to the second virtual character to obtain the second AR expression containing the facial image of the second virtual character.
10. An Augmented Reality (AR) expression generation device, comprising:
a memory;
a processor; and
a computer program;
wherein the computer program is stored in the memory and configured to be executed by the processor to implement the augmented reality AR expression generation method of any one of claims 1-8.
11. A computer-readable storage medium, having stored thereon a computer program for execution by a processor to implement the augmented reality AR expression generation method of any one of claims 1-8.
CN201910597494.9A 2019-07-04 2019-07-04 Augmented reality AR expression generation method and device and storage medium Active CN110308793B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910597494.9A CN110308793B (en) 2019-07-04 2019-07-04 Augmented reality AR expression generation method and device and storage medium


Publications (2)

Publication Number Publication Date
CN110308793A CN110308793A (en) 2019-10-08
CN110308793B true CN110308793B (en) 2023-03-14

Family

ID=68079721

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910597494.9A Active CN110308793B (en) 2019-07-04 2019-07-04 Augmented reality AR expression generation method and device and storage medium

Country Status (1)

Country Link
CN (1) CN110308793B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111639613B (en) * 2020-06-04 2024-04-16 上海商汤智能科技有限公司 Augmented reality AR special effect generation method and device and electronic equipment
CN111760265B (en) * 2020-06-24 2024-03-22 抖音视界有限公司 Operation control method and device
CN111882567A (en) * 2020-08-03 2020-11-03 深圳传音控股股份有限公司 AR effect processing method, electronic device and readable storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004287557A (en) * 2003-03-19 2004-10-14 Matsushita Electric Ind Co Ltd Video phone terminal and virtual character variation control device
CN104780338A (en) * 2015-04-16 2015-07-15 美国掌赢信息科技有限公司 Method and electronic equipment for loading expression effect animation in instant video
CN106200831A (en) * 2016-08-31 2016-12-07 广州数娱信息科技有限公司 A kind of AR, holographic intelligent device
CN107861682A (en) * 2017-11-03 2018-03-30 网易(杭州)网络有限公司 The control method for movement and device of virtual objects
CN109035420A (en) * 2018-08-21 2018-12-18 维沃移动通信有限公司 A kind of processing method and mobile terminal of augmented reality AR image
CN109218648A (en) * 2018-09-21 2019-01-15 维沃移动通信有限公司 A kind of display control method and terminal device
WO2019124850A1 (en) * 2017-12-20 2019-06-27 네이버랩스 주식회사 Method and system for personifying and interacting with object

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10055895B2 (en) * 2016-01-29 2018-08-21 Snap Inc. Local augmented reality persistent sticker objects


Also Published As

Publication number Publication date
CN110308793A (en) 2019-10-08

Similar Documents

Publication Publication Date Title
CN108415705B (en) Webpage generation method and device, storage medium and equipment
JP7058760B2 (en) Image processing methods and their devices, terminals and computer programs
CN110308793B (en) Augmented reality AR expression generation method and device and storage medium
CN108197327B (en) Song recommendation method, device and storage medium
US20190339840A1 (en) Augmented reality device for rendering a list of apps or skills of artificial intelligence system and method of operating the same
TW202119362A (en) An augmented reality data presentation method, electronic device and storage medium
CN109189879B (en) Electronic book display method and device
KR102560689B1 (en) Method and apparatus for displaying an ar object
US20230160716A1 (en) Method and apparatus for displaying surrounding information using augmented reality
US11625066B2 (en) Foldable electronic device and photographing method using multiple cameras in foldable electronic device
CN108270794B (en) Content distribution method, device and readable medium
KR102648993B1 (en) Electronic device for providing avatar based on emotion state of user and method thereof
AU2018278562B2 (en) Method for pushing picture, mobile terminal, and storage medium
US11069115B2 (en) Method of controlling display of avatar and electronic device therefor
CN113424228A (en) Electronic device for providing avatar animation and method thereof
WO2022048398A1 (en) Multimedia data photographing method and terminal
US11531702B2 (en) Electronic device for generating video comprising character and method thereof
CN113918767A (en) Video clip positioning method, device, equipment and storage medium
KR102639725B1 (en) Electronic device for providing animated image and method thereof
KR20190134975A (en) Augmented realtity device for rendering a list of apps or skills of artificial intelligence system and method of operating the same
US11238622B2 (en) Method of providing augmented reality contents and electronic device therefor
KR102646344B1 (en) Electronic device for image synthetic and operating thereof
CN116580707A (en) Method and device for generating action video based on voice
CN111652986B (en) Stage effect presentation method and device, electronic equipment and storage medium
CN113139614A (en) Feature extraction method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant