CN108830917B - Information generation method, terminal and computer readable storage medium - Google Patents
- Publication number
- CN108830917B (application CN201810535632.6A)
- Authority
- CN
- China
- Prior art keywords
- information
- picture
- pictures
- acquiring
- identified
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/80—2D [Two Dimensional] animation, e.g. using sprites
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
- G06V40/175—Static expression
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Processing Or Creating Images (AREA)
Abstract
The embodiment of the invention discloses an information generation method, which comprises the following steps: acquiring input information input by an object to be identified, and determining key information based on the input information; acquiring, based on a preset picture and the key information, a first picture matched with the key information and the preset picture, the preset picture being a picture comprising portrait information of the object to be identified; and processing the first picture to generate animation information of the portrait information of the object to be identified. The embodiment of the invention also discloses a terminal and a computer-readable storage medium, which solve the problem in the related art that manual user operation is required and improve the intelligence of the terminal.
Description
Technical Field
The present invention relates to image processing technology in the field of communications, and in particular, to an information generating method, a terminal, and a computer-readable storage medium.
Background
With the continuing popularization of intelligent terminals, people use them more and more frequently. In particular, the chat function of the intelligent terminal is among the most heavily used features of daily life, and when chatting through an application with a chat function on an intelligent terminal, users often send non-text information such as various emoticons and short videos.
At present, such non-text information in the chat function, including emoticons and short videos, covers cartoon characters, animals, and the like, as well as various film and television stars. As user demand grows, however, users want emoticons and short videos featuring their own faces. Although some software can generate such personalized emoticons for users, the photos must be imported manually and operated on manually, so for the user the operation is difficult and the process is not intelligent.
Disclosure of Invention
In view of this, embodiments of the present invention are expected to provide an information generating method, a terminal, and a computer-readable storage medium, so as to solve the problem that a user needs to manually operate in the related art, reduce operation difficulty and complexity, and greatly improve intelligence of the terminal.
In order to achieve the above purpose, the technical solution of the present invention is implemented as follows:
a method of information generation, the method comprising:
acquiring input information input by an object to be identified, and determining key information based on the input information;
acquiring a first picture matched with the key information and the preset picture based on the preset picture and the key information; the preset picture is a picture comprising portrait information of the object to be identified;
and processing the first picture to generate animation information of portrait information of the object to be recognized.
Optionally, the obtaining input information input by the object to be recognized, and determining key information based on the input information includes:
acquiring input information input by the object to be recognized;
and analyzing the input information, and acquiring information capable of representing the emotion of the object to be recognized from the input information to obtain the key information.
Optionally, the obtaining, based on a preset picture and the key information, a first picture matched with the key information and the preset picture includes:
acquiring a preset picture, and acquiring portrait information of the object to be identified in the preset picture;
acquiring pictures matched with the preset pictures to obtain a plurality of second pictures based on the portrait information of the object to be identified;
and acquiring the first picture from the plurality of second pictures based on the key information.
Optionally, the obtaining the first picture from the plurality of second pictures based on the key information includes:
performing expression recognition on each second picture, and determining expression information corresponding to portrait information of the object to be recognized in each second picture;
and acquiring the first picture from the plurality of second pictures based on the expression information and the key information corresponding to each second picture.
Optionally, the obtaining the first picture from the plurality of second pictures based on the expression information and the key information corresponding to each second picture includes:
and acquiring a second picture with expression information matched with the key information from the plurality of second pictures to obtain the first picture.
Optionally, the processing the first picture to generate animation information about portrait information of the object to be recognized includes:
identifying each first picture, and acquiring a picture corresponding to the head portrait information in each first picture to obtain a plurality of third pictures;
acquiring information matched with the expression information of the head portrait information in the third picture to obtain target information; the target information comprises information representing expression information of head portrait information in the third picture;
adding the target information in at least one third picture;
if the pictures without the added target information exist in the third pictures, generating animation information of portrait information of the object to be recognized based on the third pictures with the added target information and the third pictures without the added target information;
and if the target information is added to each third picture, generating animation information of portrait information of the object to be identified based on the third picture added with the target information.
Optionally, the identifying each first picture, obtaining a picture corresponding to the avatar information in each first picture, and obtaining multiple third pictures includes:
identifying each first picture, and determining an area where head portrait information in the first picture is located;
and performing cutout processing on the region where the head portrait information is located in each first picture to obtain a plurality of third pictures.
A terminal, the terminal comprising: a processor, a memory, and a communication bus;
the communication bus is used for realizing communication connection between the processor and the memory;
the processor is configured to execute the information generating program stored in the memory to implement the steps of:
acquiring input information input by an object to be identified, and determining key information based on the input information;
acquiring a first picture matched with the key information and the preset picture based on the preset picture and the key information; the preset picture comprises portrait information of the object to be identified;
and processing the first picture to generate animation information of the portrait information of the object to be identified.
Optionally, the processor is further configured to perform the following steps:
identifying each first picture, and acquiring a picture corresponding to the head portrait information in each first picture to obtain a plurality of third pictures;
acquiring information matched with the expression information of the head portrait information in the third picture to obtain target information; the target information comprises information representing expression information of head portrait information in the third picture;
adding the target information in at least one third picture;
if the pictures without the added target information exist in the plurality of third pictures, generating animation information of the portrait information of the object to be identified based on the third pictures with the added target information and the third pictures without the added target information;
if the target information is added to each third picture, animation information about portrait information of the object to be recognized is generated based on the third picture added with the target information.
A computer readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the steps of the information generating method described above.
According to the information generation method, the terminal and the computer-readable storage medium provided by the embodiments of the present invention, input information input by an object to be identified is acquired, key information is determined based on the input information, a first picture matched with the key information and the preset picture is acquired based on the key information and a preset picture including portrait information of the object to be identified, and the first picture is then processed to generate animation information about the portrait information of the object to be identified. In this way, no manual user operation is required, which reduces operation difficulty and complexity and greatly improves the intelligence of the terminal.
Drawings
Fig. 1 is a schematic hardware configuration diagram of an alternative mobile terminal implementing various embodiments of the present invention;
fig. 2 is a schematic structural diagram of a communication system in which a mobile terminal according to an embodiment of the present invention can operate;
fig. 3 is a schematic flowchart of an information generating method according to an embodiment of the present invention;
fig. 4 is a schematic flowchart of another information generating method according to an embodiment of the present invention;
fig. 5 is a schematic diagram of an interface for generating input information according to an embodiment of the present invention;
fig. 6 is a schematic diagram of the area where a head portrait is located in a picture, in an information generation method according to an embodiment of the present invention;
fig. 7 is a schematic diagram of an interface for generating a prompt message according to an embodiment of the present invention;
fig. 8 is a schematic flowchart of another information generating method according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention.
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In the following description, suffixes such as "module", "part", or "unit" used to denote elements are adopted only to facilitate the description of the present invention and have no special meaning in themselves. Thus, "module", "part", and "unit" may be used interchangeably.
The terminal may be implemented in various forms. For example, the terminal described in the present invention may include mobile terminals such as a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a Personal Digital Assistant (PDA), a Portable Media Player (PMP), a navigation device, a wearable device, a smart band, a pedometer, and the like, and fixed terminals such as a Digital TV, a desktop computer, and the like.
The following description takes a mobile terminal as an example, and those skilled in the art will understand that the construction according to the embodiments of the present invention can also be applied to fixed terminals, apart from elements used specifically for mobile purposes.
Referring to fig. 1, which is a schematic diagram of a hardware structure of a mobile terminal for implementing various embodiments of the present invention, the mobile terminal 100 may include: a Radio Frequency (RF) unit 101, a Wi-Fi module 102, an audio output unit 103, an a/V (audio/video) input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, and a power supply 111. Those skilled in the art will appreciate that the mobile terminal architecture shown in fig. 1 is not intended to be limiting of mobile terminals, and that a mobile terminal may include more or fewer components than shown, or some components may be combined, or a different arrangement of components.
The following specifically describes the components of the mobile terminal with reference to fig. 1:
the radio frequency unit 101 may be configured to receive and transmit signals during information transmission and reception or during a call, and specifically, receive downlink information of a base station and then process the downlink information to the processor 110; in addition, the uplink data is transmitted to the base station. Typically, radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communication (GSM), general Packet Radio Service (GPRS), code Division Multiple Access 2000 (Code Division Multiple Access 2000, cdma2000), wideband Code Division Multiple Access (WCDMA), time Division-Synchronous Code Division Multiple Access (TD-SCDMA), FDD-LTE, and TDD-LTE, etc.
Wi-Fi is a short-range wireless transmission technology; through the Wi-Fi module 102, the mobile terminal can help the user send and receive e-mail, browse web pages, access streaming media and the like, providing the user with wireless broadband Internet access. Although fig. 1 shows the Wi-Fi module 102, it is understood that the module is not an essential part of the mobile terminal and may be omitted as needed without changing the essence of the invention.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the Wi-Fi module 102 or stored in the memory 109 into an audio signal and output as sound when the mobile terminal 100 is in a call signal reception mode, a call mode, a recording mode, a voice recognition mode, a broadcast reception mode, or the like. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the mobile terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 may include a speaker, a buzzer, and the like.
The A/V input unit 104 is used to receive audio or video signals. The A/V input unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042. The graphics processor 1041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106, stored in the memory 109 (or another storage medium), or transmitted via the radio frequency unit 101 or the Wi-Fi module 102. In a phone call mode, a recording mode, a voice recognition mode, or the like, the microphone 1042 may receive sounds and process them into audio data. In the phone call mode, the processed audio (voice) data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 101 for output. The microphone 1042 may implement various noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated while receiving and transmitting audio signals.
The mobile terminal 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or a backlight when the mobile terminal 100 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the gesture of the mobile phone (such as horizontal and vertical screen switching, related games, magnetometer gesture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect touch operations performed by a user on or near it (e.g., operations performed with a finger, a stylus, or any other suitable object or accessory) and drive a corresponding connection device according to a predetermined program. The touch panel 1071 may include a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 110, and can receive and execute commands sent by the processor 110. The touch panel 1071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. The other input devices 1072 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, a switch key, etc.), a trackball, a mouse, a joystick, and the like; no limitation is imposed here.
Further, the touch panel 1071 may cover the display panel 1061. When the touch panel 1071 detects a touch operation on or near it, the operation is transmitted to the processor 110 to determine the type of the touch event, and the processor 110 then provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in fig. 1 the touch panel 1071 and the display panel 1061 are shown as two separate components implementing the input and output functions of the mobile terminal, in some embodiments the touch panel 1071 and the display panel 1061 may be integrated to implement these functions; no limitation is imposed here.
The interface unit 108 serves as an interface through which at least one external device is connected to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the mobile terminal 100 or may be used to transmit data between the mobile terminal 100 and external devices.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a program storage area and a data storage area: the program storage area may store an operating system, the application program required for at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data created according to the use of the mobile phone (such as audio data and a phonebook). Further, the memory 109 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 110 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by operating or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the mobile terminal. Processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The mobile terminal 100 may further include a power supply 111 (e.g., a battery) for supplying power to various components, and preferably, the power supply 111 may be logically connected to the processor 110 via a power management system, so as to manage charging, discharging, and power consumption management functions via the power management system.
Although not shown in fig. 1, the mobile terminal 100 may further include a bluetooth module or the like, which is not described in detail herein.
In order to facilitate understanding of the embodiments of the present invention, a communication network system on which the mobile terminal of the present invention is based is described below.
Referring to fig. 2, fig. 2 is an architecture diagram of a communication network system according to an embodiment of the present invention. The communication network system is an LTE system of universal mobile telecommunications technology, and includes a User Equipment (UE) 201, an Evolved UMTS Terrestrial Radio Access Network (E-UTRAN) 202, an Evolved Packet Core (EPC) 203, and an operator's IP services 204, which are communicatively connected in sequence.
Specifically, the UE201 may be the terminal 100 described above, and is not described herein again.
The E-UTRAN 202 includes an eNodeB 2021 and other eNodeBs 2022. The eNodeB 2021 may be connected with the other eNodeBs 2022 through a backhaul (e.g., an X2 interface), the eNodeB 2021 is connected to the EPC 203, and the eNodeB 2021 may provide the UE 201 with access to the EPC 203.
The EPC 203 may include a Mobility Management Entity (MME) 2031, a Home Subscriber Server (HSS) 2032, other MMEs 2033, a Serving Gateway (SGW) 2034, a PDN Gateway (PGW) 2035, a Policy and Charging Rules Function (PCRF) 2036, and the like. The MME 2031 is a control node that handles signaling between the UE 201 and the EPC 203, providing bearer and connection management. The HSS 2032 provides registers for managing functions such as the home location register (not shown) and holds subscriber-specific information about service characteristics, data rates, and the like. All user data may be sent through the SGW 2034; the PGW 2035 may provide IP address assignment and other functions for the UE 201; and the PCRF 2036 is the policy and charging control decision point for service data flows and IP bearer resources, which selects and provides available policy and charging control decisions for the policy and charging enforcement function (not shown).
The IP services 204 may include the internet, intranets, IP Multimedia Subsystem (IMS) or other IP services, and the like.
Although the LTE system is described as an example, it should be understood by those skilled in the art that the present invention is not limited to the LTE system, but may also be applied to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA, and future new network systems, and the like.
Based on the above mobile terminal hardware structure and communication system, various embodiments of the present invention are proposed.
An embodiment of the present invention provides an information generating method, which is shown in fig. 3 and includes the following steps:
Step 301: acquire input information input by an object to be identified, and determine key information based on the input information. This step may be implemented by a terminal; the terminal may be a terminal having an information editing function, or one on which an application having an information editing function can be installed. In the embodiment of the present invention, the terminal may be an intelligent mobile terminal having the above functions.
The object to be recognized may be the object currently performing an input operation of the input information using the terminal; for example, if that object is a user holding the terminal, the object to be recognized is that user. Taking a mobile phone as an example of the terminal, the input information may be generated after the user inputs information in the information edit box of the messaging application built into the phone's operating system; alternatively, it may be generated when the user inputs information to be sent or stored in the information edit box of an installed application capable of sending information. That is, the input information may be generated after the user inputs information using the mobile phone; of course, the input information may also be generated in other ways, and the above is merely illustrative, not limiting.
The key information can be obtained by analyzing the input information input by the user; in one possible implementation, the key information may be extracted from the input information. In the embodiment of the present invention, the key information may be information representing the mood, emotion, and the like of the user; of course, it may also be information representing the user's actions, locations, desired destinations, and the like.
The preset picture is a picture including portrait information of an object to be identified.
In other embodiments of the present invention, step 302, acquiring the first picture matched with the key information and the preset picture based on the preset picture and the key information, may be implemented by the terminal. The preset picture may be a picture of the object to be recognized; in one feasible implementation, the preset picture may be a picture whose features allow the facial information of the object to be recognized to be identified accurately, for example, a captured picture of the head portrait of the object to be identified.
The first picture can be obtained from a gallery in the terminal; the first picture is matched with a preset picture and is matched with key information. In a possible implementation manner, the matching of the first picture with the preset picture may mean that the image information of the first picture includes portrait information of the object to be recognized, and the matching of the first picture with the key information may mean that the portrait information of the object to be recognized in the first picture is matched with the key information.
In other embodiments of the present invention, step 303, processing the first picture to generate animation information about portrait information of the object to be recognized, may be implemented by the terminal. Only one first picture may be acquired, or a plurality of first pictures may be acquired; if one first picture is acquired, the animation information about the portrait information of the object to be recognized can be generated from that first picture, and if a plurality of first pictures are acquired, all of them may be used to generate the animation information. The animation information may be generated directly from the acquired first picture(s), or information corresponding to the key information may first be added to the acquired first picture(s) and the animation information generated from the picture(s) with the added information. The animation information may be an emoticon of the portrait information of the object to be recognized, for use when the object to be recognized edits information or chats.
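Purely for orientation, the three steps above can be strung together as in the following Python sketch. Every helper named here (extract_key_information, contains_portrait, recognize_expression, build_animation) is a hypothetical placeholder, several of which are fleshed out in the per-step sketches later in this description; none of them is an interface defined by the patent.

```python
# A minimal end-to-end sketch of steps 301-303; all helpers are hypothetical
# placeholders, several of which are elaborated in the later per-step sketches.
def generate_information(input_text, preset_picture_path, gallery_paths):
    key_info = extract_key_information(input_text)[0]         # step 301
    # Step 302: keep gallery pictures that contain the object's portrait ...
    second_pictures = [p for p in gallery_paths
                       if contains_portrait(p, preset_picture_path)]
    # ... then keep those whose expression matches the key information.
    first_pictures = [p for p in second_pictures
                      if recognize_expression(p) == key_info]
    return build_animation(first_pictures, key_info)          # step 303
```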
In the information generation method provided by the embodiment of the present invention, input information input by the object to be identified is acquired, key information is determined based on the input information, first pictures matched with the key information and the preset picture are acquired based on the key information and a preset picture including portrait information of the object to be identified, and the first pictures are then processed to generate animation information about the portrait information of the object to be identified. In this way, the animation information is generated without manual user operation, which reduces operation difficulty and complexity and improves the intelligence of the terminal.
Based on the foregoing embodiments, an embodiment of the present invention provides an information generating method, shown in fig. 4 and illustrated with key information that includes information representing the mood and emotion of a user; the method includes the following steps:
Step 401: the terminal acquires input information input by an object to be identified.
Step 402: the terminal analyzes the input information and acquires, from it, information capable of representing the emotion of the object to be identified, to obtain key information.
The key information can be obtained by performing semantic analysis on the input information, splitting it into a number of words or terms according to the semantic analysis result, and then picking out, from the split words or terms, those capable of representing the emotion of the object to be identified; a minimal sketch is given below. It should be noted that, if the input information is voice information, the voice information may first be converted into text information, and the above operation is then performed on the text to obtain the key information.
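As an illustration of the splitting-and-matching just described, the following minimal Python sketch assumes a hand-built emotion lexicon and uses the jieba word-segmentation library; both are assumed choices, since the embodiment does not prescribe a particular semantic analysis tool.

```python
# A minimal sketch of step 402, assuming a hand-built emotion lexicon and the
# jieba word-segmentation library (assumed choices, not named by the patent).
import jieba

EMOTION_LEXICON = {  # hypothetical lexicon: emotion-bearing word -> emotion tag
    "开心": "happy", "高兴": "happy", "难过": "sad", "伤心": "sad",
}

def extract_key_information(input_text):
    """Split the input into words and keep those that express an emotion."""
    words = jieba.lcut(input_text)           # semantic split into words/terms
    return [EMOTION_LEXICON[w] for w in words if w in EMOTION_LEXICON]
```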
Take as an example an object to be identified, Xiao Peng, who chats with a friend over WeChat on a mobile phone. Knowing that Xiao Peng has run into some difficulty at work recently, the friend proactively sends a WeChat message asking how Xiao Peng has been doing; after receiving the friend's message, Xiao Peng replies and, as shown in fig. 5, inputs the information "good luck" in the WeChat edit box. At this time, after the mobile phone analyzes the input information from Xiao Peng, the key information "happy" can be determined.
Step 403: the terminal acquires a preset picture and acquires portrait information of the object to be identified in the preset picture.
The preset picture is a picture including portrait information of the object to be identified.
In other embodiments of the present invention, the portrait information may be obtained by acquiring the head portrait information of the object to be identified in the preset picture.
Step 404: based on the portrait information of the object to be identified, the terminal acquires pictures matched with the preset picture, to obtain a plurality of second pictures.
The second picture may be obtained by acquiring a picture including portrait information of the object to be identified from a gallery in the terminal.
Step 405: the terminal acquires, from the plurality of second pictures, the first picture whose expression information matches the key information.
After acquiring the second pictures, the terminal analyzes the expression of the object to be identified in each picture and then obtains the first picture whose expression information matches the key information, as sketched below. In a feasible implementation, after the first picture is obtained, it may be marked with a preset identifier; the preset identifier may be set in advance by the user according to preference, interest, will, and the like, and may be a pattern, various types of characters, or anything else that distinguishes the picture from other pictures.
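Assuming each second picture can be given an expression label (the recognition itself is sketched under step 505 of the later embodiment), the selection and marking of step 405 might look as follows; the Picture class, the recognize_expression helper, and the "PRESET_MARK" tag are all illustrative placeholders.

```python
# A sketch of step 405's selection logic; recognize_expression and the
# "PRESET_MARK" tag are illustrative placeholders, not names from the patent.
from dataclasses import dataclass, field

@dataclass
class Picture:                    # illustrative stand-in for a gallery entry
    path: str
    tags: set = field(default_factory=set)

def select_first_pictures(second_pictures, key_information):
    """Keep the second pictures whose expression matches the key information."""
    first_pictures = []
    for pic in second_pictures:
        if recognize_expression(pic.path) == key_information:
            pic.tags.add("PRESET_MARK")   # preset identifier marking the match
            first_pictures.append(pic)
    return first_pictures
```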
Step 406: the terminal identifies each first picture and determines the area where the head portrait information is located in the first picture.
The area where the avatar information in the first picture is located may refer to a location where an avatar of the object to be identified in the first picture is located.
Step 407: the terminal performs matting on the region where the head portrait information is located in each first picture, to obtain a plurality of third pictures.
The third picture may be a picture of the head portrait information of the object to be identified; of course, the head portrait information in each third picture matches the expression information and the key information of the object to be identified. If the key information is "happy", the head portrait information in each third picture shows a happy expression of the object to be identified. As shown in fig. 6, a picture of the object to be identified is obtained in which the expression of the object is happy, and the terminal performs cutout processing on the image information covered by area A, so as to obtain a third picture.
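One possible realization of steps 406 and 407, assuming OpenCV's bundled Haar cascade as the face detector; the embodiment itself does not name a detection algorithm, so this is only a sketch.

```python
# A sketch of steps 406-407: detect the head-portrait region (area A in
# fig. 6) and cut it out; the Haar-cascade detector is an assumed choice.
import cv2

def cut_out_avatar(first_picture_path):
    image = cv2.imread(first_picture_path)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = detector.detectMultiScale(
        cv2.cvtColor(image, cv2.COLOR_BGR2GRAY),
        scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None                        # no head portrait found
    x, y, w, h = faces[0]                  # region where the head portrait sits
    return image[y:y + h, x:x + w]         # the cut-out third picture
```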
Step 408: the terminal acquires information matched with the expression information of the head portrait information in the third picture, to obtain target information.
The target information includes information representing the expression information of the head portrait information in the third picture.
In other embodiments of the present invention, the target information may include text information and/or scene information. The text information may be text corresponding to the key information; the scene information may be a scene picture corresponding to the key information. For example, if the key information is "sad", the corresponding scene information may be a picture in which the leaves of a tree have all fallen or are continuously falling.
Step 409: the terminal adds the target information in at least one third picture.
Adding the target information in at least one third picture may mean synthesizing the target information and the third picture into one picture, as sketched below.
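Assuming the target information takes the form of a short text caption, the synthesis of steps 408 and 409 might be sketched with Pillow as follows; the position, colour, and default font are illustrative choices.

```python
# A sketch of steps 408-409: synthesize the target information (here, a text
# caption) and a third picture into one picture; layout choices are assumed.
from PIL import ImageDraw

def add_target_information(third_picture, caption):
    composed = third_picture.copy()
    draw = ImageDraw.Draw(composed)
    # Scene information could instead be composited behind the head portrait
    # with Image.paste() or Image.alpha_composite(); here only text is drawn.
    draw.text((10, composed.height - 30), caption, fill=(255, 255, 255))
    return composed
```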
Step 410: if third pictures without added target information exist among the plurality of third pictures, the terminal generates animation information about the portrait information of the object to be identified based on both the third pictures with added target information and the third pictures without added target information.
Step 411: if the target information has been added to every third picture, the terminal generates the animation information about the portrait information of the object to be identified based on the third pictures with the added target information.
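Under the assumption that the animation information is materialized as a GIF, steps 410 and 411 both reduce to assembling the processed third pictures into frames; a minimal Pillow sketch:

```python
# A sketch of steps 410-411: assemble third pictures (with or without added
# target information) into animation information, assumed here to be a GIF.
from PIL import Image

def generate_animation(frames, out_path):
    first, *rest = frames
    first.save(out_path, save_all=True, append_images=rest,
               duration=300, loop=0)       # 300 ms per frame, loop forever
```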
In other embodiments of the present invention, after generating the animation information about the portrait information of the object to be identified, the terminal may automatically save the animation information and generate prompt information that prompts the user to use it. For example, as shown in fig. 7, a prompt message B, "An emoticon of your own happy expression has been generated", may be displayed on the screen of the mobile phone; after seeing the prompt, the user knows that the expression information of his or her own head portrait is stored in the mobile phone, and can shortly send that expression to a friend.
It should be noted that, for the description of the same steps and the same contents in this embodiment as those in other embodiments, reference may be made to the description in the other embodiments, which is not repeated herein.
In the information generation method provided by the embodiment of the present invention, input information input by the object to be identified is acquired, key information is determined based on the input information, a first picture matched with the key information and the preset picture is acquired based on the key information and a preset picture including portrait information of the object to be identified, and the first picture is then processed to generate animation information about the portrait information of the object to be identified, without manual user operation, which reduces operation difficulty and complexity and improves the intelligence of the terminal.
Based on the foregoing embodiments, an embodiment of the present invention provides an information generating method, which is shown in fig. 8 and includes the following steps:
Step 501: the terminal acquires input information input by an object to be identified.
Step 502: the terminal analyzes the input information and acquires, from it, information capable of representing the emotion of the object to be identified, to obtain key information.
Step 503: the terminal acquires a preset picture and acquires portrait information of the object to be identified in the preset picture.
The preset picture is a picture including portrait information of the object to be identified.
Step 504: based on the portrait information of the object to be identified, the terminal acquires pictures matched with the preset picture, to obtain a plurality of second pictures.
Step 505: the terminal performs expression recognition on each second picture and determines the expression information corresponding to the portrait information of the object to be identified in each second picture.
Determining the expression information corresponding to the portrait information of the object to be identified in each second picture can be realized with any expression recognition method capable of recognizing the expression in an image. In a feasible implementation, the facial image in each second picture may first be acquired and preprocessed, after which expression feature extraction and expression classification are performed to determine the corresponding expression information, as sketched below.
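A schematic sketch of this pipeline follows. EXPRESSION_MODEL stands in for any trained expression classifier, and the 48x48 grayscale preprocessing is an assumed convention; the embodiment fixes neither the features nor the classifier.

```python
# A sketch of step 505's pipeline: preprocessing, feature extraction, and
# expression classification; EXPRESSION_MODEL is a hypothetical trained model.
import cv2
import numpy as np

EXPRESSION_MODEL = ...  # placeholder: any classifier with a predict() method

def recognize_expression(second_picture_path):
    """Return an expression label such as "happy" or "sad" for a picture."""
    image = cv2.imread(second_picture_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)       # preprocessing
    face = cv2.resize(gray, (48, 48)).astype(np.float32) / 255.0
    features = face.reshape(1, -1)                       # crude feature extraction
    return EXPRESSION_MODEL.predict(features)            # expression classification
```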
Step 506: the terminal obtains the first picture from the plurality of second pictures based on the expression information and the key information corresponding to each second picture. This can be realized by the following step:
obtaining, from the plurality of second pictures, a second picture whose expression information matches the key information, to obtain the first picture.
In other embodiments of the present invention, the first picture may be obtained by acquiring, from the plurality of second pictures, a picture in which the expression information of the head portrait matches the key information.
Step 507: the terminal identifies each first picture and determines the area where the head portrait information is located in the first picture.
Step 508: the terminal performs matting on the region where the head portrait information is located in each first picture, to obtain a plurality of third pictures.
Step 509: the terminal acquires information matched with the expression information of the head portrait information in the third picture, to obtain target information.
The target information includes information representing the expression information of the head portrait information in the third picture.
Step 510: the terminal adds the target information in at least one third picture.
Step 511: if third pictures without added target information exist among the plurality of third pictures, the terminal generates animation information about the portrait information of the object to be identified based on both the third pictures with added target information and the third pictures without added target information.
Step 512: if the target information has been added to every third picture, the terminal generates the animation information about the portrait information of the object to be identified based on the third pictures with the added target information.
It should be noted that, for the descriptions of the same steps and the same contents in this embodiment as those in other embodiments, reference may be made to the descriptions in other embodiments, which are not described herein again.
In the information generation method provided by the embodiment of the present invention, input information input by the object to be identified is acquired, key information is determined based on the input information, a first picture matched with the key information and the preset picture is acquired based on the key information and a preset picture including portrait information of the object to be identified, and the first picture is then processed to generate animation information about the portrait information of the object to be identified, without manual user operation, which reduces operation difficulty and complexity and improves the intelligence of the terminal.
Based on the foregoing embodiments, an embodiment of the present invention provides a terminal, where the terminal may be applied to an information generating method provided in embodiments corresponding to fig. 3 to 4 and 8, and as shown in fig. 9, the terminal 6 may include: a processor 61, a memory 62 and a communication bus 63;
the communication bus 63 is used for realizing communication connection between the processor 61 and the memory 62;
the processor 61 is configured to execute the information generating program stored in the memory 62 to implement the following steps:
acquiring input information input by an object to be identified, and determining key information based on the input information;
acquiring a first picture matched with the key information and the preset picture based on the preset picture and the key information;
wherein the preset picture is a picture including portrait information of the object to be identified;
and processing the first picture to generate animation information of portrait information of the object to be identified.
In other embodiments of the present invention, in executing the program stored in the memory 62 to acquire input information input by the object to be identified and determine key information based on the input information, the processor 61 implements the following steps:
acquiring input information input by an object to be identified;
analyzing the input information, and acquiring information capable of representing the emotion of the object to be recognized from the input information to obtain key information.
In other embodiments of the present invention, in executing the program stored in the memory 62 to acquire, based on the preset picture and the key information, a first picture matched with the key information and the preset picture, the processor 61 implements the following steps:
acquiring a preset picture, and acquiring portrait information of an object to be identified in the preset picture;
acquiring pictures matched with preset pictures to obtain a plurality of second pictures based on the portrait information of the object to be identified;
and acquiring a first picture from the plurality of second pictures based on the key information.
In other embodiments of the present invention, in executing the program stored in the memory 62 to acquire the first picture from the plurality of second pictures based on the key information, the processor 61 implements the following steps:
performing expression recognition on each second picture, and determining expression information corresponding to portrait information of an object to be recognized in each second picture;
and acquiring a first picture from the plurality of second pictures based on the expression information and the key information corresponding to each second picture.
In other embodiments of the present invention, in executing the program stored in the memory 62 to acquire the first picture from the plurality of second pictures based on the expression information and the key information corresponding to each second picture, the processor 61 implements the following step:
and obtaining a second picture with expression information matched with the key information from the plurality of second pictures to obtain a first picture.
In other embodiments of the present invention, in executing the program stored in the memory 62 to process the first pictures and generate animation information about portrait information of the object to be identified, the processor 61 implements the following steps:
identifying each first picture, and acquiring a picture corresponding to the head portrait information in each first picture to obtain a plurality of third pictures;
acquiring information matched with the expression information of the head portrait information in the third picture to obtain target information;
the target information comprises information representing expression information of the head portrait information in the third picture;
adding target information in at least one third picture;
if the pictures without the added target information exist in the plurality of third pictures, generating animation information about the portrait information of the object to be identified based on the third pictures with the added target information and the third pictures without the added target information;
and if the target information is added to each third picture, generating animation information about the portrait information of the object to be recognized based on the third picture added with the target information.
In other embodiments of the present invention, in executing the program stored in the memory 62 to identify each first picture and acquire the picture corresponding to the head portrait information in each first picture to obtain a plurality of third pictures, the processor 61 implements the following steps:
identifying each first picture, and determining the area of the head portrait information in the first picture;
and performing cutout processing on the region where the head portrait information in each first picture is located to obtain a plurality of third pictures.
It should be noted that, for a specific implementation process of the step executed by the processor in this embodiment, reference may be made to the implementation process in the information generation method provided in the embodiments corresponding to fig. 3 to 4 and 8, and details are not described here again.
The terminal provided by the embodiment of the present invention acquires input information input by the object to be identified, determines key information based on the input information, acquires, based on the key information and a preset picture including portrait information of the object to be identified, a plurality of first pictures matched with the key information and the preset picture, and then processes the plurality of first pictures to generate animation information about the portrait information of the object to be identified; manual user operation is thus not required, which reduces operation difficulty and complexity and improves the intelligence of the terminal.
Based on the foregoing embodiments, embodiments of the invention provide a computer-readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the steps of:
acquiring input information input by an object to be identified, and determining key information based on the input information;
acquiring a first picture matched with the key information and the preset picture based on the preset picture and the key information;
wherein the preset picture is a picture including portrait information of the object to be identified;
and processing the first picture to generate animation information of portrait information of the object to be identified.
In other embodiments of the present invention, the one or more programs are executable by the one or more processors to obtain input information input by an object to be recognized, and determine key information based on the input information, to implement the steps of:
acquiring input information input by an object to be identified;
analyzing the input information, and acquiring information capable of representing the emotion of the object to be recognized from the input information to obtain key information.
In other embodiments of the present invention, the one or more programs are executable by the one or more processors to obtain, based on the preset picture and the key information, a first picture matching the key information and the preset picture to implement the steps of:
acquiring a preset picture, and acquiring portrait information of an object to be identified in the preset picture;
acquiring pictures matched with preset pictures to obtain a plurality of second pictures based on the portrait information of the object to be identified;
and acquiring a first picture from the plurality of second pictures based on the key information.
In other embodiments of the present invention, the one or more programs are executable by the one or more processors to obtain a first picture from a plurality of second pictures based on the key information to perform the steps of:
performing expression recognition on each second picture, and determining expression information corresponding to portrait information of an object to be recognized in each second picture;
and acquiring a first picture from the plurality of second pictures based on the expression information and the key information corresponding to each second picture.
In other embodiments of the present invention, the one or more programs may be executed by the one or more processors to obtain the first picture from the plurality of second pictures based on the corresponding facial expression information and key information of each second picture, so as to implement the following steps:
and obtaining a second picture with expression information matched with the key information from the plurality of second pictures to obtain a first picture.
In other embodiments of the present invention, the one or more programs are executable by the one or more processors to process the plurality of first pictures to generate animation information regarding portrait information of the object to be recognized, to implement the steps of:
identifying each first picture, and acquiring a picture corresponding to the head portrait information in each first picture to obtain a plurality of third pictures;
acquiring information matched with the expression information of the head portrait information in the third picture to obtain target information;
the target information comprises information representing expression information of the head portrait information in the third picture;
adding target information in at least one third picture;
if the pictures without added target information exist in the third pictures, generating animation information about the portrait information of the object to be identified based on the third pictures with added target information and the third pictures without added target information;
and if the target information is added to each third picture, generating animation information about the portrait information of the object to be identified based on the third picture added with the target information.
In other embodiments of the present invention, the one or more programs may be executed by the one or more processors to perform the identification processing on each first picture, and obtain the pictures corresponding to the avatar information in each first picture to obtain a plurality of third pictures, so as to implement the following steps:
identifying each first picture, and determining the area of the head portrait information in the first picture;
and performing cutout processing on the region where the head portrait information in each first picture is located to obtain a plurality of third pictures.
It should be noted that, for a specific implementation process of the steps executed by the processor in this embodiment, reference may be made to the implementation process in the information generation method provided in the embodiments corresponding to fig. 3 to 4 and 8, and details are not described here again.
The computer-readable storage medium may be a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a ferromagnetic Random Access Memory (FRAM), a flash memory, a magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM), including one or any combination of the above memories; and the relevant device may be any of various electronic devices, such as a mobile phone, a computer, a tablet device, or a personal digital assistant.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a … …" does not exclude the presence of another identical element in a process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are only for description, and do not represent the advantages and disadvantages of the embodiments.
Through the description of the foregoing embodiments, it is clear to those skilled in the art that the method of the foregoing embodiments may be implemented by software plus a necessary general hardware platform, and certainly may also be implemented by hardware, but in many cases, the former is a better implementation. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method described in the embodiments of the present invention.
The present invention has been described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.
Claims (9)
1. An information generating method, characterized in that the method comprises:
acquiring input information input by an object to be identified, and determining key information based on the input information; wherein the key information characterizes an emotion of the object to be identified and a behavior of the object to be identified;
acquiring, based on a preset picture and the key information, a first picture matching both the key information and the preset picture; wherein the preset picture is a picture comprising portrait information of the object to be identified;
processing the first picture to generate animation information about the portrait information of the object to be identified;
wherein the processing the first picture to generate the animation information about the portrait information of the object to be identified comprises:
identifying each first picture, and acquiring a picture corresponding to head portrait information in each first picture to obtain a plurality of third pictures;
acquiring information matching the expression information of the head portrait information in a third picture to obtain target information; wherein the target information comprises information characterizing the expression information of the head portrait information in that third picture;
adding the target information to at least one third picture;
if any of the plurality of third pictures have no target information added, generating the animation information about the portrait information of the object to be identified based on the third pictures to which the target information has been added and the third pictures to which it has not;
and if the target information has been added to every third picture, generating the animation information about the portrait information of the object to be identified based on the third pictures to which the target information has been added.
2. The method according to claim 1, wherein the acquiring input information input by an object to be identified and determining key information based on the input information comprises:
acquiring the input information input by the object to be identified;
and parsing the input information, and acquiring, from the input information, information capable of characterizing the emotion of the object to be identified to obtain the key information.
3. The method according to claim 1 or 2, wherein the acquiring, based on a preset picture and the key information, a first picture matching both the key information and the preset picture comprises:
acquiring the preset picture, and acquiring the portrait information of the object to be identified in the preset picture;
acquiring, based on the portrait information of the object to be identified, pictures matching the preset picture to obtain a plurality of second pictures;
and acquiring the first picture from the plurality of second pictures based on the key information.
4. The method according to claim 3, wherein the acquiring the first picture from the plurality of second pictures based on the key information comprises:
performing expression recognition on each second picture, and determining expression information corresponding to the portrait information of the object to be identified in each second picture;
and acquiring the first picture from the plurality of second pictures based on the expression information corresponding to each second picture and the key information.
5. The method according to claim 4, wherein the acquiring the first picture from the plurality of second pictures based on the expression information corresponding to each second picture and the key information comprises:
acquiring, from the plurality of second pictures, a second picture whose expression information matches the key information to obtain the first picture.
6. The method according to claim 1, wherein the identifying each first picture and acquiring a picture corresponding to head portrait information in each first picture to obtain a plurality of third pictures comprises:
identifying each first picture, and determining a region where the head portrait information in the first picture is located;
and performing cutout processing on the region where the head portrait information in each first picture is located to obtain the plurality of third pictures.
7. A terminal, characterized in that the terminal comprises: a processor, a memory, and a communication bus;
the communication bus is used for realizing communication connection between the processor and the memory;
the processor is configured to execute the information generating program stored in the memory to implement the steps of:
acquiring input information input by an object to be identified, and determining key information based on the input information; wherein the key information characterizes an emotion of the object to be identified and a behavior of the object to be identified;
acquiring, based on a preset picture and the key information, a first picture matching both the key information and the preset picture; wherein the preset picture comprises portrait information of the object to be identified;
processing the first picture to generate animation information about the portrait information of the object to be identified;
wherein the processing the first picture to generate the animation information about the portrait information of the object to be identified comprises:
identifying each first picture, and acquiring a picture corresponding to head portrait information in each first picture to obtain a plurality of third pictures;
acquiring information matching the expression information of the head portrait information in a third picture to obtain target information; wherein the target information comprises information characterizing the expression information of the head portrait information in that third picture;
adding the target information to at least one third picture;
if any of the plurality of third pictures have no target information added, generating the animation information about the portrait information of the object to be identified based on the third pictures to which the target information has been added and the third pictures to which it has not;
and if the target information has been added to every third picture, generating the animation information about the portrait information of the object to be identified based on the third pictures to which the target information has been added.
8. The terminal of claim 7, wherein the processor is further configured to perform the steps of:
identifying each first picture, and acquiring a picture corresponding to head portrait information in each first picture to obtain a plurality of third pictures;
acquiring information matching the expression information of the head portrait information in a third picture to obtain target information; wherein the target information comprises information characterizing the expression information of the head portrait information in that third picture;
adding the target information to at least one third picture;
if any of the plurality of third pictures have no target information added, generating the animation information about the portrait information of the object to be identified based on the third pictures to which the target information has been added and the third pictures to which it has not;
and if the target information has been added to every third picture, generating the animation information about the portrait information of the object to be identified based on the third pictures to which the target information has been added.
9. A computer-readable storage medium, characterized in that the computer-readable storage medium stores one or more programs, which are executable by one or more processors, to implement the steps of the information generation method according to any one of claims 1 to 6.
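Reading claims 1 through 5 together, the claimed flow can be sketched end to end only under heavy assumptions: a keyword lexicon stands in for the input analysis of claim 2, and precomputed expression labels stand in for the expression recognition of claim 4. Every name below is illustrative, not the patented implementation:

```python
# Hypothetical sketch of claims 1-5: derive key information (an emotion)
# from the input text, then select the first pictures as those second
# pictures whose expression label matches it.
from typing import Optional

# Assumed lexicon; the patent does not specify how emotion is extracted.
EMOTION_KEYWORDS = {
    "happy": ["haha", "great", "lol"],
    "sad": ["sigh", "unfortunately"],
}

def extract_key_information(input_text: str) -> Optional[str]:
    text = input_text.lower()
    for emotion, cues in EMOTION_KEYWORDS.items():
        if any(cue in text for cue in cues):
            return emotion
    return None

def select_first_pictures(second_pictures, key_information):
    # second_pictures: (picture, expression_label) pairs whose portraits
    # already matched the preset picture, as in claim 3.
    return [pic for pic, expression in second_pictures
            if expression == key_information]
```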
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810535632.6A CN108830917B (en) | 2018-05-29 | 2018-05-29 | Information generation method, terminal and computer readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810535632.6A CN108830917B (en) | 2018-05-29 | 2018-05-29 | Information generation method, terminal and computer readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108830917A CN108830917A (en) | 2018-11-16 |
CN108830917B true CN108830917B (en) | 2023-04-18 |
Family
ID=64146852
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810535632.6A Active CN108830917B (en) | 2018-05-29 | 2018-05-29 | Information generation method, terminal and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108830917B (en) |
Citations (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2005038160A (en) * | 2003-07-14 | 2005-02-10 | Oki Electric Ind Co Ltd | Image generation apparatus, image generating method, and computer readable recording medium |
CN101354795A (en) * | 2008-08-28 | 2009-01-28 | 北京中星微电子有限公司 | Method and system for driving three-dimensional human face cartoon based on video |
CN102298784A (en) * | 2011-08-16 | 2011-12-28 | 武汉大学 | Cloud model-based synthetic method for facial expressions |
WO2013027893A1 (en) * | 2011-08-22 | 2013-02-28 | Kang Jun-Kyu | Apparatus and method for emotional content services on telecommunication devices, apparatus and method for emotion recognition therefor, and apparatus and method for generating and matching the emotional content using same |
CN103488293A (en) * | 2013-09-12 | 2014-01-01 | 北京航空航天大学 | Man-machine motion interaction system and method based on expression recognition |
CN103824059A (en) * | 2014-02-28 | 2014-05-28 | 东南大学 | Facial expression recognition method based on video image sequence |
CN103854306A (en) * | 2012-12-07 | 2014-06-11 | 山东财经大学 | High-reality dynamic expression modeling method |
CN104346824A (en) * | 2013-08-09 | 2015-02-11 | 汉王科技股份有限公司 | Method and device for automatically synthesizing three-dimensional expression based on single facial image |
WO2015070690A1 (en) * | 2013-11-15 | 2015-05-21 | Tencent Technology (Shenzhen) Company Limited | Method and system for processing instant messaging messages |
CN104780339A (en) * | 2015-04-16 | 2015-07-15 | 美国掌赢信息科技有限公司 | Method and electronic equipment for loading expression effect animation in instant video |
WO2015176287A1 (en) * | 2014-05-22 | 2015-11-26 | 华为技术有限公司 | Method and apparatus for communication by using text information |
CN105279737A (en) * | 2015-07-10 | 2016-01-27 | 深圳市美贝壳科技有限公司 | Device and method for generating person photograph materials |
CN105956518A (en) * | 2016-04-21 | 2016-09-21 | 腾讯科技(深圳)有限公司 | Face identification method, device and system |
CN107360322A (en) * | 2017-06-30 | 2017-11-17 | 北京小米移动软件有限公司 | Information cuing method and device |
CN107517405A (en) * | 2017-07-31 | 2017-12-26 | 努比亚技术有限公司 | The method, apparatus and computer-readable recording medium of a kind of Video processing |
CN107657652A (en) * | 2017-09-11 | 2018-02-02 | 广东欧珀移动通信有限公司 | Image processing method and device |
CN107818787A (en) * | 2017-10-31 | 2018-03-20 | 努比亚技术有限公司 | A kind of processing method of voice messaging, terminal and computer-readable recording medium |
CN107846566A (en) * | 2017-10-31 | 2018-03-27 | 努比亚技术有限公司 | A kind of information processing method, equipment and computer-readable recording medium |
CN108009546A (en) * | 2016-10-28 | 2018-05-08 | 北京京东尚科信息技术有限公司 | information identifying method and device |
CN108073855A (en) * | 2016-11-11 | 2018-05-25 | 腾讯科技(深圳)有限公司 | A kind of recognition methods of human face expression and system |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107257403A (en) * | 2012-04-09 | 2017-10-17 | 英特尔公司 | Use the communication of interaction incarnation |
KR101988279B1 (en) * | 2013-01-07 | 2019-06-12 | 삼성전자 주식회사 | Operating Method of User Function based on a Face Recognition and Electronic Device supporting the same |
CN104063427A (en) * | 2014-06-06 | 2014-09-24 | 北京搜狗科技发展有限公司 | Expression input method and device based on semantic understanding |
US20180077095A1 (en) * | 2015-09-14 | 2018-03-15 | X Development Llc | Augmentation of Communications with Emotional Data |
JP6711044B2 (en) * | 2016-03-16 | 2020-06-17 | カシオ計算機株式会社 | Image processing device, display device, animation generation method, and program |
2018-05-29 CN CN201810535632.6A patent/CN108830917B/en active Active
Patent Citations (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2005038160A (en) * | 2003-07-14 | 2005-02-10 | Oki Electric Ind Co Ltd | Image generation apparatus, image generating method, and computer readable recording medium |
CN101354795A (en) * | 2008-08-28 | 2009-01-28 | 北京中星微电子有限公司 | Method and system for driving three-dimensional human face cartoon based on video |
CN102298784A (en) * | 2011-08-16 | 2011-12-28 | 武汉大学 | Cloud model-based synthetic method for facial expressions |
WO2013027893A1 (en) * | 2011-08-22 | 2013-02-28 | Kang Jun-Kyu | Apparatus and method for emotional content services on telecommunication devices, apparatus and method for emotion recognition therefor, and apparatus and method for generating and matching the emotional content using same |
CN103854306A (en) * | 2012-12-07 | 2014-06-11 | 山东财经大学 | High-reality dynamic expression modeling method |
CN104346824A (en) * | 2013-08-09 | 2015-02-11 | 汉王科技股份有限公司 | Method and device for automatically synthesizing three-dimensional expression based on single facial image |
CN103488293A (en) * | 2013-09-12 | 2014-01-01 | 北京航空航天大学 | Man-machine motion interaction system and method based on expression recognition |
WO2015070690A1 (en) * | 2013-11-15 | 2015-05-21 | Tencent Technology (Shenzhen) Company Limited | Method and system for processing instant messaging messages |
CN103824059A (en) * | 2014-02-28 | 2014-05-28 | 东南大学 | Facial expression recognition method based on video image sequence |
WO2015176287A1 (en) * | 2014-05-22 | 2015-11-26 | 华为技术有限公司 | Method and apparatus for communication by using text information |
CN104780339A (en) * | 2015-04-16 | 2015-07-15 | 美国掌赢信息科技有限公司 | Method and electronic equipment for loading expression effect animation in instant video |
CN105279737A (en) * | 2015-07-10 | 2016-01-27 | 深圳市美贝壳科技有限公司 | Device and method for generating person photograph materials |
CN105956518A (en) * | 2016-04-21 | 2016-09-21 | 腾讯科技(深圳)有限公司 | Face identification method, device and system |
WO2017181769A1 (en) * | 2016-04-21 | 2017-10-26 | 腾讯科技(深圳)有限公司 | Facial recognition method, apparatus and system, device, and storage medium |
CN108009546A (en) * | 2016-10-28 | 2018-05-08 | 北京京东尚科信息技术有限公司 | information identifying method and device |
CN108073855A (en) * | 2016-11-11 | 2018-05-25 | 腾讯科技(深圳)有限公司 | A kind of recognition methods of human face expression and system |
CN107360322A (en) * | 2017-06-30 | 2017-11-17 | 北京小米移动软件有限公司 | Information cuing method and device |
CN107517405A (en) * | 2017-07-31 | 2017-12-26 | 努比亚技术有限公司 | The method, apparatus and computer-readable recording medium of a kind of Video processing |
CN107657652A (en) * | 2017-09-11 | 2018-02-02 | 广东欧珀移动通信有限公司 | Image processing method and device |
CN107818787A (en) * | 2017-10-31 | 2018-03-20 | 努比亚技术有限公司 | A kind of processing method of voice messaging, terminal and computer-readable recording medium |
CN107846566A (en) * | 2017-10-31 | 2018-03-27 | 努比亚技术有限公司 | A kind of information processing method, equipment and computer-readable recording medium |
Non-Patent Citations (2)
Title |
---|
Facial expression generation based on semantic dimensions; Zhang Shen et al.; Journal of Tsinghua University (Science and Technology); 2011-01-15; full text *
Convenient facial animation for online interpersonal communication; Dai Peng et al.; Journal of Computer-Aided Design & Computer Graphics; 2008-06-15; full text *
Also Published As
Publication number | Publication date |
---|---|
CN108830917A (en) | 2018-11-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107835464B (en) | Video call window picture processing method, terminal and computer readable storage medium | |
CN109701266B (en) | Game vibration method, device, mobile terminal and computer readable storage medium | |
CN110321474B (en) | Recommendation method and device based on search terms, terminal equipment and storage medium | |
CN109040445B (en) | Information display method, dual-screen mobile terminal and computer readable storage medium | |
CN109036420B (en) | Voice recognition control method, terminal and computer readable storage medium | |
CN107347011B (en) | Group message processing method, equipment and computer readable storage medium | |
CN107818787B (en) | Voice information processing method, terminal and computer readable storage medium | |
CN110180181B (en) | Method and device for capturing wonderful moment video and computer readable storage medium | |
CN109840444B (en) | Code scanning identification method, equipment and computer readable storage medium | |
CN109062465A (en) | A kind of application program launching method, mobile terminal and storage medium | |
CN109672822A (en) | A kind of method for processing video frequency of mobile terminal, mobile terminal and storage medium | |
CN108012270B (en) | Information processing method, equipment and computer readable storage medium | |
CN109167880B (en) | Double-sided screen terminal control method, double-sided screen terminal and computer readable storage medium | |
CN114025215A (en) | File processing method, mobile terminal and storage medium | |
CN108733278A (en) | A kind of matching making friends method, mobile terminal and computer storage media | |
CN110083294B (en) | Screen capturing method, terminal and computer readable storage medium | |
CN108900696B (en) | Data processing method, terminal and computer readable storage medium | |
CN108566476B (en) | Information processing method, terminal and computer readable storage medium | |
CN109710168B (en) | Screen touch method and device and computer readable storage medium | |
CN108876387B (en) | Payment verification method, payment verification equipment and computer-readable storage medium | |
CN108255389B (en) | Image editing method, mobile terminal and computer readable storage medium | |
CN112532838B (en) | Image processing method, mobile terminal and computer storage medium | |
CN109471569A (en) | A kind of screen adjustment method of mobile terminal, mobile terminal and storage medium | |
CN110275667B (en) | Content display method, mobile terminal, and computer-readable storage medium | |
CN108196926B (en) | Platform content identification method, terminal and computer readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |