CN106657650B - System expression recommendation method, device and terminal

System expression recommendation method, device and terminal

Info

Publication number
CN106657650B
CN106657650B
Authority
CN
China
Prior art keywords
expressions
user
editing
expression
facial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201611245182.4A
Other languages
Chinese (zh)
Other versions
CN106657650A (en)
Inventor
郑懿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CSSC Education Technology (Beijing) Co., Ltd.
Original Assignee
CSSC Education Technology (Beijing) Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by CSSC Education Technology (Beijing) Co., Ltd.
Priority to CN201611245182.4A
Publication of CN106657650A
Application granted
Publication of CN106657650B
Legal status: Active

Classifications

    • H04M1/72439 (H Electricity; H04M Telephonic communication): User interfaces specially adapted for cordless or mobile telephones, with interactive means for internal management of messages, for image or video messaging
    • G06F16/532 (G Physics; G06F Electric digital data processing): Information retrieval of still image data; query formulation, e.g. graphical querying
    • G06F16/5838 (G06F): Retrieval of still image data characterised by using metadata automatically derived from the content, using colour
    • G06V40/171 (G06V Image or video recognition or understanding): Human faces; feature extraction; local features and components; facial parts, occluding parts (e.g. glasses), geometrical relationships
    • G06V40/174 (G06V): Facial expression recognition
    • H04M1/72469 (H04M): User interfaces specially adapted for cordless or mobile telephones, for operating the device by selecting functions from two or more displayed items, e.g. menus or icons
    • H04M1/72484 (H04M): User interfaces specially adapted for cordless or mobile telephones, wherein functions are triggered by incoming communication events

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Data Mining & Analysis (AREA)
  • Library & Information Science (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Business, Economics & Management (AREA)
  • Business, Economics & Management (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mathematical Physics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Telephone Function (AREA)

Abstract

The invention discloses a system expression recommendation method, which comprises the following steps: receiving an editing instruction for editing text in a text editing box; triggering and starting a front camera according to the editing instruction; acquiring the facial expression of the user through the front camera; displaying one or more system expressions recommended to the user according to the acquired facial expression; and adding one or more of the recommended system expressions at the position of the cursor in the text editing box according to a preset rule. The invention also discloses a system expression recommendation device and a terminal. The invention solves the problem in the related art that finding an expression matching the user's current emotion from a huge system expression library requires much work and time, greatly shortens the time a user spends searching the system expression library, and improves the user experience.

Description

System expression recommendation method and device and terminal
Technical Field
The invention relates to the technical field of terminals, and in particular to a system expression recommendation method, device, and terminal.
Background
With the development of the internet and the popularization of terminals, the user base of terminals keeps growing, and at the same time users demand more intelligent and more user-friendly software.
Today a single terminal may serve a user as a game console, a television, a learning machine, or a child's playground, among other things, bringing more fun into people's lives.
When chatting through chat software, a user often needs to express his or her current mood, and the emotional state of the moment is usually conveyed through a system expression (emoticon). At present, however, the user must first download system expressions and then pick the one that expresses his or her state from a system expression library. Because current system expression libraries are very rich, finding the desired expression in such a huge library takes a long time, and the user experience is poor.
For the problem in the related art that finding an expression matching the current emotion from a huge system expression library requires much work and time, no solution has yet been proposed.
Disclosure of Invention
The invention mainly aims to provide a system expression recommendation method, device, and terminal, so as to solve the problem in the related art that finding an expression matching the current emotion from a huge system expression library requires much work and time.
In order to achieve the above object, the present invention provides a system expression recommendation method, including:
receiving an editing instruction for editing text in a text editing box;
triggering and starting a front camera according to the editing instruction;
acquiring facial expressions of a user through the front camera;
determining one or more system expressions recommended to the user according to the collected facial expressions;
adding one or more of the recommended system expressions at the position of the cursor in the text editing box according to a preset rule, wherein the preset rule comprises: adaptively entering an expression according to a motion of the device.
Optionally, determining one or more system expressions recommended to the user according to the collected facial expression includes: determining the one or more system expressions recommended to the user according to a preset correspondence between facial expressions and system expressions.
Optionally, acquiring the facial expression of the user through the front camera includes: tracking the characteristic parts of the eyes, nose, and mouth through signal processing, and acquiring the facial expression of the user.
Optionally, acquiring the facial expression of the user through the front camera includes: determining the facial expression of the user according to the tracked feature information of the eyes, nose, and mouth and the correspondence between feature information and facial expressions stored in a database.
Optionally, after displaying the one or more system expressions recommended to the user according to the collected facial expression, the method further includes: automatically closing the front camera when no editing instruction for editing the text in the text editing box is received within a predetermined time.
According to another aspect of the present invention, there is also provided a system expression recommendation apparatus, including:
a receiving module, configured to receive an editing instruction for editing text in a text editing box;
a triggering module, configured to trigger and start a front camera according to the editing instruction;
an acquisition module, configured to acquire the facial expression of the user through the front camera;
a determining module, configured to display one or more system expressions recommended to the user according to the collected facial expression;
an adding module, configured to add one or more of the recommended system expressions at the position of the cursor in the text editing box according to a preset rule, wherein the preset rule comprises: adaptively entering an expression according to a motion of the device.
Optionally, the determining module includes:
a determining unit, configured to determine the one or more system expressions recommended to the user according to a preset correspondence between facial expressions and system expressions.
Optionally, the acquisition module includes:
an acquisition unit, configured to track the characteristic parts of the eyes, nose, and mouth through signal processing and acquire the facial expression of the user.
Optionally, the acquisition unit includes:
a determining subunit, configured to determine the facial expression of the user according to the tracked feature information of the eyes, nose, and mouth and the correspondence between feature information and facial expressions stored in a database.
Optionally, the apparatus further includes:
a closing module, configured to automatically close the front camera when no editing instruction for editing the text in the text editing box is received within a predetermined time.
According to another aspect of the present invention, there is also provided a terminal including one of the above-described apparatuses.
By the above method, an editing instruction for editing text in a text editing box is received; a front camera is triggered and started according to the editing instruction; the facial expression of the user is acquired through the front camera; one or more system expressions recommended to the user are determined according to the acquired facial expression; and one or more of the recommended system expressions are added at the position of the cursor in the text editing box according to a preset rule. This solves the problem in the related art that finding an expression matching the current emotion from a huge system expression library requires much work and time. By acquiring the user's facial expression, one or more matching system expressions are automatically recommended for the user to choose from, and system expressions close to the user's emotion are added after the text the user has input according to certain rules, so the time the user spends searching the system expression library is greatly shortened and the user experience is improved.
Drawings
Fig. 1 is a schematic diagram of a hardware structure of an optional mobile terminal for implementing various embodiments of the present invention;
FIG. 2 is a diagram of a wireless communication system for the mobile terminal shown in FIG. 1;
FIG. 3 is a flow chart of a system expression recommendation method according to an embodiment of the invention;
FIG. 4 is a first schematic diagram of system expression recommendation according to an embodiment of the present invention;
FIG. 5 is a second schematic diagram of system expression recommendation according to an embodiment of the present invention;
FIG. 6 is a third schematic diagram of system expression recommendation according to an embodiment of the present invention;
FIG. 7 is a fourth schematic diagram of system expression recommendation according to an embodiment of the present invention;
FIG. 8 is a block diagram of a system expression recommendation device according to an embodiment of the present invention;
FIG. 9 is a first block diagram of a system expression recommendation device according to a preferred embodiment of the present invention;
FIG. 10 is a second block diagram of a system expression recommendation device according to a preferred embodiment of the present invention;
FIG. 11 is a third block diagram of a system expression recommendation device according to a preferred embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
A mobile terminal implementing various embodiments of the present invention will now be described with reference to the accompanying drawings. In the following description, suffixes such as "module", "component", or "unit" used to denote elements are used only to facilitate the explanation of the present invention and have no specific meaning in themselves. Thus, "module" and "component" may be used interchangeably.
The mobile terminal may be implemented in various forms. For example, the terminal described in the present invention may include mobile terminals such as a mobile phone, a smart phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a navigation device, and the like, and stationary terminals such as a digital TV, a desktop computer, and the like. In the following, it is assumed that the terminal is a mobile terminal. However, it will be understood by those skilled in the art that the configuration according to the embodiments of the present invention can also be applied to fixed-type terminals, except for any elements intended specifically for mobile use.
Fig. 1 is a schematic diagram of a hardware structure of an optional mobile terminal for implementing various embodiments of the present invention.
The mobile terminal 100 may include a wireless communication unit 110, an a/V (audio/video) input unit 120, a user input unit 130, a sensing unit 140, an output unit 150, a memory 160, an interface unit 170, a controller 180, and a power supply unit 190, etc.
Fig. 1 illustrates the mobile terminal 100 having various components, but it is to be understood that not all illustrated components are required to be implemented. More or fewer components may alternatively be implemented. The elements of the mobile terminal 100 will be described in detail below.
The wireless communication unit 110 may generally include one or more components that allow radio communication between the mobile terminal 100 and a wireless communication system or network. For example, the wireless communication unit 110 may include at least one of a broadcast receiving module 111, a mobile communication module 112, a wireless internet module 113, a short-range communication module 114, and a location information module 115.
The broadcast receiving module 111 receives a broadcast signal and/or broadcast associated information from an external broadcast management server via a broadcast channel. The broadcast channel may include a satellite channel and/or a terrestrial channel. The broadcast management server may be a server that generates and transmits a broadcast signal and/or broadcast associated information, or a server that receives a previously generated broadcast signal and/or broadcast associated information and transmits it to a terminal. The broadcast signal may include a TV broadcast signal, a radio broadcast signal, a data broadcast signal, and the like. Also, the broadcast signal may further include a broadcast signal combined with a TV or radio broadcast signal. The broadcast associated information may also be provided via a mobile communication network, in which case the broadcast associated information may be received by the mobile communication module 112. The broadcast signal may exist in various forms; for example, it may exist in the form of an Electronic Program Guide (EPG) of Digital Multimedia Broadcasting (DMB), an Electronic Service Guide (ESG) of digital video broadcasting-handheld (DVB-H), and the like. The broadcast receiving module 111 may receive a signal broadcast by using various types of broadcasting systems. In particular, the broadcast receiving module 111 may receive digital broadcasting by using a digital broadcasting system such as digital multimedia broadcasting-terrestrial (DMB-T), digital multimedia broadcasting-satellite (DMB-S), digital video broadcasting-handheld (DVB-H), media forward link only (MediaFLO™), integrated services digital broadcasting-terrestrial (ISDB-T), and the like. The broadcast receiving module 111 may be constructed to be suitable for various broadcasting systems that provide broadcast signals, in addition to the above-mentioned digital broadcasting systems. The broadcast signal and/or broadcast associated information received via the broadcast receiving module 111 may be stored in the memory 160 (or other type of storage medium).
The mobile communication module 112 transmits and/or receives radio signals to and/or from at least one of a base station (e.g., access point, node B, etc.), an external terminal, and a server. Such radio signals may include voice call signals, video call signals, or various types of data transmitted and/or received according to text and/or multimedia messages.
The wireless internet module 113 supports wireless internet access of the mobile terminal. The module may be internally or externally coupled to the terminal. The wireless internet access technologies involved may include WLAN (wireless LAN, Wi-Fi), WiBro (wireless broadband), WiMAX (worldwide interoperability for microwave access), HSDPA (high speed downlink packet access), and the like.
The short-range communication module 114 is a module for supporting short-range communication. Some examples of short-range communication technologies include Bluetooth™, Radio Frequency Identification (RFID), infrared data association (IrDA), Ultra Wideband (UWB), ZigBee™, and the like.
The location information module 115 is a module for checking or acquiring location information of the mobile terminal. A typical example of the location information module 115 is a GPS (global positioning system). According to the current technology, the GPS calculates distance information and accurate time information from three or more satellites and applies triangulation to the calculated information, thereby accurately calculating three-dimensional current location information according to longitude, latitude, and altitude. Currently, a method for calculating position and time information uses three satellites and corrects an error of the calculated position and time information by using another satellite. In addition, the GPS can calculate speed information by continuously calculating current position information in real time.
The A/V input unit 120 is used to receive an audio or video signal. The A/V input unit 120 may include a camera 121 and a microphone 122. The camera 121 processes image data of still pictures or video obtained by an image capturing apparatus in a video capturing mode or an image capturing mode, and the processed image frames may be displayed on the display unit 151. The image frames processed by the camera 121 may be stored in the memory 160 (or other storage medium) or transmitted via the wireless communication unit 110, and two or more cameras 121 may be provided according to the construction of the mobile terminal 100. The microphone 122 may receive sounds (audio data) in a phone call mode, a recording mode, a voice recognition mode, or the like, and is capable of processing such sounds into audio data. In a phone call mode, the processed audio (voice) data may be converted into a format transmittable to a mobile communication base station via the mobile communication module 112. The microphone 122 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated in the course of receiving and transmitting audio signals.
The user input unit 130 may generate key input data to control various operations of the mobile terminal 100 according to a command input by a user. The user input unit 130 allows a user to input various types of information, and may include a keyboard, dome sheet, touch pad (e.g., a touch-sensitive member that detects changes in resistance, pressure, capacitance, and the like due to being touched), scroll wheel, joystick, and the like. In particular, when the touch pad is superimposed on the display unit 151 in the form of a layer, a touch screen may be formed.
The sensing unit 140 detects a current state of the mobile terminal 100 (e.g., an open or closed state of the mobile terminal 100), a position of the mobile terminal 100, presence or absence of contact (i.e., touch input) by a user with the mobile terminal 100, an orientation of the mobile terminal 100, acceleration or deceleration movement and direction of the mobile terminal 100, and the like, and generates a command or signal for controlling an operation of the mobile terminal 100. For example, when the mobile terminal 100 is implemented as a slide-type mobile phone, the sensing unit 140 may sense whether the slide-type phone is opened or closed. In addition, the sensing unit 140 can detect whether the power supply unit 190 supplies power or whether the interface unit 170 is coupled with an external device. The sensing unit 140 may include a proximity sensor 141.
The interface unit 170 serves as an interface through which at least one external device is connected to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The identification module may store various information for authenticating a user using the mobile terminal 100 and may include a User Identity Module (UIM), a Subscriber Identity Module (SIM), a Universal Subscriber Identity Module (USIM), and the like. In addition, a device having an identification module (hereinafter, referred to as an "identification device") may take the form of a smart card, and thus, the identification device may be connected with the mobile terminal 100 via a port or other connection means. The interface unit 170 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the mobile terminal 100 or may be used to transmit data between the mobile terminal 100 and the external device.
In addition, when the mobile terminal 100 is connected with an external cradle, the interface unit 170 may serve as a path through which power is supplied from the cradle to the mobile terminal 100 or may serve as a path through which various command signals input from the cradle are transmitted to the mobile terminal 100. Various command signals or power input from the cradle may be used as a signal for identifying whether the mobile terminal 100 is accurately mounted on the cradle. The output unit 150 is configured to provide output signals (e.g., audio signals, video signals, alarm signals, vibration signals, etc.) in a visual, audio, and/or tactile manner. The output unit 150 may include a display unit 151, an audio output module 152, an alarm unit 153, and the like.
The display unit 151 may display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in a phone call mode, the display unit 151 may display a User Interface (UI) or a Graphical User Interface (GUI) related to a call or other communication (e.g., text messaging, multimedia file downloading, etc.). When the mobile terminal 100 is in a video call mode or an image capturing mode, the display unit 151 may display a captured image and/or a received image, a UI or GUI showing a video or an image and related functions, and the like.
Meanwhile, when the display unit 151 and the touch pad are overlapped with each other in the form of a layer to form a touch screen, the display unit 151 may serve as an input device and an output device. The display unit 151 may include at least one of a Liquid Crystal Display (LCD), a thin film transistor LCD (TFT-LCD), an Organic Light Emitting Diode (OLED) display, a flexible display, a three-dimensional (3D) display, and the like. Some of these displays may be configured to be transparent to allow a user to view from the outside, which may be referred to as transparent displays, and a typical transparent display may be, for example, a TOLED (transparent organic light emitting diode) display or the like. Depending on the particular desired implementation, mobile terminal 100 may include two or more display units (or other display devices), for example, mobile terminal 100 may include an external display unit (not shown) and an internal display unit (not shown). The touch screen may be used to detect a touch input pressure as well as a touch input position and a touch input area.
The audio output module 152 may convert audio data received by the wireless communication unit 110 or stored in the memory 160 into an audio signal and output as sound when the mobile terminal 100 is in a call signal reception mode, a call mode, a recording mode, a voice recognition mode, a broadcast reception mode, or the like. Also, the audio output module 152 may provide audio output related to a specific function performed by the mobile terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output module 152 may include a speaker, a buzzer, and the like.
The alarm unit 153 may provide an output to notify the mobile terminal 100 of the occurrence of an event. Typical events may include call reception, message reception, key signal input, touch input, and the like. In addition to audio or video output, the alarm unit 153 may provide output in different ways to notify of the occurrence of an event. For example, the alarm unit 153 may provide an output in the form of vibration; when a call, a message, or some other incoming communication is received, the alarm unit 153 may provide a tactile output (i.e., vibration) to inform the user. By providing such a tactile output, the user can recognize the occurrence of various events even when the mobile phone is in the user's pocket. The alarm unit 153 may also provide an output notifying of the occurrence of an event via the display unit 151 or the audio output module 152.
The memory 160 may store software programs and the like for processing and controlling operations performed by the controller 180, or may temporarily store data (e.g., a phonebook, messages, still images, videos, and the like) that has been or will be output. Also, the memory 160 may store data regarding various ways of vibration and audio signals output when a touch is applied to the touch screen.
The memory 160 may include at least one type of storage medium including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like. Also, the mobile terminal 100 may cooperate with a network storage device that performs a storage function of the memory 160 through a network connection.
The controller 180 generally controls the overall operation of the mobile terminal. For example, the controller 180 performs control and processing related to voice calls, data communications, video calls, and the like. In addition, the controller 180 may include a multimedia module 181 for reproducing (or playing back) multimedia data, and the multimedia module 181 may be constructed within the controller 180 or may be constructed separately from the controller 180. The controller 180 may perform a pattern recognition process to recognize a handwriting input or a picture drawing input performed on the touch screen as a character or an image.
The power supply unit 190 receives external power or internal power and provides appropriate power required to operate various elements and components under the control of the controller 180.
The various embodiments described herein may be implemented in a computer-readable medium using, for example, computer software, hardware, or any combination thereof. For a hardware implementation, the embodiments described herein may be implemented using at least one of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a processor, a controller, a microcontroller, a microprocessor, an electronic unit designed to perform the functions described herein, and in some cases, such embodiments may be implemented in the controller 180. For a software implementation, the implementation such as a process or a function may be implemented with a separate software module that allows performing at least one function or operation. The software codes may be implemented by software applications (or programs) written in any suitable programming language, which may be stored in the memory 160 and executed by the controller 180.
Up to this point, the mobile terminal 100 has been described in terms of its functionality. In addition, the mobile terminal 100 in the embodiment of the present invention may be a mobile terminal such as a folder type, a bar type, a swing type, a slide type, and other various types, and is not limited herein.
The mobile terminal 100 as shown in fig. 1 may be configured to operate with communication systems such as wired and wireless communication systems and satellite-based communication systems that transmit data via frames or packets.
A communication system in which a mobile terminal according to the present invention is operable will now be described with reference to fig. 2.
Such communication systems may use different air interfaces and/or physical layers. For example, the air interface used by the communication system includes, for example, Frequency Division Multiple Access (FDMA), Time Division Multiple Access (TDMA), Code Division Multiple Access (CDMA), and Universal Mobile Telecommunications System (UMTS) (in particular, Long Term Evolution (LTE)), global system for mobile communications (GSM), and the like. By way of non-limiting example, the following description relates to a CDMA communication system, but such teachings are equally applicable to other types of systems.
Referring to fig. 2, a CDMA wireless communication system may include a plurality of mobile terminals 100, a plurality of Base Stations (BSs) 270, Base Station Controllers (BSCs) 275, and a Mobile Switching Center (MSC) 280. The MSC 280 is configured to interface with a Public Switched Telephone Network (PSTN) 290. The MSC 280 is also configured to interface with the BSCs 275, which may be coupled to the base stations 270 via backhaul lines. The backhaul lines may be constructed according to any of several known interfaces, including, for example, European/American standard high-capacity digital lines (E1/T1), Asynchronous Transfer Mode (ATM), Internet Protocol (IP), Point-to-Point Protocol (PPP), Frame Relay, high-rate digital subscriber line (HDSL), Asymmetric Digital Subscriber Line (ADSL), or various types of digital subscriber lines (xDSL). It will be understood that a system as shown in fig. 2 may include multiple BSCs 275.
Each BS 270 may serve one or more sectors (or regions), each sector covered by an omnidirectional antenna or an antenna pointed in a particular direction radially away from the BS 270. Alternatively, each sector may be covered by two or more antennas for diversity reception. Each BS 270 may be configured to support multiple frequency allocations, with each frequency allocation having a particular spectrum (e.g., 1.25 MHz, 5 MHz, etc.).
The intersection of a sector and a frequency allocation may be referred to as a CDMA channel. The BS 270 may also be referred to as a Base Transceiver Subsystem (BTS) or other equivalent term. In such a case, the term "base station" may be used to refer generically to a single BSC 275 and at least one BS 270. A base station may also be referred to as a "cell". Alternatively, the individual sectors of a particular BS 270 may be referred to as cell sites.
As shown in fig. 2, a Broadcast Transmitter (BT)295 transmits a broadcast signal to the mobile terminal 100 operating within the system. A broadcast receiving module 111 as shown in fig. 1 is provided at the mobile terminal 100 to receive a broadcast signal transmitted by the BT 295. In fig. 2, several Global Positioning System (GPS) satellites 300 are shown. The satellite 300 assists in locating at least one of the plurality of mobile terminals 100.
In fig. 2, a plurality of satellites 300 are depicted, but it is understood that useful positioning information may be obtained with any number of satellites. The location information module 115 (e.g., GPS) as shown in fig. 1 is generally configured to cooperate with the satellites 300 to obtain desired positioning information. Other techniques that can track the location of the mobile terminal may be used instead of or in addition to GPS tracking techniques. In addition, at least one GPS satellite 300 may selectively or additionally process satellite DMB transmission.
As a typical operation of the wireless communication system, the BS 270 receives reverse link signals from various mobile terminals 100. The mobile terminals 100 are typically engaged in calls, messaging, and other types of communications. Each reverse link signal received by a particular base station is processed within that BS 270, and the resulting data is forwarded to the associated BSC 275. The BSC 275 provides call resource allocation and mobility management functions, including coordination of soft handoff procedures between BSs 270. The BSCs 275 also route the received data to the MSC 280, which provides additional routing services for interfacing with the PSTN 290. Similarly, the PSTN 290 interfaces with the MSC 280, the MSC 280 interfaces with the BSCs 275, and the BSCs 275 accordingly control the BSs 270 to transmit forward link signals to the mobile terminals 100.
Based on the mobile terminal, an embodiment of the present invention provides a system expression recommendation method, and fig. 3 is a flowchart of the system expression recommendation method according to the embodiment of the present invention, and as shown in fig. 3, the method includes the following steps:
step S302, receiving an editing instruction for editing text in a text editing box;
step S304, triggering and starting a front camera according to the editing instruction;
step S306, acquiring facial expressions of a user through the front camera;
step S308, determining one or more system expressions recommended to the user according to the collected facial expressions;
step S310, adding one or more of the recommended system expressions at the position of the cursor in the text editing box according to a preset rule.
Through the above steps, an editing instruction for editing text in a text editing box is received; a front camera is triggered and started according to the editing instruction; the facial expression of the user is acquired through the front camera; one or more system expressions recommended to the user are displayed according to the acquired facial expression; and one or more of the recommended system expressions are added at the position of the cursor in the text editing box according to a preset rule. This solves the problem in the related art that finding an expression matching the current emotion from a huge system expression library requires much work and time. By acquiring the user's facial expression, one or more matching system expressions are automatically recommended for the user to choose from, and system expressions close to the user's emotion are added after the input text according to certain rules, so the time the user spends searching the system expression library is greatly shortened and the user experience is improved.
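Purely as an illustration of the flow of steps S302-S310 (not part of the patent disclosure), the pipeline could be sketched as follows; every type and function name here is a hypothetical stand-in, not an API defined by the patent or by any concrete SDK:

```kotlin
// Hypothetical sketch of steps S302-S310; all names are illustrative stand-ins.
enum class Mood { HAPPY, SAD, ANGRY, SURPRISED, AFRAID, DISGUSTED, NEUTRAL }

data class SystemExpression(val id: Int, val label: String)

class ExpressionRecommender(
    private val captureAndClassify: () -> Mood,              // stands in for S304-S308: camera capture + classification
    private val library: Map<Mood, List<SystemExpression>>   // preset mood-to-system-expression correspondence
) {
    // S302: an editing instruction in the text editing box triggers the pipeline.
    fun onEditInstruction(insertAtCursor: (SystemExpression) -> Unit): List<SystemExpression> {
        val mood = captureAndClassify()
        val recommended = library[mood].orEmpty()       // candidates displayed above the text editing box
        recommended.firstOrNull()?.let(insertAtCursor)  // S310: one possible preset rule, "insert the best match"
        return recommended
    }
}

fun main() {
    val library = mapOf(
        Mood.HAPPY to listOf(SystemExpression(1, "smile"), SystemExpression(2, "laugh"))
    )
    val recommender = ExpressionRecommender({ Mood.HAPPY }, library)
    val shown = recommender.onEditInstruction { println("inserted: ${it.label}") }
    println("recommended: ${shown.map { it.label }}")
}
```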
The facial expression is captured through the front lens, and an emoticon or emoticon pack is recommended automatically: when the user inputs text, the front lens is opened, and the user's emotion is recognized by capturing his or her facial expression, achieving the goal of adding emoticons or emoticon packs intelligently and quickly. For example, when sending a WeChat message to a friend, if the user smiles, the front lens automatically recognizes the smile and then offers several smile-related emoticon packs for the user to select and add; if the user frowns and lowers the eyebrows, several emoticon packs representing anger or sadness are recommended. FIG. 4 is a first schematic diagram of system expression recommendation according to an embodiment of the present invention. As shown in FIG. 4, a current photo of the user is taken through the front camera, the user's facial expression is determined from the photo to be happy, and several system expressions corresponding to "happy" in the database are displayed directly above the text editing box as recommendations for the user to choose from; after the user selects one or more of them, the recommended system expressions are removed.
The front camera is turned on while the user is inputting text: the upper half of the phone screen shows the facial expression captured by the front camera, and emoticons are dynamically displayed above the text input box according to the captured expression. There are various ways of adding one or more of the recommended system expressions at the position of the cursor in the text editing box according to a preset rule. In an optional embodiment, FIG. 5 is a second schematic diagram of system expression recommendation according to an embodiment of the invention; as shown in FIG. 5, after the plurality of system expressions recommended to the user has been determined, they are arranged from left to right in order of matching degree, from high to low. Where one or more system expressions recommended to the user have been determined, the preset rule may be to add the first system expression directly at the text editing position. FIG. 6 is a third schematic diagram of system expression recommendation according to an embodiment of the invention; as shown in FIG. 6, in the case where a plurality of system expressions recommended to the user is determined, the preset rule may be to add the two top-ranked system expressions at the text editing position, that is, the first and second system expressions are added to the text editing area.
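As a hedged illustration of the ordering rule just described, assuming a numeric matching degree is available for each candidate (the patent does not specify how it is computed), the top-k selection of FIG. 5 (k = 1) and FIG. 6 (k = 2) might look like:

```kotlin
// Illustrative only: Candidate, matchDegree, and the scores themselves are
// assumptions; the patent only specifies ordering by matching degree.
data class Candidate(val expression: String, val matchDegree: Double)

// Arrange from highest to lowest matching degree (left to right in the bar),
// then take the top k (k = 1 for FIG. 5, k = 2 for FIG. 6).
fun selectForInsertion(candidates: List<Candidate>, k: Int): List<String> =
    candidates.sortedByDescending { it.matchDegree }
        .take(k)
        .map { it.expression }

fun main() {
    val bar = listOf(
        Candidate("grin", 0.92),
        Candidate("smile", 0.88),
        Candidate("wink", 0.40)
    )
    println(selectForInsertion(bar, 2)) // [grin, smile] are added at the cursor
}
```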
In another optional embodiment, adding one or more of the recommended system expressions at the position of the cursor in the text editing box according to the preset rule may also be implemented in response to a corresponding action of the user. The front camera is turned on during text input, the upper half of the phone displays the facial expression captured by the front camera, and emoticons are dynamically displayed above the text input box according to the captured expression, arranged from left to right in order of matching degree from high to low, with the best match first on the left. In the normal state, the user can add an emoticon by tapping it in the emoticon bar. An expression can also be entered and added adaptively according to a motion of the device. FIG. 7 is a fourth schematic diagram of system expression recommendation according to an embodiment of the present invention; as shown in FIG. 7, the adaptive entry operates as follows: while text is being input, the emoticon bar shows the various matched expressions; when the user flicks the top of the phone upward by more than 30 degrees, the motion is sensed through the cooperation of elements such as a gravity-sensing gyroscope, and the first system expression, the one with the highest matching degree, is added to the text input box once per flick. Flicking several times adds it several times: flicking 4 times adds the first system expression 4 times. The user gets the impression that the icons are tossed in from the top, so whenever an expression is needed during input a slight gesture adds it, which makes the adding process more engaging and further improves the user experience.
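A minimal sketch of this "flick to add" behavior, assuming the pitch angle has already been derived from the gravity sensor/gyroscope readings; the 30-degree threshold comes from the description above, while the class name, the re-arm hysteresis, and the 10-degree reset threshold are assumptions:

```kotlin
// Illustrative only: one insertion per upward flick past 30 degrees,
// with hysteresis so a single flick counts exactly once.
class FlickInserter(private val insertTopExpression: () -> Unit) {
    private var armed = true

    // pitchDegrees: how far the top of the phone is lifted, derived elsewhere
    // from the gravity sensor / gyroscope readings.
    fun onPitchSample(pitchDegrees: Double) {
        if (armed && pitchDegrees > 30.0) {
            insertTopExpression()   // one insertion per flick (4 flicks -> 4 insertions)
            armed = false           // wait until the phone levels out again
        } else if (pitchDegrees < 10.0) {
            armed = true            // re-arm once the phone is roughly level
        }
    }
}

fun main() {
    var text = "hello"
    val inserter = FlickInserter { text += " [smile]" }
    listOf(5.0, 35.0, 20.0, 8.0, 40.0).forEach(inserter::onPitchSample)
    println(text) // hello [smile] [smile] -> two flicks, two insertions
}
```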
The camera collects a facial picture, and face detection and positioning can be performed on the collected picture. The collected facial images can conveniently be used to build a database, of the kind used by researchers in pattern recognition fields such as face detection/recognition/expression/posture and in artificial intelligence. Automatic facial expression analysis tools are available, for example one based on a facial expression analysis system (FaceReader), which may be the first commercially developed automatic facial expression analysis tool in the world; with such a tool, an individual's emotional changes can be objectively assessed. The series of signals expressed by the human face is important for human-machine communication and is one of the most direct means of communication, through which the emotional state and intention of others can be identified. Based on a principle similar to the facial expression analysis system described above, embodiments of the present invention divide expressions into the following categories: happy, sad, angry, surprised, afraid, disgusted, expressionless, and so on. Acquiring the facial expression of the user through the front camera specifically includes: face finding, face modeling, and facial expression classification. The expression can be analyzed in real time by using the front camera to generate facial expression data. When analyzing the facial expression, a precise mode or a skip mode may be selected for high-speed analysis. A basic human expression algorithm model is stored in a database, and the expression of a user is recognized against original facial images entered in advance.
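The three stages named above (face finding, face modeling, facial expression classification) and the precise/skip analysis modes could be organized roughly as below; all types and names are placeholders, and the skip mode is modeled simply as sampling every n-th frame:

```kotlin
// Placeholder types; a real implementation would wrap actual detector,
// landmark, and classifier models.
class Frame
data class FaceRegion(val x: Int, val y: Int, val width: Int, val height: Int)
data class FaceModel(val eyeFeatures: List<Double>, val noseFeatures: List<Double>, val mouthFeatures: List<Double>)
enum class Expression { HAPPY, SAD, ANGRY, SURPRISED, AFRAID, DISGUSTED, NEUTRAL }

class ExpressionPipeline(
    private val findFace: (Frame) -> FaceRegion?,            // stage 1: face finding
    private val modelFace: (Frame, FaceRegion) -> FaceModel, // stage 2: face modeling
    private val classify: (FaceModel) -> Expression          // stage 3: expression classification
) {
    // Precise mode analyses every frame (skip = 1); skip mode samples every
    // n-th frame (skip >= 2) for high-speed analysis.
    fun analyze(frames: Sequence<Frame>, skip: Int = 1): List<Expression> =
        frames.filterIndexed { i, _ -> i % skip == 0 }
            .mapNotNull { frame -> findFace(frame)?.let { classify(modelFace(frame, it)) } }
            .toList()
}
```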
Based on the above, acquiring the facial expression of the user through the front camera may include: tracking the characteristic parts of the eyes, nose, and mouth through signal processing, and acquiring the facial expression of the user. The collected feature information of the user's facial expression is matched against the feature information of the original facial images, and the facial expression corresponding to the feature information with the highest matching degree is taken as the user's facial expression. Further, acquiring the facial expression of the user through the front camera includes: determining the facial expression of the user according to the tracked feature information of the eyes, nose, and mouth and the correspondence between feature information and facial expressions stored in the database. That is, feature information is extracted from the face detected in the input image, and the facial expression of the detected face is determined based on the extracted feature information and the acquired reference feature information.
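A hedged sketch of this matching step: compare the tracked eye/nose/mouth feature vector against the stored reference vectors and take the expression whose reference matches best. The similarity metric (negative squared Euclidean distance) and all names are assumptions; the patent only requires "highest matching degree":

```kotlin
// Features holds a tracked eye/nose/mouth feature vector; the encoding is
// an assumption made for illustration.
data class Features(val values: DoubleArray)

// Matching degree as negative squared Euclidean distance (higher = closer).
fun matchingDegree(a: Features, b: Features): Double =
    -a.values.zip(b.values).sumOf { (x, y) -> (x - y) * (x - y) }

// Pick the stored expression whose reference features match best.
fun matchExpression(tracked: Features, references: Map<String, Features>): String? =
    references.maxByOrNull { (_, ref) -> matchingDegree(tracked, ref) }?.key

fun main() {
    val database = mapOf(
        "happy" to Features(doubleArrayOf(0.9, 0.1, 0.8)), // e.g. raised mouth corners
        "sad" to Features(doubleArrayOf(0.1, 0.8, 0.2))    // e.g. lowered mouth corners
    )
    println(matchExpression(Features(doubleArrayOf(0.85, 0.15, 0.75)), database)) // happy
}
```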
A plurality of system expression icons or emoticon packs are stored in the database. Before the one or more system expressions recommended to the user are displayed according to the collected facial expression, the one or more system expressions recommended to the user are determined according to a preset correspondence between facial expressions and system expressions. For each of the expression categories above (happy, sad, angry, surprised, afraid, disgusted, expressionless), one or more corresponding system expressions are set. According to the collected facial expression of the user, the one or more system expressions corresponding to that facial expression can thus be determined.
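As a sketch of such a preset correspondence (the concrete emoticon labels are placeholders, not taken from any real expression library):

```kotlin
// Hypothetical preset correspondence between facial-expression categories
// and system expressions; the emoticon labels are placeholders.
enum class FaceMood { HAPPY, SAD, ANGRY, SURPRISED, AFRAID, DISGUSTED, NEUTRAL }

val systemExpressionTable: Map<FaceMood, List<String>> = mapOf(
    FaceMood.HAPPY to listOf("smile", "grin", "laugh-with-tears"),
    FaceMood.SAD to listOf("cry", "pensive"),
    FaceMood.ANGRY to listOf("angry", "pout"),
    FaceMood.SURPRISED to listOf("astonished", "open-mouth"),
    FaceMood.AFRAID to listOf("fearful"),
    FaceMood.DISGUSTED to listOf("nauseated"),
    FaceMood.NEUTRAL to listOf("neutral-face")
)

// Look up the recommendations for a classified facial expression.
fun recommend(mood: FaceMood): List<String> = systemExpressionTable[mood].orEmpty()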
To reduce unnecessary power consumption, if it is determined that the user has not edited text for a period of time, system expressions no longer need to be matched in real time, so the front camera can be closed automatically. Specifically, after the one or more system expressions recommended to the user are displayed according to the collected facial expression, the front camera is automatically closed if no editing instruction for editing the text in the text editing box is received within a predetermined time.
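One possible way to realize the timeout, a sketch under the assumption that a java.util.Timer-style countdown is acceptable; the names are illustrative:

```kotlin
import java.util.Timer
import java.util.TimerTask
import kotlin.concurrent.timerTask

// Restart a countdown on every editing instruction; when no instruction
// arrives within the predetermined time, close the front camera.
class CameraIdleGuard(
    private val timeoutMs: Long,
    private val closeFrontCamera: () -> Unit
) {
    private val timer = Timer(/* isDaemon = */ true)
    private var pending: TimerTask? = null

    // Call on every editing instruction received for the text editing box.
    fun onEditInstruction() {
        pending?.cancel()
        val task = timerTask { closeFrontCamera() }
        pending = task
        timer.schedule(task, timeoutMs)
    }
}
```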
According to another aspect of the embodiments of the present invention, there is also provided a system expression recommendation apparatus, and fig. 8 is a block diagram of the system expression recommendation apparatus according to the embodiments of the present invention, as shown in fig. 8, including:
a receiving module 82, configured to receive an editing instruction for editing text in a text editing box;
a triggering module 84, configured to trigger and start a front camera according to the editing instruction;
an acquisition module 86, configured to acquire the facial expression of the user through the front camera;
a determining module 88, configured to display one or more system expressions recommended to the user according to the collected facial expression;
an adding module 810, configured to add one or more of the recommended system expressions at the position of the cursor in the text editing box according to a preset rule.
FIG. 9 is a first block diagram of a system expression recommendation device according to a preferred embodiment of the present invention. As shown in FIG. 9, the determining module 88 includes:
a determining unit 92, configured to determine the one or more system expressions recommended to the user according to a preset correspondence between facial expressions and system expressions.
FIG. 10 is a second block diagram of the system expression recommendation device according to the preferred embodiment of the present invention. As shown in FIG. 10, the acquisition module 86 includes:
an acquisition unit 102, configured to track the characteristic parts of the eyes, nose, and mouth through signal processing and acquire the facial expression of the user.
Optionally, the acquisition unit 102 includes:
a determining subunit, configured to determine the facial expression of the user according to the tracked feature information of the eyes, nose, and mouth and the correspondence between feature information and facial expressions stored in the database.
FIG. 11 is a third block diagram of the system expression recommendation device according to the preferred embodiment of the present invention. As shown in FIG. 11, the device further includes:
a closing module 112, configured to automatically close the front camera if no editing instruction for editing the text in the text editing box is received within a predetermined time.
According to another aspect of the embodiments of the present invention, there is also provided a terminal including one of the above-mentioned apparatuses.
In the embodiments of the present invention, an editing instruction for editing text in a text editing box is received; a front camera is triggered and started according to the editing instruction; the facial expression of the user is acquired through the front camera; one or more system expressions recommended to the user are determined according to the acquired facial expression; and one or more of the recommended system expressions are added at the position of the cursor in the text editing box according to a preset rule. This solves the problem in the related art that finding an expression matching the current emotion from a huge system expression library requires much work and time. By acquiring the user's facial expression, one or more matching system expressions are automatically recommended for the user to choose from, and system expressions close to the user's emotion are added after the input text according to certain rules, so the time the user spends searching the system expression library is greatly shortened and the user experience is improved.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented by a general-purpose computing device; they may be centralized on a single computing device or distributed across a network of multiple computing devices. Alternatively, they may be implemented by program code executable by a computing device, so that they may be stored in a storage device and executed by a computing device; in some cases, the steps shown or described may be performed in an order different from that described herein. They may also be separately fabricated into individual integrated circuit modules, or multiple ones of them may be fabricated into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A system expression recommendation method is characterized by comprising the following steps:
receiving an editing instruction for editing characters in a character editing frame;
triggering and starting a front camera according to the editing instruction;
acquiring facial expressions of a user through the front camera;
determining one or more system expressions recommended to the user according to the collected facial expressions;
adding one or more system expressions among the one or more recommended system expressions at the position of the cursor in the character editing frame according to a preset rule, wherein the preset rule comprises: performing self-adaptive recording according to the action of the device.
2. The method of claim 1, wherein determining one or more system expressions to recommend to the user based on the collected facial expressions comprises:
determining one or more system expressions recommended to the user according to a preset correspondence between facial expressions and system expressions.
3. The method of claim 1, wherein capturing facial expressions of a user via the front-facing camera comprises:
tracking characteristic parts of the eyes, nose, and mouth through signal processing, and acquiring the facial expression of the user.
4. The method of claim 3, wherein capturing facial expressions of the user via the front-facing camera comprises:
determining the facial expression of the user by matching the tracked feature information of the eyes, nose, and mouth against the correspondence between feature information and facial expressions stored in a database.
5. The method of any of claims 1-4, wherein after displaying one or more system expressions recommended to the user according to the collected facial expressions, the method further comprises:
automatically closing the front camera when no editing instruction for editing characters in the character editing frame is received within a preset time.
6. A system expression recommendation device, comprising:
the receiving module is used for receiving an editing instruction for editing the characters in the character editing frame;
the triggering module is used for triggering and starting the front camera according to the editing instruction;
the acquisition module is used for acquiring the facial expression of the user through the front camera;
the determining module is used for determining one or more system expressions recommended to the user according to the collected facial expressions;
an adding module, configured to add one or more system expressions among the one or more recommended system expressions at the position of the cursor in the character editing frame according to a preset rule, wherein the preset rule comprises: performing self-adaptive recording according to the action of the device.
7. The apparatus of claim 6, wherein the determining module comprises:
the determining unit is used for determining one or more system expressions recommended to the user according to a preset correspondence between facial expressions and system expressions.
8. The apparatus of claim 6, wherein the acquisition module comprises:
the acquisition unit is used for tracking characteristic parts of the eyes, nose, and mouth through signal processing and acquiring the facial expression of the user.
9. The apparatus of claim 8, wherein the acquisition unit comprises:
the determining subunit is used for determining the facial expression of the user by matching the tracked feature information of the eyes, nose, and mouth against the correspondence between feature information and facial expressions stored in a database.
10. A terminal, characterized in that it comprises the apparatus of any one of claims 6 to 9.
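Claims 3 and 4 determine the user's facial expression by tracking eye, nose, and mouth feature information and matching it against feature-expression correspondences stored in a database. Neither the feature representation nor the matching rule is specified by the claims; the sketch below assumes an invented three-value feature vector and a Euclidean nearest-match purely for illustration:

```python
import math

# Illustrative database of feature-vector -> expression correspondences;
# the three invented values stand in for tracked eye, nose, and mouth
# measurements (e.g., eye openness, nose flare, mouth curvature).
FEATURE_DB = {
    "happy":   (0.8, 0.1, 0.9),
    "sad":     (0.4, 0.1, 0.1),
    "neutral": (0.6, 0.1, 0.5),
}

def match_expression(tracked):
    """Return the stored facial expression whose feature vector is
    closest (Euclidean distance) to the tracked features."""
    return min(FEATURE_DB, key=lambda label: math.dist(FEATURE_DB[label], tracked))

# Example: feature values tracked from a front-camera frame.
print(match_expression((0.75, 0.12, 0.85)))  # -> happy
```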
CN201611245182.4A 2016-12-26 2016-12-26 System expression recommendation method and device and terminal Active CN106657650B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611245182.4A CN106657650B (en) 2016-12-26 2016-12-26 System expression recommendation method and device and terminal

Publications (2)

Publication Number Publication Date
CN106657650A CN106657650A (en) 2017-05-10
CN106657650B (en) 2020-10-30

Family

ID=58836712

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611245182.4A Active CN106657650B (en) 2016-12-26 2016-12-26 System expression recommendation method and device and terminal

Country Status (1)

Country Link
CN (1) CN106657650B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107153496B (en) * 2017-07-04 2020-04-28 北京百度网讯科技有限公司 Method and device for inputting emoticons
CN107613102B (en) * 2017-08-30 2019-05-17 维沃移动通信有限公司 A kind of session information input method and mobile terminal
CN107633225A (en) * 2017-09-18 2018-01-26 北京金山安全软件有限公司 Information obtaining method and device
CN107784114A (en) * 2017-11-09 2018-03-09 广东欧珀移动通信有限公司 Recommendation method, apparatus, terminal and the storage medium of facial expression image
CN107911601A (en) * 2017-11-21 2018-04-13 深圳市欧信计算机通信科技有限公司 A kind of intelligent recommendation when taking pictures is taken pictures the method and its system of expression and posture of taking pictures
CN109640104B (en) * 2018-11-27 2022-03-25 平安科技(深圳)有限公司 Live broadcast interaction method, device, equipment and storage medium based on face recognition
CN110780955B (en) * 2019-09-05 2023-08-22 连尚(新昌)网络科技有限公司 Method and equipment for processing expression message
CN112214632B (en) * 2020-11-03 2023-11-17 虎博网络技术(上海)有限公司 Text retrieval method and device and electronic equipment
CN114115526A (en) * 2021-10-29 2022-03-01 歌尔科技有限公司 Head-wearing wireless earphone, control method thereof and wireless communication system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102455898A (en) * 2010-10-29 2012-05-16 张明 Cartoon expression based auxiliary entertainment system for video chatting
CN102890776A (en) * 2011-07-21 2013-01-23 爱国者电子科技(天津)有限公司 Method for searching emoticons through facial expression
CN104780093A (en) * 2014-01-15 2015-07-15 阿里巴巴集团控股有限公司 Method and device for processing expression information in instant messaging process
CN105262676A (en) * 2015-10-28 2016-01-20 广东欧珀移动通信有限公司 Method and apparatus for transmitting message in instant messaging

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102750094A (en) * 2012-06-13 2012-10-24 胡锦云 Image acquiring method
US20160055370A1 (en) * 2014-08-21 2016-02-25 Futurewei Technologies, Inc. System and Methods of Generating User Facial Expression Library for Messaging and Social Networking Applications
CN105515955A (en) * 2015-12-25 2016-04-20 北京奇虎科技有限公司 Chat information distribution method and device

Similar Documents

Publication Publication Date Title
CN106657650B (en) System expression recommendation method and device and terminal
CN106888158B (en) Instant messaging method and device
CN104750420B (en) Screenshotss method and device
KR101466027B1 (en) Mobile terminal and its call contents management method
CN104902212A (en) Video communication method and apparatus
CN105391562B (en) Group chat device, method and mobile terminal
CN104917896A (en) Data pushing method and terminal equipment
CN105224925A (en) Video process apparatus, method and mobile terminal
CN105159533A (en) Mobile terminal and automatic verification code input method thereof
CN104731512B (en) The method, apparatus and terminal that picture is shared
CN104778067B (en) Start method and the terminal unit of audio
CN105303398B (en) Information display method and system
CN106789589B (en) Sharing processing method, sharing processing device and terminal
CN104679890B (en) Picture method for pushing and device
CN105141507A (en) Method and device for displaying head portrait for social application
CN104809221A (en) Recommending method for music information and device
CN105049637A (en) Device and method for controlling instant communication
CN107071321B (en) Video file processing method and device and terminal
CN106506778A (en) A kind of dialing mechanism and method
CN106598538B (en) Instruction set updating method and system
CN105739873A (en) Screen capturing method and terminal
CN107071329A (en) The method and device of automatic switchover camera in video call process
CN106657579B (en) Content sharing method and device and terminal
CN106024013B (en) Voice data searching method and system
CN106504050A (en) A kind of information comparison device and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20201012

Address after: Room 603, Floor 6, Building 2, No. 70 South Xueyuan Road, Haidian District, Beijing 100089

Applicant after: CSSC Education Technology (Beijing) Co.,Ltd.

Address before: Floors 6-8 and 10-11 of Block A, Block B, and Floor 6 of Districts 6-10 in Zone C, Innovation Building, No. 9018 North Central Avenue, high-tech Zone, Nanshan District, Shenzhen, Guangdong Province 518057

Applicant before: NUBIA TECHNOLOGY Co.,Ltd.

GR01 Patent grant