CN109388456B - Head portrait selection method and mobile terminal - Google Patents

Info

Publication number: CN109388456B
Application number: CN201811103418.XA
Authority: CN (China)
Prior art keywords: label, target, head portrait, user, head
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Other versions: CN109388456A (en)
Inventor: 田小龙
Current Assignee: Vivo Mobile Communication Co Ltd (the listed assignees may be inaccurate)
Original Assignee: Vivo Mobile Communication Co Ltd
Application filed by Vivo Mobile Communication Co Ltd
Priority: CN201811103418.XA
Publications: CN109388456A (application), CN109388456B (grant)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces

Abstract

An embodiment of the invention provides an avatar (head portrait) selection method, comprising: while a user inputs text through a social application, generating a first tag according to the displayed interface; generating a second tag according to the geographical position of the mobile terminal; generating a third tag according to the text content input by the user; displaying recommended avatars according to the first, second and third tags; and receiving a target avatar selected by the user from the recommended avatars and, after the text content is sent, switching the avatar used by the social application's login account to the target avatar. Because the avatar associated with the social application account can change dynamically according to the user's first, second or third tag, it is no longer limited to the avatar the user originally set; the avatar of the logged-in account changes dynamically, meeting users' personalization needs and improving the user experience.

Description

Head portrait selection method and mobile terminal
Technical Field
The present invention relates to the field of mobile terminals, and in particular to an avatar selection method and a mobile terminal.
Background
In existing social applications, such as WeChat, QQ and Baidu Tieba, a user typically sets an avatar for a social account after logging in, and relies on that avatar to express his or her individuality.
Disclosure of Invention
Embodiments of the invention provide an avatar selection method and a mobile terminal, aiming to solve the prior-art problem that a user's single, static avatar in a social application provides a poor personalized experience.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides an avatar selection method, including: generating a first label according to a displayed interface in the process of inputting characters by a user through a social application program; generating a second label according to the geographical position information of the mobile terminal; generating a third label according to the text content input by the user; displaying a recommended head portrait according to the first label, the second label and the third label; and receiving a target avatar selected by the user in the recommended avatar, and switching the avatar used by the login account of the social application program into the target avatar after the text content is sent.
In a second aspect, an embodiment of the present invention further provides a mobile terminal, where the mobile terminal includes: the first generation module is used for generating a first label according to the displayed interface in the process of inputting characters by a user through a social application program; the second generation module is used for generating a second label according to the geographical position information of the mobile terminal; the third generation module is used for generating a third label according to the text content input by the user; the display module is used for displaying the recommended head portrait according to the first label, the second label and the third label; and the first switching module is used for receiving the target head portrait selected by the user in the recommended head portrait and switching the head portrait used by the login account of the social application program into the target head portrait after the text content is sent.
In a third aspect, an embodiment of the present invention further provides a mobile terminal, including a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the avatar selection method.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the avatar selection method are implemented.
In the embodiment of the invention, a first tag is generated from the displayed interface while the user inputs text through a social application; a second tag is generated from the mobile terminal's geographical position; a third tag is generated from the text content input by the user; recommended avatars are displayed according to the first, second and third tags; a target avatar selected by the user from the recommended avatars is received; and after the text content is sent, the avatar used by the social application's login account is switched to the target avatar. The avatar associated with the social application account can thus change dynamically according to the user's first, second or third tag, so it is no longer limited to the avatar the user originally set; the account's avatar changes dynamically, meeting users' personalization needs and improving the user experience.
Drawings
FIG. 1 is a flowchart illustrating steps of a method for selecting an avatar according to a first embodiment of the present invention;
FIG. 2 is a flowchart illustrating steps of a method for selecting an avatar according to a second embodiment of the present invention;
fig. 3 is a block diagram of a mobile terminal according to a third embodiment of the present invention;
fig. 4 is a block diagram of a mobile terminal according to a fourth embodiment of the present invention;
fig. 5 is a schematic diagram of a hardware structure of a mobile terminal according to a fifth embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example one
Referring to fig. 1, a flowchart illustrating steps of an avatar selection method according to a first embodiment of the present invention is shown.
The head portrait selection method provided by the embodiment of the invention comprises the following steps:
step 101: and in the process of inputting characters by a user through the social application program, generating a first label according to the displayed interface.
The social application can be QQ, WeChat, Baidu Tieba, a forum, or another social application; the first tag is generated from the currently displayed interface.
For example, if the currently displayed interface is a chat interface, the first tag can be "chat"; if it is a forum reply interface, the first tag can be "forum reply".
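As an illustrative sketch (not part of the patent), step 101 can be modeled as a lookup from the kind of interface currently displayed to a context tag; the interface names and tag strings below are assumptions made for the example.

```python
# Hypothetical mapping from the currently displayed interface to a first tag.
# The keys and values are illustrative, not specified by the patent.
INTERFACE_TAGS = {
    "chat": "chat",
    "forum_reply": "forum reply",
    "moments": "moments",
}

def first_tag(interface_kind):
    """Map the currently displayed interface to a context tag, or None if unknown."""
    return INTERFACE_TAGS.get(interface_kind)
```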
Step 102: and generating a second label according to the geographical position information of the mobile terminal.
A positioning system of the mobile terminal is invoked to determine the current geographical position, for example the Global Positioning System (GPS); a second tag, such as "company" or "home", is then generated from that position information.
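One plausible way to turn a GPS fix into the second tag — a sketch under assumptions, since the patent does not specify the mechanism — is to keep a small table of geofences and return the tag of the fence containing the position. The coordinates, radii, and tag names below are invented for the example.

```python
import math

# Hypothetical geofence table: (lat, lon) center, radius in metres, tag.
GEOFENCES = [
    ((39.9042, 116.4074), 200.0, "company"),
    ((39.9900, 116.3200), 200.0, "home"),
]

def haversine_m(a, b):
    """Great-circle distance in metres between two (lat, lon) pairs."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * 6371000 * math.asin(math.sqrt(h))

def second_tag(position):
    """Return the tag of the first geofence containing the position, else None."""
    for center, radius, tag in GEOFENCES:
        if haversine_m(position, center) <= radius:
            return tag
    return None
```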
Step 103: and generating a third label according to the text content input by the user.
When the user inputs text content in the input area of the social application, semantic recognition is performed on the input text, and the mood tag corresponding to it, i.e. the third tag, is determined from the semantic understanding; the third tag can be "happy", "sad", "depressed", and the like.
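The semantic-recognition step can be stood in for by a minimal keyword classifier — a sketch only; a production system would use a real NLP model, and the keyword lists here are illustrative assumptions, not from the patent.

```python
# Hypothetical keyword lists per mood tag (assumed for illustration).
MOOD_KEYWORDS = {
    "happy": {"great", "awesome", "yay", "happy"},
    "sad": {"sad", "unhappy", "miss"},
    "depressed": {"tired", "hopeless", "depressed"},
}

def third_tag(text):
    """Return the mood tag whose keywords match the input most often, else None."""
    words = text.lower().split()
    scores = {tag: sum(w in kws for w in words) for tag, kws in MOOD_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None
```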
Step 104: and displaying the recommended head portrait according to the first label, the second label and the third label.
The recommended avatars corresponding to the generated first, second and third tags are looked up in the avatar database and displayed on the display interface.
It should be noted that the avatar database is established before step 101; a plurality of avatars are preset in it, and each avatar may carry one or more tags.
When the avatars in the database carry multiple tags, the tag with the largest weight value among the first, second and third tags can be determined, and the avatars are selected from the avatar database according to that tag.
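The weight-based selection just described can be sketched as follows. The weight values and the avatar records are hypothetical — the patent does not fix concrete numbers or a database schema.

```python
# Hypothetical in-memory avatar database: each avatar carries a set of tags.
AVATAR_DB = [
    {"id": "a1", "tags": {"chat", "happy"}},
    {"id": "a2", "tags": {"company"}},
    {"id": "a3", "tags": {"happy"}},
]

def recommend(tag_weights, db=AVATAR_DB):
    """Pick the tag with the largest weight, then return the avatars carrying it."""
    top_tag = max(tag_weights, key=tag_weights.get)
    return [a["id"] for a in db if top_tag in a["tags"]]
```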
Step 105: and receiving a target avatar selected by the user in the recommended avatar, and switching the avatar used by the login account of the social application program into the target avatar after sending the text content.
The user selects among the displayed recommended avatars; the selected avatar is taken as the target avatar, which replaces the avatar used by the application's login account, and the switch is performed when the text is sent.
In the embodiment of the invention, a first label is generated according to a displayed interface in the process of inputting characters by a user through a social application program; generating a second label according to the geographical position information of the mobile terminal; generating a third label according to the text content input by the user; the method comprises the steps of displaying recommended head portraits according to a first label, a second label and a third label, receiving a target head portrait selected by a user in the recommended head portraits, switching the head portraits used by a login account of a social application program into the target head portraits after sending text content, and realizing that the head portraits corresponding to the account of the social application program can be dynamically changed according to the first label, the second label or the third label of the user, so that the head portraits corresponding to the social application program are not limited to the head portraits set by the user any more, the head portraits of the login account of the social application program can be dynamically changed, the requirements of personalized users are met, and the use experience of the users is improved.
Example two
Referring to fig. 2, a flowchart illustrating steps of a method for selecting an avatar according to a second embodiment of the present invention is shown.
The head portrait selection method provided by the embodiment of the invention comprises the following steps:
step 201: and in the process of inputting characters by a user through the social application program, generating a first label according to the displayed interface.
The social application can be QQ, WeChat, Baidu Tieba, a forum, or another social application; the first tag is generated from the currently displayed interface.
For example, if the currently displayed interface is a chat interface, the first tag can be "chat"; if it is a forum reply interface, the first tag can be "forum reply".
Step 202: and generating a second label according to the geographical position information of the mobile terminal.
A positioning system of the mobile terminal is invoked to determine the current geographical position, for example the Global Positioning System (GPS); a second tag, such as "company" or "home", is then generated from that position information.
Step 203: and generating a third label according to the text content input by the user.
When the user inputs text content in the input area of the social application, semantic recognition is performed on the input text, and the mood tag corresponding to it, i.e. the third tag, is determined from the semantic understanding; the third tag can be "happy", "sad", "depressed", and the like.
Step 204: and displaying the first label, the second label and the third label in the interface.
The first, second and third tags are displayed in the current interface for the user to select. They can be shown in a pop-up box, a floating window, or the like, and can appear at any position of the display interface; to avoid obscuring the main content of the interface, they are displayed in its edge area.
Step 205: and receiving a target label selected by a user from the first label, the second label and the third label.
A selection operation by the user on the first, second or third tag is received, and the selected tag is taken as the target tag, i.e. the tag the user currently cares about.
Step 206: and searching an avatar database for each avatar corresponding to the target tag.
The tags of each avatar in the avatar database are examined, and the avatars whose tags match the target tag are obtained.
Step 207: and respectively determining the match value of each avatar with the target tag.
It should be noted that, besides carrying different tags, each avatar in the avatar database also has a match ("coincidence") value for each tag; the match value of each avatar with the target tag is determined respectively. The match values may be preset for each tag, or computed for each avatar by an algorithm; the embodiment of the present invention is not specifically limited in this respect.
Step 208: and ranking the avatars from highest to lowest match value.
The avatars are ranked from highest to lowest match value with the target tag. Because many avatars may carry the target tag, outputting and displaying all of them would force the user to sift through a large set, making the operation cumbersome; ranking by match value avoids this, which is why the ranking step is needed.
Step 209: and determining the head portraits ranked in the top preset number as the recommended head portraits.
It should be noted that, a person skilled in the art may set the preset number according to actual situations, where the preset number may be 3, 5, 7, and the like, and the embodiment of the present invention is not limited in this regard.
Step 210: and displaying each recommended head portrait.
Step 211: and receiving a target avatar selected by the user in the recommended avatar, and switching the avatar used by the login account of the social application program into the target avatar after sending the text content.
The preset number of recommended avatars are output and displayed; the user selects one according to preference, and after the text is sent, the avatar used by the current social application login account is directly replaced with the selected target avatar.
Step 212: and when the selection operation of the user on the recommended head portrait is not received within a preset time period, determining the head portrait with the highest coincidence value with the target label as the target head portrait.
It should be noted that, a person skilled in the art may set the preset time period according to an actual situation, where the preset time period may be set to 5s, 10s, 15s, and the like, and the embodiment of the present invention is not limited thereto.
When no selection operation by the user on the displayed avatars is received within the preset time period, the avatar with the highest match value is directly taken as the target avatar, and after the text is sent, the avatar used by the social application login account is directly replaced with it.
Step 213: and after the text content is sent, switching the head portrait used by the login account of the social application program into a target head portrait.
In the embodiment of the invention, a first tag is generated from the displayed interface while the user inputs text through a social application; a second tag is generated from the mobile terminal's geographical position; a third tag is generated from the text content input by the user; recommended avatars are displayed according to the first, second and third tags; a target avatar selected by the user from the recommended avatars is received; and after the text content is sent, the avatar used by the social application's login account is switched to the target avatar. The avatar associated with the social application account can thus change dynamically according to the user's first, second or third tag, so it is no longer limited to the avatar the user originally set; the account's avatar changes dynamically, meeting users' personalization needs and improving the user experience. In addition, by receiving the user's selection among the tags, the target tag the user needs is determined, and the avatars to be displayed are chosen according to their match values with that tag, letting the user select quickly from a limited number of avatars and further improving the user experience.
EXAMPLE III
Referring to fig. 3, a block diagram of a mobile terminal according to a third embodiment of the present invention is shown.
The mobile terminal provided by the embodiment of the invention comprises: the first generation module 301 is configured to generate a first tag according to a displayed interface in a process of inputting characters by a user through a social application program; a second generating module 302, configured to generate a second tag according to the geographic location information of the mobile terminal; a third generating module 303, configured to generate a third tag according to the text content input by the user; a display module 304, configured to display the recommended avatar according to the first tag, the second tag, and the third tag; a first switching module 305, configured to receive a target avatar selected by the user from the recommended avatar, and switch the avatar used by the social application login account to the target avatar after sending the text content.
The first generation module generates a first tag from the displayed interface while the user inputs text through the social application; for example, if the currently displayed interface is a chat interface the first tag can be "chat", and if it is a forum reply interface the first tag can be "forum reply". The second generation module generates a second tag from the mobile terminal's geographical position: a positioning system of the terminal, for example the Global Positioning System (GPS), is invoked to determine the current position, and a second tag such as "company" or "home" is generated from it. The third generation module generates a third tag from the text content input by the user: when the user inputs text in the input area, semantic recognition is performed on it and the corresponding mood tag, i.e. the third tag, is determined; the third tag can be "happy", "sad", "depressed", and the like. The display module then displays the recommended avatars according to the generated first, second and third tags. The first switching module receives the target avatar selected by the user from the recommended avatars and, after the text content is sent, switches the avatar used by the social application's login account to the target avatar.
In the embodiment of the invention, a first tag is generated from the displayed interface while the user inputs text through a social application; a second tag is generated from the mobile terminal's geographical position; a third tag is generated from the text content input by the user; recommended avatars are displayed according to the first, second and third tags; a target avatar selected by the user from the recommended avatars is received; and after the text content is sent, the avatar used by the social application's login account is switched to the target avatar. The avatar associated with the social application account can thus change dynamically according to the user's first, second or third tag, so it is no longer limited to the avatar the user originally set; the account's avatar changes dynamically, meeting users' personalization needs and improving the user experience.
Example four
Referring to fig. 4, a block diagram of a mobile terminal according to a fourth embodiment of the present invention is shown.
The mobile terminal provided by the embodiment of the invention comprises: the first generation module 401 is configured to generate a first tag according to a displayed interface in a process of inputting characters by a user through a social application program; a second generating module 402, configured to generate a second tag according to the geographic location information of the mobile terminal; a third generating module 403, configured to generate a third tag according to the text content input by the user; a display module 404, configured to display a recommended avatar according to the first tag, the second tag, and the third tag; a first switching module 405, configured to receive a target avatar selected by the user from the recommended avatar, and switch the avatar used by the social application login account to the target avatar after sending the text content.
Preferably, the display module 404 includes: the first display sub-module 4041 is configured to display the first tag, the second tag, and the third tag in an interface; a receiving sub-module 4042, configured to receive a target tag selected by a user from the first tag, the second tag, and the third tag; and the second display sub-module 4043 is configured to search and display each recommended avatar corresponding to the target tag in the avatar database.
Preferably, the second display sub-module 4043 includes: a searching unit 40431, configured to search the avatar database for each avatar corresponding to the target tag; a first determining unit 40432, configured to determine the match value of each avatar with the target tag; a ranking unit 40433, configured to rank the avatars from highest to lowest match value; a second determining unit 40434, configured to determine a preset number of top-ranked avatars as the recommended avatars; and a display unit 40435, configured to display each recommended avatar.
Preferably, the mobile terminal further includes: a determining module 406, configured to, after the first switching module 405 receives the target avatar selected by the user from the recommended avatars and switches the avatar used by the social application login account to the target avatar after the text content is sent, determine the avatar with the highest match value with the target tag as the target avatar when no selection operation by the user on the recommended avatars is received within a preset time period; and a second switching module 407, configured to switch the avatar used by the social application login account to the target avatar after the text content is sent.
The mobile terminal provided in the embodiment of the present invention can implement each process implemented by the mobile terminal in the method embodiments of fig. 1 to fig. 2, and is not described herein again to avoid repetition.
In the embodiment of the invention, a first tag is generated from the displayed interface while the user inputs text through a social application; a second tag is generated from the mobile terminal's geographical position; a third tag is generated from the text content input by the user; recommended avatars are displayed according to the first, second and third tags; a target avatar selected by the user from the recommended avatars is received; and after the text content is sent, the avatar used by the social application's login account is switched to the target avatar. The avatar associated with the social application account can thus change dynamically according to the user's first, second or third tag, so it is no longer limited to the avatar the user originally set; the account's avatar changes dynamically, meeting users' personalization needs and improving the user experience. In addition, by receiving the user's selection among the tags, the target tag the user needs is determined, and the avatars to be displayed are chosen according to their match values with that tag, letting the user select quickly from a limited number of avatars and further improving the user experience.
EXAMPLE five
Referring to fig. 5, a hardware structure diagram of a mobile terminal for implementing various embodiments of the present invention is shown.
The mobile terminal 500 includes, but is not limited to: a radio frequency unit 501, a network module 502, an audio output unit 503, an input unit 504, a sensor 505, a display unit 506, a user input unit 507, an interface unit 508, a memory 509, a processor 510, and a power supply 511. Those skilled in the art will appreciate that the mobile terminal architecture shown in fig. 5 is not intended to be limiting of mobile terminals, and that a mobile terminal may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the mobile terminal includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
A processor 510, configured to generate a first tag according to a displayed interface in a process of inputting a text by a user through a social application program; generating a second label according to the geographical position information of the mobile terminal; generating a third label according to the text content input by the user; displaying a recommended head portrait according to the first label, the second label and the third label; and receiving a target avatar selected by the user in the recommended avatar, and switching the avatar used by the login account of the social application program into the target avatar after the text content is sent.
In the embodiment of the invention, while the user inputs text through a social application program, a first label is generated according to the displayed interface, a second label is generated according to the geographical position information of the mobile terminal, and a third label is generated according to the text content input by the user; recommended head portraits are displayed according to the first, second and third labels; a target head portrait selected by the user from the recommended head portraits is received; and after the text content is sent, the head portrait used by the login account of the social application program is switched to the target head portrait. The head portrait corresponding to the account can thus be changed dynamically according to the first, second or third label, so that it is no longer limited to the head portrait originally set by the user, meeting users' personalization needs and improving the user experience.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 501 may be used for receiving and sending signals during message transceiving or a call. Specifically, it receives downlink data from a base station and forwards the data to the processor 510 for processing, and it transmits uplink data to the base station. In general, the radio frequency unit 501 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 501 can also communicate with a network and other devices through a wireless communication system.
The mobile terminal provides the user with wireless broadband internet access through the network module 502, such as helping the user send and receive e-mails, browse webpages, access streaming media, and the like.
The audio output unit 503 may convert audio data received by the radio frequency unit 501 or the network module 502, or stored in the memory 509, into an audio signal and output it as sound. The audio output unit 503 may also provide audio output related to a specific function performed by the mobile terminal 500 (e.g., a call signal reception sound or a message reception sound). The audio output unit 503 includes a speaker, a buzzer, a receiver, and the like.
The input unit 504 is used to receive an audio or video signal. The input unit 504 may include a graphics processing unit (GPU) 5041 and a microphone 5042. The graphics processor 5041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 506, stored in the memory 509 (or another storage medium), or transmitted via the radio frequency unit 501 or the network module 502. The microphone 5042 can receive sound and process it into audio data; in the phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 501 and output.
The mobile terminal 500 also includes at least one sensor 505, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that adjusts the brightness of the display panel 5061 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 5061 and/or a backlight when the mobile terminal 500 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of the mobile terminal (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 505 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 506 is used to display information input by the user or information provided to the user. The display unit 506 may include a display panel 5061, which may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like.
The user input unit 507 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 507 includes a touch panel 5071 and other input devices 5072. The touch panel 5071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations performed on or near the touch panel 5071 with a finger, a stylus, or any suitable object or attachment). The touch panel 5071 may include a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 510, and receives and executes commands sent by the processor 510. In addition, the touch panel 5071 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 5071, the user input unit 507 may include other input devices 5072. In particular, the other input devices 5072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 5071 may be overlaid on the display panel 5061. When the touch panel 5071 detects a touch operation on or near it, the operation is transmitted to the processor 510 to determine the type of touch event, and the processor 510 then provides a corresponding visual output on the display panel 5061 according to that type. Although in fig. 5 the touch panel 5071 and the display panel 5061 are shown as two independent components implementing the input and output functions of the mobile terminal, in some embodiments the touch panel 5071 and the display panel 5061 may be integrated to implement these functions; this is not limited herein.
The interface unit 508 is an interface through which an external device is connected to the mobile terminal 500. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 508 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the mobile terminal 500 or may be used to transmit data between the mobile terminal 500 and external devices.
The memory 509 may be used to store software programs as well as various data. The memory 509 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data created according to the use of the mobile phone (such as audio data and a phonebook), and the like. Further, the memory 509 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The processor 510 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by operating or executing software programs and/or modules stored in the memory 509 and calling data stored in the memory 509, thereby performing overall monitoring of the mobile terminal. Processor 510 may include one or more processing units; preferably, the processor 510 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 510.
The mobile terminal 500 may further include a power supply 511 (e.g., a battery) for supplying power to various components, and preferably, the power supply 511 may be logically connected to the processor 510 via a power management system, so that functions of managing charging, discharging, and power consumption are performed via the power management system.
In addition, the mobile terminal 500 includes some functional modules that are not shown, and thus, are not described in detail herein.
Preferably, an embodiment of the present invention further provides a mobile terminal, including a processor 510, a memory 509, and a computer program stored in the memory 509 and executable on the processor 510. When executed by the processor 510, the computer program implements the processes of the above avatar selection method embodiment and achieves the same technical effects; to avoid repetition, details are not described here again.
The embodiment of the present invention further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements each process of the above avatar selection method embodiment and achieves the same technical effects; to avoid repetition, details are not described here again. The computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a …" does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. A head portrait selection method, applied to a mobile terminal, characterized by comprising the following steps:
generating a first label according to a displayed interface in the process of inputting characters by a user through a social application program;
generating a second label according to the geographical position information of the mobile terminal;
generating a third label according to the text content input by the user;
displaying a recommended head portrait according to the first label, the second label and the third label;
and receiving a target avatar selected by the user in the recommended avatar, and switching the avatar used by the login account of the social application program into the target avatar after the text content is sent.
2. The method of claim 1, wherein the step of displaying a recommended avatar based on the first, second, and third tags comprises:
displaying the first label, the second label and the third label in an interface;
receiving a target label selected by a user from the first label, the second label and the third label;
and searching and displaying each recommended head portrait corresponding to the target label in a head database.
3. The method of claim 2, wherein the step of searching and displaying the recommended head portraits corresponding to the target tags in the head database comprises:
searching each head portrait corresponding to the target label in a head database;
respectively determining the coincidence value of each head portrait and the target label;
sorting the head portraits in descending order of their coincidence values;
determining a preset number of top-ranked head portraits as the recommended head portraits;
and displaying each recommended head portrait.
4. The method of claim 3, wherein after the step of receiving a target avatar selected by the user from the recommended avatar and switching the avatar used by the social application login account to the target avatar after sending the textual content, the method further comprises:
when no selection operation by the user on the recommended head portraits is received within a preset time period, determining the head portrait having the highest coincidence value with the target label as the target head portrait;
and after the text content is sent, switching the head portrait used by the login account of the social application program into the target head portrait.
5. A mobile terminal, characterized in that the mobile terminal comprises:
the first generation module is used for generating a first label according to the displayed interface in the process of inputting characters by a user through a social application program;
the second generation module is used for generating a second label according to the geographical position information of the mobile terminal;
the third generation module is used for generating a third label according to the text content input by the user;
the display module is used for displaying the recommended head portrait according to the first label, the second label and the third label;
and the first switching module is used for receiving the target head portrait selected by the user in the recommended head portrait and switching the head portrait used by the login account of the social application program into the target head portrait after the text content is sent.
6. The mobile terminal of claim 5, wherein the display module comprises:
the first display submodule is used for displaying the first label, the second label and the third label in an interface;
the receiving submodule is used for receiving a target label selected by a user from the first label, the second label and the third label;
and the second display sub-module is used for searching and displaying each recommended head portrait corresponding to the target tag in the head database.
7. The mobile terminal of claim 6, wherein the second display sub-module comprises:
the searching unit is used for searching each head portrait corresponding to the target label in a head database;
a first determining unit, configured to determine a coincidence value between each of the head portraits and the target label;
a sorting unit, configured to sort the head portraits in descending order of their coincidence values;
a second determining unit, configured to determine a preset number of top-ranked head portraits as the recommended head portraits;
and the display unit is used for displaying each recommended head portrait.
8. The mobile terminal of claim 7, wherein the mobile terminal further comprises:
a determining module, configured to determine, after the first switching module receives the target head portrait selected by the user from the recommended head portraits, the head portrait having the highest coincidence value with the target label as the target head portrait when no selection operation by the user on the recommended head portraits is received within a preset time period;
and the second switching module is used for switching the head portrait used by the login account of the social application program into the target head portrait after the text content is sent.
9. A mobile terminal, characterized in that it comprises a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the avatar selection method according to any of claims 1 to 4.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the avatar selection method according to any one of claims 1 to 4.
CN201811103418.XA 2018-09-20 2018-09-20 Head portrait selection method and mobile terminal Active CN109388456B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811103418.XA CN109388456B (en) 2018-09-20 2018-09-20 Head portrait selection method and mobile terminal

Publications (2)

Publication Number Publication Date
CN109388456A CN109388456A (en) 2019-02-26
CN109388456B (en) 2021-12-07

Family

ID=65417675

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811103418.XA Active CN109388456B (en) 2018-09-20 2018-09-20 Head portrait selection method and mobile terminal

Country Status (1)

Country Link
CN (1) CN109388456B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111078341A (en) * 2019-12-04 2020-04-28 维沃移动通信有限公司 Account head portrait setting method and electronic equipment
CN111176510A (en) * 2019-12-30 2020-05-19 上海连尚网络科技有限公司 Method and apparatus for changing head portrait
CN115309302A (en) * 2021-05-06 2022-11-08 阿里巴巴新加坡控股有限公司 Icon display method, device, program product and storage medium
CN113395201B (en) * 2021-06-10 2024-02-23 广州繁星互娱信息科技有限公司 Head portrait display method, device, terminal and server in chat session
CN114040216B (en) * 2021-11-03 2023-07-11 杭州网易云音乐科技有限公司 Live broadcast room recommendation method, medium, device and computing equipment

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102215302A (en) * 2011-05-28 2011-10-12 华为技术有限公司 Contact photo providing method, management platform and user terminal
CN103412885A (en) * 2013-07-18 2013-11-27 中国联合网络通信集团有限公司 Contact person photo setting method and device
CN105141507A (en) * 2015-08-26 2015-12-09 努比亚技术有限公司 Method and device for displaying head portrait for social application
CN105391676A (en) * 2014-09-05 2016-03-09 腾讯科技(深圳)有限公司 Instant communication message processing method, device and system
CN105429847A (en) * 2014-09-22 2016-03-23 中国移动通信集团天津有限公司 Client side display head portrait setting method and device
CN105959203A (en) * 2016-04-22 2016-09-21 北京小米移动软件有限公司 Portrait-setting method and device
CN106407436A (en) * 2016-09-27 2017-02-15 维沃移动通信有限公司 Communication account number head portrait processing method and mobile terminal
CN106453778A (en) * 2016-09-27 2017-02-22 维沃移动通信有限公司 Contact avatar setting method and mobile terminal
CN106506805A (en) * 2016-09-29 2017-03-15 乐视控股(北京)有限公司 Head portrait of contact person generation method and device
CN106790920A (en) * 2016-12-20 2017-05-31 北京小米移动软件有限公司 Head portrait picture method to set up and device
CN107247549A (en) * 2017-06-16 2017-10-13 北京小米移动软件有限公司 Obtain method, device, terminal and the storage medium of user's head portrait
CN107527072A (en) * 2017-08-31 2017-12-29 北京小米移动软件有限公司 Determine method and device, the electronic equipment of similar head portrait
CN107659611A (en) * 2017-08-14 2018-02-02 北京五八信息技术有限公司 User's head portrait generation method, device and system based on big data
CN107959893A (en) * 2017-12-05 2018-04-24 广州酷狗计算机科技有限公司 The method and apparatus for showing account head portrait
CN108196751A (en) * 2018-01-08 2018-06-22 深圳天珑无线科技有限公司 Update method, terminal and the computer readable storage medium of group chat head portrait

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101599017A (en) * 2009-07-14 2009-12-09 阿里巴巴集团控股有限公司 A kind of generation mthods, systems and devices of head image of network user
CN103905594B (en) * 2014-03-28 2019-01-15 联想(北京)有限公司 A kind of information processing method and electronic equipment
CN105094513B (en) * 2014-05-23 2019-05-28 腾讯科技(北京)有限公司 User's head portrait setting method, device and electronic equipment
US20160028666A1 (en) * 2014-07-24 2016-01-28 Framy Inc. System and method for instant messaging
CN106210267A (en) * 2016-06-21 2016-12-07 珠海市魅族科技有限公司 The management method of contact head image, managing device and server
US10521503B2 (en) * 2016-09-23 2019-12-31 Qualtrics, Llc Authenticating a respondent to an electronic survey
CN107203306A (en) * 2017-05-03 2017-09-26 北京小米移动软件有限公司 Head portrait processing method and processing device
CN107181673A (en) * 2017-06-08 2017-09-19 腾讯科技(深圳)有限公司 Instant communicating method and device, computer equipment and storage medium
CN107580111B (en) * 2017-08-17 2019-10-25 努比亚技术有限公司 Contact head image generation method, terminal and computer readable storage medium

Also Published As

Publication number Publication date
CN109388456A (en) 2019-02-26

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant