CN110955787B - User head portrait setting method, computer equipment and computer readable storage medium - Google Patents


Info

Publication number
CN110955787B
CN110955787B (application CN201911099252.3A)
Authority
CN
China
Prior art keywords
user
content
multimedia data
data
head portrait
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911099252.3A
Other languages
Chinese (zh)
Other versions
CN110955787A (en)
Inventor
杨京沅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Lianshang Network Technology Co Ltd
Original Assignee
Shanghai Lianshang Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Lianshang Network Technology Co Ltd filed Critical Shanghai Lianshang Network Technology Co Ltd
Priority to CN201911099252.3A priority Critical patent/CN110955787B/en
Publication of CN110955787A publication Critical patent/CN110955787A/en
Application granted granted Critical
Publication of CN110955787B publication Critical patent/CN110955787B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/43Querying
    • G06F16/435Filtering based on additional data, e.g. user or group profiles
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9535Search customisation based on user profiles and personalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures

Abstract

The application provides a user avatar setting method, a computer device, and a computer-readable storage medium. Content of interest to the user is obtained through the user's interactive operation on the user avatar setting interface of a specified application; at least one piece of multimedia data is then obtained according to that content, so that the multimedia data can be output for the user to set the user avatar of the specified application.

Description

User head portrait setting method, computer equipment and computer readable storage medium
[ field of technology ]
The present invention relates to internet technology, and in particular to a method for setting a user avatar, a computer device, and a computer-readable storage medium.
[ background Art ]
With the deep development of the internet, terminals can integrate more and more functions, and applications (APPs) for terminals have proliferated. To enhance the user's personalized experience, most applications involve setting a user avatar.
In general, a user selects a favorite image from an image library and sets it as the user avatar. This requires the user to browse the images in the library one by one; the operation is cumbersome and time-consuming, which reduces setting efficiency.
[ invention ]
Aspects of the present application provide a user avatar setting method, a computer device, and a computer-readable storage medium for improving user avatar setting efficiency.
In one aspect of the present application, a method for setting a user avatar is provided, including:
responding to an interactive operation by a user on a user avatar setting interface of a specified application, and acquiring content of interest to the user;
obtaining at least one piece of multimedia data according to the content of interest; and
outputting the at least one piece of multimedia data for the user to set the user avatar of the specified application.
In another aspect of the present application, there is provided a computer apparatus, the apparatus comprising:
one or more processors;
storage means for storing one or more programs,
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement a method of setting a user avatar as provided in the above aspect.
In another aspect of the present application, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method of setting a user avatar as provided in the above aspect.
As can be seen from the above technical solutions, in the embodiments of the present application, content of interest to the user is obtained in response to the user's interactive operation on the user avatar setting interface of a specified application; at least one piece of multimedia data is then obtained according to that content, so that the multimedia data can be output for the user to set the user avatar of the specified application.
In addition, by adopting the technical scheme provided by the application, the experience of the user can be effectively improved.
[ description of the drawings ]
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show some embodiments of the present application, and other drawings can be derived from them by a person skilled in the art without inventive effort.
Fig. 1 is a flow chart of a method for setting a user avatar according to an embodiment of the present application;
fig. 2 is a block diagram of an exemplary computer system/server 12 suitable for use in implementing embodiments of the present application.
[ detailed description ] of the invention
To make the objects, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions are described below completely with reference to the drawings in the embodiments. Obviously, the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on these embodiments without inventive effort fall within the scope of the present application.
It should be noted that, the terminals referred to in the embodiments of the present application may include, but are not limited to, mobile phones, personal digital assistants (Personal Digital Assistant, PDA), wireless handheld devices, tablet computers (Tablet computers), personal computers (Personal Computer, PC), MP3 players, MP4 players, wearable devices (e.g., smart glasses, smart watches, smart bracelets, etc.), and so on.
In addition, the term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone. The character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship.
Fig. 1 is a flowchart of a method for setting a user avatar according to an embodiment of the present application. As shown in fig. 1, the method includes the following steps.
101. In response to an interactive operation by the user on the user avatar setting interface of a specified application, acquire content of interest to the user.
The user avatar setting interface is the application interface on which the user avatar of an application is set.
102. Obtain at least one piece of multimedia data according to the content of interest.
103. Output the at least one piece of multimedia data for the user to set the user avatar of the specified application.
In the present application, the word "specified" in "specified application" has no special meaning; it merely identifies the current operation object, so the specified application may be any ordinary application.
The execution bodies of steps 101 to 103 may be part or all of an application on the local terminal; a functional unit such as a plug-in or software development kit (Software Development Kit, SDK) provided in such an application; a processing engine in a server on the network side; or a distributed system on the network side, for example a processing engine or distributed system in a network-side application-processing platform. This embodiment is not particularly limited in this respect.
It is to be understood that the application may be a native program (native app) installed on the terminal, or may also be a web page program (webApp) of a browser on the terminal, which is not particularly limited in this embodiment.
In this way, content of interest to the user is obtained in response to the user's interactive operation on the user avatar setting interface of the specified application, and at least one piece of multimedia data is then obtained according to that content, so that the multimedia data can be output for the user to set the user avatar of the specified application.
Optionally, in a possible implementation of this embodiment, before step 101, the user's interactive operation on the user avatar setting interface of the specified application may be acquired. The interactive operation may include, but is not limited to, an interactive operation gesture, which may include at least one of the following:
an operation gesture by the user on an interactive control of the user avatar setting interface;
a hover gesture by the user above the user avatar setting interface;
a contact gesture by the user on the user avatar setting interface; and
a movement trend of the terminal driven by the user while the user avatar setting interface is displayed.
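The gesture categories enumerated above could be distinguished with a simple dispatch over the raw interaction event. The enum names and event fields below are illustrative assumptions; the patent defines no event schema:

```python
from enum import Enum

class GestureType(Enum):
    """The four interaction-gesture categories (illustrative names)."""
    CONTROL_OPERATION = 1  # operating a control on the avatar-setting interface
    HOVER = 2              # hovering above the interface
    CONTACT = 3            # touching the interface
    DEVICE_MOTION = 4      # driving a movement trend of the terminal

def classify_interaction(event):
    """Map a raw interaction event (a dict of hypothetical fields) to one
    of the four gesture categories, or None if it matches none of them."""
    if event.get("control_id") is not None:
        return GestureType.CONTROL_OPERATION
    if event.get("hover_track"):
        return GestureType.HOVER
    if event.get("touch_track"):
        return GestureType.CONTACT
    if event.get("motion"):
        return GestureType.DEVICE_MOTION
    return None
```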
The operation gesture on an interactive control may refer to the user performing a click operation on an interactive control of the user avatar setting interface displayed by the terminal's display device. An interactive control, which may be composed of one or more page elements, is an object with which a user can interact to input or manipulate data. The click operation may be a trigger operation performed by controlling a cursor with an external input device such as a mouse or keyboard, or a touch operation performed with a touch input device such as a finger or stylus; this embodiment is not particularly limited in this respect.
For example, the user may perform a click operation on a setting control of the user avatar setting interface; after the click gesture on that control is acquired, the content of interest to the user may be acquired directly.
The hover gesture above the user avatar setting interface may refer to a hovering slide track performed by the user, within the acquisition range of the terminal's image sensor, above the interface displayed by the terminal's display device. The image sensor may be a charge-coupled device (Charge Coupled Device, CCD) sensor or a complementary metal-oxide-semiconductor (Complementary Metal-Oxide Semiconductor, CMOS) sensor; this embodiment is not particularly limited in this respect. The hovering slide track may include, but is not limited to, a straight line or a curve of any shape formed by a plurality of stay points corresponding to a plurality of continuous sliding events.
The contact gesture on the user avatar setting interface may refer to a contact slide track performed by the user on the interface displayed by the terminal's display device. Terminals can generally be classified into touch terminals and non-touch terminals according to whether the display device supports touch control. Specifically, the contact slide track on the interface displayed on the touch screen of a touch terminal can be detected. The track may include, but is not limited to, a straight line or a curve of any shape formed by a plurality of touch points corresponding to a plurality of continuous touch events. The gesture may be a click, a long press, or a slide on any area of the user avatar setting interface; this embodiment is not particularly limited in this respect.
For example, the user may perform a long-press operation on an arbitrary region of the user avatar setting interface; after the long-press gesture is acquired, an avatar-setting option may be output for the user to operate. After the user operates the option, the content of interest to the user can be acquired directly.
The movement trend of the terminal driven by the user may refer to the movement track, for example shaking or flipping, performed by the terminal while the user holds it and its display device shows the user avatar setting interface.
In a specific implementation process, a sensor device may be specifically used to detect an operation gesture of the user based on the user avatar setting interface. Specifically, the sensor device may include, but is not limited to, at least one of a gravity sensor, an acceleration sensor, a pressure sensor, an infrared sensor, a distance sensor, and an image sensor, which is not particularly limited in this embodiment.
The distance sensor may be an ultrasonic distance sensor, or may also be an infrared distance sensor, or may also be a laser distance sensor, or may also be a microwave distance sensor, which is not particularly limited in this embodiment. These distance sensors are all well known in the art, and detailed descriptions thereof can be found in the related art, and are not repeated here.
The image sensor may be a charge-coupled device (Charge Coupled Device, CCD) sensor or a complementary metal-oxide-semiconductor (Complementary Metal-Oxide Semiconductor, CMOS) sensor, which is not particularly limited in this embodiment.
In this implementation process, the detection of the operation gesture of the user based on the user avatar setting interface may specifically refer to detecting a start point, an end point, and a track formed from the start point to the end point of the operation gesture of the user based on the user avatar setting interface, or may further detect radian data corresponding to the track.
To achieve the above function, optionally, in one possible implementation of this embodiment, a specified operation may be preset before the user's interactive operation on the user avatar setting interface is acquired. Only when the acquired interactive operation satisfies the preset specified operation is the operation of acquiring the content of interest performed.
Taking an interactive operation gesture as an example: after the user's gesture on the user avatar setting interface is acquired, it can be compared with a preset specified gesture, and the operation of acquiring the content of interest is performed only when the acquired gesture matches the preset specified gesture.
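The gesture comparison above could be sketched as follows. The distance-based matching scheme and the tolerance value are assumptions, since the patent does not specify how a detected gesture is compared with the preset specified gesture:

```python
def matches_specified_gesture(track, specified, tolerance=20.0):
    """Compare a detected gesture track (a list of (x, y) points) with a
    preset specified gesture stored on the terminal. Hypothetical scheme:
    an average point-wise Euclidean distance below the tolerance counts as
    a match. A real implementation would first resample both tracks to a
    common length."""
    if not track or len(track) != len(specified):
        return False
    total = sum(((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
                for (x1, y1), (x2, y2) in zip(track, specified))
    return total / len(track) <= tolerance
```

Only when this returns True would the step of acquiring the content of interest proceed.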
In particular, the data specifying the gesture may be stored in a storage device of the terminal.
In one specific implementation, the storage device of the terminal may be a slow storage device, specifically a hard disk of a computer system, or the non-volatile storage of a mobile phone, i.e. physical memory, for example a read-only memory (Read-Only Memory, ROM), a memory card, etc.; this embodiment is not limited in this respect.
In another specific implementation, the storage device of the terminal may be a fast storage device, specifically the memory of a computer system, or the running memory of a mobile phone, i.e. system memory, for example a random access memory (Random Access Memory, RAM); this embodiment is not limited in this respect.
Optionally, in one possible implementation of this embodiment, in step 101, in response to the user's interactive operation on the user avatar setting interface of the specified application, the content of interest may be obtained according to the user's attribute data and/or historical operation data.
In a specific implementation, the user's attribute data may include, but is not limited to, at least one of the following:
gender;
age;
constellation;
educational background; and
occupation.
In another specific implementation, the historical operation data of the user may include, but is not limited to, at least one of the following data:
historical operation data of the user in the specified application;
historical operation data of the user on the terminal where the specified application is located; and
historical operation data of the user in applications other than the specified application on that terminal.
For example, according to book A read by the user in a recent period, the content of interest may be obtained as the main character of book A, the author of book A, the genre to which book A belongs, and so on.
Alternatively, for another example, if the user's gender is male and his age is 11, the content of interest may be obtained as web games, animation, and the like.
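Rules like the two examples above could be sketched as follows; the rule set, field names, and age cutoff are all hypothetical, since the patent describes only the examples, not a concrete rule engine:

```python
def infer_interest(attributes, history):
    """Combine user attribute data and historical operation data into a
    list of contents of interest, following the examples in the text."""
    interests = []
    # From reading history: the book's title, author, and genre,
    # as in the book-A example.
    for record in history:
        if record.get("type") == "read_book":
            interests += [record["title"], record["author"], record["genre"]]
    # From attribute data: a simple demographic rule, as in the
    # male, 11-year-old example (the age cutoff is an assumption).
    if attributes.get("gender") == "male" and attributes.get("age", 0) <= 12:
        interests += ["web game", "animation"]
    return interests
```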
Alternatively, in one possible implementation of this embodiment, in step 101, in response to the user's interactive operation on the user avatar setting interface of the specified application, a keyword provided by the user may be obtained and used as the content of interest.
In one specific implementation, a keyword input by the user may be obtained and used as the content of interest.
In another specific implementation, a keyword selected by the user may be obtained and used as the content of interest.
In this implementation, some recommended keywords, such as cartoon, Hong Kong drama, sunrise, etc., may be provided for the user to select.
Optionally, in one possible implementation of this embodiment, in step 102, matching may be performed in a multimedia library according to the content of interest to obtain a matching degree for each item in the library; at least one piece of multimedia data whose matching degree is greater than or equal to a preset matching threshold may then be obtained.
The matching degree is the degree to which a piece of multimedia data matches the content of interest: the better the match, the higher the matching degree, and vice versa.
Specifically, according to the content of interest and each piece of multimedia data in the library, the matching degree of each item can be obtained using existing text-to-multimedia matching algorithms.
Further, in step 103, the at least one piece of multimedia data may be output in descending order of matching degree, so that the user can set the user avatar of the specified application.
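The thresholding and descending-order output described above could look like the sketch below. The tag-overlap score is a naive stand-in for whatever text-to-multimedia matching algorithm is actually used, and all names are hypothetical:

```python
def rank_multimedia(content_of_interest, library, threshold=0.5):
    """Score each item in the multimedia library against the content of
    interest, keep items whose matching degree is at or above a preset
    matching threshold, and return their names in descending order of
    matching degree. The Jaccard overlap of tags is a placeholder score."""
    interest = set(content_of_interest)
    ranked = []
    for item in library:
        tags = set(item["tags"])
        union = interest | tags
        score = len(interest & tags) / len(union) if union else 0.0
        if score >= threshold:
            ranked.append((score, item["name"]))
    ranked.sort(key=lambda pair: pair[0], reverse=True)  # from large to small
    return [name for _, name in ranked]
```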
Optionally, in one possible implementation of this embodiment, in step 103, the at least one piece of multimedia data may be output on the user avatar setting interface.
After the multimedia data is output on the interface, one piece may be selected in response to the user's interactive operation on it, and the selected piece may then be set as the user's avatar.
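The final selection step is then straightforward; the profile structure below is a hypothetical stand-in for however the application stores the user avatar:

```python
def select_and_set_avatar(candidates, selected_index, user_profile):
    """In response to the user's interaction with the output multimedia
    data, set the selected piece as the user avatar of the application."""
    user_profile["avatar"] = candidates[selected_index]
    return user_profile
```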
Before one piece of multimedia data is selected in response to the user's interactive operation, that interactive operation based on the at least one piece of multimedia data may be acquired. The interactive operation may include, but is not limited to, an interactive operation gesture, which may include at least one of the following:
an operation gesture by the user on an interactive control corresponding to the at least one piece of multimedia data;
a hover gesture by the user above the at least one piece of multimedia data;
a contact gesture by the user on the at least one piece of multimedia data; and
a movement trend of the terminal driven by the user while the at least one piece of multimedia data is displayed.
For details, reference may be made to the content of the user interaction in the foregoing implementation.
Thus, user avatar setting of the specified application is completed.
Further, after the user avatar of the specified application is set, the effect of the set avatar can be displayed to the user by outputting it on the user avatar setting interface.
In this application, the multimedia data may include, but is not limited to, at least one of image data and video data, which is not particularly limited in this embodiment.
In this embodiment, content of interest to the user is obtained in response to the user's interactive operation on the user avatar setting interface of the specified application, and at least one piece of multimedia data is then obtained according to that content, so that the multimedia data can be output for the user to set the user avatar of the specified application.
In addition, by adopting the technical scheme provided by the application, the experience of the user can be effectively improved.
It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as a series of action combinations, but it should be understood by those skilled in the art that the present application is not limited by the order of actions described, as some steps may be performed in other order or simultaneously in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required in the present application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
Fig. 2 illustrates a block diagram of an exemplary computer system/server 12 suitable for use in implementing embodiments of the present application. The computer system/server 12 shown in FIG. 2 is merely an example and should not be taken as limiting the functionality and scope of use of embodiments of the present application.
As shown in FIG. 2, the computer system/server 12 is in the form of a general purpose computing device. Components of computer system/server 12 may include, but are not limited to: one or more processors or processing units 16, a storage device or system memory 28, a bus 18 that connects the various system components, including the system memory 28 and the processing units 16.
Bus 18 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include the Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
Computer system/server 12 typically includes a variety of computer system readable media. Such media can be any available media that is accessible by computer system/server 12 and includes both volatile and non-volatile media, removable and non-removable media.
The system memory 28 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 30 and/or cache memory 32. The computer system/server 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from or write to non-removable, nonvolatile magnetic media (not shown in FIG. 2, commonly referred to as a "hard disk drive"). Although not shown in fig. 2, a magnetic disk drive for reading from and writing to a removable non-volatile magnetic disk (e.g., a "floppy disk"), and an optical disk drive for reading from or writing to a removable non-volatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In such cases, each drive may be coupled to bus 18 through one or more data medium interfaces. The system memory 28 may include at least one program product having a set (e.g., at least one) of program modules configured to carry out the functions of the embodiments of the present application.
A program/utility 40 having a set (at least one) of program modules 42 may be stored in, for example, system memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment. Program modules 42 generally perform the functions and/or methods in the embodiments described herein.
The computer system/server 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), one or more devices that enable a user to interact with the computer system/server 12, and/or any devices (e.g., network card, modem, etc.) that enable the computer system/server 12 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 44. Also, the computer system/server 12 can communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN) and/or a public network, such as the Internet, through a network adapter 20. As shown, network adapter 20 communicates with other modules of computer system/server 12 via bus 18. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with computer system/server 12, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
The processing unit 16 executes various functional applications and data processing by running a program stored in the system memory 28, for example, to realize the setting method of the user avatar provided in any one of the embodiments corresponding to fig. 1.
Another embodiment of the present application further provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor, implements a method for setting a user avatar according to any one of the embodiments corresponding to fig. 1.
In particular, any combination of one or more computer readable media may be employed. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present application may be written in one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It will be clear to those skilled in the art that, for convenience and brevity of description, for the specific working procedures of the systems, apparatuses and units described above, reference may be made to the corresponding procedures in the foregoing method embodiments, which are not repeated herein.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; e.g., the division of the units is merely a logical function division, and there may be other divisions in actual implementation; e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling or direct coupling or communication connection shown or discussed between components may be an indirect coupling or communication connection via some interfaces, devices or units, and may be in electrical, mechanical or other form.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in hardware plus software functional units.
The integrated units implemented in the form of software functional units may be stored in a computer readable storage medium. Such a software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to perform part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Finally, it should be noted that the above embodiments are merely intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions.

Claims (12)

1. A method for setting a user avatar, comprising:
in response to a user's interactive operation on a user avatar setting interface of a specified application, acquiring content of interest to the user;
obtaining at least one item of multimedia data according to the content of interest to the user; and
outputting the at least one item of multimedia data on the user avatar setting interface for the user to set a user avatar of the specified application;
wherein, after the at least one item of multimedia data is output on the user avatar setting interface, the method further comprises:
in response to the user's interactive operation on the at least one item of multimedia data, selecting one item of multimedia data from the at least one item of multimedia data; and
setting the one item of multimedia data as the user avatar of the user.
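Purely as an illustration (and not part of the claims), the flow of claim 1 can be sketched as follows; every name here (AvatarInterface, fetch_matching_media, the toy media library) is a hypothetical stand-in, not an API defined by the patent:

```python
# Minimal, self-contained sketch of the claim-1 flow.
# All names and the in-memory library are illustrative assumptions.

class AvatarInterface:
    """Stands in for the user avatar setting interface of the application."""
    def __init__(self, interest, selection_index):
        self._interest = interest            # content the user indicated interest in
        self._selection_index = selection_index
        self.displayed = []

    def get_content_of_interest(self, user):
        return self._interest

    def display(self, candidates):
        # Output the candidate multimedia items on the interface.
        self.displayed = list(candidates)

    def await_selection(self, candidates):
        # The user's interactive operation picks one item.
        return candidates[self._selection_index]

def fetch_matching_media(interest):
    # Placeholder for querying a multimedia library by content of interest.
    library = {"cats": ["cat1.png", "cat2.png"], "cars": ["car1.png"]}
    return library.get(interest, [])

def set_user_avatar(user, interface):
    interest = interface.get_content_of_interest(user)   # acquire content of interest
    candidates = fetch_matching_media(interest)          # obtain multimedia data
    interface.display(candidates)                        # output on the interface
    chosen = interface.await_selection(candidates)       # user selects one item
    user["avatar"] = chosen                              # set it as the avatar
    return chosen

user = {"avatar": None}
ui = AvatarInterface(interest="cats", selection_index=1)
print(set_user_avatar(user, ui))  # prints cat2.png
```

The five statements in set_user_avatar correspond one-to-one to the acquiring, obtaining, outputting, selecting, and setting steps of claim 1.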
2. The method of claim 1, wherein, before the acquiring content of interest to the user in response to the user's interactive operation on the user avatar setting interface of the specified application, the method further comprises:
acquiring the user's interactive operation on the user avatar setting interface.
3. The method of claim 1, wherein, after setting the one item of multimedia data as the user avatar of the user, the method further comprises:
outputting the set user avatar of the user on the user avatar setting interface.
4. The method of claim 1, wherein the acquiring content of interest to the user comprises:
obtaining the content of interest to the user according to attribute data of the user and/or historical operation data of the user.
5. The method of claim 1, wherein the historical operation data of the user comprises at least one of:
historical operation data of the user using the specified application;
historical operation data of the user using the terminal on which the specified application is located; and
historical operation data of the user using applications other than the specified application on the terminal on which the specified application is located.
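As an illustrative sketch of claims 4 and 5 (not part of the claims), content of interest could be derived by combining attribute data with tags mined from the user's historical operation data; the weighting rule and field names below are assumptions:

```python
# Hypothetical derivation of content of interest from attribute data
# and historical operation data. The "hobbies" field and the +3 weight
# are illustrative assumptions, not details from the patent.

from collections import Counter

def content_of_interest(attributes, history, top_n=2):
    """Return the top-n interest tags for a user.

    attributes: dict of user attribute data (e.g. declared hobbies).
    history: list of tags from the user's historical operations in the
             specified application, the terminal, and other applications.
    """
    counts = Counter(history)
    # Declared attributes are weighted as strong signals.
    for tag in attributes.get("hobbies", []):
        counts[tag] += 3
    return [tag for tag, _ in counts.most_common(top_n)]

attributes = {"hobbies": ["photography"]}
history = ["cats", "cats", "travel", "cats", "travel"]
print(content_of_interest(attributes, history))  # prints ['cats', 'photography']
```

Counter.most_common orders equal counts by first encounter, so the frequently operated tag "cats" ranks ahead of the equally weighted declared hobby.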
6. The method of claim 1, wherein the acquiring content of interest to the user comprises:
acquiring a keyword provided by the user, and taking the keyword as the content of interest to the user.
7. The method of claim 6, wherein the acquiring a keyword provided by the user and taking the keyword as the content of interest to the user comprises:
acquiring a keyword input by the user, and taking the keyword as the content of interest to the user; or
acquiring a keyword selected by the user, and taking the keyword as the content of interest to the user.
8. The method of claim 1, wherein the obtaining at least one item of multimedia data according to the content of interest to the user comprises:
performing matching in a multimedia library according to the content of interest to the user, so as to obtain a matching degree for each content item in the multimedia library; and
obtaining, according to the matching degrees of the content items in the multimedia library, at least one item of multimedia data whose matching degree is greater than or equal to a preset matching threshold.
9. The method of claim 8, wherein the outputting the at least one item of multimedia data for the user to set the user avatar of the specified application comprises:
outputting the at least one item of multimedia data in descending order of matching degree, so that the user can set the user avatar of the specified application.
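A minimal sketch of the matching flow in claims 8 and 9, under assumed details (the tag-overlap score and the 0.5 threshold are illustrative, not from the patent): score each library item against the content of interest, keep items at or above the preset matching threshold, and output them in descending order of matching degree.

```python
# Hypothetical matching-and-ranking sketch for claims 8-9.
# The scoring rule and threshold value are illustrative assumptions.

MATCH_THRESHOLD = 0.5  # preset matching threshold (assumed value)

def matching_degree(interest_terms, item_tags):
    # Toy score: fraction of interest terms found among the item's tags.
    interest_terms = set(interest_terms)
    if not interest_terms:
        return 0.0
    return len(interest_terms & set(item_tags)) / len(interest_terms)

def rank_media(interest_terms, library):
    scored = [(matching_degree(interest_terms, tags), name)
              for name, tags in library.items()]
    # Keep items whose matching degree meets the preset threshold.
    kept = [(score, name) for score, name in scored if score >= MATCH_THRESHOLD]
    # Output in descending order of matching degree.
    kept.sort(key=lambda pair: pair[0], reverse=True)
    return [name for _, name in kept]

library = {
    "beach.jpg":  ["sea", "sun", "sand"],
    "forest.jpg": ["trees", "sun"],
    "city.jpg":   ["buildings"],
}
print(rank_media(["sun", "sea"], library))  # prints ['beach.jpg', 'forest.jpg']
```

Here "beach.jpg" scores 1.0 and "forest.jpg" scores exactly 0.5, so both pass the threshold and are output from largest to smallest matching degree, while "city.jpg" (0.0) is filtered out.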
10. The method according to any one of claims 1 to 9, wherein the multimedia data comprises at least one of image data and video data.
11. A computer device, comprising:
one or more processors; and
a storage means for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-10.
12. A computer readable storage medium, on which a computer program is stored, characterized in that the program, when executed by a processor, implements the method according to any of claims 1-10.
CN201911099252.3A 2019-11-12 2019-11-12 User head portrait setting method, computer equipment and computer readable storage medium Active CN110955787B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911099252.3A CN110955787B (en) 2019-11-12 2019-11-12 User head portrait setting method, computer equipment and computer readable storage medium


Publications (2)

Publication Number Publication Date
CN110955787A CN110955787A (en) 2020-04-03
CN110955787B true CN110955787B (en) 2024-03-12

Family

ID=69977281

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911099252.3A Active CN110955787B (en) 2019-11-12 2019-11-12 User head portrait setting method, computer equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN110955787B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105630846A (en) * 2014-11-19 2016-06-01 深圳市腾讯计算机系统有限公司 Head portrait updating method and apparatus
CN107481318A (en) * 2017-08-09 2017-12-15 广东欧珀移动通信有限公司 Replacement method, device and the terminal device of user's head portrait
CN107580111A (en) * 2017-08-17 2018-01-12 努比亚技术有限公司 Contact head image generation method, terminal and computer-readable recording medium
CN108009200A (en) * 2017-10-30 2018-05-08 努比亚技术有限公司 The method to set up and mobile terminal of contact image, computer-readable recording medium
CN109618018A (en) * 2018-12-17 2019-04-12 北京达佳互联信息技术有限公司 User's method for displaying head portrait, device, terminal, server and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8832552B2 (en) * 2008-04-03 2014-09-09 Nokia Corporation Automated selection of avatar characteristics for groups
US10210647B2 (en) * 2017-03-02 2019-02-19 International Business Machines Corporation Generating a personal avatar and morphing the avatar in time


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Niu Shuang. An analysis of the classification of social media avatars and the psychological factors behind them: the case of WeChat avatars. Communication & Copyright. 2019, (04), full text. *
Liu Mengjiao. A study on the legibility of avatar frame shapes in web interaction design: circles, squares and rounded rectangles as examples. Journal of Chifeng University (Natural Science Edition), (16), full text. *

Also Published As

Publication number Publication date
CN110955787A (en) 2020-04-03

Similar Documents

Publication Publication Date Title
RU2632144C1 (en) Computer method for creating content recommendation interface
US9152529B2 (en) Systems and methods for dynamically altering a user interface based on user interface actions
US8413075B2 (en) Gesture movies
US9423908B2 (en) Distinguishing between touch gestures and handwriting
WO2019184490A1 (en) Method for use in displaying icons of hosted applications, and device and storage medium
US20210256077A1 (en) Methods, devices and computer-readable storage media for processing a hosted application
WO2020200263A1 (en) Method and device for processing picture in information flow, and computer readable storage medium
WO2019205706A1 (en) Application component processing method, device, and computer readable storage medium
US10521101B2 (en) Scroll mode for touch/pointing control
CN106020434B (en) Method, device and product for man-machine interface device input fusion
WO2020221299A1 (en) Instant communication method and device, and computer-readable storage medium
US20210326151A1 (en) Methods, devices and computer-readable storage media for processing a hosted application
US9733826B2 (en) Interacting with application beneath transparent layer
EP3405869A1 (en) Method and an apparatus for providing a multitasking view
CN112667118A (en) Method, apparatus and computer readable medium for displaying historical chat messages
US20200356251A1 (en) Conversion of handwriting to text in text fields
CN104123069B (en) A kind of page control method by sliding, device and terminal device
US20150347364A1 (en) Highlighting input area based on user input
CN107291367B (en) Use method and device of eraser
WO2016057438A1 (en) Multiple stage user interface
CN110955787B (en) User head portrait setting method, computer equipment and computer readable storage medium
CN108874141B (en) Somatosensory browsing method and device
WO2019218684A1 (en) Application partition processing method, device, and computer readable storage medium
JP2019537785A (en) Information displayed while information scrolls on the terminal screen
US20190114131A1 (en) Context based operation execution

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant