CN113064981A - Group avatar generation method, device, equipment and storage medium - Google Patents

Group avatar generation method, device, equipment and storage medium

Info

Publication number
CN113064981A
CN113064981A (application CN202110328404.3A)
Authority
CN
China
Prior art keywords
group
information
avatar
target group
candidate text
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110328404.3A
Other languages
Chinese (zh)
Inventor
贺若昕
徐扬
伊成强
张博琳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202110328404.3A priority Critical patent/CN113064981A/en
Publication of CN113064981A publication Critical patent/CN113064981A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/332Query formulation
    • G06F16/3329Natural language query formulation or dialogue systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/04Real-time or near real-time messaging, e.g. instant messaging [IM]

Abstract

The disclosure relates to a method, apparatus, device and storage medium for generating a group avatar, and belongs to the field of computer technology. The embodiments of the disclosure provide a flexible group avatar generation method that introduces new reference factors, so that a more vivid avatar reflecting the group's properties can be generated. Specifically, the group's name, its type information, or the attribute information of its members is used as a factor for generating the avatar; based on these factors, candidate texts matching the group are obtained and recommended to the user, and the candidate text the user selects is used as a display element in the avatar. The generated avatar thus lets the group's properties be understood clearly and intuitively, and the generation effect is better.

Description

Group avatar generation method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, and a storage medium for generating a group avatar.
Background
With the development of computer technology, users can quickly contact and communicate with each other over a network, in life or at work, so as to collaborate or interact efficiently. To facilitate communication among multiple users, a group can be established; members of the group can post messages in the group and see the messages of other members, enabling more effective communication.
In the related art, a group avatar is typically generated based on the names of the group members when the group is established, so the avatar contains those names. However, this generation method is monotonous, and the generated avatar cannot reflect further information about the group, so the generation effect is poor.
Disclosure of Invention
The present disclosure provides a group avatar generation method, apparatus, device, and storage medium, which can generate a more vivid avatar and improve the generation effect. The technical solution of the present disclosure is as follows.
According to a first aspect of the embodiments of the present disclosure, there is provided a group avatar generating method, including:
in response to an avatar generation instruction for a target group, acquiring group information of the target group, the group information comprising at least one of a group name of the target group, type information of the target group, or attribute information of members in the target group;
acquiring at least one candidate text corresponding to the group information according to the group information of the target group;
displaying the at least one candidate text in a text candidate area of an avatar setting interface;
in response to a selection instruction for any candidate text, generating an avatar of the target group according to the selected candidate text, the avatar comprising the selected candidate text.
In some embodiments, acquiring at least one candidate text corresponding to the group information according to the group information of the target group includes:
matching the group information of the target group with candidate texts in a candidate text library to obtain at least one candidate text corresponding to the group information.
In some embodiments, matching the group information of the target group with the candidate texts in the candidate text library to obtain at least one candidate text corresponding to the group information includes any one of:
in response to the group information being the group name of the target group, performing semantic analysis on the group name, determining the matching degree between the group name and the candidate texts in the candidate text library, and obtaining, based on the matching degree, at least one candidate text corresponding to the group name from the candidate text library;
in response to the group information being the type information of the target group, obtaining at least one candidate text corresponding to the type information from the candidate text library as the at least one candidate text corresponding to the group information;
and in response to the group information being the attribute information of the members in the target group, obtaining at least one candidate text corresponding to the attribute information from the candidate text library as the at least one candidate text corresponding to the group information.
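The three matching branches above might be sketched as follows in Python. This is an illustrative assumption, not the patent's actual implementation: the candidate library, the keyword-overlap stand-in for "semantic analysis", and all function names are hypothetical.

```python
# Hypothetical candidate text library keyed by group type / attribute.
CANDIDATE_LIBRARY = {
    "work": ["Fighting!", "Project Team", "Deadline Heroes"],
    "family": ["Sweet Home", "Family Love"],
    "class": ["Class of 2021", "Study Hard"],
}

def match_by_name(group_name):
    """Crude stand-in for semantic analysis: score every candidate text
    by keyword overlap with the group name (the matching degree)."""
    name_tokens = set(group_name.lower().split())
    scored = []
    for texts in CANDIDATE_LIBRARY.values():
        for text in texts:
            overlap = len(name_tokens & set(text.lower().split()))
            scored.append((overlap, text))
    # Keep only candidates with a non-zero matching degree.
    return [(score, text) for score, text in scored if score > 0]

def match_by_key(key):
    """Type or attribute information maps directly to library entries."""
    return CANDIDATE_LIBRARY.get(key, [])

def get_candidates(group_info):
    """Dispatch on which kind of group information was acquired."""
    kind, value = group_info  # e.g. ("name", "Project Team Alpha")
    if kind == "name":
        return [text for _, text in match_by_name(value)]
    # "type" and "attribute" both fall back to a direct library lookup.
    return match_by_key(value)
```

For example, `get_candidates(("type", "family"))` looks up the library directly, while `get_candidates(("name", "Project Team Alpha"))` scores candidates against the name.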
In some embodiments, displaying the at least one candidate text in the text candidate area of the avatar setting interface includes:
determining the display order of the at least one candidate text according to the matching degree between the group information and the at least one candidate text;
and displaying the at least one candidate text in the text candidate area of the avatar setting interface in that display order.
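Ordering by matching degree can be sketched as a simple descending sort; the `(score, text)` pair format is an assumption carried over from the matching step, not specified by the patent.

```python
def display_order(scored_candidates):
    """Order candidate texts for the text candidate area: highest
    matching degree first. Python's sort is stable, so ties keep
    their original order. `scored_candidates` is a hypothetical
    list of (matching_degree, text) pairs."""
    ranked = sorted(scored_candidates, key=lambda pair: pair[0],
                    reverse=True)
    return [text for _, text in ranked]
```

So `display_order([(1, "a"), (3, "b"), (2, "c")])` yields `["b", "c", "a"]`.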
In some embodiments, generating the avatar of the target group according to the selected candidate text includes:
determining, according to the selected candidate text, the text content and its position in the avatar of the target group;
generating the avatar of the target group based on the text content and the position.
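Determining the text position might look like the sketch below, which centers the selected text on the avatar canvas. The canvas size and fixed-width character metrics are assumptions for illustration; a real implementation would measure the rendered font.

```python
def layout_text(text, avatar_size=(200, 200),
                char_width=12, char_height=16):
    """Center the selected candidate text in the avatar canvas using
    a fixed-width font approximation (all sizes are assumed values,
    not taken from the patent)."""
    width, height = avatar_size
    text_width = len(text) * char_width
    x = max((width - text_width) // 2, 0)   # clamp so text stays on canvas
    y = max((height - char_height) // 2, 0)
    return {"content": text, "position": (x, y)}
```

For a 4-character text on a 200x200 canvas this yields position (76, 92), i.e. the text block centered horizontally and vertically.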
In some embodiments, the method further comprises:
displaying at least one piece of candidate style information in the avatar setting interface;
generating the avatar of the target group based on the text content and the position then includes:
in response to a selection operation on any candidate style information, generating the avatar of the target group according to the selected candidate style information, based on the text content and the position.
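Combining the text layout with the selected candidate style information can be sketched as merging two records; the style presets and the dict-based "avatar" stand in for real image rendering and are purely illustrative.

```python
# Hypothetical candidate style information presented in the interface.
STYLE_PRESETS = {
    "classic": {"background": "#FFFFFF", "font_color": "#000000"},
    "night":   {"background": "#1E1E2E", "font_color": "#F5F5F5"},
}

def generate_avatar(layout, style_name="classic"):
    """Produce a final avatar description from the text layout
    (content + position) and the user-selected style. A dict stands
    in for the actual rendered image."""
    style = STYLE_PRESETS[style_name]
    return {**layout, **style}
```

E.g. `generate_avatar({"content": "Team", "position": (76, 92)}, "night")` keeps the text content and position and adds the night style's colors.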
In some embodiments, the method further comprises:
in response to an establishment instruction for the target group, establishing the target group;
displaying a group name modification control in a dialog interface of the target group;
in response to a touch operation on the group name modification control, displaying a group name setting interface on which an avatar generation control is displayed;
in response to a touch operation on the avatar generation control, acquiring group information of the target group;
and generating the avatar of the target group according to the group information.
In some embodiments, acquiring group information of a target group in response to an avatar generation instruction for the target group includes:
in response to a touch operation on a group setting control, displaying a setting interface of the target group, the setting interface displaying an avatar generation control;
and in response to a touch operation on the avatar generation control, acquiring the group information of the target group.
In some embodiments, the attribute information includes at least one of the organization to which a member belongs, the member's position, or the member's age.
According to a second aspect of the embodiments of the present disclosure, there is provided a group avatar generating apparatus, including:
an acquisition unit configured to acquire, in response to an avatar generation instruction for a target group, group information of the target group, the group information including at least one of a group name of the target group, type information of the target group, or attribute information of members in the target group;
the acquisition unit being further configured to acquire at least one candidate text corresponding to the group information according to the group information of the target group;
a display unit configured to display the at least one candidate text in a text candidate area of an avatar setting interface;
and a generating unit configured to generate, in response to a selection instruction for any candidate text, the avatar of the target group according to the selected candidate text, the avatar comprising the selected candidate text.
In some embodiments, the acquisition unit is configured to match the group information of the target group with candidate texts in a candidate text library to obtain at least one candidate text corresponding to the group information.
In some embodiments, the acquisition unit is configured to perform any one of:
in response to the group information being the group name of the target group, performing semantic analysis on the group name, determining the matching degree between the group name and the candidate texts in the candidate text library, and obtaining, based on the matching degree, at least one candidate text corresponding to the group name from the candidate text library;
in response to the group information being the type information of the target group, obtaining at least one candidate text corresponding to the type information from the candidate text library as the at least one candidate text corresponding to the group information;
and in response to the group information being the attribute information of the members in the target group, obtaining at least one candidate text corresponding to the attribute information from the candidate text library as the at least one candidate text corresponding to the group information.
In some embodiments, the display unit is configured to:
determine the display order of the at least one candidate text according to the matching degree between the group information and the at least one candidate text;
and display the at least one candidate text in the text candidate area of the avatar setting interface in that display order.
In some embodiments, the generating unit is configured to:
determine, according to the selected candidate text, the text content and its position in the avatar of the target group;
and generate the avatar of the target group based on the text content and the position.
In some embodiments, the display unit is further configured to display at least one piece of candidate style information in the avatar setting interface;
and the generating unit is configured to generate, in response to a selection operation on any candidate style information, the avatar of the target group according to the selected candidate style information, based on the text content and the position.
In some embodiments, the apparatus further comprises:
an establishing unit configured to establish the target group in response to an establishment instruction for the target group;
the display unit being further configured to display a group name modification control in a dialog interface of the target group;
the display unit being further configured to display, in response to a touch operation on the group name modification control, a group name setting interface on which an avatar generation control is displayed;
the acquisition unit being further configured to acquire group information of the target group in response to a touch operation on the avatar generation control;
and the generating unit being further configured to generate the avatar of the target group according to the group information.
In some embodiments, the acquisition unit is configured to:
display, in response to a touch operation on a group setting control, a setting interface of the target group, the setting interface displaying an avatar generation control;
and acquire, in response to a touch operation on the avatar generation control, the group information of the target group.
In some embodiments, the attribute information includes at least one of the organization to which a member belongs, the member's position, or the member's age.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
one or more processors;
one or more memories for storing the processor-executable program code;
wherein the one or more processors are configured to execute the program code to implement the group avatar generation method described above.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a storage medium, wherein when a program code in the storage medium is executed by a processor of an electronic device, the electronic device is enabled to execute the above group avatar generation method.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the group avatar generation method described above.
The embodiments of the disclosure provide a flexible group avatar generation method that introduces new reference factors, so that a more vivid avatar reflecting the group's properties can be generated. Specifically, the group's name, its type information, or the attribute information of its members is used as a factor for generating the avatar; based on these factors, candidate texts matching the group are obtained and recommended to the user, and the candidate text the user selects is used as a display element in the avatar. The generated avatar thus lets the group's properties be understood clearly and intuitively, and the generation effect is better.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
FIG. 1 is a block diagram illustrating the architecture of a group avatar generation system according to an exemplary embodiment;
FIG. 2 is a flow diagram illustrating a method of group avatar generation in accordance with an exemplary embodiment;
FIG. 3 is a flow diagram illustrating a method of group avatar generation in accordance with an exemplary embodiment;
FIG. 4 is a schematic diagram illustrating a terminal interface in accordance with an exemplary embodiment;
FIG. 5 is a schematic diagram illustrating a terminal interface in accordance with an exemplary embodiment;
FIG. 6 is a schematic diagram illustrating a terminal interface in accordance with an exemplary embodiment;
FIG. 7 is a schematic diagram illustrating a terminal interface in accordance with an exemplary embodiment;
FIG. 8 is a schematic diagram illustrating a terminal interface in accordance with an exemplary embodiment;
FIG. 9 is a schematic diagram illustrating a terminal interface in accordance with an exemplary embodiment;
FIG. 10 is a schematic diagram illustrating a terminal interface in accordance with an exemplary embodiment;
FIG. 11 is a schematic diagram illustrating a terminal interface in accordance with an exemplary embodiment;
FIG. 12 is a block diagram illustrating a group avatar generation apparatus according to an example embodiment;
FIG. 13 is a block diagram illustrating an electronic device in accordance with an exemplary embodiment;
FIG. 14 is a block diagram illustrating a terminal in accordance with an exemplary embodiment;
FIG. 15 is a block diagram illustrating a server in accordance with an exemplary embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Hereinafter, a hardware environment of the embodiments of the present disclosure is exemplified.
Fig. 1 is a block diagram illustrating a group avatar generation system according to an exemplary embodiment. The group avatar generation system comprises: an electronic device 101 and a group avatar generation platform 110.
The electronic device 101 is connected to the group avatar generation platform 110 through a wireless or wired network. The electronic device 101 may be at least one of a smartphone, a desktop computer, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, and a laptop computer.
The electronic device 101 is installed and running with an application that supports group avatar generation. The application may be an office application, a social application, a multimedia application, and the like. Illustratively, the electronic device 101 is a terminal used by a user, and a user account is logged in an application. The electronic device 101 is connected to the group avatar generation platform 110 through a wireless network or a wired network.
The group avatar generation platform 110 includes at least one of a server, multiple servers, a cloud computing platform, and a virtualization center. The group avatar generation platform 110 provides background services for the application. Optionally, during group avatar generation, the group avatar generation platform 110 and the electronic device 101 work together: the platform 110 undertakes the primary work and the electronic device 101 the secondary work; or the platform 110 undertakes the secondary work and the electronic device 101 the primary work; or either the platform 110 or the electronic device 101 undertakes the work alone.
Optionally, the group avatar generation platform 110 includes at least one server 1101 and a database 1102. The database 1102 stores data; in the embodiments of the present disclosure, it can store system avatars, candidate texts, or the currently generated group avatar, and provides data services for the at least one server 1101.
The server may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, and big data and artificial intelligence platforms. The terminal may be, but is not limited to, a smartphone, tablet computer, laptop computer, desktop computer, smart speaker, or smart watch.
One skilled in the art will appreciate that the number of electronic devices 101 may be greater or fewer. For example, the number of the electronic device 101 may be only one, or the number of the electronic devices 101 may be tens or hundreds, or more, in which case the group avatar generation system further includes other electronic devices. The number of electronic devices and the types of devices are not limited in the embodiments of the present disclosure.
FIG. 2 is a flow chart illustrating a method of group avatar generation in accordance with an exemplary embodiment. As shown in fig. 2, the group avatar generation method includes the following steps.
In step S21, the electronic device, in response to the avatar generation instruction for the target group, acquires group information of the target group, the group information including at least one of a group name of the target group, type information of the target group, or attribute information of members in the target group.
In step S22, the electronic device obtains at least one candidate text corresponding to the group information according to the group information of the target group.
In step S23, the electronic device displays the at least one candidate text in the text candidate area of the avatar setting interface.
In step S24, in response to a selection instruction for any one of the candidate texts, the electronic device generates an avatar of the target group according to the selected candidate text, wherein the avatar includes the selected candidate text.
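Steps S21 through S24 can be sketched as a small pipeline. The callback parameters (`choose`, `candidates_for`, `render`) are hypothetical stand-ins for the UI interaction and image rendering that the patent describes, not part of its disclosure.

```python
def group_avatar_flow(group_info, choose, candidates_for, render):
    """Steps S21-S24 as a pipeline: group information has been acquired
    (S21, passed in as `group_info`), candidate texts are fetched (S22),
    presented for user selection (S23), and the avatar containing the
    chosen text is generated (S24). `choose`, `candidates_for`, and
    `render` are injected callbacks standing in for UI and rendering."""
    candidates = candidates_for(group_info)   # S22: fetch candidate texts
    selected = choose(candidates)             # S23/S24: user selects one
    return render(selected)                   # S24: avatar includes the text
```

With trivial callbacks (pick the first candidate, wrap the text in a dict) the flow returns an "avatar" built around the selected candidate text.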
The embodiments of the disclosure provide a flexible group avatar generation method that introduces new reference factors, so that a more vivid avatar reflecting the group's properties can be generated. Specifically, the group's name, its type information, or the attribute information of its members is used as a factor for generating the avatar; based on these factors, candidate texts matching the group are obtained and recommended to the user, and the candidate text the user selects is used as a display element in the avatar. The generated avatar thus lets the group's properties be understood clearly and intuitively, and the generation effect is better.
In some embodiments, acquiring at least one candidate text corresponding to the group information according to the group information of the target group includes:
matching the group information of the target group with the candidate texts in a candidate text library to obtain at least one candidate text corresponding to the group information.
In some embodiments, matching the group information of the target group with the candidate texts in the candidate text library to obtain at least one candidate text corresponding to the group information includes any one of:
in response to the group information being the group name of the target group, performing semantic analysis on the group name, determining the matching degree between the group name and the candidate texts in the candidate text library, and obtaining, based on the matching degree, at least one candidate text corresponding to the group name from the candidate text library;
in response to the group information being the type information of the target group, obtaining at least one candidate text corresponding to the type information from the candidate text library as the at least one candidate text corresponding to the group information;
and in response to the group information being the attribute information of the members in the target group, obtaining at least one candidate text corresponding to the attribute information from the candidate text library as the at least one candidate text corresponding to the group information.
In some embodiments, displaying the at least one candidate text in the text candidate area of the avatar setting interface includes:
determining the display order of the at least one candidate text according to the matching degree between the group information and the at least one candidate text;
and displaying the at least one candidate text in the text candidate area of the avatar setting interface in that display order.
In some embodiments, generating the avatar of the target group according to the selected candidate text includes:
determining, according to the selected candidate text, the text content and its position in the avatar of the target group;
and generating the avatar of the target group based on the text content and the position.
In some embodiments, the method further comprises:
displaying at least one piece of candidate style information in the avatar setting interface;
generating the avatar of the target group based on the text content and the position then includes:
in response to a selection operation on any candidate style information, generating the avatar of the target group according to the selected candidate style information, based on the text content and the position.
In some embodiments, the method further comprises:
in response to an establishment instruction for the target group, establishing the target group;
displaying a group name modification control in the dialog interface of the target group;
in response to a touch operation on the group name modification control, displaying a group name setting interface on which an avatar generation control is displayed;
in response to a touch operation on the avatar generation control, acquiring group information of the target group;
and generating the avatar of the target group according to the group information.
In some embodiments, acquiring group information of the target group in response to the avatar generation instruction for the target group includes:
in response to a touch operation on a group setting control, displaying a setting interface of the target group, the setting interface displaying an avatar generation control;
and in response to a touch operation on the avatar generation control, acquiring the group information of the target group.
In some embodiments, the attribute information includes at least one of the organization to which a member belongs, the member's position, or the member's age.
FIG. 3 is a flow chart illustrating a group avatar generation method according to an exemplary embodiment. The method may be executed by an electronic device, which may be a terminal or a server. Referring to fig. 3, the method includes the following steps.
In step S31, the electronic device establishes the target group in response to the establishment instruction of the target group.
In the embodiments of the present disclosure, a user can log in to a user account on an electronic device and establish a group through that user account, so as to communicate with the electronic devices on which the other user accounts in the group are logged in.
The electronic device may be a terminal such as a computer or tablet computer, or a mobile terminal such as a smartphone. In some embodiments, a target application may be installed on the electronic device; the target application enables the electronic device to establish a target group with other electronic devices and to hold conversations in the target group. For example, the target application may be an office application or a social application.
In one possible embodiment, establishment of the target group may be implemented through a server. In response to the establishment instruction for the target group, the electronic device sends an establishment request to the server; the server responds to the request by establishing the target group and then sends the establishment result to this electronic device and to the electronic devices where the other user accounts in the target group are located. An electronic device that receives the establishment result can add the target group under the logged-in user account and, in response to an entry instruction for the target group, display the group's dialog interface.
For example, a user logs in to a user account in the target application on the electronic device and performs a group establishment operation in the target application, which triggers the establishment instruction for the target group. The group establishment operation may be a touch operation on a group establishment control; in response, the electronic device displays a group establishment interface, where the user can select other user accounts and touch a confirmation control. The electronic device then establishes the target group and displays it in the default display interface of the target application (which may be called, for example, the home page). When the user clicks the target group, the electronic device displays its dialog interface. For example, the dialog interface of the target group may be as shown in fig. 4, displaying the group name 401 of the target group, the number 402 of members in the target group, a conversation content display area 403, and an input area 404.
In step S32, the electronic device displays a group name modification control in the dialog interface of the target group.
After the electronic device establishes the target group, the related information of the target group and the controls or display areas required for conversation may be displayed in the dialog interface of the target group. For example, the group name of the target group, the number of members in the target group, a conversation content display area, and an input area may be displayed in the conversation interface. The user can thus input conversation content by operating in the input area, and after the conversation content is sent, the conversation content sent by the current user account can be displayed in the conversation content display area. Of course, if conversation content sent by other user accounts is received, that content can also be displayed in the conversation content display area.
A control for modifying the related information of the target group, for example a group name modification control, may also be displayed in the dialog interface; the group name setting interface can be accessed through the group name modification control to set the group name. Specifically, when the target group is established, the electronic device may generate a default group name according to the names of the user accounts in the target group. If the user wants to set a custom group name, the user may perform a touch operation on the group name modification control to enter the group name setting interface and complete the group name setting operation there.
It should be noted that the group name modification control may be displayed when the target group is established; subsequently, as the conversation content increases, the group name modification control may no longer be displayed. For example, as shown in FIG. 4, the group name modification control may be a "set group name" control 405.
In some embodiments, a group setting control may be further displayed in the dialog interface, and by performing touch operation on the group setting control, the electronic device may display a group setting interface, and provide a setting function of related information of a group through the group setting interface. Such as a group name setting function, an avatar generation function, a notification or privacy setting function, etc. In some embodiments, the group setting interface may include a plurality of group setting controls, for example, a group name setting control, a head portrait generation control, a notification setting control, a privacy setting control, a display setting control of names of members in a target group, and the like, and the controls displayed in the group setting interface may be set by a relevant technician as needed, which is not limited in the embodiments of the present disclosure, and is not listed here. For example, for the group setup control, as shown in FIG. 4, a group setup control 406 may be displayed in the dialog interface.
In step S33, in response to a touch operation on the group name modification control, the electronic device displays a group name setting interface in which the avatar generation control is displayed.
If the user wants to set the group name of the target group, the user can perform a touch operation on the group name modification control. In response, the electronic device jumps to the group name setting interface, where the user can input the group name to be set, and the electronic device sets the group name based on the user's input.
In some embodiments, a group name input box may be displayed in the group name setting interface, the user may input content in the group name input box, and the electronic device may obtain the content in the group name input box as the group name of the target group.
In some embodiments, a method for synchronously generating an avatar when setting a group name is provided herein. Specifically, an avatar generation control may be displayed in the group name setting interface, and a user may perform a touch operation on the avatar generation control to trigger the electronic device to generate an avatar. For example, as shown in fig. 4, the user clicks the "set group name" control 405 in the dialog interface, and the electronic device may jump to the group name setting interface shown in fig. 5, which may display a group name input box 501 and an avatar generation control 502; the avatar generation control 502 may, for example, be a check option. If the user checks the avatar generation control 502, the electronic device will perform the avatar generation step; if the user does not check the avatar generation control 502, the electronic device may set the group name without performing the avatar generation step.
In step S34, the electronic device acquires group information of the target group in response to a touch operation on the avatar generation control.
If the user performs a touch operation on the avatar generation control, this indicates that the user wants to synchronously generate the avatar of the target group. The electronic device may then acquire the group information of the target group and use it as a reference factor for avatar generation, so as to generate an avatar that matches the properties of the target group.
In some embodiments, the group information includes at least one of a group name of the target group, type information of the target group, or attribute information of members in the target group. That is, in some embodiments, the group information may include the group name of the target group; in other embodiments, the type information of the target group; in other embodiments, the attribute information of the members of the target group. In other embodiments, the group information may include any two of the three kinds of information, such as the group name and the type information, the type information and the member attribute information, or the group name and the member attribute information. In other embodiments, the group information may include all three kinds of information, that is, the group name of the target group, the type information of the target group, and the attribute information of the members in the target group.
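The three optional kinds of group information described above can be pictured as a simple record with optional fields. The following Python sketch is an illustrative assumption (the disclosure specifies no data model; all names are hypothetical):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class GroupInfo:
    """Hypothetical container for group information; any subset of the
    three kinds of information may be present."""
    group_name: Optional[str] = None        # group name of the target group
    group_type: Optional[str] = None        # type information, e.g. "work_item"
    member_attributes: List[dict] = field(default_factory=list)  # per-member attributes

    def available_fields(self) -> List[str]:
        # Report which of the three kinds of group information are present.
        present = []
        if self.group_name:
            present.append("group_name")
        if self.group_type:
            present.append("group_type")
        if self.member_attributes:
            present.append("member_attributes")
        return present
```

Downstream avatar-generation logic can then branch on whichever fields are available.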
In some embodiments, the group information is a group name set by the user, that is, in the step S34, the electronic device may acquire the content in the group name input box (that is, the group name of the target group) as the group information to perform the subsequent avatar generating step. Of course, the electronic device may also set the content in the group name input box as the group name of the target group.
In some embodiments, the type information of the target group may be used to indicate the type of the target group, for example, the type of the target group is a work item group, and for example, the type of the target group is an entertainment group. The type information of the target group may be selected by a user when the target group is established, or may be automatically determined by the electronic device according to the attribute information of the members in the target group, and the type information of the group may be set by a related technician according to a requirement.
In some embodiments, the attribute information includes at least one of the organization information to which a member belongs, the member's position, and the member's age. The organization information to which a member belongs is, for example, the department or the project group to which the member belongs. Of course, other organization information may also be used, for example information of other groups to which the member belongs, which is not limited by the embodiment of the present disclosure. The attribute information of the members in the target group is used to indicate the attributes of the members and can describe their characteristics to a certain extent. By analyzing the attribute information of the members to determine the content in the avatar, the generated avatar embodies the attribute information of the members in the target group, is more consistent with those members, can more clearly, intuitively and vividly embody the properties of the target group, and has a better generation effect.
In step S35, the electronic device generates an avatar of the target group according to the group information.
After the electronic device acquires the group information of the target group, the corresponding avatar can be generated based on the group information. Since the avatar is generated based on the group information and contains some of that information, it can better embody the properties of the target group, and the generation effect is better.
In some embodiments, in response to the group information being the group name of the target group, the electronic device may use the group name of the target group, keywords in the group name, or the first target number of characters of the group name as the text content in the avatar, and then generate an image containing the text content as the avatar of the target group.
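As an illustration of this strategy, the following Python sketch (function and parameter names are hypothetical, not part of the disclosure) picks the avatar text from an extracted keyword when one is available, otherwise from the first target number of characters of the group name:

```python
from typing import List, Optional

def avatar_text(group_name: str,
                keywords: Optional[List[str]] = None,
                target_number: int = 4) -> str:
    """Choose the avatar's text content: prefer an extracted keyword when
    available, otherwise fall back to the first `target_number` characters
    of the group name."""
    if keywords:
        return keywords[0]
    return group_name.strip()[:target_number]
```

For a four-character Chinese group name, `target_number=4` reproduces the "first 4 characters" example given below.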
For example, as shown in fig. 5, the avatar generation control 502 may read "set as group avatar at the same time". If the user checks the avatar generation control 502, the electronic device may set the content in the group name input box 501 as the group name and generate an image containing the first 4 characters of the group name as the avatar. In some embodiments, the electronic device can generate a preview avatar and display it in the group name setting interface. If the preview avatar conforms to the user's expectation, the user can perform a touch operation on the save control; the electronic device then jumps back to the conversation interface, where the displayed group name has been modified to the one the user input, and if the interface is switched back to the default display interface of the target application, the avatar of the target group has been changed to the generated avatar. For example, as shown in fig. 5, upon the user checking the avatar generation control 502, the electronic device may generate and display a preview avatar 503.
In a specific example, when the target group is established, the electronic device generates a default group name and a default avatar according to the target group members, and steps S32 to S35 are a process of modifying the group name or the avatar of the target group, in which the user may choose to modify the group name, the avatar, or both. The steps executed by the electronic device differ accordingly: for example, the electronic device modifies the group name of the target group to the content input in the group name input box; for another example, it modifies the avatar of the target group to the generated avatar; for another example, it does both. In these different situations, different prompts may be displayed when the electronic device jumps back to the dialog interface. For example, if the user enters the group name "AAAAA" but does not check the avatar generation control 502, the electronic device may jump back to the dialog interface as shown in fig. 6, in which the group name 601 has been modified to "AAAAA", and may display prompt information 602 in the content display area: you modified the group name to "AAAAA". For another example, if the user keeps the default group name but checks the avatar generation control 502, the electronic device may jump back to the conversation interface as shown in fig. 7, in which the group name 701 is unchanged ("XXX"), and may display prompt information 702 in the content display area: you modified the group avatar. For another example, if the user enters the group name "AAAAA" and checks the avatar generation control 502, the electronic device may jump back to the dialog interface as shown in fig. 8, in which the group name 801 has been modified to "AAAAA", and may display prompt information 802 in the content display area: you modified the group name to "AAAAA" and you modified the group avatar.
In some embodiments, the electronic device may also generate the avatar in accordance with style information. For example, the style information may include the background color of the avatar, and may include at least one of the font of the text, whether the text is bolded, the font size, the character arrangement, and the like. The embodiment of the present disclosure does not limit what the style information specifically includes. Specifically, in step S35, the electronic device may acquire the style information in the current avatar generation setting information and generate the avatar of the target group according to that style information. Alternatively, the electronic device may randomly select one piece of candidate style information from the candidate style information and generate the avatar of the target group according to it.
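The two ways of choosing style information described above (reading the current avatar-generation settings, or randomly selecting one candidate style) might be sketched as follows. The style dictionaries, keys, and function names are illustrative assumptions, not the disclosed implementation:

```python
import random

# Hypothetical candidate style library.
CANDIDATE_STYLES = [
    {"background_color": "#FF8800", "font": "sans-serif", "bold": False, "font_size": 32},
    {"background_color": "#3366FF", "font": "serif", "bold": True, "font_size": 28},
]

def pick_style(current_settings=None, rng=None):
    """Use the style in the current avatar-generation settings if present;
    otherwise randomly pick one of the candidate styles."""
    if current_settings and "style" in current_settings:
        return current_settings["style"]
    rng = rng or random.Random()
    return rng.choice(CANDIDATE_STYLES)
```

Passing an explicit `rng` makes the random branch reproducible in tests.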
When the group information is other information, the electronic device can generate different avatars. For example, in response to the group information being the type information of the target group: assuming the type information is a work item group, the electronic device may generate for the target group an avatar containing words such as product project; assuming the type information is an entertainment group, the electronic device may generate for the target group an avatar containing words of entertainment and leisure. For another example, in response to the group information being the attribute information of the members in the target group: if the attribute information is age and the members of the target group are all 18-25 years old, that is, all young people, the electronic device can generate an avatar containing words such as young; if the attribute information is the position or the department to which a member belongs and the members of the target group all belong to the product design department, the electronic device may generate an avatar containing words such as product design.
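These example mappings from type information and member attributes to avatar wording could be sketched as simple lookups. The tables and helper names below are hypothetical illustrations of the examples in the text, not part of the disclosure:

```python
from typing import List

# Hypothetical mapping from group type to suggested avatar wording.
TYPE_WORDS = {
    "work_item": ["product project", "project discussion"],
    "entertainment": ["entertainment", "leisure"],
}

def words_for_type(group_type: str) -> List[str]:
    """Suggested avatar words for a given group type."""
    return TYPE_WORDS.get(group_type, [])

def words_for_ages(ages: List[int]) -> List[str]:
    """If every member is 18-25 years old, suggest 'young' wording."""
    if ages and all(18 <= a <= 25 for a in ages):
        return ["young"]
    return []
```

A production system would of course derive such mappings from a maintained candidate text library rather than hard-coded tables.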
In other embodiments, in step S35, the client may further obtain at least one candidate text corresponding to the group information according to the group information, display the at least one candidate text in a text candidate area of the avatar setting interface, and then execute the avatar generation step based on a selected candidate text upon a selection instruction for any candidate text. Details can be found in steps S38 to S40 below and are not described here.
Through the above steps S31 to S35, a shortcut for setting the group name and the avatar of the target group can be provided when the target group is established. After the target group is generated, the user can quickly jump to the group name setting interface through the group name modification control displayed in the dialog interface, and can generate the avatar while setting the group name. The group name and the avatar of the target group can thus be modified quickly and efficiently through a small number of simple operations, so the avatar generation efficiency is high and the generation effect is good.
In step S36, the electronic device displays a setting interface of the target group in response to a touch operation on the group setting control, where the setting interface displays the avatar generation control.
In addition to setting the group name and the avatar at the time of target group establishment, the electronic device also provides an avatar generation manner that supports generating the avatar of the target group at any time. In this manner, a group setting control is provided, and the group setting function can be triggered by performing a touch operation on the group setting control; avatar generation is one of the group setting functions.
If a user wants to set some information of a group, for example the group name or the avatar, a touch operation may be performed on the group setting control. The electronic device displays the setting interface of the target group in response, and an avatar generation control used to trigger avatar generation may be displayed in that interface. Of course, other controls may also be displayed in the setting interface of the target group, for example group name setting, notification setting, or member name display setting, which is not limited in this disclosure.
In some embodiments, the group setting control may be displayed in the dialog interface of the target group, or may be displayed in the setting interface of the target application, which is not limited in this disclosure.
In step S37, the electronic device acquires group information of the target group in response to a touch operation on the avatar generation control.
If the user wants to set, modify or replace the avatar of the target group, the user can perform touch operation on the avatar generation control, and the electronic device can respond to the touch operation on the avatar generation control to acquire group information required for generating the avatar.
Steps S36 and S37 are a process of acquiring the group information of the target group in response to an avatar generation instruction for the target group, where the instruction is triggered by a touch operation on the avatar generation control. The group information is as explained in step S34. In the embodiment of the present disclosure, the electronic device may determine at least one candidate text to recommend according to the group information; since the recommended candidate text is determined based on the group information, it naturally conforms to the properties of the target group, so an avatar conforming to those properties can be generated based on the candidate text.
In step S38, the electronic device obtains at least one candidate text corresponding to the group information according to the group information of the target group.
After the group information of the target group is acquired, the electronic device can obtain at least one candidate text based on it, where the candidate text serves as the text content displayed in the avatar.
In some embodiments, the at least one candidate text can be screened from a candidate text library based on the group information. Specifically, the electronic device may match the group information of the target group against the candidate texts in the candidate text library to obtain at least one candidate text corresponding to the group information. Through this matching, candidate texts consistent with the group information can be determined, so the determined candidate texts better highlight the properties of the target group, the probability that the user selects or applies a candidate text is higher, and the user can directly select a candidate text to generate the avatar without manually inputting text. The candidate texts acquired in this way are therefore more accurate, user operations are effectively reduced, and avatar generation efficiency is improved.
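The matching step might be sketched as follows. This toy version scores each candidate by word overlap with the group information; the disclosed cases use richer signals (semantic analysis, type information, member attributes), and all names here are assumptions:

```python
from typing import List, Tuple

def match_degree(group_info: str, candidate: str) -> float:
    """Toy matching degree: fraction of the candidate's words that also
    occur in the group information."""
    g = set(group_info.lower().split())
    c = set(candidate.lower().split())
    if not c:
        return 0.0
    return len(g & c) / len(c)

def match_candidates(group_info: str, library: List[str]) -> List[Tuple[str, float]]:
    """Score every candidate text in the library against the group
    information and return (candidate, degree) pairs, best first."""
    scored = [(text, match_degree(group_info, text)) for text in library]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```

With the group name "XX product design research discussion group" and a library containing "product design", that candidate scores highest, matching the example given below.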
In some embodiments, when the group information is different, the process of acquiring the at least one candidate text by the electronic device may also be different. Several possibilities are provided below, and the electronic device may employ any one of the following or any combination of the following to obtain the at least one candidate text.
Case one: in response to the group information being the group name of the target group, the electronic device performs semantic analysis on the group name of the target group and determines the matching degree between the group name and the candidate texts in the candidate text library; at least one candidate text corresponding to the group name is then acquired from the candidate text library based on the matching degree.
In case one, the electronic device may obtain the recommended candidate text according to the group name. When obtaining the candidate text, semantic analysis can be performed on the group name to determine candidate texts with the same or similar semantics. When candidate texts are determined based on semantic analysis, the group name and the candidate texts can be matched based on the result of the analysis, with the matching degree as the criterion, so the determined candidate texts fit the group name better. For example, if the group name is "XX product design research discussion group", the electronic device performs semantic analysis on it and can determine that the target group discusses the product design of the XX product; thus the candidate texts "product design" or "product design discussion" can be determined from the candidate text library.
In some embodiments, the semantic analysis process may be: the electronic device performs semantic analysis on the group name of the target group to obtain the semantic features of the group name, and matches the semantic features of the group name with the semantic features of the candidate texts in the candidate text library to obtain the matching degree between them.
Wherein the semantic features of the group name are used to characterize its semantics. In some embodiments, the semantic feature obtaining process may be: the electronic device performs embedding processing on the group name to obtain a word vector of each character, determines the semantic feature of each character according to its word vector and the word vectors of the characters before and after it, and then takes the semantic features of all the characters as the semantic features of the group name. The process can also be realized in other ways, for example by performing word segmentation on the group name, matching the segmentation result against candidate phrases, and using the matched candidate phrases as the semantic features of the group name. The embodiments of the present disclosure are not limited thereto.
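A toy stand-in for this embed-and-match pipeline: below, each string is embedded as a bag-of-characters hash vector and compared by cosine similarity as the matching degree. A real system would use learned word vectors with the contextual features described above; everything here is an illustrative assumption:

```python
import hashlib
import math
from typing import List

def embed(text: str, dim: int = 16) -> List[float]:
    """Toy embedding: hash each character into one of `dim` buckets and
    count occurrences (a stand-in for learned word vectors)."""
    vec = [0.0] * dim
    for ch in text.lower():
        bucket = int(hashlib.md5(ch.encode("utf-8")).hexdigest(), 16) % dim
        vec[bucket] += 1.0
    return vec

def cosine(a: List[float], b: List[float]) -> float:
    """Cosine similarity used as the matching degree between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0
```

Note that a bag-of-characters embedding ignores character order, which is exactly the weakness the context-aware per-character features in the text are meant to avoid.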
In some embodiments, when obtaining at least one candidate text based on the matching degree, candidate texts whose matching degree is greater than a matching degree threshold may be obtained, or a target number of candidate texts with the highest matching degrees may be obtained, which is not limited in this disclosure.
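Both selection strategies (a threshold on the matching degree, or the target number of candidates with the highest matching degrees) can be sketched in a few lines; the function name and the (text, degree) pair format are assumptions:

```python
from typing import List, Optional, Tuple

def select_candidates(scored: List[Tuple[str, float]],
                      threshold: Optional[float] = None,
                      target_number: Optional[int] = None) -> List[str]:
    """Select candidate texts from (text, matching_degree) pairs, either
    by keeping those above a threshold or by taking the target number
    with the highest matching degrees."""
    ranked = sorted(scored, key=lambda pair: pair[1], reverse=True)
    if threshold is not None:
        return [text for text, degree in ranked if degree > threshold]
    if target_number is not None:
        return [text for text, _ in ranked[:target_number]]
    return [text for text, _ in ranked]
```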
Case two: in response to the group information being the type information of the target group, the electronic device acquires at least one candidate text corresponding to the type information from the candidate text library as the at least one candidate text corresponding to the group information.
In this case two, the electronic device may obtain the recommended candidate text according to the type information of the target group. The recommended candidate texts can more intuitively and vividly represent the type of the target group. For example, if the type information of the target group is a work item type, when the electronic device matches the candidate text in the candidate text library, the matching degree of the candidate text such as "item discussion", "product item" and the like may be relatively high.
Case three: in response to the group information being the attribute information of the members in the target group, the electronic device acquires at least one candidate text corresponding to the attribute information from the candidate text library as the at least one candidate text corresponding to the group information.
In case three, the electronic device may obtain the recommended candidate text according to the attribute information of the members in the target group. The recommended candidate texts can more intuitively and vividly represent the characteristics of the target group members. For example, if the attribute information is age and the members of the target group are all 18-25 years old, that is, all young people, the matching degree of "young circle", "young club", etc. may be higher when the electronic device matches the candidate texts in the candidate text library. For another example, if the attribute information is the department to which a member belongs and most members in the target group belong to a product design department, the matching degree of "product design", "product design discussion", and the like may be relatively high.
In step S39, the electronic device displays the at least one candidate text in the text candidate area of the avatar setting interface.
After the electronic device acquires the at least one candidate text, the at least one candidate text can be recommended to the user for selection. The recommended candidate text may be displayed in a text candidate area, and the displayed candidate text supports a selection operation by the user. Therefore, the user can see the candidate texts automatically recommended by the electronic equipment in the avatar setting interface, and if the user wants to generate an image based on a certain candidate text, the user can select the candidate text.
In some embodiments, the number of the at least one candidate text determined by the electronic device may be one or more. If there are multiple candidate texts, the electronic device can also determine their display order and display them accordingly. Specifically, the electronic device may determine the display order of the at least one candidate text according to the matching degree between the group information and each candidate text, and then display the candidate texts in the text candidate area of the avatar setting interface in that order, for example with higher matching degrees displayed earlier and lower matching degrees later. In this way the user can quickly see the better-fitting candidate texts and select one to generate the avatar, which improves the display efficiency of the candidate texts, further improves their selection probability, and improves avatar generation efficiency.
In step S40, in response to a selection instruction for any one of the candidate texts, the electronic device generates an avatar of the target group according to the selected candidate text, wherein the avatar includes the selected candidate text.
The user can perform a selection operation on any candidate text recommended by the electronic device to trigger a selection instruction, and on receiving the selection instruction the electronic device can generate the avatar based on the selected candidate text. The avatar is a text avatar, with the candidate text selected by the user as its text content.
In some embodiments, when generating the avatar, the text content and the position of the text content in the avatar may first be determined according to the selected candidate text, and the avatar may then be generated. Specifically, the electronic device determines the text content and its position in the avatar of the target group according to the selected candidate text, and then generates the avatar of the target group based on the text content and the position. For example, if the candidate text selected by the user is "product design", the electronic device may determine that the text content in the avatar is "product design" and determine the position of each word in the text content, for example with the four words located at the upper-left, lower-left, upper-right, and lower-right positions in the avatar; the electronic device then generates, as the avatar, an image that contains the text content at those positions.
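The quadrant layout in the example (four characters at the upper-left, lower-left, upper-right, and lower-right of the avatar) can be computed as follows. The coordinate convention (origin at the top-left, y increasing downward) and all names are illustrative assumptions:

```python
from typing import List, Tuple

def character_positions(text: str, size: int = 200) -> List[Tuple[str, Tuple[int, int]]]:
    """Place up to four characters at the centers of the upper-left,
    lower-left, upper-right, and lower-right quadrants of a size x size
    avatar, returning (character, (x, y)) pairs."""
    half = size // 2
    # Quadrant origins in the order described in the text: UL, LL, UR, LR.
    quadrant_origins = [(0, 0), (0, half), (half, 0), (half, half)]
    centers = [(x + half // 2, y + half // 2) for x, y in quadrant_origins]
    # zip truncates to the shorter sequence, so fewer than four characters
    # simply occupy the first quadrants.
    return list(zip(text[:4], centers))
```

A renderer (e.g. Pillow's `ImageDraw.text`) could then draw each character at its computed center over the chosen background color.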
In some embodiments, the avatar may be generated according to a certain style, and specifically, the electronic device may display at least one candidate style information in the avatar setting interface, where the at least one candidate style information is provided for the user to select, and the user may select according to his or her own needs or preferences. In step S40, the electronic device may further generate an avatar of the target group according to the selected candidate style information based on the text content and the location in response to a selection operation of any one of the candidate style information. The candidate style information can be at least one of a background color of the avatar, a font of the text content and a font size, and the user can adjust the display style of the text content in the avatar more finely by selecting the candidate style information to generate the avatar more meeting the user requirements, so that the generation effect is better. For example, if the candidate style information is a background color and the user selects orange, the electronic device may generate an avatar with the background color being orange and the text content being the candidate text.
In some embodiments, the avatar setting interface may further include a text input box, in which the user may input desired text to serve as the text content in the avatar. The electronic device may obtain the text entered in the text input box in response to a text input operation, and generate the avatar of the target group based on that text.
In some embodiments, other avatar generation controls providing other ways of generating an avatar may be displayed in the avatar setting interface. For example, a shooting control may be displayed in the avatar setting interface; the electronic device may capture an image in response to a touch operation on the shooting control and take the captured image as the avatar of the target group. For another example, an image library control may be displayed in the avatar setting interface; the electronic device may jump to the image library in response to a touch operation on the image library control, the user may select any image in the image library, and the electronic device may use the selected image as the avatar of the target group. For another example, an avatar library control may be displayed in the avatar setting interface; the electronic device may jump to the avatar library in response to a touch operation on the avatar library control, the user may select any avatar in the avatar library, and the electronic device may use the selected avatar as the avatar of the target group. The avatar library may be preset by the relevant technician.
In a specific example, the electronic device may display a group setting control 406 in the dialog interface as shown in fig. 4. If the user clicks the group setting control 406, the electronic device may jump to the setting interface of the target group as shown in fig. 9, which may display related information of the group, such as a group avatar 901, an avatar generation control 902, group member information 903, a group name setting control 904, and other group setting controls 905. If the user clicks the avatar generation control 902 in the setting interface, the electronic device may jump to an avatar setting interface as shown in fig. 10, which may display a text input box 1001, a text candidate area 1002, candidate style information 1003, a shooting control 1004, an image library control 1005, and an avatar library control 1006. In the text candidate area 1002, at least one candidate text acquired by the electronic device may be displayed; the user may select any candidate text, and may also select one piece of the candidate style information 1003, to generate a preview avatar, which may be displayed in an avatar preview area 1007 in the avatar setting interface. If the preview avatar is as desired, the user can click the save control, and the electronic device can take the preview avatar as the avatar of the target group. Alternatively, the user may enter text in the text input box 1001 to create an avatar. Still alternatively, as shown in fig. 11, if the user clicks the avatar library control 1006, the electronic device may display the candidate avatars in the avatar library in the form of a popup window, and if the user selects one, the electronic device may set it as the avatar of the target group.
The embodiment of the disclosure provides a flexible group avatar generation method that adds new reference factors to the generation process, so that an avatar that is more vivid and better reflects the nature of the group can be generated. In this generation mode, the group's name and type information, or the attribute information of its members, are specifically used as factors for generating the avatar; based on these factors, candidate texts consistent with the group are obtained and recommended to the user, and the candidate text selected in this way serves as a display element in the avatar. The generated avatar thus allows the nature of the group to be understood clearly and intuitively, and the generation effect is better.
Because a group avatar generated by this method accords with the user's expectation, it avoids the situation where a dissatisfied user repeatedly adjusts or regenerates the avatar. This reduces the number of times the group avatar is generated, improves generation efficiency, and can also reduce the power consumption of the electronic device to a certain extent.
Fig. 12 is a block diagram illustrating a group avatar generating apparatus according to an example embodiment. Referring to fig. 12, the apparatus includes:
an obtaining unit 1201 configured to perform, in response to an avatar generation instruction for a target group, obtaining group information of the target group, the group information including at least one of a group name of the target group, type information of the target group, or attribute information of members in the target group;
the obtaining unit 1201 is further configured to perform obtaining at least one candidate text corresponding to the group information according to the group information of the target group;
a display unit 1202 configured to perform displaying the at least one candidate text in a text candidate area of the avatar setting interface;
a generating unit 1203 configured to execute, in response to a selection instruction for any one of the candidate texts, generating an avatar of the target group according to the selected candidate text, where the avatar includes the selected candidate text.
In some embodiments, the obtaining unit 1201 is configured to perform matching between the group information of the target group and candidate texts in a candidate text library, so as to obtain at least one candidate text corresponding to the group information.
In some embodiments, the obtaining unit 1201 is configured to perform any one of:
in response to the group information being the group name of the target group, performing semantic analysis on the group name of the target group, and determining the matching degree between the group name of the target group and the candidate texts in the candidate text library; and based on the matching degree, obtaining at least one candidate text corresponding to the group name of the target group from the candidate text library;
in response to the group information being the type information of the target group, obtaining at least one candidate text corresponding to the type information from the candidate text library as the at least one candidate text corresponding to the group information; and
in response to the group information being the attribute information of the members in the target group, obtaining at least one candidate text corresponding to the attribute information from the candidate text library as the at least one candidate text corresponding to the group information.
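The three matching branches above (by group name, by type information, or by member attribute information) can be sketched as follows. The token-overlap similarity is a stand-in for the semantic analysis the patent leaves unspecified, and all field names in the candidate text library are hypothetical.

```python
def match_candidates(group_info, library, top_k=3):
    """group_info is a (kind, value) pair where kind is one of
    "name", "type", or "attribute"; library is a list of candidate
    entries such as {"text": ..., "type": [...], "attribute": [...]}."""
    kind, value = group_info
    if kind == "name":
        # Stand-in for semantic analysis: Jaccard overlap of tokens.
        def degree(cand):
            a = set(value.lower().split())
            b = set(cand["text"].lower().split())
            return len(a & b) / max(len(a | b), 1)
        scored = sorted(library, key=degree, reverse=True)
        return [c["text"] for c in scored[:top_k]]
    # For type or attribute information, look up pre-associated candidates.
    key = "type" if kind == "type" else "attribute"
    return [c["text"] for c in library if value in c.get(key, [])][:top_k]
```

In a real system the name branch would use a trained semantic model rather than token overlap, and the type/attribute associations would be maintained by the relevant technician.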
In some embodiments, the display unit 1202 is configured to perform:
determining the display sequence of the at least one candidate text according to the matching degree of the group information and the at least one candidate text;
and displaying the at least one candidate text in the display sequence in the text candidate area of the avatar setting interface.
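The display-order step above can be sketched as a simple sort by matching degree; the degrees here are assumed to be precomputed floats from the matching step.

```python
def display_order(candidates_with_degree):
    """candidates_with_degree: list of (candidate_text, degree) pairs.
    Returns candidate texts ordered for display, highest matching
    degree first."""
    ranked = sorted(candidates_with_degree, key=lambda pair: pair[1], reverse=True)
    return [text for text, _ in ranked]
```

Candidates shown earlier in the text candidate area are thus those most consistent with the group information, which makes a satisfactory selection more likely on the first attempt.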
In some embodiments, the generating unit 1203 is configured to perform:
determining the text content and the position in the head portrait of the target group according to the selected candidate text;
generating an avatar for the target group based on the text content and the location.
In some embodiments, the display unit 1202 is further configured to perform displaying at least one candidate style information in the avatar setting interface;
the generating unit 1203 is configured to perform, in response to a selection operation on any one of the candidate style information, generating an avatar of the target group in accordance with the selected candidate style information based on the text content and the position.
In some embodiments, the apparatus further comprises:
an establishing unit configured to establish the target group in response to an establishment instruction for the target group;
the display unit 1202 is further configured to display a group name modification control in the dialog interface of the target group;
the display unit 1202 is further configured to display, in response to a touch operation on the group name modification control, a group name setting interface on which an avatar generation control is displayed;
the obtaining unit 1201 is further configured to obtain the group information of the target group in response to a touch operation on the avatar generation control;
the generating unit 1203 is further configured to perform generating a head portrait of the target group according to the group information.
In some embodiments, the obtaining unit 1201 is configured to perform:
responding to touch operation of a group setting control, and displaying a setting interface of the target group, wherein the setting interface displays a head portrait generation control;
and responding to the touch operation of the avatar generation control, and acquiring the group information of the target group.
In some embodiments, the attribute information includes at least one of the organization to which a member belongs, the member's position, or the member's age.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 13 is a schematic structural diagram of an electronic device 1300 according to an embodiment of the present disclosure. The electronic device 1300 may vary greatly in configuration or performance, and may include one or more processors (CPUs) 1301 and one or more memories 1302, where the memory 1302 stores at least one computer program, and the at least one computer program is loaded and executed by the processor 1301 to implement the group avatar generation method provided by the above method embodiments. The electronic device can also include other components for implementing device functions; for example, it can also have components such as a wired or wireless network interface and an input/output interface. The embodiments of the present disclosure are not described herein in detail.
The electronic device in the above method embodiments can be implemented as a terminal. For example, fig. 14 is a block diagram of a terminal according to an embodiment of the present disclosure. The terminal 1400 may be a portable mobile terminal such as a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal 1400 can also be referred to as user equipment, a portable terminal, a laptop terminal, a desktop terminal, or by other names.
In general, terminal 1400 includes: a processor 1401, and a memory 1402.
Processor 1401 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so forth. The processor 1401 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array). Processor 1401 may also include a main processor, which is a processor for processing data in an awake state, and is also referred to as a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1401 may be integrated with a GPU (Graphics Processing Unit) that is responsible for rendering and drawing content that the display screen needs to display. In some embodiments, processor 1401 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 1402 may include one or more computer-readable storage media, which may be non-transitory. Memory 1402 may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1402 is used to store at least one instruction for execution by processor 1401 to implement the group avatar generation method provided by method embodiments in the present disclosure.
In some embodiments, terminal 1400 may further optionally include: a peripheral device interface 1403 and at least one peripheral device. The processor 1401, the memory 1402, and the peripheral device interface 1403 may be connected by buses or signal lines. Each peripheral device may be connected to the peripheral device interface 1403 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1404, a display 1405, a camera assembly 1406, audio circuitry 1407, a positioning assembly 1408, and a power supply 1409.
The peripheral device interface 1403 can be used to connect at least one peripheral device related to I/O (Input/Output) to the processor 1401 and the memory 1402. In some embodiments, the processor 1401, memory 1402, and peripheral interface 1403 are integrated on the same chip or circuit board; in some other embodiments, any one or both of the processor 1401, the memory 1402, and the peripheral device interface 1403 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 1404 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1404 communicates with communication networks and other communication devices via electromagnetic signals, converting an electrical signal into an electromagnetic signal to transmit, or converting a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1404 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1404 may communicate with other terminals via at least one wireless communication protocol, over networks including, but not limited to: the world wide web, metropolitan area networks, intranets, mobile communication networks of each generation (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1404 may further include NFC (Near Field Communication) related circuits, which are not limited by the embodiments of the present disclosure.
The display screen 1405 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1405 is a touch display screen, the display screen 1405 also has the ability to capture touch signals at or above the surface of the display screen 1405. The touch signal may be input to the processor 1401 for processing as a control signal. At this point, the display 1405 may also be used to provide virtual buttons and/or virtual keyboards, also referred to as soft buttons and/or soft keyboards. In some embodiments, the display 1405 may be one, disposed on the front panel of the terminal 1400; in other embodiments, display 1405 may be at least two, respectively disposed on different surfaces of terminal 1400 or in a folded design; in other embodiments, display 1405 may be a flexible display disposed on a curved surface or on a folded surface of terminal 1400. Even further, the display 1405 may be arranged in a non-rectangular irregular figure, i.e., a shaped screen. The Display 1405 can be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), and the like.
The camera assembly 1406 is used to capture images or video. Optionally, camera assembly 1406 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 1406 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuit 1407 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 1401 for processing or inputting the electric signals to the radio frequency circuit 1404 to realize voice communication. For stereo capture or noise reduction purposes, multiple microphones may be provided, each at a different location of terminal 1400. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is then used to convert electrical signals from the processor 1401 or the radio frequency circuit 1404 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuit 1407 may also include a headphone jack.
The positioning component 1408 serves to locate the current geographic position of the terminal 1400 for navigation or LBS (Location Based Service). The positioning component 1408 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
Power supply 1409 is used to power the various components of terminal 1400. The power source 1409 may be alternating current, direct current, disposable or rechargeable. When the power source 1409 comprises a rechargeable battery, the rechargeable battery can be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, terminal 1400 also includes one or more sensors 1410. The one or more sensors 1410 include, but are not limited to: acceleration sensor 1411, gyroscope sensor 1412, pressure sensor 1413, fingerprint sensor 1414, optical sensor 1415, and proximity sensor 1416.
The acceleration sensor 1411 may detect the magnitude of acceleration on three coordinate axes of a coordinate system established with the terminal 1400. For example, the acceleration sensor 1411 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 1401 can control the display 1405 to display a user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1411. The acceleration sensor 1411 may also be used for the acquisition of motion data of a game or a user.
The gyro sensor 1412 may detect a body direction and a rotation angle of the terminal 1400, and the gyro sensor 1412 and the acceleration sensor 1411 may cooperate to collect a 3D motion of the user on the terminal 1400. The processor 1401 can realize the following functions according to the data collected by the gyro sensor 1412: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensors 1413 may be disposed on the side frames of terminal 1400 and/or underlying display 1405. When the pressure sensor 1413 is disposed on the side frame of the terminal 1400, the user's holding signal of the terminal 1400 can be detected, and the processor 1401 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 1413. When the pressure sensor 1413 is disposed at the lower layer of the display screen 1405, the processor 1401 controls the operability control on the UI interface according to the pressure operation of the user on the display screen 1405. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 1414 is used for collecting a user's fingerprint, and the processor 1401 identifies the user according to the fingerprint collected by the fingerprint sensor 1414, or the fingerprint sensor 1414 identifies the user according to the collected fingerprint. Upon recognizing that the user's identity is a trusted identity, the processor 1401 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 1414 may be disposed on the front, back, or side of the terminal 1400. When a physical button or vendor logo is provided on the terminal 1400, the fingerprint sensor 1414 may be integrated with the physical button or vendor logo.
The optical sensor 1415 is used to collect ambient light intensity. In one embodiment, processor 1401 may control the display brightness of display 1405 based on the ambient light intensity collected by optical sensor 1415. Specifically, when the ambient light intensity is high, the display luminance of the display screen 1405 is increased; when the ambient light intensity is low, the display brightness of the display screen 1405 is reduced. In another embodiment, the processor 1401 can also dynamically adjust the shooting parameters of the camera assembly 1406 according to the intensity of the ambient light collected by the optical sensor 1415.
Proximity sensor 1416, also known as a distance sensor, is typically disposed on the front panel of terminal 1400. The proximity sensor 1416 is used to collect the distance between the user and the front surface of the terminal 1400. In one embodiment, when proximity sensor 1416 detects that the distance between the user and the front face of terminal 1400 gradually decreases, processor 1401 controls display 1405 to switch from the bright screen state to the dark screen state; when proximity sensor 1416 detects that the distance gradually increases, processor 1401 controls display 1405 to switch from the dark screen state back to the bright screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 14 is not intended to be limiting with respect to terminal 1400 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be employed.
The electronic device in the above method embodiments can also be implemented as a server. For example, fig. 15 is a schematic structural diagram of a server provided in an embodiment of the present disclosure. The server 1500 may vary greatly in configuration or performance, and can include one or more processors (CPUs) 1501 and one or more memories 1502, where the memory 1502 stores at least one computer program, and the at least one computer program is loaded and executed by the processor 1501 to implement the group avatar generation method provided by each of the above method embodiments. Certainly, the server can also have components such as a wired or wireless network interface and an input/output interface to facilitate input and output, and the server can also include other components for implementing device functions, which are not described herein again.
In an exemplary embodiment, there is also provided a storage medium, such as a memory, including program code executable by a processor of an electronic device to perform the group avatar generation method. Alternatively, the storage medium may be a non-transitory computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a Compact Disc Read-Only Memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a computer program product or a computer program is also provided, which comprises one or more program codes, which are stored in a computer-readable storage medium. The one or more processors of the electronic device can read the one or more program codes from the computer-readable storage medium, and the one or more processors execute the one or more program codes, so that the electronic device can execute the group avatar generation method.
The user information to which the present disclosure relates may be information authorized by the user or sufficiently authorized by each party.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A method for generating a group avatar, the method comprising:
responding to an avatar generation instruction of a target group, and acquiring group information of the target group, wherein the group information comprises at least one of a group name of the target group, type information of the target group or attribute information of members in the target group;
acquiring at least one candidate text corresponding to the group information according to the group information of the target group;
displaying the at least one candidate text in a text candidate area of the avatar setting interface;
responding to a selection instruction of any candidate text, and generating an avatar of the target group according to the selected candidate text, wherein the avatar comprises the selected candidate text.
2. The method for generating a group avatar of claim 1, wherein the obtaining at least one candidate text corresponding to the group information according to the group information of the target group comprises:
and matching the group information of the target group with candidate texts in a candidate text library to obtain at least one candidate text corresponding to the group information.
3. The method for generating a group avatar of claim 2, wherein the matching of the group information of the target group with candidate texts in a candidate text library to obtain at least one candidate text corresponding to the group information comprises any one of:
responding to the group information as the group name of the target group, performing semantic analysis on the group name of the target group, and determining the matching degree between the group name of the target group and the candidate text in the candidate text library; based on the matching degree, at least one candidate text corresponding to the group name of the target group is obtained from the candidate text library;
responding to the group information as the type information of the target group, and acquiring at least one candidate text corresponding to the type information from the candidate text library as at least one candidate text corresponding to the group information;
and responding to the attribute information of the members in the target group, and acquiring at least one candidate text corresponding to the attribute information from the candidate text library as at least one candidate text corresponding to the group information.
4. The method of generating a group avatar of claim 1, further comprising:
displaying at least one candidate style information in the avatar setting interface;
the generating the head portrait of the target group based on the text content and the position comprises:
and responding to the selection operation of any candidate style information, and generating the head portrait of the target group according to the selected candidate style information based on the text content and the position.
5. The method of generating a group avatar of claim 1, further comprising:
responding to an establishment instruction of the target group, and establishing the target group;
displaying a group name modification control in a dialog interface of the target group;
responding to the touch operation of the modification control of the group name, and displaying a group name setting interface, wherein the head portrait generation control is displayed on the group name setting interface;
responding to the touch operation of the head portrait generation control, and acquiring group information of the target group;
and generating the head portrait of the target group according to the group information.
6. The method according to claim 1, wherein the acquiring group information of the target group in response to the avatar generation instruction for the target group comprises:
responding to touch operation of a group setting control, and displaying a setting interface of the target group, wherein the setting interface displays a head portrait generation control;
and responding to the touch operation of the avatar generation control, and acquiring the group information of the target group.
7. A cluster head image generating apparatus, comprising:
an acquisition unit configured to acquire, in response to an avatar generation instruction for a target group, group information of the target group, the group information including at least one of a group name of the target group, type information of the target group, or attribute information of members in the target group;
the obtaining unit is further configured to obtain at least one candidate text corresponding to the group information according to the group information of the target group;
a display unit configured to perform displaying the at least one candidate text in a text candidate area of an avatar setting interface;
a generating unit configured to generate, in response to a selection instruction for any candidate text, the avatar of the target group according to the selected candidate text, wherein the avatar comprises the selected candidate text.
8. An electronic device, comprising:
one or more processors;
one or more memories for storing program code executable by the one or more processors;
wherein the one or more processors are configured to execute the program code to implement the group avatar generation method of any of claims 1-6.
9. A computer-readable storage medium, wherein program code in the computer-readable storage medium, when executed by a processor of an electronic device, enables the electronic device to perform the group avatar generation method of any of claims 1 to 6.
10. A computer program product comprising at least one computer program which, when executed by a processor, implements the group avatar generation method of any of claims 1 to 6.
CN202110328404.3A 2021-03-26 2021-03-26 Group head portrait generation method, device, equipment and storage medium Pending CN113064981A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110328404.3A CN113064981A (en) 2021-03-26 2021-03-26 Group head portrait generation method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113064981A (en) 2021-07-02

Family

ID=76563954

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110328404.3A Pending CN113064981A (en) 2021-03-26 2021-03-26 Group head portrait generation method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113064981A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115065570A (en) * 2022-04-14 2022-09-16 深圳云之家网络有限公司 Group chat identification method, device, equipment and storage medium

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105141498A (en) * 2015-06-30 2015-12-09 腾讯科技(深圳)有限公司 Communication group creating method and device and terminal
CN105574015A (en) * 2014-10-13 2016-05-11 阿里巴巴集团控股有限公司 Search recommendation method and device
CN105786793A (en) * 2015-12-23 2016-07-20 百度在线网络技术(北京)有限公司 Method and device for analyzing semanteme of spoken language text information
CN107947951A (en) * 2017-12-21 2018-04-20 广东欧珀移动通信有限公司 Groups of users recommends method, apparatus and storage medium and server
CN108280458A (en) * 2017-01-05 2018-07-13 腾讯科技(深圳)有限公司 Group relation kind identification method and device
CN108431812A (en) * 2016-11-28 2018-08-21 华为技术有限公司 A kind of method that head portrait is shown and head portrait display device
CN110634168A (en) * 2018-06-21 2019-12-31 钉钉控股(开曼)有限公司 Method and device for generating group head portrait
CN111049735A (en) * 2019-12-23 2020-04-21 北京达佳互联信息技术有限公司 Group head portrait display method, device, equipment and storage medium
CN111698144A (en) * 2019-03-15 2020-09-22 钉钉控股(开曼)有限公司 Communication method, device and equipment, and group creation method, device and equipment
CN112148404A (en) * 2020-09-24 2020-12-29 游艺星际(北京)科技有限公司 Head portrait generation method, apparatus, device and storage medium

Similar Documents

Publication Publication Date Title
CN107885533B (en) Method and device for managing component codes
CN108874496B (en) Application management method, device, terminal, server and storage medium
CN109327608B (en) Song sharing method, terminal, server and system
CN110932963B (en) Multimedia resource sharing method, system, device, terminal, server and medium
CN113411680B (en) Multimedia resource playing method, device, terminal and storage medium
CN110139143B (en) Virtual article display method, device, computer equipment and storage medium
CN112363660B (en) Method and device for determining cover image, electronic equipment and storage medium
CN110750734A (en) Weather display method and device, computer equipment and computer-readable storage medium
CN111026992A (en) Multimedia resource preview method, device, terminal, server and storage medium
CN111858382A (en) Application program testing method, device, server, system and storage medium
CN111031391A (en) Video dubbing method, device, server, terminal and storage medium
CN112052354A (en) Video recommendation method, video display method and device and computer equipment
CN111880888A (en) Preview cover generation method and device, electronic equipment and storage medium
CN111459466B (en) Code generation method, device, equipment and storage medium
CN109995804B (en) Target resource information display method, information providing method and device
CN113613028A (en) Live broadcast data processing method, device, terminal, server and storage medium
CN113609358A (en) Content sharing method and device, electronic equipment and storage medium
CN114168369A (en) Log display method, device, equipment and storage medium
CN112764600A (en) Resource processing method, device, storage medium and computer equipment
CN113064981A (en) Group head portrait generation method, device, equipment and storage medium
CN114816600B (en) Session message display method, device, terminal and storage medium
CN113051485B (en) Group searching method, device, terminal and storage medium
CN111641853B (en) Multimedia resource loading method and device, computer equipment and storage medium
CN109618018B (en) User head portrait display method, device, terminal, server and storage medium
CN115905374A (en) Application function display method and device, terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination