WO2022062808A1 - Avatar generation method and device (头像生成方法及设备) - Google Patents

Avatar generation method and device (头像生成方法及设备)

Info

Publication number
WO2022062808A1
WO2022062808A1 (application PCT/CN2021/114362)
Authority
WO
WIPO (PCT)
Prior art keywords
avatar
component
target
components
initial
Prior art date
Application number
PCT/CN2021/114362
Other languages
English (en)
French (fr)
Inventor
韩旭 (Han Xu)
Original Assignee
游艺星际(北京)科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 游艺星际(北京)科技有限公司
Publication of WO2022062808A1 publication Critical patent/WO2022062808A1/zh

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 — Arrangements for program control, e.g. control units
    • G06F 9/06 — Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 — Arrangements for executing specific programs
    • G06F 9/451 — Execution arrangements for user interfaces
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 — Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 — Interaction techniques based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/0484 — Interaction techniques for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04842 — Selection of displayed objects or displayed text elements
    • G06F 3/04845 — Interaction techniques for image manipulation, e.g. dragging, rotation, expansion or change of colour

Definitions

  • the present disclosure relates to the field of Internet technologies, and in particular, to a method, apparatus, device, and storage medium for generating an avatar.
  • various existing websites usually provide users with several default avatars.
  • when a user wants to register an avatar, the user selects one avatar from the several default avatars and sets it as his or her own avatar.
  • a method for generating an avatar comprising:
  • based on the adjustment parameter, the target avatar component is adjusted to obtain the target avatar.
  • determining the adjustment parameters of the target avatar component in response to the shape adjustment operation of the target avatar component in the initial avatar includes:
  • the position parameter of the end position is determined as the adjustment parameter of the target avatar component.
  • the method further includes:
  • according to the operation trajectory of the shape adjustment operation, the target avatar component is displayed as changing in shape along the operation trajectory.
  • the method further includes:
  • the avatar generation interface includes a plurality of avatar components, the plurality of avatar components include multiple types of avatar components, and each type of avatar component includes at least one avatar component;
  • the plurality of first avatar components are determined based on the selection operation in the avatar generation interface.
  • the display avatar generation interface includes:
  • in the avatar generation interface, according to the attribute information of the target account, a plurality of avatar components corresponding to the attribute information are displayed.
  • the display avatar generation interface includes:
  • in the avatar generation interface, according to the avatar type of the historical avatar of the target account, multiple avatar components corresponding to the avatar type are displayed.
  • the display avatar generation interface includes:
  • in the avatar generation interface, the plurality of avatar components are displayed in the form of thumbnails;
  • determining that the plurality of first avatar components includes:
  • the plurality of first avatar components are determined based on the selection operation of the plurality of thumbnails in the avatar generation interface.
  • the method further includes:
  • the method further includes:
  • if the avatar component corresponding to the selection operation matches the selected at least one first avatar component, the avatar component corresponding to the selection operation is displayed.
  • the method further includes:
  • if the avatar component corresponding to the selection operation does not match the selected at least one first avatar component, a second avatar component of at least one second avatar component is displayed, where the at least one second avatar component corresponds to the at least one first avatar component.
  • the generating an initial avatar based on the plurality of first avatar components includes:
  • based on the plurality of first avatar components and the corresponding drawing positions, drawing is performed on the target canvas to obtain the initial avatar.
  • the method further includes:
  • a storage request carrying the binary file is sent to the server, where the storage request is used to instruct the server to store the binary file.
  • an apparatus for generating an avatar comprising:
  • a generating unit configured to generate an initial avatar of the target account based on a plurality of first avatar components
  • a determining unit configured to, in response to a shape adjustment operation on a target avatar component in the initial avatar, determine an adjustment parameter of the target avatar component, where the adjustment parameter is used to adjust the shape of the target avatar component;
  • the adjustment unit is configured to adjust the target avatar component based on the adjustment parameter to obtain the target avatar.
  • the determining unit includes:
  • a position determination subunit configured to, in response to the shape adjustment operation on the target avatar component in the initial avatar, determine the end position of the shape adjustment operation
  • the parameter determination subunit is configured to determine the position parameter of the end position as the adjustment parameter of the target avatar component.
  • the apparatus further includes a display unit configured to display, according to the operation trajectory of the shape adjustment operation, the shape change of the target avatar component along the operation trajectory.
  • the apparatus further includes:
  • an interface display unit configured to display an avatar generation interface
  • the avatar generation interface includes a plurality of avatar components, the plurality of avatar components include a plurality of types of avatar components, and each type of avatar components includes at least one avatar component;
  • the component determination unit is configured to determine the plurality of first avatar components based on the selection operation in the avatar generation interface.
  • the interface display unit includes:
  • the first display subunit is configured to display, in the avatar generation interface, a plurality of avatar components corresponding to the attribute information according to the attribute information of the target account.
  • the interface display unit includes:
  • the second display subunit is configured to, in the avatar generation interface, display a plurality of avatar components corresponding to the avatar type according to the avatar type of the historical avatar of the target account.
  • the interface display unit is configured to display the plurality of avatar components in the form of thumbnails in the avatar generation interface
  • the component determination unit is configured to determine the plurality of first avatar components based on a selection operation on a plurality of thumbnails in the avatar generation interface.
  • the apparatus further includes:
  • a first sending unit configured to send an acquisition request to the server, where the acquisition request is used to acquire the plurality of avatar components
  • a receiving unit configured to receive the plurality of avatar components returned by the server based on the obtaining request.
  • the apparatus further includes a component display unit configured to:
  • the avatar component corresponding to the selection operation matches the selected at least one first avatar component, the avatar component corresponding to the selection operation is displayed.
  • the component display unit is further configured to display a second avatar component of the at least one second avatar component if the avatar component corresponding to the selection operation does not match the at least one first avatar component, wherein the at least one second avatar component corresponds to the at least one first avatar component.
  • the generating unit includes:
  • a drawing position determination subunit configured to determine the drawing positions of the plurality of first avatar components in the target canvas based on the types of the plurality of first avatar components
  • the drawing subunit is configured to perform drawing in the target canvas based on the plurality of first avatar components and corresponding drawing positions to obtain the initial avatar of the target account.
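The generating unit described above can be sketched roughly as follows. This is an illustrative assumption only: the position table, component fields, and function names are hypothetical and not taken from the disclosure.

```python
# Hypothetical sketch: drawing positions are determined from component types,
# then the components are "drawn" onto a target canvas placeholder.

# Assumed fixed layout: each component type maps to a drawing position (x, y).
DRAW_POSITIONS = {
    "face_shape": (50, 60),
    "hairstyle": (50, 20),
    "eyes": (50, 55),
}

def generate_initial_avatar(first_components):
    """Place each first avatar component at the position its type dictates."""
    canvas = []  # stand-in for a real drawing surface
    for component in first_components:
        position = DRAW_POSITIONS[component["kind"]]
        canvas.append({"id": component["id"], "at": position})
    return canvas

avatar = generate_initial_avatar([
    {"id": "face_001", "kind": "face_shape"},
    {"id": "hair_002", "kind": "hairstyle"},
])
```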
  • the apparatus further includes:
  • a file generating unit configured to generate a binary file of the target avatar
  • the second sending unit is configured to send a storage request carrying the binary file to the server, where the storage request is used to instruct the server to store the binary file.
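The file generating unit and the second sending unit might be sketched as below. The serialization format (JSON bytes as the "binary file") and the request shape are assumptions for illustration only.

```python
import json

# Hypothetical sketch: the target avatar is serialized into a binary file and
# wrapped in a storage request addressed to the server.
def build_storage_request(target_avatar: dict) -> dict:
    binary_file = json.dumps(target_avatar).encode("utf-8")
    return {"type": "store_avatar", "payload": binary_file}

request = build_storage_request({"face_shape": "face_001", "hairstyle": "hair_002"})
```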
  • an electronic device comprising:
  • one or more processors; and a memory for storing program code executable by the one or more processors;
  • the processor is configured to execute the program code to achieve the following steps:
  • the target avatar component is adjusted to obtain the target avatar.
  • a storage medium, wherein when program code in the storage medium is executed by a processor of an electronic device, the electronic device is enabled to implement the following steps:
  • the target avatar component is adjusted to obtain the target avatar.
  • a computer program product comprising computer program code stored in a non-volatile computer-readable storage medium.
  • the processor of the electronic device reads the computer program code from the non-volatile computer-readable storage medium, and the processor executes the computer program code, so that the electronic device realizes the following steps:
  • the target avatar component is adjusted to obtain the target avatar.
  • the function of a user-defined avatar is thus realized; the way of generating the avatar is simple and convenient, and the efficiency of human-computer interaction is improved.
  • FIG. 1 is a schematic diagram of an implementation environment of a method for generating an avatar according to an exemplary embodiment
  • FIG. 2 is a flowchart of a method for generating an avatar according to an exemplary embodiment
  • FIG. 3 is a flowchart of a method for generating an avatar according to an exemplary embodiment
  • FIG. 4 is a block diagram of an apparatus for generating an avatar according to an exemplary embodiment
  • FIG. 5 is a block diagram of a terminal according to an exemplary embodiment
  • Fig. 6 is a block diagram of a server according to an exemplary embodiment.
  • the information involved in this disclosure is information authorized by the user or fully authorized by all parties.
  • FIG. 1 is a schematic diagram of an implementation environment of an avatar generation method provided by an embodiment of the present disclosure.
  • the implementation environment includes: a terminal 101 and a server 102 .
  • the terminal 101 can be at least one of a smart phone, a smart watch, a portable computer, a vehicle-mounted terminal, etc.
  • the terminal 101 has a communication function and can access the Internet.
  • the terminal 101 can generally refer to one of multiple terminals.
  • the terminal 101 is used as an example. Those skilled in the art may know that the number of the above-mentioned terminals may be more or less.
  • the terminal 101 may run with various browsers or various application programs.
  • the user can start a browser or an application program on the terminal, and can perform subsequent business operations by logging in the user account in the website or application program of the browser to realize corresponding business functions.
  • a user can implement online shopping, video playback, social chat, and the like through a website or application program of a browser.
  • the website or application of the browser supports the setting of the user avatar.
  • the server 102 may be an independent physical server, a server cluster or distributed file system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, Content Delivery Network (CDN), big data, and artificial intelligence platforms.
  • the server 102 is associated with an avatar information database, and the avatar information database is used to store the correspondence between the identifiers of the plurality of avatar components and the avatar components themselves.
  • the server 102 and the terminal 101 may be directly or indirectly connected through wired or wireless communication, which is not limited in this embodiment of the present disclosure.
  • the number of the foregoing servers 102 may be more or less, which is not limited in this embodiment of the present disclosure.
  • the server 102 may also include other functional servers in order to provide more comprehensive and diversified services.
  • in some embodiments, the avatar generation process is jointly executed by the terminal 101 and the server 102.
  • when the user wants to register an avatar, the user logs in to the user account in the website webpage or application of the browser and clicks the generate-avatar option in the website webpage on the terminal 101. In response to the user's click operation, the terminal 101 sends a request for obtaining avatar components to the server 102, so as to obtain and display the avatar generation interface including the avatar components.
  • after receiving the acquisition request, the server 102 acquires multiple avatar components corresponding to the request from the avatar information database and sends them to the terminal 101. The terminal 101 then generates the avatar of the user account using the avatar generation method provided by the embodiments of the present disclosure. In the following, the target account denotes the user account for which an avatar is to be registered.
  • the avatar component may also be referred to as an avatar component element.
  • FIG. 2 is a flowchart of a method for generating an avatar according to an exemplary embodiment, as shown in FIG. 2 , including the following steps:
  • step 201 the terminal generates an initial avatar based on the plurality of first avatar components.
  • the plurality of first avatar components are avatar components selected by a target account
  • the target account is a user account logged in by the terminal.
  • the terminal can generate the initial avatar of the target account based on the plurality of first avatar components selected by the target account.
  • step 202 in response to the shape adjustment operation of the target avatar component in the initial avatar, the terminal determines an adjustment parameter of the target avatar component, where the adjustment parameter is used to adjust the shape of the target avatar component.
  • the target avatar component is any first avatar component in the initial avatar.
  • the terminal determines an adjustment parameter of the any first avatar component, where the adjustment parameter is used to adjust the shape of the first avatar component.
  • step 203 the terminal adjusts the target avatar component based on the adjustment parameters to obtain the target avatar.
  • the terminal can adjust any one of the first avatar components according to the adjustment parameters to obtain the target avatar of the target account.
  • the user selects and combines multiple avatar components to generate an initial avatar, and then adjusts the shape of the generated initial avatar to generate a target avatar, thereby realizing a user-customized avatar. The steps of customizing the avatar are simple and fast, and human-computer interaction efficiency is high.
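Steps 201 to 203 above can be sketched as the following minimal flow. The data model, control-point representation, and function names are illustrative assumptions, not part of the disclosure.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Component:
    component_id: str
    kind: str              # e.g. "face_shape", "hairstyle"
    control_points: tuple  # simplified shape representation

def combine_components(selected):
    """Step 201: generate the initial avatar from the selected first components."""
    return {c.kind: c for c in selected}

def adjustment_parameter(end_position):
    """Step 202: the position parameter of the operation's end position serves
    as the adjustment parameter."""
    return end_position

def adjust(avatar, kind, parameter):
    """Step 203: adjust the target component's shape with the parameter."""
    target = avatar[kind]
    moved = replace(target, control_points=target.control_points + (parameter,))
    return {**avatar, kind: moved}

avatar = combine_components([
    Component("face_001", "face_shape", ((0, 0),)),
    Component("hair_002", "hairstyle", ((0, 1),)),
])
avatar = adjust(avatar, "face_shape", adjustment_parameter((3, 4)))
```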
  • the avatar generation method can also effectively reduce the repetition of avatars.
  • FIG. 3 is a flow chart of a method for generating an avatar according to an exemplary embodiment. Referring to Figure 3, the method includes:
  • step 301 the terminal sends an acquisition request to the server, where the acquisition request is used to acquire multiple avatar components.
  • the terminal can send an acquisition request for multiple avatar components to the server.
  • the avatar component, also called an avatar component element, refers to an element required to form an avatar, for example, a hairstyle, a face shape, facial features, or whether glasses are worn.
  • the obtaining request is used to instruct the server to return the avatar components, and the terminal displays the plurality of avatar components returned by the server.
  • when a user wants to register an avatar, the user can log in to the target account in the website webpage or application of the browser and click the generate-avatar option in the website webpage on the terminal. In response to the user's click operation, the terminal sends an acquisition request for avatar components to the server, so as to acquire and display the avatar generation interface including the avatar components.
  • the acquisition request carries the target account number.
  • step 302 the server receives the acquisition request, determines multiple avatar components corresponding to the acquisition request, and returns the multiple avatar components to the terminal.
  • the plurality of avatar components include multiple types of avatar components, and each type of avatar component includes at least one avatar component, and avatar components belonging to the same type may be avatar components of different styles.
  • the types of avatar components include hairstyles, face shapes, facial features (eyebrows, eyes, noses), hair accessories, etc., where the hairstyles include long curly hair, short curly hair, long straight hair, short straight hair, and hairstyles of different colors.
  • the face-shape avatar components include round-face, long-face, square-face, and other styles.
  • avatar components of various types and styles are provided, and when an avatar is subsequently generated, a rich selection of avatar components is provided, which can meet the user's personalized selection requirements.
  • the server can obtain the target account carried in the obtaining request, obtain the avatar component from the avatar information database associated with the server, and then send the obtained avatar component to the terminal where the target account is located.
  • the avatar information database is used to store the corresponding relationship between the identifiers of the plurality of avatar components and the plurality of avatar components.
  • the server can send the avatar components to the terminal in the form of data packets, and the data packets can also include the corresponding relationship between the identifiers of the plurality of avatar components and the plurality of avatar components.
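The data packet described above might be structured as follows. This is a sketch under assumed field names; the actual packet format is not specified in the disclosure.

```python
# Hypothetical sketch: the server bundles the avatar components together with
# the correspondence between component identifiers and components.
def build_component_packet(components):
    return {
        "components": components,
        "id_to_component": {c["id"]: c for c in components},
    }

packet = build_component_packet([
    {"id": "hair_001", "kind": "hairstyle"},
    {"id": "face_001", "kind": "face_shape"},
])
```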
  • step 303 the terminal receives multiple avatar components returned by the server based on the obtaining request.
  • after receiving the multiple avatar components returned by the server, the terminal stores them locally in the browser or the application.
  • step 304 the terminal displays an avatar generation interface, where the avatar generation interface includes a plurality of avatar components, the plurality of avatar components include a plurality of types of avatar components, and each type of avatar component includes at least one avatar component.
  • after acquiring the multiple avatar components, the terminal performs scaling processing on them to obtain thumbnails, and then displays the multiple avatar components in thumbnail form in the avatar generation interface. By displaying the avatar components as thumbnails, a page can present more avatar components, which is more convenient for users to browse and improves the efficiency of human-computer interaction.
  • the terminal can also generate a corresponding relationship between the avatar component and the thumbnail, so as to facilitate the subsequent determination of the selected avatar component and further improve the man-machine efficiency.
  • after determining the thumbnails of the multiple avatar components, the terminal generates identifiers for the thumbnails and establishes a correspondence between the thumbnail identifiers and the avatar components. Based on this correspondence, the terminal can determine the avatar component corresponding to each thumbnail.
  • alternatively, a hyperlink is added to each of the multiple thumbnails, and the hyperlink points to the original image of the avatar component corresponding to the thumbnail, so that the avatar component corresponding to each thumbnail can be obtained.
  • the original image of the avatar component is the avatar component itself, and the identifier corresponding to the original image is the identifier of the avatar component.
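The thumbnail-identifier correspondence described above can be sketched as a simple mapping. The identifier naming scheme is a hypothetical assumption; the hyperlink variant would resolve through the original image's identifier instead.

```python
# Hypothetical sketch: the terminal generates a thumbnail identifier per
# component and records the mapping so that a later selection can be resolved
# back to its avatar component.
def build_thumbnail_map(components):
    thumb_to_component = {}
    for component in components:
        thumb_id = "thumb_" + component["id"]  # generated thumbnail identifier
        thumb_to_component[thumb_id] = component
    return thumb_to_component

thumbs = build_thumbnail_map([{"id": "hair_001", "kind": "hairstyle"}])
```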
  • the above step 304 is a process in which the terminal displays all the avatar components.
  • the terminal can also selectively display the avatar component according to the attribute information of the target account.
  • the process of displaying multiple avatar components by the terminal to the target account includes any of the following:
  • the terminal determines, among the plurality of avatar components, a plurality of avatar components corresponding to the attribute information, and then displays a plurality of avatar components corresponding to the attribute information in the avatar generation interface.
  • attribute information refers to the profile information of the target account, such as gender information, age information, and occupation information. Taking gender information as an example, if the terminal determines that the gender information of the target account is male, it displays multiple avatar components corresponding to the male gender for the target account.
  • in this way, the terminal can display the corresponding avatar components according to the attribute information of different accounts, that is, display only the avatar components the user needs rather than all of them. This keeps the page intuitive and simple, helps the user quickly find the avatar component to be used, improves the efficiency of human-computer interaction, and avoids the overly long browsing caused by displaying too many avatar components.
  • in some embodiments, the process for the terminal to determine the avatar components according to the attribute information includes: the terminal determines, among the plurality of avatar components and according to the attribute information of the target account and the identifiers of the avatar components, the avatar components corresponding to the attribute information.
  • the identifier of the avatar component includes a first character string, and the first character string is used to represent attribute information.
  • the terminal obtains multiple avatar components corresponding to the attribute information.
  • the avatar component corresponding to the attribute information can be determined by the first character string in the identifier of the avatar component, the avatar component corresponding to different attributes can be quickly determined, and the speed of determining the avatar component is improved.
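The first-character-string filtering described above might look like the sketch below. The identifier layout (attribute prefix separated by an underscore) is an illustrative assumption, not specified in the disclosure.

```python
# Hypothetical sketch: the first character string of a component identifier is
# assumed to encode attribute information, e.g. "male_hair_01" -> "male".
def components_for_attribute(components, attribute):
    return [c for c in components if c["id"].split("_", 1)[0] == attribute]

catalog = [
    {"id": "male_hair_01"},
    {"id": "female_hair_01"},
    {"id": "male_face_02"},
]
selected = components_for_attribute(catalog, "male")
```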
  • the process for the terminal to determine the avatar component according to the attribute information includes: according to the attribute information of the target account and the corresponding relationship between the attribute identifier and multiple avatar components, among the multiple avatar components, determine the Multiple avatar components corresponding to attribute information.
  • the attribute information is represented by an attribute identifier.
  • the data packet received by the terminal (see step 302 ) further includes the correspondence between the attribute identifier and the avatar component.
  • the terminal obtains the multiple avatar components corresponding to the attribute information.
  • the avatar components corresponding to different attributes can also be quickly determined, which improves the speed of determining the avatar components.
  • the above two implementation manners can quickly determine the avatar components corresponding to different attributes, and meet the user's demand for displaying attribute information corresponding to the avatar components without reducing the processing efficiency.
  • the terminal can determine, among the plurality of avatar components and according to the avatar type of the historical avatar of the target account, a plurality of avatar components corresponding to the avatar type, and display them in the avatar generation interface.
  • the avatar type refers to the style type of the avatar.
  • if the terminal determines, according to the avatar type of the historical avatar of the target account, that the avatar type is a two-dimensional (anime) style type, it displays multiple avatar components corresponding to that style type for the target account.
  • displaying only the matching avatar components improves the efficiency of human-computer interaction and avoids the long browsing time caused by displaying all the components.
  • This embodiment of the present disclosure does not limit the manner in which the avatar component is displayed.
  • the above process of determining the avatar component according to the avatar type is similar to the process of determining the avatar component according to the attribute information, and will not be described again.
  • the above process of determining the corresponding avatar component according to the attribute information or the avatar type is described by taking the terminal as the execution subject as an example.
  • the process can also be performed by the server; that is, the server determines, in the avatar information database and according to the attribute information or the avatar type, the corresponding avatar components, and then returns the determined avatar components to the terminal, which displays them in the avatar generation interface.
  • the avatar information database stores the corresponding relationship among attribute identifiers, avatar types and avatar components. Since the server does not need to send all the avatar information to the terminal, the storage pressure and processing pressure of the terminal are relieved, and the processing efficiency of the terminal is improved.
  • step 305 the terminal determines a plurality of first avatar components based on the selection operation on the avatar generation interface.
  • the first avatar component is used to represent the avatar component selected by the user.
  • when the user browses the multiple avatar components in the avatar generation interface, the user performs a selection operation on a desired avatar component through the terminal.
  • in response to the selection operation on a thumbnail, the terminal determines the avatar component corresponding to the thumbnail as a first avatar component, that is, determines the selected first avatar component.
  • in some embodiments, the process for the terminal to determine the avatar component corresponding to the thumbnail includes: in response to the target account selecting any thumbnail in the avatar generation interface, the terminal obtains the identifier of the selected thumbnail and, according to that identifier and the correspondence between thumbnail identifiers and avatar components, determines the avatar component corresponding to the thumbnail identifier, that is, the first avatar component corresponding to the thumbnail.
  • the avatar component can be quickly determined, the efficiency of determining the first avatar component is improved, the efficiency of avatar generation is further improved, and the human-computer interaction efficiency is also improved.
  • in other embodiments, the process includes: in response to the target account selecting any thumbnail in the avatar generation interface, the terminal determines, according to the hyperlink in the thumbnail, the original image of the avatar component to which the hyperlink points, obtains the identifier of the original image, and determines the avatar component according to that identifier and the correspondence between identifiers and avatar components, that is, determines the first avatar component corresponding to the thumbnail.
  • the avatar component can also be quickly determined, which improves the efficiency of determining the first avatar component, and further improves the efficiency of generating the avatar. This embodiment of the present disclosure does not limit the method used to determine the first avatar component.
  • step 306 the terminal determines whether the avatar component corresponding to the selection operation matches the selected at least one first avatar component, and if it matches, displays the avatar component corresponding to the selection operation.
  • the terminal can determine whether the newly selected avatar component matches the at least one first avatar component already selected by the target account, and if it matches, the newly selected avatar component is displayed as a first avatar component.
  • whether the avatar components match refers to whether the newly selected avatar component matches the style type of the selected at least one first avatar component. If the style types match, it means that the newly selected avatar component and the at least one first avatar component that has been selected belong to the same style type, such as a two-dimensional style head shape and a two-dimensional style hairstyle. If it does not match, it means that the newly selected avatar component and the selected at least one first avatar component belong to different style types, such as a two-dimensional style head shape and a cartoon style hairstyle.
  • the terminal can judge whether the newly selected avatar component matches the already selected avatar component according to a preset rule.
  • the rule is the correspondence between avatar components and associated avatar components. If the correspondence contains an entry between the newly selected avatar component and the selected at least one first avatar component, it is determined that the newly selected avatar component matches the selected at least one first avatar component; if it does not, it is determined that the newly selected avatar component does not match the selected at least one first avatar component.
  • the associated avatar component is used to represent the avatar component that matches the avatar component.
  • the avatar information database can also be used to store the correspondence between the avatar components and the associated avatar components.
  • the data packet returned by the server to the terminal in step 302 further includes the correspondence between the avatar component and the associated avatar component.
  • the correspondence is in the form of a list. As shown in Table 1, the avatar components matched with IDA1 include IDA2 and IDA3.
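The rule-based match check can be sketched from the list form described above (IDA1 matching IDA2 and IDA3, per Table 1). The table contents here are illustrative, and requiring a new component to be associated with every already-selected component is one plausible reading of the rule:

```python
# Hypothetical correspondence between an avatar component and its associated
# (style-matching) avatar components, in the list form of Table 1.
ASSOCIATED = {
    "IDA1": ["IDA2", "IDA3"],
    "IDA2": ["IDA1"],
}

def matches_selected(new_component, selected_components):
    """True if the newly selected component is associated with every
    already-selected first avatar component (same style type)."""
    return all(new_component in ASSOCIATED.get(sel, [])
               for sel in selected_components)
```

For example, with IDA1 already selected, IDA2 matches, while a component absent from IDA1's associated list does not.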
  • step 306 is a process of matching the avatar component corresponding to the selection operation with the selected at least one first avatar component.
  • if the newly selected avatar component does not match the selected at least one first avatar component, the terminal selects a second avatar component from at least one second avatar component corresponding to the selected at least one first avatar component, and displays the second avatar component.
  • the second avatar component is used to represent the avatar component that matches the selected at least one first avatar component.
  • the step of selecting the second avatar component by the terminal includes: the terminal selects a second avatar component, through a random number matching algorithm, from the at least one second avatar component corresponding to the selected at least one first avatar component.
  • the random number matching algorithm is used to select representative samples from the overall sample.
  • the step of selecting the second avatar component by the terminal is: the terminal determines a sequence set according to the sequence numbers of the at least one second avatar component, determines a random number (that is, a random sequence number) in the sequence set through a random number generating function (such as the rand function or the srand function), and uses the avatar component corresponding to the random number as the selected second avatar component.
  • the step of selecting the second avatar component by the terminal is: the terminal determines a sequence set according to the sequence numbers of the at least one second avatar component, determines a random number (that is, a random sequence number) in the sequence set through a random number generation algorithm such as the Monte Carlo algorithm (also known as a random sampling algorithm) or a normal random number algorithm, and uses the avatar component corresponding to the random number as the selected second avatar component. Through the random number generation algorithm, a random number can also be quickly determined, and then the second avatar component can be determined.
  • This embodiment of the present disclosure does not limit the method used to select the second avatar component. Through random selection, the second avatar component can be quickly determined, the processing flow is simple, and the efficiency of avatar generation is improved.
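A minimal sketch of the random selection, using Python's standard `random` module in place of the rand/srand functions named above; the optional seed is an assumption added here for reproducibility, not part of the described method:

```python
import random

def pick_second_component(candidates, seed=None):
    """Randomly select one second avatar component from the candidates
    associated with the already-selected first avatar components."""
    rng = random.Random(seed)               # seeded RNG; seed=None gives true randomness
    index = rng.randrange(len(candidates))  # random sequence number in the sequence set
    return candidates[index]
```

Whatever generator is used, the selection reduces to drawing one index from the sequence set, which is why the embodiment describes it as fast.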
  • the step of selecting the second avatar component by the terminal includes: the terminal selects the second avatar component with the highest matching degree from at least one second avatar component corresponding to the selected at least one first avatar component.
  • the matching degree is used to represent the style matching degree between the selected at least one first avatar component and the second avatar component.
  • the terminal or server can obtain the matching degree between any avatar component and its corresponding multiple associated avatar components, and then store the avatar component, the associated avatar components, and the corresponding matching degrees in the avatar information database.
  • the data packet received by the terminal (see step 302 ) further includes the correspondence between the avatar component, the associated avatar component and the matching degree.
  • the step of obtaining the matching degree by the terminal or the server includes: a technician sets a weight for the multiple associated avatar components corresponding to each avatar component, where the weight is used to represent the matching degree between the associated avatar component and that avatar component.
  • the weight can be set adaptively according to styles previously defined by the technician, so the matching degree is determined more accurately and errors are less likely.
  • the step of obtaining the matching degree by the server includes: the server extracts image features of the multiple avatar components based on an image feature extraction model; for each avatar component, the server calculates the distance between the first image feature of that avatar component and the second image features of its multiple corresponding associated avatar components, and uses the distance as the matching degree.
  • the first image feature refers to the image feature of each avatar component.
  • the second image feature refers to the image feature of the associated avatar component corresponding to each avatar component.
  • the matching degree is represented by the distance between the first image feature and the second image feature, for example, Euclidean distance, Manhattan distance, Chebyshev distance, Chi-square distance, Cosine distance, Hamming distance, etc.
  • the embodiment of the present disclosure does not limit which distance is used to calculate the matching degree. A smaller distance indicates a greater matching degree, and a greater distance indicates a smaller matching degree.
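The distance-based matching degree can be sketched as follows, using Euclidean distance as one of the listed options. The feature vectors are hypothetical stand-ins for the output of the image feature extraction model:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two image feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def best_match(first_feature, associated_features):
    """Pick the associated component whose second image feature is closest to
    the first image feature: smaller distance means greater matching degree."""
    best_id, _ = min(associated_features,
                     key=lambda item: euclidean(first_feature, item[1]))
    return best_id
```

This realizes the selection in the alternative branch of step 306: the second avatar component with the highest matching degree is simply the one at minimum feature distance.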
  • the terminal judges whether the style matches based on the uniqueness of the avatar component ID and the preset rules, and can determine multiple avatar components with matching styles and then generate an avatar with a matching style, which improves the accuracy of avatar generation.
  • step 306 is used to illustrate the solution by taking the terminal determining whether the style type matches as an example.
  • the terminal determines whether the type of the newly selected avatar component is the same as that of the selected at least one first avatar component. If the type is not repeated, the newly selected avatar component is displayed; if the newly selected avatar component is of the same type as a selected first avatar component, the newly selected avatar component is not displayed, and a prompt window indicating the duplicated avatar component type pops up.
  • a prompt window indicating the duplicated avatar component type pops up to remind the user that the avatar component type is repeated, so that the user selects an avatar component again.
  • step 307 the terminal determines the drawing positions of the multiple first avatar components in the target canvas based on the types of the multiple first avatar components.
  • the target canvas is used to represent a canvas that is drawn based on multiple avatar components.
  • the drawing position is represented by the coordinates of the avatar component in the target canvas.
  • the terminal determines the drawing positions of the multiple first avatar components in the target canvas based on the component types of the multiple first avatar components and the corresponding relationship between the component types and the drawing positions.
  • the timing when the terminal determines the drawing position has the following two situations:
  • after the user selects multiple first avatar components, the user performs a click operation on the save option in the avatar generation interface, and the terminal, in response to the click operation of the target account, determines the multiple first avatar components selected in the avatar generation interface, determines the drawing positions of the multiple first avatar components in the target canvas based on their component types, and then performs the subsequent drawing process.
  • alternatively, each time the target account selects a first avatar component, the terminal determines the drawing position of that first avatar component in the target canvas based on its component type, and then performs the subsequent drawing process. That is, each time an avatar component is selected, its drawing position is determined and drawing is performed.
  • in this way, the combined image of the avatar components can be displayed in real time, so that the user can instantly view the combined effect, which facilitates subsequently modifying or replacing avatar components and improves human-computer interaction efficiency.
  • This embodiment of the present disclosure does not limit the timing at which the terminal determines the drawing position.
  • step 308 the terminal draws on the target canvas based on the plurality of first avatar components and corresponding drawing positions to obtain the initial avatar of the target account.
  • the terminal draws on the target canvas based on the plurality of first avatar components and corresponding drawing positions, obtains the initial avatar of the target account, and displays the drawn initial avatar of the target account.
  • the drawing process by the terminal is as follows: the terminal uses the Canvas drawing technology to perform image aggregation processing on the plurality of first avatar components to generate a unified avatar image as the avatar of the target account.
  • the Canvas drawing technology is used to extract the multiple first avatar components from the original page and then draw them on the target canvas, which resolves the problem of blank gaps caused by the white background of the original page.
  • the picture generated by the terminal is a base64-encoded picture, where base64 is an encoding that represents binary data using 64 characters. It should be understood that the avatar is actually in the form of a picture.
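Steps 307 and 308 can be sketched together: a type-to-position table supplies each component's drawing coordinates, the components are aggregated into one drawing plan for the target canvas, and the resulting picture bytes are base64-encoded. The position table and component records are hypothetical, and the actual Canvas rasterization is abstracted away:

```python
import base64

# Hypothetical correspondence between component types and drawing positions
# (coordinates in the target canvas), as used in step 307.
DRAW_POSITIONS = {"head": (0, 0), "eyes": (30, 40), "mouth": (30, 70)}

def compose_avatar(components):
    """Aggregate the selected first avatar components into one drawing plan:
    a list of (component_id, x, y) entries for the target canvas."""
    return [(c["component_id"], *DRAW_POSITIONS[c["type"]]) for c in components]

def to_base64(image_bytes):
    """Encode the drawn avatar picture as a base64 string, as in step 308."""
    return base64.b64encode(image_bytes).decode("ascii")
```

In a web implementation, `compose_avatar` would correspond to drawing each component image onto the canvas at its coordinates, and `to_base64` to exporting the canvas as a data URL.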
  • step 309 in response to the shape adjustment operation of the target avatar component in the initial avatar, the terminal determines an adjustment parameter of the target avatar component, where the adjustment parameter is used to adjust the shape of the target avatar component.
  • the target avatar component is any avatar component in the initial avatar.
  • the shape adjustment operation may be a sliding operation.
  • the adjustable parts include the shape of the head, the shape of the face, or the shape of the facial features.
  • when the user wants to adjust the shape of the initial avatar and performs a shape adjustment operation, that is, a sliding operation, on any first avatar component in the initial avatar, the terminal, in response to the shape adjustment operation of the target account on the initial avatar, determines the target position of the shape adjustment operation, and determines the position parameter of the target position as the adjustment parameter of that first avatar component.
  • the target position is the end position of the shape adjustment operation. That is, in response to the shape adjustment operation of the target avatar component in the initial avatar, the terminal determines the end position of the shape adjustment operation; the position parameter of the end position is determined as the adjustment parameter of the target avatar component.
  • the adjustment parameter refers to the position parameter at the end of the shape adjustment operation.
  • if the shape adjustment operation is a finger-based sliding operation, the adjustment parameter is the position parameter of the finger contact point on the terminal screen at the end of the operation; if the shape adjustment operation is a mouse-based sliding operation, the adjustment parameter is the position parameter of the mouse pointer on the terminal screen at the end of the operation.
  • the adjustment parameters are expressed in position coordinates. By using the position parameters at the end of the shape adjustment operation to adjust the shape of the avatar component, the adjustment parameters can be quickly determined, which facilitates the subsequent adjustment process of the avatar component.
  • step 310 the terminal adjusts the target avatar component based on the adjustment parameter to obtain the target avatar of the target account.
  • the terminal determines the element point corresponding to the shape adjustment operation, and adjusts the position parameter of the element point to the adjustment parameter to obtain the target avatar of the target account.
  • the element adjustment point refers to an adjustment point corresponding to a shape adjustment operation, such as an element point corresponding to a finger contact point, or an element point corresponding to a mouse point.
  • when adjusting the shape based on the adjustment parameter, the terminal can also perform optimization processing on the trajectory curve of the target avatar component to ensure a smooth trajectory curve, so that the lines of the adjusted avatar component connect smoothly, which improves the visual effect of the avatar.
  • the terminal is also capable of performing symmetrical shape adjustment to the other side based on the shape adjustment to one side.
  • when the terminal detects that the target account has adjusted the shape of a first eye component (such as the left eye) in the initial avatar, it determines the position parameters of the second eye component (such as the right eye) according to the position parameters of the first eye component, and performs a symmetric shape adjustment on the second eye component, so that the terminal realizes symmetric adjustment of elements of the same type and improves the efficiency of shape adjustment.
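The symmetric adjustment can be sketched as mirroring the adjustment point across the face's vertical symmetry axis; the axis coordinate is an assumed parameter, since the embodiment only says the second component's position is derived from the first:

```python
def mirror_point(point, axis_x):
    """Mirror an adjustment point across the vertical symmetry axis of the
    face, so adjusting the left eye derives the right eye's adjustment."""
    x, y = point
    return (2 * axis_x - x, y)
```

Applying the same shape adjustment at the mirrored point reproduces the left-eye edit on the right eye without a second user operation.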
  • the terminal can display the shape change of the target avatar component along the operation trajectory of the shape adjustment operation.
  • if the shape adjustment operation is a finger-based sliding operation, the shape change of the target avatar component is displayed along the sliding track of the user's finger contact point on the terminal screen.
  • if the shape adjustment operation is a mouse-based sliding operation, the shape change of the target avatar component is displayed along the sliding track of the mouse pointer on the terminal screen.
  • step 311 the terminal generates a binary file of the target avatar.
  • binary files can be understood as binary pictures.
  • the terminal converts the picture string of the target avatar into a binary data format to obtain a binary file of the target avatar of the target account.
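Assuming the picture string is the base64 string produced in step 308, the conversion described in step 311 might look like the following sketch:

```python
import base64

def avatar_to_binary(picture_string):
    """Convert the base64 picture string of the target avatar back into
    binary data, ready to be carried in the storage request (step 312)."""
    return base64.b64decode(picture_string)
```

The resulting bytes are the "binary picture" that the storage request carries to the server.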
  • step 312 the terminal sends a storage request carrying the binary file to the server, where the storage request is used to instruct the server to store the binary file.
  • step 313 the server receives the storage request and stores the binary file.
  • after receiving the storage request sent by the terminal, the server stores the binary file in its hard disk, or stores the binary file in the avatar information database associated with the server.
  • the binary file of the avatar is generated and stored, which realizes the storage and recording of the avatar information, so that the avatar can be displayed quickly when the subsequent target account logs in again.
  • the technical solutions provided by the embodiments of the present disclosure provide a rich selection of avatar components for the user by displaying multiple avatar components.
  • the user selects and combines the multiple avatar components, the terminal generates an initial avatar according to the selected avatar components, and the user then adjusts the shape of the generated initial avatar to generate the target avatar, which realizes user-defined avatars; the customization steps are simple and convenient, and the human-computer interaction efficiency is high.
  • unique avatars can be determined, which effectively reduces the repetition of avatars.
  • Fig. 4 is a block diagram of an apparatus for generating an avatar according to an exemplary embodiment.
  • the apparatus includes a generating unit 401 , a determining unit 402 and an adjusting unit 403 .
  • the generating unit 401 is configured to generate an initial avatar of the target account based on a plurality of first avatar components
  • the determining unit 402 is configured to, in response to a shape adjustment operation of the target avatar component in the initial avatar, determine an adjustment parameter of the target avatar component, where the adjustment parameter is used to adjust the shape of the target avatar component;
  • the adjustment unit 403 is configured to adjust the target avatar component based on the adjustment parameter to obtain the target avatar.
  • the determining unit 402 includes:
  • a position determination subunit configured to, in response to the shape adjustment operation of the target avatar component in the initial avatar, determine the end position of the shape adjustment operation
  • the parameter determination subunit is configured to determine the position parameter of the end position as the adjustment parameter of the target avatar component.
  • the apparatus further includes a display unit configured to adjust the operation trajectory of the operation according to the shape, and display the shape change of the target avatar component along with the operation trajectory.
  • the apparatus further includes:
  • an interface display unit configured to display an avatar generation interface
  • the avatar generation interface includes a plurality of avatar components, the plurality of avatar components include a plurality of types of avatar components, and each type of avatar components includes at least one avatar component;
  • the component determination unit is configured to determine the plurality of first avatar components based on the selection operation in the avatar generation interface.
  • the interface display unit includes:
  • the first display subunit is configured to display, in the avatar generation interface, a plurality of avatar components corresponding to the attribute information according to the attribute information of the target account.
  • the interface display unit includes:
  • the second display subunit is configured to, in the avatar generation interface, display a plurality of avatar components corresponding to the avatar type according to the avatar type of the historical avatar of the target account.
  • the interface display unit is configured to display, in the avatar generation interface, the plurality of avatar components to the target account in the form of thumbnails;
  • the component determination unit is configured to determine the plurality of first avatar components based on the selection operation of the plurality of thumbnails in the avatar generation interface.
  • the apparatus further includes:
  • a first sending unit configured to send an acquisition request to the server, where the acquisition request is used to acquire the plurality of avatar components
  • the receiving unit is configured to receive the plurality of avatar components returned by the server based on the obtaining request.
  • the apparatus further includes a component presentation unit configured to perform:
  • the avatar component corresponding to the selection operation matches the selected at least one first avatar component, the avatar component corresponding to the selection operation is displayed.
  • the component display unit is further configured to display a second avatar component of the at least one second avatar component if the avatar component corresponding to the selection operation does not match the selected at least one first avatar component, where the at least one second avatar component corresponds to the at least one first avatar component.
  • the generating unit 401 includes:
  • a drawing position determination subunit configured to determine the drawing positions of the plurality of first avatar components in the target canvas based on the types of the plurality of first avatar components
  • the drawing subunit is configured to draw on the target canvas based on the plurality of first avatar components and corresponding drawing positions to obtain the initial avatar of the target account.
  • the apparatus further includes:
  • a file generation unit configured to generate a binary file of the target avatar
  • the second sending unit is configured to send a storage request carrying the binary file to the server, where the storage request is used to instruct the server to store the binary file.
  • the user selects and combines multiple avatar components to generate an initial avatar, and then adjusts the shape of the generated initial avatar to generate the target avatar, thereby realizing user-defined avatars; the customization steps are simple and convenient, and the human-computer interaction efficiency is high.
  • unique avatars can be determined, which effectively reduces the repetition of avatars.
  • when the avatar generating apparatus provided in the above embodiments generates an avatar, the division into the above functional modules is only used as an example; in practical applications, the internal structure of the device can be divided into different functional modules as required to complete all or part of the functions described above.
  • the avatar generating apparatus provided in the above embodiments and the avatar generating method embodiments belong to the same concept, and the specific implementation process thereof is detailed in the method embodiments, which will not be repeated here.
  • FIG. 5 is a block diagram of a terminal 500 according to an exemplary embodiment.
  • the terminal 500 may be a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop computer, or a desktop computer.
  • Terminal 500 may also be called user equipment, portable terminal, laptop terminal, desktop terminal, and the like by other names.
  • the terminal 500 includes: a processor 501 and a memory 502 .
  • the processor 501 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like.
  • the processor 501 can use at least one hardware form among DSP (Digital Signal Processing, digital signal processing), FPGA (Field-Programmable Gate Array, field programmable gate array), and PLA (Programmable Logic Array, programmable logic array).
  • the processor 501 may also include a main processor and a coprocessor.
  • the main processor is a processor used to process data in the wake-up state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor used to process data in the standby state.
  • the processor 501 may be integrated with a GPU (Graphics Processing Unit), which is used for rendering and drawing the content that needs to be displayed on the display screen.
  • the processor 501 may further include an AI (Artificial Intelligence) processor, where the AI processor is used to process computing operations related to machine learning.
  • Memory 502 may include one or more non-volatile computer-readable storage media, which may be non-transitory. Memory 502 may also include high-speed random access memory, as well as non-volatile memory, such as one or more disk storage devices, flash storage devices. In some embodiments, a non-transitory, non-volatile computer-readable storage medium in memory 502 is used to store at least one instruction for execution by processor 501 to implement the following steps:
  • the target avatar component is adjusted to obtain the target avatar.
  • the processor is configured to execute the program code, further for implementing the following steps:
  • the position parameter of the end position is determined as the adjustment parameter of the target avatar component.
  • the processor is configured to execute the program code, further for implementing the following steps:
  • the target avatar component changes in shape along the operation trajectory.
  • the processor is configured to execute the program code, further for implementing the following steps:
  • the avatar generation interface includes a plurality of avatar components, the plurality of avatar components include multiple types of avatar components, and each type of avatar component includes at least one avatar component;
  • the plurality of first avatar components are determined based on the selection operation in the avatar generation interface.
  • the processor is configured to execute the program code, further for implementing the following steps:
  • the avatar generation interface according to the attribute information of the target account, a plurality of avatar components corresponding to the attribute information are displayed.
  • the processor is configured to execute the program code, further for implementing the following steps:
  • the avatar generation interface according to the avatar type of the historical avatar of the target account, multiple avatar components corresponding to the avatar type are displayed.
  • the processor is configured to execute the program code, further for implementing the following steps:
  • the plurality of avatar components are displayed in the form of thumbnails
  • the plurality of first avatar components are determined based on the selection operation of the plurality of thumbnails in the avatar generation interface.
  • the processor is configured to execute the program code, further for implementing the following steps:
  • the avatar component corresponding to the selection operation matches the selected at least one first avatar component, the avatar component corresponding to the selection operation is displayed.
  • the processor is configured to execute the program code, further for implementing the following steps:
  • if the avatar component corresponding to the selection operation does not match the at least one first avatar component, a second avatar component of the at least one second avatar component is displayed, and the at least one second avatar component corresponds to the at least one first avatar component.
  • the processor is configured to execute the program code, further for implementing the following steps:
  • drawing is performed on the target canvas to obtain the initial avatar.
  • the processor is configured to execute the program code, further for implementing the following steps:
  • a storage request carrying the binary file is sent to the server, where the storage request is used to instruct the server to store the binary file.
  • the terminal 500 may optionally further include: a peripheral device interface 503 and at least one peripheral device.
  • the processor 501, the memory 502 and the peripheral device interface 503 may be connected through a bus or a signal line.
  • Each peripheral device can be connected to the peripheral device interface 503 through a bus, a signal line or a circuit board.
  • the peripheral device includes: at least one of a radio frequency circuit 504 , a display screen 505 , a camera assembly 506 , an audio circuit 507 , a positioning assembly 508 and a power supply 509 .
  • the peripheral device interface 503 may be used to connect at least one peripheral device related to I/O (Input/Output) to the processor 501 and the memory 502 .
  • in some embodiments, the processor 501, the memory 502, and the peripheral device interface 503 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 501, the memory 502, and the peripheral device interface 503 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
  • the radio frequency circuit 504 is used for receiving and transmitting RF (Radio Frequency, radio frequency) signals, also called electromagnetic signals.
  • the radio frequency circuit 504 communicates with the communication network and other communication devices via electromagnetic signals.
  • the radio frequency circuit 504 converts electrical signals into electromagnetic signals for transmission, or converts received electromagnetic signals into electrical signals.
  • radio frequency circuitry 504 includes: an antenna system, an RF transceiver, one or more amplifiers, tuners, oscillators, digital signal processors, codec chipsets, subscriber identity module cards, and the like.
  • Radio frequency circuitry 504 may communicate with other terminals via at least one wireless communication protocol.
  • the wireless communication protocol includes but is not limited to: metropolitan area network, mobile communication networks of various generations (2G, 3G, 4G and 5G), wireless local area network and/or WiFi (Wireless Fidelity, wireless fidelity) network.
  • the radio frequency circuit 504 may further include a circuit related to NFC (Near Field Communication, short-range wireless communication), which is not limited in the present disclosure.
  • the display screen 505 is used for displaying UI (User Interface, user interface).
  • the UI can include graphics, text, icons, video, and any combination thereof.
  • the display screen 505 also has the ability to acquire touch signals on or above the surface of the display screen 505 .
  • the touch signal may be input to the processor 501 as a control signal for processing.
  • the display screen 505 may also be used to provide virtual buttons and/or virtual keyboards, also referred to as soft buttons and/or soft keyboards.
  • in some embodiments, there is one display screen 505, which is arranged on the front panel of the terminal 500; in other embodiments, there are at least two display screens 505, which are respectively arranged on different surfaces of the terminal 500 or adopt a folded design; in still other embodiments, the display screen 505 is a flexible display screen disposed on a curved or folding surface of the terminal 500. The display screen 505 can even be set as a non-rectangular irregular figure, that is, a special-shaped screen.
  • the display screen 505 can be prepared by using materials such as LCD (Liquid Crystal Display, liquid crystal display), OLED (Organic Light-Emitting Diode, organic light emitting diode).
  • the camera assembly 506 is used to capture images or video.
  • camera assembly 506 includes a front-facing camera and a rear-facing camera.
  • the front camera is arranged on the front panel of the terminal, and the rear camera is arranged on the back of the terminal.
  • there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize a background blur function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting, VR (Virtual Reality) shooting, or other fused shooting functions.
  • the camera assembly 506 may also include a flash.
  • the flash can be a single color temperature flash or a dual color temperature flash. Dual color temperature flash refers to the combination of warm light flash and cold light flash, which can be used for light compensation under different color temperatures.
  • Audio circuitry 507 may include a microphone and speakers.
  • the microphone is used to collect the sound waves of the user and the environment, convert the sound waves into electrical signals and input them to the processor 501 for processing, or input them to the radio frequency circuit 504 to realize voice communication.
  • the microphone may also be an array microphone or an omnidirectional collection microphone.
  • the speaker is used to convert the electrical signal from the processor 501 or the radio frequency circuit 504 into sound waves.
  • the loudspeaker can be a traditional thin-film loudspeaker or a piezoelectric ceramic loudspeaker.
  • when the speaker is a piezoelectric ceramic speaker, it can not only convert electrical signals into sound waves audible to humans, but also convert electrical signals into sound waves inaudible to humans for purposes such as distance measurement.
  • the audio circuit 507 may also include a headphone jack.
  • the positioning component 508 is used to locate the current geographic location of the terminal 500 to implement navigation or LBS (Location Based Service).
  • the positioning component 508 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
  • the power supply 509 is used to power various components in the terminal 500 .
  • the power source 509 may be alternating current, direct current, disposable batteries or rechargeable batteries.
  • the rechargeable battery can support wired charging or wireless charging.
  • the rechargeable battery can also be used to support fast charging technology.
  • the terminal 500 also includes one or more sensors 510 .
  • the one or more sensors 510 include, but are not limited to, an acceleration sensor 511 , a gyro sensor 512 , a pressure sensor 513 , a fingerprint sensor 514 , an optical sensor 515 and a proximity sensor 516 .
  • the acceleration sensor 511 can detect the magnitude of acceleration on the three coordinate axes of the coordinate system established by the terminal 500 .
  • the acceleration sensor 511 can be used to detect the components of the gravitational acceleration on the three coordinate axes.
  • the processor 501 may control the display screen 505 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 511 .
  • the acceleration sensor 511 can also be used for game or user movement data collection.
  • the gyroscope sensor 512 can detect the body direction and rotation angle of the terminal 500 , and the gyroscope sensor 512 can cooperate with the acceleration sensor 511 to collect 3D actions of the user on the terminal 500 .
  • the processor 501 can implement the following functions according to the data collected by the gyro sensor 512 : motion sensing (such as changing the UI according to the user's tilt operation), image stabilization during shooting, game control, and inertial navigation.
  • the pressure sensor 513 may be disposed on the side frame of the terminal 500 and/or the lower layer of the display screen 505 .
  • the processor 501 can perform left and right hand identification or quick operation according to the holding signal collected by the pressure sensor 513.
  • the processor 501 controls the operability controls on the UI interface according to the user's pressure operation on the display screen 505.
  • the operability controls include at least one of button controls, scroll bar controls, icon controls, and menu controls.
  • the fingerprint sensor 514 is used to collect the user's fingerprint, and the processor 501 identifies the user's identity according to the fingerprint collected by the fingerprint sensor 514, or the fingerprint sensor 514 identifies the user's identity according to the collected fingerprint. When the user's identity is identified as a trusted identity, the processor 501 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, and changing settings.
  • the fingerprint sensor 514 may be disposed on the front, back or side of the terminal 500 . When the terminal 500 is provided with physical buttons or a manufacturer's logo, the fingerprint sensor 514 may be integrated with the physical buttons or the manufacturer's logo.
  • Optical sensor 515 is used to collect ambient light intensity.
  • the processor 501 may control the display brightness of the display screen 505 according to the ambient light intensity collected by the optical sensor 515 . Specifically, when the ambient light intensity is high, the display brightness of the display screen 505 is increased; when the ambient light intensity is low, the display brightness of the display screen 505 is decreased.
  • the processor 501 may also dynamically adjust the shooting parameters of the camera assembly 506 according to the ambient light intensity collected by the optical sensor 515 .
  • the proximity sensor 516, also called a distance sensor, is usually provided on the front panel of the terminal 500.
  • the proximity sensor 516 is used to collect the distance between the user and the front of the terminal 500 .
  • when the proximity sensor 516 detects that the distance between the user and the front of the terminal 500 gradually decreases, the processor 501 controls the display screen 505 to switch from the bright-screen state to the off-screen state; when the proximity sensor 516 detects that the distance between the user and the front of the terminal 500 gradually increases, the processor 501 controls the display screen 505 to switch from the off-screen state to the bright-screen state.
  • FIG. 5 does not constitute a limitation on the terminal 500, and may include more or less components than the one shown, or combine some components, or adopt different component arrangements.
  • FIG. 6 is a block diagram of a server according to an exemplary embodiment.
  • the server 600 may vary greatly due to different configurations or performance, and may include one or more processors (Central Processing Units, CPU) 601 and one or more memories 602, wherein at least one piece of program code is stored in the one or more memories 602, and the at least one piece of program code is loaded and executed by the one or more processors 601 to implement the avatar generation methods provided by the above method embodiments.
  • the server 600 may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface for input and output, and the server 600 may also include other components for realizing device functions, which will not be repeated here.
  • a storage medium including program code is also provided, such as a memory 602 including program code, and the program code can be executed by the processor 601 of the server 600 to implement the following steps: generating an initial avatar based on a plurality of first avatar components; in response to a shape adjustment operation on a target avatar component in the initial avatar, determining an adjustment parameter of the target avatar component, where the adjustment parameter is used to adjust the shape of the target avatar component; and adjusting the target avatar component based on the adjustment parameter to obtain a target avatar.
  • the storage medium may be a non-transitory non-volatile computer-readable storage medium; for example, the non-transitory non-volatile computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
  • a computer program product comprising a computer program that, when executed by a processor, implements the following steps: generating an initial avatar based on a plurality of first avatar components; in response to the initial avatar In the shape adjustment operation of the target avatar component, the adjustment parameters of the target avatar component are determined, and the adjustment parameters are used to adjust the shape of the target avatar component; based on the adjustment parameters, the target avatar component is adjusted to obtain the target avatar.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present disclosure relates to an avatar generation method and device. The method includes: generating an initial avatar based on a plurality of first avatar components; in response to a shape adjustment operation on a target avatar component in the initial avatar, determining an adjustment parameter of the target avatar component; and adjusting the target avatar component based on the adjustment parameter to obtain a target avatar.

Description

Avatar Generation Method and Device
The present disclosure is filed based on, and claims priority to, Chinese patent application No. 202011016905.X filed on September 24, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the field of Internet technologies, and in particular to an avatar generation method, apparatus, device, and storage medium.
Background
With the rapid development of computer technology and the mobile Internet, a wide variety of websites has gradually emerged. A user can access a website through a browser and implement corresponding business functions by browsing the website's pages. Usually, the user needs to register an account for the website and then log in to that account when accessing the website to use more business functions. When registering an account, the user can also register an avatar of their own, which serves to identify them.
At present, existing websites usually provide the user with several default avatars. When the user wants to register an avatar, the user selects one of these default avatars and sets it as their own avatar.
Summary
According to a first aspect of embodiments of the present disclosure, an avatar generation method is provided, the method including:
generating an initial avatar based on a plurality of first avatar components;
in response to a shape adjustment operation on a target avatar component in the initial avatar, determining an adjustment parameter of the target avatar component, the adjustment parameter being used to adjust the shape of the target avatar component;
adjusting the target avatar component based on the adjustment parameter to obtain a target avatar.
In some embodiments, the determining, in response to the shape adjustment operation on the target avatar component in the initial avatar, the adjustment parameter of the target avatar component includes:
in response to the shape adjustment operation on the target avatar component in the initial avatar, determining an end position of the shape adjustment operation;
determining a position parameter of the end position as the adjustment parameter of the target avatar component.
In some embodiments, the method further includes:
displaying, according to an operation track of the shape adjustment operation, the shape of the target avatar component changing with the operation track.
In some embodiments, the method further includes:
displaying an avatar generation interface, the avatar generation interface including a plurality of avatar components, the plurality of avatar components including avatar components of a plurality of types, and the avatar components of each type including at least one avatar component;
determining the plurality of first avatar components based on a selection operation in the avatar generation interface.
In some embodiments, the displaying the avatar generation interface includes:
displaying, in the avatar generation interface, a plurality of avatar components corresponding to attribute information of a target account according to the attribute information.
In some embodiments, the displaying the avatar generation interface includes:
displaying, in the avatar generation interface, a plurality of avatar components corresponding to an avatar type of a historical avatar of the target account according to the avatar type.
In some embodiments, the displaying the avatar generation interface includes:
displaying, in the avatar generation interface, the plurality of avatar components in the form of thumbnails;
the determining the plurality of first avatar components based on the selection operation in the avatar generation interface includes:
determining the plurality of first avatar components based on selection operations on a plurality of thumbnails in the avatar generation interface.
In some embodiments, the method further includes:
sending an acquisition request to a server, the acquisition request being used to acquire the plurality of avatar components;
receiving the plurality of avatar components returned by the server based on the acquisition request.
In some embodiments, the method further includes:
if the avatar component corresponding to the selection operation matches at least one selected first avatar component, displaying the avatar component corresponding to the selection operation.
In some embodiments, the method further includes:
if the avatar component corresponding to the selection operation does not match the at least one first avatar component, displaying one second avatar component of at least one second avatar component, the at least one second avatar component corresponding to the at least one first avatar component.
In some embodiments, the generating the initial avatar based on the plurality of first avatar components includes:
determining drawing positions of the plurality of first avatar components in a target canvas based on the types of the plurality of first avatar components;
drawing in the target canvas based on the plurality of first avatar components and the corresponding drawing positions to obtain the initial avatar.
In some embodiments, the method further includes:
generating a binary file of the target avatar;
sending a storage request carrying the binary file to a server, the storage request being used to instruct the server to store the binary file.
According to a second aspect of embodiments of the present disclosure, an avatar generation apparatus is provided, the apparatus including:
a generation unit configured to generate an initial avatar of the target account based on a plurality of first avatar components;
a determination unit configured to, in response to a shape adjustment operation on a target avatar component in the initial avatar, determine an adjustment parameter of the target avatar component, the adjustment parameter being used to adjust the shape of the target avatar component;
an adjustment unit configured to adjust the target avatar component based on the adjustment parameter to obtain a target avatar.
In some embodiments, the determination unit includes:
a position determination subunit configured to, in response to the shape adjustment operation on the target avatar component in the initial avatar, determine an end position of the shape adjustment operation;
a parameter determination subunit configured to determine a position parameter of the end position as the adjustment parameter of the target avatar component.
In some embodiments, the apparatus further includes a display unit configured to display, according to an operation track of the shape adjustment operation, the shape of the target avatar component changing with the operation track.
In some embodiments, the apparatus further includes:
an interface display unit configured to display an avatar generation interface, the avatar generation interface including a plurality of avatar components, the plurality of avatar components including avatar components of a plurality of types, and the avatar components of each type including at least one avatar component;
a component determination unit configured to determine the plurality of first avatar components based on a selection operation in the avatar generation interface.
In some embodiments, the interface display unit includes:
a first display subunit configured to display, in the avatar generation interface, a plurality of avatar components corresponding to attribute information of a target account according to the attribute information.
In some embodiments, the interface display unit includes:
a second display subunit configured to display, in the avatar generation interface, a plurality of avatar components corresponding to an avatar type of a historical avatar of the target account according to the avatar type.
In some embodiments, the interface display unit is configured to display, in the avatar generation interface, the plurality of avatar components in the form of thumbnails;
the component determination unit is configured to determine the plurality of first avatar components based on selection operations on a plurality of thumbnails in the avatar generation interface.
In some embodiments, the apparatus further includes:
a first sending unit configured to send an acquisition request to a server, the acquisition request being used to acquire the plurality of avatar components;
a receiving unit configured to receive the plurality of avatar components returned by the server based on the acquisition request.
In some embodiments, the apparatus further includes a component display unit configured to:
display, if the avatar component corresponding to the selection operation matches at least one selected first avatar component, the avatar component corresponding to the selection operation.
In some embodiments, the component display unit is further configured to, if the avatar component corresponding to the selection operation does not match the at least one first avatar component, display one second avatar component of at least one second avatar component, the at least one second avatar component corresponding to the at least one first avatar component.
In some embodiments, the generation unit includes:
a drawing position determination subunit configured to determine, based on the types of the plurality of first avatar components, drawing positions of the plurality of first avatar components in a target canvas;
a drawing subunit configured to draw in the target canvas based on the plurality of first avatar components and the corresponding drawing positions to obtain the initial avatar of the target account.
In some embodiments, the apparatus further includes:
a file generation unit configured to generate a binary file of the target avatar;
a second sending unit configured to send a storage request carrying the binary file to a server, the storage request being used to instruct the server to store the binary file.
According to a third aspect of embodiments of the present disclosure, an electronic device is provided, the electronic device including:
one or more processors;
a memory for storing program code executable by the processor;
wherein the processor is configured to execute the program code to implement the following steps:
generating an initial avatar based on a plurality of first avatar components;
in response to a shape adjustment operation on a target avatar component in the initial avatar, determining an adjustment parameter of the target avatar component, the adjustment parameter being used to adjust the shape of the target avatar component;
adjusting the target avatar component based on the adjustment parameter to obtain a target avatar.
According to a fourth aspect of embodiments of the present disclosure, a storage medium is provided. When the program code in the storage medium is executed by a processor of an electronic device, the electronic device is enabled to implement the following steps:
generating an initial avatar based on a plurality of first avatar components;
in response to a shape adjustment operation on a target avatar component in the initial avatar, determining an adjustment parameter of the target avatar component, the adjustment parameter being used to adjust the shape of the target avatar component;
adjusting the target avatar component based on the adjustment parameter to obtain a target avatar.
According to a fifth aspect of embodiments of the present disclosure, a computer program product is provided. The computer program product includes computer program code stored in a non-volatile computer-readable storage medium. A processor of an electronic device reads the computer program code from the non-volatile computer-readable storage medium and executes it, so that the electronic device implements the following steps:
generating an initial avatar based on a plurality of first avatar components;
in response to a shape adjustment operation on a target avatar component in the initial avatar, determining an adjustment parameter of the target avatar component, the adjustment parameter being used to adjust the shape of the target avatar component;
adjusting the target avatar component based on the adjustment parameter to obtain a target avatar.
According to the embodiments of the present disclosure, a user-defined avatar function is implemented; the way of generating an avatar is simple and convenient, which improves human-computer interaction efficiency.
Brief Description of the Drawings
The accompanying drawings herein are incorporated into and constitute a part of this specification; they illustrate embodiments consistent with the present disclosure and, together with the specification, serve to explain the principles of the present disclosure without unduly limiting it.
FIG. 1 is a schematic diagram of an implementation environment of an avatar generation method according to an exemplary embodiment;
FIG. 2 is a flowchart of an avatar generation method according to an exemplary embodiment;
FIG. 3 is a flowchart of an avatar generation method according to an exemplary embodiment;
FIG. 4 is a block diagram of an avatar generation apparatus according to an exemplary embodiment;
FIG. 5 is a block diagram of a terminal according to an exemplary embodiment;
FIG. 6 is a block diagram of a server according to an exemplary embodiment.
Detailed Description
To enable those of ordinary skill in the art to better understand the technical solutions of the present disclosure, the technical solutions in the embodiments of the present disclosure are described clearly and completely below with reference to the accompanying drawings.
It should be noted that the terms "first", "second", and the like in the specification, claims, and drawings of the present disclosure are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of the present disclosure described herein can be implemented in orders other than those illustrated or described here. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
The information involved in the present disclosure may be information authorized by the user or fully authorized by all parties.
FIG. 1 is a schematic diagram of an implementation environment of an avatar generation method provided by an embodiment of the present disclosure. Referring to FIG. 1, the implementation environment includes a terminal 101 and a server 102.
The terminal 101 may be at least one of a smartphone, a smart watch, a portable computer, a vehicle-mounted terminal, and other devices. The terminal 101 has a communication function and can access the Internet. The terminal 101 may generally refer to one of a plurality of terminals; this embodiment uses the terminal 101 as an example, and those skilled in the art will appreciate that the number of such terminals may be larger or smaller. The terminal 101 can run various browsers or applications. The user can start a browser or an application on the terminal and, by logging in to a user account on a website in the browser or in the application, perform subsequent business operations to implement corresponding business functions; for example, through the website or application, the user can shop online, play videos, chat socially, and so on. The website or application supports setting a user avatar.
The server 102 may be an independent physical server, a server cluster or distributed file system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a Content Delivery Network (CDN), and big data and artificial intelligence platforms. The server 102 is associated with an avatar information database, which is used to store the correspondence between the identifiers of a plurality of avatar components and those avatar components. The server 102 and the terminal 101 may be connected directly or indirectly through wired or wireless communication, which is not limited in the embodiments of the present disclosure. In some embodiments, the number of such servers 102 may be larger or smaller, which is also not limited in the embodiments of the present disclosure. Of course, the server 102 may also include other functional servers to provide more comprehensive and diversified services.
In implementing the embodiments of the present disclosure, the method is performed jointly by the terminal 101 and the server 102. When a user wants to register an avatar, the user logs in to a user account on a website page in a browser or in an application and clicks a generate-avatar option on the website page on the terminal 101. In response to the user's click operation, the terminal 101 is triggered to send the server 102 an acquisition request for avatar components, so as to acquire an avatar generation interface containing avatar components, and displays the avatar generation interface. After receiving the acquisition request, the server 102 obtains, from the avatar information database, the plurality of avatar components corresponding to the acquisition request and sends them to the terminal 101. The terminal 101 thereby obtains the avatar components and generates the avatar of the user account using the avatar generation method provided by the embodiments of the present disclosure. Hereinafter, the term "target account" denotes the user account for which an avatar is to be registered. An avatar component may also be called an avatar component element.
FIG. 2 is a flowchart of an avatar generation method according to an exemplary embodiment. As shown in FIG. 2, the method includes the following steps:
In step 201, the terminal generates an initial avatar based on a plurality of first avatar components.
The plurality of first avatar components are avatar components selected by the target account, and the target account is the user account logged in on the terminal. The terminal can generate the initial avatar of the target account based on the plurality of first avatar components selected by the target account.
In step 202, in response to a shape adjustment operation on a target avatar component in the initial avatar, the terminal determines an adjustment parameter of the target avatar component, the adjustment parameter being used to adjust the shape of the target avatar component.
The target avatar component is any first avatar component in the initial avatar. In response to the target account's shape adjustment operation on any first avatar component in the initial avatar, the terminal determines the adjustment parameter of that first avatar component, the adjustment parameter being used to adjust the shape of that first avatar component.
In step 203, the terminal adjusts the target avatar component based on the adjustment parameter to obtain a target avatar.
In other words, the terminal can adjust that first avatar component according to the adjustment parameter to obtain the target avatar of the target account.
In the technical solution provided by the embodiments of the present disclosure, the user picks and combines among a plurality of avatar components to generate an initial avatar, and then adjusts the shape of the generated initial avatar to generate a target avatar. This implements user-defined avatars; the steps of customizing an avatar are simple and fast, and human-computer interaction is efficient. Moreover, this avatar generation method effectively reduces duplicate avatars.
FIG. 2 above shows the basic flow of the present disclosure; the solution provided by the present disclosure is further elaborated below based on a specific implementation. FIG. 3 is a flowchart of an avatar generation method according to an exemplary embodiment. Referring to FIG. 3, the method includes:
In step 301, the terminal sends an acquisition request to the server, the acquisition request being used to acquire a plurality of avatar components.
In other words, the terminal can send the server an acquisition request for a plurality of avatar components. An avatar component, also called an avatar component element, is an element needed to compose an avatar, for example, hairstyle, face shape, facial features, whether glasses are worn, and so on. The acquisition request is used to instruct the server to return avatar components, and the terminal displays the plurality of avatar components returned by the server.
In some embodiments, when a user wants to register an avatar, the user can log in to the target account on a website page in a browser or in an application and then click a generate-avatar option on the website page on the terminal. In response to the user's click operation, the terminal is triggered to send the server an acquisition request for avatar components, so as to acquire an avatar generation interface containing avatar components, and displays the avatar generation interface. The acquisition request carries the target account.
In step 302, the server receives the acquisition request, determines the plurality of avatar components corresponding to the acquisition request, and returns the plurality of avatar components to the terminal.
The plurality of avatar components include avatar components of a plurality of types, the avatar components of each type include at least one avatar component, and avatar components of the same type may be avatar components of different styles. For example, the types of avatar components include hairstyle, face shape, facial features (eyebrow shape, eyes, nose), hair accessories, and the like; the hairstyle type includes styles such as long curly hair, short curly hair, long straight hair, short straight hair, and hairstyles of different colors, while the face-shape type includes styles such as round, long, and square faces. The embodiments of the present disclosure provide avatar components of many types and styles, so that a rich choice of avatar components is available when avatars are subsequently generated, which can satisfy users' personalized selection needs.
In some embodiments, after receiving the acquisition request, the server can obtain the target account carried in the acquisition request, obtain avatar components from the avatar information database associated with the server, and then send the obtained avatar components to the terminal where the target account is located. The avatar information database is used to store the correspondence between the identifiers of the plurality of avatar components and those avatar components. The server can send the avatar components to the terminal in the form of a data package, which can also include the correspondence between the identifiers of the plurality of avatar components and the avatar components themselves.
It should be noted that, before implementing this solution, technicians can define avatar components of many types and styles in advance, generate a unique identifier ID (Identification) for each avatar component through MD5 (Message Digest Algorithm 5) or another algorithm, and store the identifiers of the plurality of avatar components in the avatar information database in correspondence with those avatar components.
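This pre-processing step can be sketched in Python as follows, assuming each component can be identified by hashing its type and style strings with MD5 as described above; the component names and fields are illustrative, not the patent's actual schema:

```python
import hashlib

def component_id(component_type: str, style: str) -> str:
    """Derive a stable, unique identifier for an avatar component by
    MD5-hashing its type and style (a hypothetical keying scheme)."""
    digest = hashlib.md5(f"{component_type}:{style}".encode("utf-8"))
    return digest.hexdigest()

# Build the identifier -> component mapping that the avatar
# information database would store.
components = [("hairstyle", "long-curly"), ("face", "round"), ("eyes", "almond")]
id_table = {component_id(t, s): (t, s) for t, s in components}
```

Because MD5 digests are 32 hex characters, every component gets a fixed-length identifier, and identical (type, style) pairs always map to the same ID.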
In step 303, the terminal receives the plurality of avatar components returned by the server based on the acquisition request.
In some embodiments, after receiving the plurality of avatar components returned by the server, the terminal stores them locally in the browser or the application.
In step 304, the terminal displays an avatar generation interface, the avatar generation interface including a plurality of avatar components, the plurality of avatar components including avatar components of a plurality of types, and the avatar components of each type including at least one avatar component.
In some embodiments, after obtaining the plurality of avatar components, the terminal scales them to obtain thumbnails of the plurality of avatar components and then displays the plurality of avatar components in thumbnail form in the avatar generation interface. Displaying avatar components as thumbnails allows one page to show more avatar components, which is more convenient for the user to browse and improves human-computer interaction efficiency.
It should be noted that, after determining the thumbnails, the terminal can also generate the correspondence between avatar components and thumbnails, which facilitates subsequently determining the selected avatar component and further improves human-computer efficiency.
In some embodiments, after determining the thumbnails of the plurality of avatar components, the terminal generates identifiers for the thumbnails and establishes the correspondence between thumbnail identifiers and avatar components; based on this correspondence, the terminal can determine the avatar component corresponding to each thumbnail.
In some embodiments, after determining the thumbnails of the plurality of avatar components, the terminal adds a hyperlink to each thumbnail that points to the original image of the avatar component corresponding to that thumbnail; based on the hyperlink in each thumbnail, the terminal can obtain the avatar component corresponding to each thumbnail. The original image of an avatar component is the avatar component itself, and the identifier corresponding to the original image is the identifier of the avatar component. Adding hyperlinks also makes it possible to show the user an enlarged version of a thumbnail: if the user selects a thumbnail, the terminal, in response to the user's selection operation in the avatar generation interface, displays the original image of the avatar component pointed to by the hyperlink in that thumbnail, achieving an enlarged display of the thumbnail.
Step 304 above is the process by which the terminal displays all the avatar components. The terminal can also display avatar components selectively, for example, according to the attribute information of the target account. In some embodiments, the process by which the terminal displays the plurality of avatar components to the target account includes either of the following:
In some embodiments, the terminal determines, among the plurality of avatar components and according to the attribute information of the target account, the plurality of avatar components corresponding to that attribute information, and then displays them in the avatar generation interface. Attribute information refers to the profile information of the target account, such as gender, age, and occupation. Taking gender as an example, if the terminal determines that the gender of the target account is male, it displays the plurality of avatar components corresponding to male for the target account. The terminal can display the corresponding avatar components according to different accounts' attribute information, that is, show the user the avatar components they need rather than all avatar components. This keeps the page intuitive and concise, makes it easy for the user to quickly find the avatar components they want to use, improves human-computer interaction efficiency, and avoids the long browsing times caused by displaying too many avatar components.
In some embodiments, the process by which the terminal determines avatar components according to attribute information includes: the terminal determines, among the plurality of avatar components and according to the attribute information of the target account and the identifiers of the plurality of avatar components, the plurality of avatar components corresponding to that attribute information. The identifier of an avatar component contains a first character string, and the first character string is used to represent attribute information.
For example, taking gender information as an example, suppose the flag 1 represents male and the flag 0 represents female. If the attribute information of the target account is male, the terminal determines, among the plurality of avatar components, those whose first character string carries the flag 1, thereby obtaining the plurality of avatar components corresponding to that attribute information. Determining the avatar components corresponding to attribute information through the first character string in the component identifiers makes it possible to quickly determine the avatar components corresponding to different attributes, which increases the speed of determining avatar components.
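A minimal sketch of this flag-based filtering, assuming (as in the example above) that the attribute flag is stored at the start of the component identifier; the identifiers and component names below are hypothetical:

```python
def filter_by_attribute(id_table: dict, flag: str) -> list:
    """Keep only components whose identifier starts with the given
    attribute flag ('1' = male, '0' = female in the example above)."""
    return [comp for cid, comp in id_table.items() if cid.startswith(flag)]

id_table = {
    "1-beard-short": "short beard",   # '1' marks male components
    "1-hair-crew": "crew cut",
    "0-hair-long": "long hair",       # '0' marks female components
}
assert filter_by_attribute(id_table, "1") == ["short beard", "crew cut"]
```

A prefix check like this is a single string comparison per component, which is why filtering by a flag embedded in the identifier is fast.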
In some embodiments, the process by which the terminal determines avatar components according to attribute information includes: the terminal determines, among the plurality of avatar components and according to the attribute information of the target account and the correspondence between attribute flags and the plurality of avatar components, the plurality of avatar components corresponding to that attribute information. Attribute information is represented by an attribute flag. In some embodiments, the data package received by the terminal (see step 302) also contains the correspondence between attribute flags and avatar components.
For example, taking gender information as an example, if the attribute information of the target account is male, then according to the male flag and the correspondence between gender flags and avatar components, the plurality of avatar components corresponding to the male flag are determined, so the terminal obtains the plurality of avatar components corresponding to that attribute information.
Determining the avatar components corresponding to attribute information based on this correspondence can likewise quickly determine the avatar components corresponding to different attributes, which increases the speed of determining avatar components.
Both of the above implementations can quickly determine the avatar components corresponding to different attributes, satisfying the user's need to display the avatar components corresponding to attribute information without reducing processing efficiency.
In some embodiments, the terminal can determine, among the plurality of avatar components and according to the avatar type of the historical avatar of the target account, the plurality of avatar components corresponding to that avatar type, and display them in the avatar generation interface. The avatar type refers to the style type of the avatar.
For example, if the terminal determines, according to the avatar type of the target account, that the avatar type is an anime (2D) type, it displays the plurality of avatar components corresponding to the anime type for the target account.
By displaying the corresponding plurality of avatar components according to different accounts' avatar types, the terminal can show the user the avatar components they are interested in without displaying all of them. This keeps the web page intuitive and concise, makes it easy for the user to quickly find the avatar components they want to use, improves human-computer interaction efficiency, and avoids the long browsing times caused by displaying every element.
The embodiments of the present disclosure do not limit which of these ways is chosen to display avatar components. The process of determining avatar components according to the avatar type is similar to the process of determining avatar components according to attribute information and is not repeated here.
It should be noted that the above process of determining corresponding avatar components according to attribute information or avatar type is described with the terminal as the executing entity. In some embodiments, the process can also be performed by the server: the server determines, in the avatar information database and according to the attribute information or avatar type, the avatar components corresponding to that attribute information or avatar type, then returns the determined avatar components to the terminal, which displays them in the avatar generation interface. The avatar information database stores the correspondence among attribute flags, avatar types, and avatar components. Since the server does not need to send all the avatar information to the terminal, the terminal's storage and processing pressure is relieved and its processing efficiency is improved.
In step 305, the terminal determines a plurality of first avatar components based on selection operations in the avatar generation interface.
A first avatar component denotes an avatar component selected by the user.
In some embodiments, while browsing the plurality of avatar components in the avatar generation interface, the user performs selection operations on the avatar components they want to use through the terminal. In response to the target account's selection operation on any thumbnail in the avatar generation interface, the terminal determines the avatar component corresponding to that thumbnail as a first avatar component, that is, the selected first avatar component is determined.
In some embodiments, the process by which the terminal determines the avatar component corresponding to a thumbnail includes: in response to the target account's selection operation on any thumbnail in the avatar generation interface, the terminal obtains the identifier of the selected thumbnail and, according to that thumbnail identifier and the correspondence between thumbnail identifiers and avatar components, determines the avatar component corresponding to that thumbnail identifier, thereby determining the first avatar component corresponding to the thumbnail. Through the correspondence between thumbnail identifiers and avatar components, avatar components can be determined quickly, which improves the efficiency of determining first avatar components and thus the efficiency of avatar generation and of human-computer interaction.
In some embodiments, the process by which the terminal determines the avatar component corresponding to a thumbnail includes: in response to the target account's selection operation on any thumbnail in the avatar generation interface, the terminal determines, according to the hyperlink in the thumbnail, the original image of the avatar component that the hyperlink points to, obtains the identifier of the original image, and, according to that identifier and the correspondence between identifiers and avatar components, determines the avatar component, that is, the first avatar component corresponding to the thumbnail. Determining the first avatar component through hyperlinks can likewise quickly determine the avatar component, improving the efficiency of determining first avatar components and thus of avatar generation. The embodiments of the present disclosure do not limit which way is chosen to determine the first avatar components.
In step 306, the terminal judges whether the avatar component corresponding to the selection operation matches at least one selected first avatar component; if it matches, the terminal displays the avatar component corresponding to the selection operation.
In other words, the terminal can judge whether the newly selected avatar component matches the at least one first avatar component already selected by the target account; if it matches, the newly selected avatar component is displayed as a first avatar component.
Whether avatar components match refers to whether the style type of the newly selected avatar component matches that of the at least one already-selected first avatar component. If the style types match, the newly selected avatar component and the at least one already-selected first avatar component belong to the same style type, for example, an anime-style head shape and an anime-style hairstyle. If they do not match, they belong to different style types, for example, an anime-style head shape and a cartoon-style hairstyle.
In some embodiments, after determining the newly selected avatar component, the terminal can judge, according to a preset rule, whether the newly selected avatar component matches the already-selected avatar components. The rule is the correspondence between avatar components and associated avatar components: if the correspondence contains a correspondence between the newly selected avatar component and the at least one already-selected first avatar component, it is determined that the newly selected avatar component matches the at least one selected first avatar component; if it does not, it is determined that they do not match. An associated avatar component denotes an avatar component that matches a given avatar component.
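A minimal sketch of this rule-based match check, assuming the correspondence is kept as an in-memory mapping from a component ID to the IDs of its associated components; IDA1/IDA2/IDA3 follow the illustrative IDs of Table 1:

```python
# Correspondence rule: component ID -> IDs of its associated
# (style-compatible) components.
ASSOCIATIONS = {
    "IDA1": {"IDA2", "IDA3"},
    "IDA2": {"IDA1"},
}

def matches(new_id: str, selected_ids: list) -> bool:
    """The newly selected component matches only if every
    already-selected component appears among its associated
    components (i.e., shares the same style family)."""
    return all(sid in ASSOCIATIONS.get(new_id, set()) for sid in selected_ids)
```

With this check, `matches("IDA1", ["IDA2"])` succeeds while a component outside the association set is rejected, which is the branch that triggers the replacement logic described below.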
It should be noted that the avatar information database can also be used to store the correspondence between avatar components and associated avatar components. Optionally, the data package returned by the server to the terminal in step 302 also contains the correspondence between avatar components and associated avatar components. In some embodiments, the correspondence takes the form of a list. As shown in Table 1, the avatar components matching IDA1 include IDA2 and IDA3.
Table 1
  Avatar component    Associated avatar components
  IDA1                IDA2, IDA3
It should be noted that step 306 covers the case where the avatar component corresponding to the selection operation matches the at least one selected first avatar component. In some embodiments, if the newly selected avatar component does not match the at least one selected first avatar component, one second avatar component is selected from the at least one second avatar component corresponding to the at least one selected first avatar component, and that second avatar component is displayed. A second avatar component denotes an avatar component that matches the at least one selected first avatar component.
In some embodiments, the step of the terminal selecting a second avatar component includes: among the at least one second avatar component corresponding to the at least one selected first avatar component, the terminal selects one second avatar component through a random-number matching algorithm. A random-number matching algorithm is used to select a representative sample from the overall sample.
In some embodiments, the step of the terminal selecting a second avatar component is: the terminal determines a sequence set according to the sequence numbers of the at least one second avatar component; within this sequence set, it determines a random number (that is, a random sequence number) using a random-number generator (that is, a random-number generation function), such as the rand or srand function, and takes the avatar component corresponding to that random number as the selected second avatar component. With a programming language's random functions, a random number can be determined quickly, and hence the second avatar component can be determined quickly.
In some embodiments, the step of the terminal selecting a second avatar component is: the terminal determines a sequence set according to the sequence numbers of the at least one second avatar component; within this sequence set, it determines a random number (that is, a random sequence number) using a random-number generation algorithm, such as the Monte Carlo algorithm (also called a random sampling algorithm) or a normal random-number algorithm, and takes the avatar component corresponding to that random number as the selected second avatar component. A random-number generation algorithm can likewise quickly determine a random number and hence the second avatar component.
The embodiments of the present disclosure do not limit which way is chosen to select the second avatar component. Random selection can quickly determine the second avatar component with a simple processing flow, which improves the efficiency of avatar generation.
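The random selection described above can be sketched as follows, using Python's random module in place of rand/srand; the candidate IDs are hypothetical:

```python
import random

def pick_second_component(candidates: list) -> str:
    """Draw a random sequence number within the candidate set and
    return the second avatar component at that position."""
    index = random.randrange(len(candidates))  # random sequence number
    return candidates[index]

picked = pick_second_component(["IDB1", "IDB2", "IDB3"])
```

Any uniform random index works here; what matters is that the chosen component is guaranteed to come from the set that already matches the selected first components.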
In some embodiments, the step of the terminal selecting a second avatar component includes: among the at least one second avatar component corresponding to the at least one selected first avatar component, the terminal selects the second avatar component with the highest match degree. The match degree represents the degree of style matching between the at least one selected first avatar component and a second avatar component. By selecting the one with the highest match degree, a reasonable second avatar component can be determined, one that the user is likely to be interested in. The embodiments of the present disclosure do not limit which way is chosen to select the second avatar component.
It should be noted that, before implementing this solution, for any avatar component the terminal or the server can obtain the match degrees between that avatar component and its plurality of associated avatar components, and then store the avatar component, the associated avatar components, and the match degrees in the avatar information database in correspondence. Moreover, the data package received by the terminal (see step 302) also contains the correspondence among avatar components, associated avatar components, and match degrees.
In some embodiments, before implementing this solution, the step of the terminal or the server obtaining match degrees includes: technicians set a weight for each of the plurality of associated avatar components corresponding to each avatar component, the weight representing the match degree between the associated avatar component and that avatar component. Setting match degrees manually allows the weights to be set adaptively according to the styles previously defined by the technicians, so the match degrees can be determined relatively accurately and errors are unlikely.
In some embodiments, before implementing this solution, the step of the server obtaining match degrees includes: the server extracts the image features of the plurality of avatar components based on an image feature extraction model; for each avatar component, the server computes, from the first image feature of that avatar component and the second image features of its plurality of associated avatar components, the distance between the first image feature and each second image feature, and takes that distance as the match degree. The first image feature is the image feature of each avatar component, and a second image feature is the image feature of an associated avatar component of that avatar component. In some embodiments, the match degree is represented by the distance between the first image feature and the second image feature, for example, the Euclidean, Manhattan, Chebyshev, chi-square, cosine, or Hamming distance; the embodiments of the present disclosure do not limit which distance is chosen to compute the match degree. The smaller the distance, the higher the match degree; the larger the distance, the lower the match degree. Having the server characterize the match degree by computing distances based on image features can likewise accurately determine the match degree between an avatar component and its associated avatar components, and computing match degrees on the server is fast and takes little time.
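A minimal sketch of the distance-based match degree, using cosine distance as one of the distance choices listed above; the feature vectors and component IDs are hypothetical, and a real system would obtain them from an image feature extraction model:

```python
import math

def cosine_distance(a, b):
    """1 - cosine similarity: a smaller distance means a higher
    match degree between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

def best_match(first_feature, candidates):
    """Pick the associated component whose feature vector is closest,
    i.e., whose match degree is highest."""
    return min(candidates, key=lambda item: cosine_distance(first_feature, item[1]))

feature = [0.9, 0.1, 0.3]
candidates = [("IDB1", [0.8, 0.2, 0.3]), ("IDB2", [0.1, 0.9, 0.0])]
```

Here `best_match(feature, candidates)` prefers IDB1, whose vector points in nearly the same direction as the query feature.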
Through the above process, the terminal judges whether styles match based on the uniqueness of avatar component IDs and preset rules, and can thus determine a plurality of style-matched avatar components and generate a style-matched avatar, which improves the accuracy of avatar generation.
In addition, step 306 above describes the solution using the terminal's judgment of whether style types match as an example. In some embodiments, after step 305, the terminal judges whether the type of the newly selected avatar component duplicates that of the at least one already-selected first avatar component. If the types do not duplicate, the newly selected avatar component is displayed; if they do, the newly selected avatar component is not displayed and a prompt window indicating a duplicate avatar component type pops up.
For example, if the type of the newly selected avatar component is determined to be head shape, and the at least one already-selected first avatar component already includes an avatar component of the head-shape type, a prompt window indicating a duplicate avatar component type pops up to remind the user, and the user reselects an avatar component.
In step 307, the terminal determines, based on the types of the plurality of first avatar components, the drawing positions of the plurality of first avatar components in a target canvas.
The target canvas denotes the canvas on which drawing is performed based on the plurality of avatar components. In some embodiments, the drawing position is represented by the coordinates of the avatar component in the target canvas.
In some embodiments, the terminal determines the drawing positions of the plurality of first avatar components in the target canvas based on the component types of the plurality of first avatar components and the correspondence between component types and drawing positions.
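A minimal sketch of such a correspondence between component types and drawing positions; the coordinates below are hypothetical and would in practice depend on the canvas size and art style:

```python
# Hypothetical component-type -> canvas-coordinate correspondence.
DRAW_POSITIONS = {
    "face": (0, 0),        # base layer anchored at the canvas origin
    "eyes": (60, 90),
    "hairstyle": (20, 10),
}

def draw_position(component_type: str):
    """Look up where a component of this type is drawn on the canvas."""
    return DRAW_POSITIONS[component_type]
```

Because the position depends only on the component's type, every hairstyle, for example, lands at the same anchor regardless of which style the user picked.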
In some embodiments, there are two cases for when the terminal determines the drawing positions:
In the first case, after selecting a plurality of first avatar components, the user clicks a save option in the avatar generation interface on the terminal. In response to the target account's click operation, the terminal determines the plurality of first avatar components selected in the avatar generation interface, determines their drawing positions in the target canvas based on their component types, and then proceeds to the subsequent drawing process.
In the second case, each time the user selects a first avatar component, the terminal, in response to the target account's selection operation, determines the drawing position of that first avatar component in the target canvas based on its component type and then proceeds to the subsequent drawing process. In this process, each time an avatar component is selected, its drawing position is determined and drawing is performed, so that the combined image of the avatar components is displayed in real time as the user makes selections. This lets the user immediately see the combined effect of the avatar components, which facilitates subsequently modifying or replacing them and improves human-computer interaction efficiency. The embodiments of the present disclosure do not limit when the terminal determines the drawing positions.
In step 308, the terminal draws in the target canvas based on the plurality of first avatar components and the corresponding drawing positions to obtain the initial avatar of the target account.
In some embodiments, the terminal draws in the target canvas based on the plurality of first avatar components and the corresponding drawing positions, obtains the initial avatar of the target account, and displays the drawn initial avatar.
In some embodiments, the terminal's drawing process is: using Canvas drawing technology, the terminal aggregates the images of the plurality of first avatar components to generate a single unified avatar picture as the avatar of the target account. In this process, because Canvas is a drawing technology that supports a transparent, stackable drawing mode, extracting the plurality of first avatar components from the original page through Canvas and then drawing them in the target canvas solves the problem of blank gaps caused by the white background of the original page. Optionally, the picture generated by the terminal is a base64-encoded picture, where base64 is an encoding that represents binary data using 64 characters. It should be understood that an avatar is actually in the form of a picture.
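Canvas itself is a browser API, but the transparent, stackable drawing model described above can be sketched in plain Python: each layer only overwrites the canvas cells where it actually has pixels, so lower layers show through everywhere else, and the composed result is base64-encoded at the end. All names and "pixel" values here are illustrative:

```python
import base64
import json

def compose(layers, width, height):
    """Stack component layers in order onto a blank canvas. A layer is
    (offset, pixels); later layers overwrite earlier ones only where
    they actually have pixels, mimicking transparent stacking."""
    canvas = [[None] * width for _ in range(height)]
    for (ox, oy), pixels in layers:
        for (x, y), color in pixels.items():
            canvas[oy + y][ox + x] = color
    return canvas

face = ((0, 0), {(0, 0): "skin", (1, 0): "skin"})
eyes = ((1, 0), {(0, 0): "black"})           # drawn on top of the face
canvas = compose([face, eyes], width=3, height=2)
# Serialize the composed picture and base64-encode it, analogous to
# the base64-encoded avatar picture described above.
encoded = base64.b64encode(json.dumps(canvas).encode()).decode()
```

After composing, the eye pixel covers the face pixel beneath it while the untouched cells stay transparent (`None`), which is exactly why no white-background gaps appear.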
In step 309, in response to a shape adjustment operation on a target avatar component in the initial avatar, the terminal determines an adjustment parameter of the target avatar component, the adjustment parameter being used to adjust the shape of the target avatar component.
The target avatar component is any avatar component in the initial avatar. The shape adjustment operation may be a sliding operation, for example, a manual sliding operation or a mouse-based sliding operation. In the embodiments of the present disclosure, the adjustable parts include the head shape, the face shape, or the shapes of the facial features, for example, head size, face size, eye size, eye position, nose size, nose bridge height, mouth size, mouth thickness, jaw width, and so on.
In some embodiments, when the user wants to adjust the shape of the initial avatar, the user performs a shape adjustment operation, that is, a sliding operation, on any first avatar component in the initial avatar. In response to the target account's shape adjustment operation on that first avatar component, the terminal determines the target position of the shape adjustment operation and determines the position parameter of the target position as the adjustment parameter of that first avatar component, where the target position is the end position of the shape adjustment operation. That is, in response to the shape adjustment operation on the target avatar component in the initial avatar, the terminal determines the end position of the shape adjustment operation and determines the position parameter of that end position as the adjustment parameter of the target avatar component. The adjustment parameter is the position parameter of the shape adjustment operation at the moment it ends.
For example, if the shape adjustment operation is a manual sliding operation, the adjustment parameter is the position parameter of the finger contact point on the terminal screen at the end of the operation; if the shape adjustment operation is a mouse-based sliding operation, the adjustment parameter is the position parameter of the mouse point on the terminal screen at the end of the operation.
In some embodiments, the adjustment parameter is represented by position coordinates. Using the position parameter of the shape adjustment operation at its end to adjust the shape of the avatar component allows the adjustment parameter to be determined quickly, which facilitates the subsequent adjustment of the avatar component.
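A minimal sketch of applying the end position as the adjustment parameter, assuming the component is represented by a list of outline points and the point nearest to where the drag began is the one that moves; all coordinates are hypothetical:

```python
def adjust_component(outline, touch_start, touch_end):
    """Move the outline point nearest to where the drag started to the
    drag's end position; the end position is the adjustment parameter."""
    def dist2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

    i = min(range(len(outline)), key=lambda k: dist2(outline[k], touch_start))
    adjusted = list(outline)       # keep the original outline intact
    adjusted[i] = touch_end        # apply the adjustment parameter
    return adjusted

outline = [(0, 0), (10, 0), (10, 10), (0, 10)]
result = adjust_component(outline, touch_start=(9, 1), touch_end=(12, 0))
```

The drag starting near (10, 0) pulls that corner to (12, 0) while every other outline point stays where it was.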
In step 310, the terminal adjusts the target avatar component based on the adjustment parameter to obtain the target avatar of the target account.
In some embodiments, after determining the adjustment parameter of the target avatar component, the terminal determines, in the target avatar component, the element point corresponding to the shape adjustment operation and changes the position parameter of that element point to the adjustment parameter, thereby obtaining the target avatar of the target account. The element adjustment point is the adjustment point corresponding to the shape adjustment operation, such as the element point corresponding to the finger contact point or to the mouse point.
In some embodiments, when adjusting the shape based on the adjustment parameter, the terminal can also optimize the trajectory curve of the target avatar component to ensure a smooth trajectory curve, so that the lines of the adjusted avatar component join smoothly, which improves the visual effect of the avatar.
In some embodiments, the terminal can also perform a symmetric shape adjustment on one side based on a shape adjustment on the other side. Taking adjusting eye size as an example, if the terminal detects the target account's shape adjustment on a first eye component (such as the left eye) in the initial avatar, it determines the position parameter of a second eye component (such as the right eye) according to the position parameter of the first eye component and performs a symmetric shape adjustment on the second eye component, so that the terminal can symmetrically adjust elements of the same type, which improves the efficiency of shape adjustment.
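A minimal sketch of this symmetric adjustment, assuming the face has a vertical symmetry axis at a known x coordinate; the coordinates are hypothetical:

```python
def mirror_point(point, axis_x):
    """Reflect a point across the face's vertical symmetry axis, so an
    adjustment to one eye can be applied symmetrically to the other."""
    x, y = point
    return (2 * axis_x - x, y)

# If the left-eye anchor was dragged to (40, 90) and the symmetry axis
# is at x = 100, the right-eye anchor moves to the mirrored position.
right_eye = mirror_point((40, 90), axis_x=100)
```

Points on the axis itself map to themselves, so the reflection never distorts features that sit on the centerline of the face.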
It should be noted that, during the shape adjustment of the avatar, the terminal can display, according to the operation track of the shape adjustment operation, the shape of the target avatar component changing with the operation track. For example, if the shape adjustment operation is a manual sliding operation, the shape change of the target avatar component is displayed along with the sliding track of the finger contact point on the terminal screen; if the shape adjustment operation is a mouse-based sliding operation, the shape change is displayed along with the sliding track of the mouse point on the terminal screen.
In step 311, the terminal generates a binary file of the target avatar.
The binary file can be understood as a binary picture.
In some embodiments, after generating the target avatar of the target account, the terminal converts the picture character string of the target avatar into a binary data format to obtain the binary file of the target avatar of the target account.
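A minimal sketch of this conversion, assuming the picture character string is a base64 data URL of the kind produced in step 308; the prefix format is illustrative:

```python
import base64

def to_binary_file(data_url: str) -> bytes:
    """Convert a base64 picture character string into raw binary data
    suitable for carrying in the storage request."""
    # Strip a leading "data:image/png;base64," style prefix if present.
    payload = data_url.split(",", 1)[-1]
    return base64.b64decode(payload)

binary = to_binary_file(
    "data:image/png;base64," + base64.b64encode(b"avatar-bytes").decode()
)
```

The same function also accepts a bare base64 string without the data-URL prefix, since splitting on a missing comma leaves the string unchanged.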
In step 312, the terminal sends the server a storage request carrying the binary file, the storage request being used to instruct the server to store the binary file.
In step 313, the server receives the storage request and stores the binary file.
In some embodiments, after receiving the storage request sent by the terminal, the server stores the binary file on the server's hard disk, or stores the binary file in the avatar information database associated with the server. In the above process, generating and storing the binary file of the avatar implements the storage and recording of avatar information, so that the avatar can be displayed quickly when the target account logs in again later.
In the technical solution provided by the embodiments of the present disclosure, a plurality of avatar components are displayed, providing the user with a rich choice of avatar components. The user picks and combines among the plurality of avatar components; the terminal generates an initial avatar according to the selected avatar components; and the user then adjusts the shape of the generated initial avatar to generate a target avatar. This implements user-defined avatars with simple and convenient customization steps and efficient human-computer interaction. Moreover, distinctive avatars can be created, which effectively reduces duplicate avatars.
FIG. 4 is a block diagram of an avatar generation apparatus according to an exemplary embodiment. Referring to FIG. 4, the apparatus includes a generation unit 401, a determination unit 402, and an adjustment unit 403.
The generation unit 401 is configured to generate an initial avatar of the target account based on a plurality of first avatar components;
the determination unit 402 is configured to, in response to a shape adjustment operation on a target avatar component in the initial avatar, determine an adjustment parameter of the target avatar component, the adjustment parameter being used to adjust the shape of the target avatar component;
the adjustment unit 403 is configured to adjust the target avatar component based on the adjustment parameter to obtain a target avatar.
In some embodiments, the determination unit 402 includes:
a position determination subunit configured to, in response to the shape adjustment operation on the target avatar component in the initial avatar, determine an end position of the shape adjustment operation;
a parameter determination subunit configured to determine a position parameter of the end position as the adjustment parameter of the target avatar component.
In some embodiments, the apparatus further includes a display unit configured to display, according to an operation track of the shape adjustment operation, the shape of the target avatar component changing with the operation track.
In some embodiments, the apparatus further includes:
an interface display unit configured to display an avatar generation interface, the avatar generation interface including a plurality of avatar components, the plurality of avatar components including avatar components of a plurality of types, and the avatar components of each type including at least one avatar component;
a component determination unit configured to determine the plurality of first avatar components based on a selection operation in the avatar generation interface.
In some embodiments, the interface display unit includes:
a first display subunit configured to display, in the avatar generation interface, a plurality of avatar components corresponding to attribute information of the target account according to the attribute information.
In some embodiments, the interface display unit includes:
a second display subunit configured to display, in the avatar generation interface, a plurality of avatar components corresponding to an avatar type of a historical avatar of the target account according to the avatar type.
In some embodiments, the interface display unit is configured to display, in the avatar generation interface, the plurality of avatar components to the target account in the form of thumbnails;
the component determination unit is configured to determine the plurality of first avatar components based on selection operations on a plurality of thumbnails in the avatar generation interface.
In some embodiments, the apparatus further includes:
a first sending unit configured to send an acquisition request to a server, the acquisition request being used to acquire the plurality of avatar components;
a receiving unit configured to receive the plurality of avatar components returned by the server based on the acquisition request.
In some embodiments, the apparatus further includes a component display unit configured to:
display, if the avatar component corresponding to the selection operation matches at least one selected first avatar component, the avatar component corresponding to the selection operation.
In some embodiments, the component display unit is further configured to, if the avatar component corresponding to the selection operation does not match the at least one first avatar component, display one second avatar component of at least one second avatar component, the at least one second avatar component corresponding to the at least one first avatar component.
In some embodiments, the generation unit 401 includes:
a drawing position determination subunit configured to determine, based on the types of the plurality of first avatar components, drawing positions of the plurality of first avatar components in a target canvas;
a drawing subunit configured to draw in the target canvas based on the plurality of first avatar components and the corresponding drawing positions to obtain the initial avatar of the target account.
In some embodiments, the apparatus further includes:
a file generation unit configured to generate a binary file of the target avatar;
a second sending unit configured to send a storage request carrying the binary file to a server, the storage request being used to instruct the server to store the binary file.
In the technical solution provided by the embodiments of the present disclosure, the user picks and combines among a plurality of avatar components to generate an initial avatar, and then adjusts the shape of the generated initial avatar to generate a target avatar. This implements user-defined avatars with simple and convenient customization steps and efficient human-computer interaction. Moreover, distinctive avatars can be created, which effectively reduces duplicate avatars.
It should be noted that when the avatar generation apparatus provided by the above embodiments generates an avatar, the division into the above functional modules is used only as an example for illustration. In practical applications, the above functions can be assigned to different functional modules as needed, that is, the internal structure of the device can be divided into different functional modules to complete all or part of the functions described above. In addition, the avatar generation apparatus provided by the above embodiments belongs to the same concept as the avatar generation method embodiments; for its specific implementation process, see the method embodiments, which are not repeated here.
FIG. 5 is a block diagram of a terminal 500 according to an exemplary embodiment. The terminal 500 may be a smartphone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal 500 may also be called user equipment, a portable terminal, a laptop terminal, a desktop terminal, or other names.
Generally, the terminal 500 includes a processor 501 and a memory 502.
The processor 501 may include one or more processing cores, such as a 4-core or 8-core processor. The processor 501 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array). The processor 501 may also include a main processor and a coprocessor: the main processor is a processor for processing data in the awake state, also called a CPU (Central Processing Unit), and the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 501 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 501 may also include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
The memory 502 may include one or more non-volatile computer-readable storage media, which may be non-transitory. The memory 502 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices or flash storage devices. In some embodiments, the non-transitory non-volatile computer-readable storage medium in the memory 502 is used to store at least one instruction, which is executed by the processor 501 to implement the following steps:
generating an initial avatar based on a plurality of first avatar components;
in response to a shape adjustment operation on a target avatar component in the initial avatar, determining an adjustment parameter of the target avatar component, the adjustment parameter being used to adjust the shape of the target avatar component;
adjusting the target avatar component based on the adjustment parameter to obtain a target avatar.
In some embodiments, the processor is configured to execute the program code and further implement the following steps:
in response to the shape adjustment operation on the target avatar component in the initial avatar, determining an end position of the shape adjustment operation;
determining a position parameter of the end position as the adjustment parameter of the target avatar component.
In some embodiments, the processor is configured to execute the program code and further implement the following step:
displaying, according to an operation track of the shape adjustment operation, the shape of the target avatar component changing with the operation track.
In some embodiments, the processor is configured to execute the program code and further implement the following steps:
displaying an avatar generation interface, the avatar generation interface including a plurality of avatar components, the plurality of avatar components including avatar components of a plurality of types, and the avatar components of each type including at least one avatar component;
determining the plurality of first avatar components based on a selection operation in the avatar generation interface.
In some embodiments, the processor is configured to execute the program code and further implement the following step:
displaying, in the avatar generation interface, a plurality of avatar components corresponding to attribute information of a target account according to the attribute information.
In some embodiments, the processor is configured to execute the program code and further implement the following step:
displaying, in the avatar generation interface, a plurality of avatar components corresponding to an avatar type of a historical avatar of the target account according to the avatar type.
In some embodiments, the processor is configured to execute the program code and further implement the following steps:
displaying, in the avatar generation interface, the plurality of avatar components in the form of thumbnails;
determining the plurality of first avatar components based on selection operations on a plurality of thumbnails in the avatar generation interface.
In some embodiments, the processor is configured to execute the program code and further implement the following steps:
sending an acquisition request to a server, the acquisition request being used to acquire the plurality of avatar components;
receiving the plurality of avatar components returned by the server based on the acquisition request.
In some embodiments, the processor is configured to execute the program code and further implement the following step:
if the avatar component corresponding to the selection operation matches at least one selected first avatar component, displaying the avatar component corresponding to the selection operation.
In some embodiments, the processor is configured to execute the program code and further implement the following step:
if the avatar component corresponding to the selection operation does not match the at least one first avatar component, displaying one second avatar component of at least one second avatar component, the at least one second avatar component corresponding to the at least one first avatar component.
In some embodiments, the processor is configured to execute the program code and further implement the following steps:
determining drawing positions of the plurality of first avatar components in a target canvas based on the types of the plurality of first avatar components;
drawing in the target canvas based on the plurality of first avatar components and the corresponding drawing positions to obtain the initial avatar.
In some embodiments, the processor is configured to execute the program code and further implement the following steps:
generating a binary file of the target avatar;
sending a storage request carrying the binary file to a server, the storage request being used to instruct the server to store the binary file.
在一些实施例中,终端500还可选包括有:外围设备接口503和至少一个外围设备。处理器501、存储器502和外围设备接口503之间可以通过总线或信号线相连。各个外围设备可以通过总线、信号线或电路板与外围设备接口503相连。具体地,外围设备包括:射频电路504、显示屏505、摄像头组件506、音频电路507、定位组件508和电源509中的至少一种。
外围设备接口503可被用于将I/O(Input/Output,输入/输出)相关的至少一个外围设备连接到处理器501和存储器502。在一些实施例中,处理器501、存储器502和外围设备接口503被集成在同一芯片或电路板上;在一些其他实施例中,处理器501、存储器502和外围设备接口503中的任意一个或两个可以在单独的芯片或电路板上实现,本实施例对此不加以限定。
射频电路504用于接收和发射RF(Radio Frequency,射频)信号,也称电磁信号。射频电路504通过电磁信号与通信网络以及其他通信设备进行通信。射频电路504将电信号转换为电磁信号进行发送,或者,将接收到的电磁信号转换为电信号。在一些实施例中,射频电路504包括:天线系统、RF收发器、一个或多个放大器、调谐器、振荡器、数字信号处理器、编解码芯片组、用户身份模块卡等等。射频电路504可以通过至少一种无线 通信协议来与其它终端进行通信。该无线通信协议包括但不限于:城域网、各代移动通信网络(2G、3G、4G及5G)、无线局域网和/或WiFi(Wireless Fidelity,无线保真)网络。在一些实施例中,射频电路504还可以包括NFC(Near Field Communication,近距离无线通信)有关的电路,本公开对此不加以限定。
显示屏505用于显示UI(User Interface,用户界面)。该UI可以包括图形、文本、图标、视频及其它们的任意组合。当显示屏505是触摸显示屏时,显示屏505还具有采集在显示屏505的表面或表面上方的触摸信号的能力。该触摸信号可以作为控制信号输入至处理器501进行处理。此时,显示屏505还可以用于提供虚拟按钮和/或虚拟键盘,也称软按钮和/或软键盘。在一些实施例中,显示屏505可以为一个,设置在终端500的前面板;在另一些实施例中,显示屏505可以为至少两个,分别设置在终端500的不同表面或呈折叠设计;在另一些实施例中,显示屏505可以是柔性显示屏,设置在终端500的弯曲表面上或折叠面上。甚至,显示屏505还可以设置成非矩形的不规则图形,也即异形屏。显示屏505可以采用LCD(Liquid Crystal Display,液晶显示屏)、OLED(Organic Light-Emitting Diode,有机发光二极管)等材质制备。
摄像头组件506用于采集图像或视频。在一些实施例中,摄像头组件506包括前置摄像头和后置摄像头。通常,前置摄像头设置在终端的前面板,后置摄像头设置在终端的背面。在一些实施例中,后置摄像头为至少两个,分别为主摄像头、景深摄像头、广角摄像头、长焦摄像头中的任意一种,以实现主摄像头和景深摄像头融合实现背景虚化功能、主摄像头和广角摄像头融合实现全景拍摄以及VR(Virtual Reality,虚拟现实)拍摄功能或者其它融合拍摄功能。在一些实施例中,摄像头组件506还可以包括闪光灯。闪光灯可以是单色温闪光灯,也可以是双色温闪光灯。双色温闪光灯是指暖光闪光灯和冷光闪光灯的组合,可以用于不同色温下的光线补偿。
音频电路507可以包括麦克风和扬声器。麦克风用于采集用户及环境的声波,并将声波转换为电信号输入至处理器501进行处理,或者输入至射频电路504以实现语音通信。出于立体声采集或降噪的目的,麦克风可以为多个,分别设置在终端500的不同部位。麦克风还可以是阵列麦克风或全向采集型麦克风。扬声器则用于将来自处理器501或射频电路504的电信号转换为声波。扬声器可以是传统的薄膜扬声器,也可以是压电陶瓷扬声器。当扬声器是压电陶瓷扬声器时,不仅可以将电信号转换为人类可听见的声波,也可以将电信号转换为人类听不见的声波以进行测距等用途。在一些实施例中,音频电路507还可以包括耳机插孔。
定位组件508用于定位终端500的当前地理位置,以实现导航或LBS(Location Based Service,基于位置的服务)。定位组件508可以是基于美国的GPS(Global Positioning System,全球定位系统)、中国的北斗系统、俄罗斯的格雷纳斯系统或欧盟的伽利略系统的定位组件。
电源509用于为终端500中的各个组件进行供电。电源509可以是交流电、直流电、一次性电池或可充电电池。当电源509包括可充电电池时,该可充电电池可以支持有线充电或无线充电。该可充电电池还可以用于支持快充技术。
在一些实施例中,终端500还包括有一个或多个传感器510。该一个或多个传感器510包括但不限于:加速度传感器511、陀螺仪传感器512、压力传感器513、指纹传感器514、 光学传感器515以及接近传感器516。
加速度传感器511可以检测以终端500建立的坐标系的三个坐标轴上的加速度大小。比如,加速度传感器511可以用于检测重力加速度在三个坐标轴上的分量。处理器501可以根据加速度传感器511采集的重力加速度信号,控制显示屏505以横向视图或纵向视图进行用户界面的显示。加速度传感器511还可以用于游戏或者用户的运动数据的采集。
陀螺仪传感器512可以检测终端500的机体方向及转动角度,陀螺仪传感器512可以与加速度传感器511协同采集用户对终端500的3D动作。处理器501根据陀螺仪传感器512采集的数据,可以实现如下功能:动作感应(比如根据用户的倾斜操作来改变UI)、拍摄时的图像稳定、游戏控制以及惯性导航。
压力传感器513可以设置在终端500的侧边框和/或显示屏505的下层。当压力传感器513设置在终端500的侧边框时,可以检测用户对终端500的握持信号,由处理器501根据压力传感器513采集的握持信号进行左右手识别或快捷操作。当压力传感器513设置在显示屏505的下层时,由处理器501根据用户对显示屏505的压力操作,实现对UI界面上的可操作性控件进行控制。可操作性控件包括按钮控件、滚动条控件、图标控件、菜单控件中的至少一种。
指纹传感器514用于采集用户的指纹,由处理器501根据指纹传感器514采集到的指纹识别用户的身份,或者,由指纹传感器514根据采集到的指纹识别用户的身份。在识别出用户的身份为可信身份时,由处理器501授权该用户执行相关的敏感操作,该敏感操作包括解锁屏幕、查看加密信息、下载软件、支付及更改设置等。指纹传感器514可以被设置在终端500的正面、背面或侧面。当终端500上设置有物理按键或厂商Logo时,指纹传感器514可以与物理按键或厂商Logo集成在一起。
光学传感器515用于采集环境光强度。在一个实施例中,处理器501可以根据光学传感器515采集的环境光强度,控制显示屏505的显示亮度。具体地,当环境光强度较高时,调高显示屏505的显示亮度;当环境光强度较低时,调低显示屏505的显示亮度。在另一个实施例中,处理器501还可以根据光学传感器515采集的环境光强度,动态调整摄像头组件506的拍摄参数。
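处理器根据环境光强度调节显示亮度的过程,可以理解为一个"光强→亮度等级"的映射。下面是一个最简的线性映射草图,其中的亮度等级范围、饱和光强 max_lux 以及函数名均为示例假设,实际设备的映射曲线通常由厂商标定:

```python
def adjust_brightness(ambient_lux: float,
                      min_level: int = 10,
                      max_level: int = 255,
                      max_lux: float = 1000.0) -> int:
    """把环境光强度(勒克斯)线性映射为显示亮度等级,并在两端截断。"""
    # 环境光越强,比例越高;超过 max_lux 后按最大亮度处理
    ratio = min(max(ambient_lux / max_lux, 0.0), 1.0)
    return int(round(min_level + ratio * (max_level - min_level)))
```

这对应于文中"环境光强度较高时调高显示亮度、较低时调低显示亮度"的行为:光强为 0 时输出最低亮度,达到或超过 max_lux 时输出最高亮度。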
接近传感器516,也称距离传感器,通常设置在终端500的前面板。接近传感器516用于采集用户与终端500的正面之间的距离。在一个实施例中,当接近传感器516检测到用户与终端500的正面之间的距离逐渐变小时,由处理器501控制显示屏505从亮屏状态切换为息屏状态;当接近传感器516检测到用户与终端500的正面之间的距离逐渐变大时,由处理器501控制显示屏505从息屏状态切换为亮屏状态。
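接近传感器驱动的亮屏/息屏切换,本质上是比较相邻两次采集的距离并输出屏幕状态。下面的 Python 草图按文中描述实现该状态机;函数名与"距离相等时维持原状态"的处理均为示例假设:

```python
def next_screen_state(prev_distance: float,
                      cur_distance: float,
                      screen_on: bool) -> bool:
    """根据用户与终端正面距离的变化趋势决定屏幕状态。

    距离逐渐变小 -> 切换为息屏(返回 False);
    距离逐渐变大 -> 切换为亮屏(返回 True);
    距离不变     -> 维持当前状态。
    """
    if cur_distance < prev_distance:
        return False
    if cur_distance > prev_distance:
        return True
    return screen_on
```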
本领域技术人员可以理解,图5中示出的结构并不构成对终端500的限定,可以包括比图示更多或更少的组件,或者组合某些组件,或者采用不同的组件布置。
图6是根据一示例性实施例示出的一种服务器的框图,该服务器600可因配置或性能不同而产生比较大的差异,可以包括一个或多个处理器(Central Processing Units,CPU)601和一个或多个存储器602,其中,该一个或多个存储器602中存储有至少一条程序代码,该至少一条程序代码由该一个或多个处理器601加载并执行以实现上述各个方法实施例提供的头像生成方法。当然,该服务器600还可以具有有线或无线网络接口、键盘以及输入输出接口等部件,以便进行输入输出,该服务器600还可以包括其他用于实现设备功能的部件,在此不做赘述。
在示例性实施例中,还提供了一种包括程序代码的存储介质,例如包括程序代码的存储器602,上述程序代码可由服务器600的处理器601执行以实现下述步骤:基于多个第一头像组件,生成初始头像;响应于对该初始头像中目标头像组件的形状调整操作,确定该目标头像组件的调整参数,该调整参数用于调整该目标头像组件的形状;基于该调整参数,对该目标头像组件进行调整,得到目标头像。
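上述"基于多个第一头像组件生成初始头像,再按形状调整操作得到的调整参数修改目标头像组件"的流程,可以用下面的 Python 草图示意。其中用 dict 模拟画布上按类型组织的组件,字段名(type、shape)与"用终点位置参数覆盖形状参数"的做法均为本文说明用的假设,并非专利限定的实现:

```python
def generate_initial_avatar(components):
    """把多个第一头像组件按类型组织为一个初始头像(以 dict 模拟画布)。"""
    return {c["type"]: dict(c) for c in components}

def adjust_component(avatar, target_type, adjust_params):
    """基于调整参数更新目标头像组件的形状,返回新的目标头像,不修改初始头像。"""
    target = dict(avatar[target_type])
    target["shape"] = adjust_params      # 调整参数来自形状调整操作的终点位置
    result = dict(avatar)
    result[target_type] = target
    return result

# 示例:由脸型、眼睛两个组件生成初始头像,再拖动调整眼睛的形状
components = [{"type": "face", "shape": (0, 0)},
              {"type": "eyes", "shape": (1, 1)}]
initial = generate_initial_avatar(components)
target = adjust_component(initial, "eyes", (2, 3))
```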
在一些实施例中,存储介质可以是非易失性计算机可读存储介质,例如,该非易失性计算机可读存储介质可以是ROM、随机存取存储器(RAM)、CD-ROM、磁带、软盘和光数据存储设备等。
在示例性实施例中,还提供了一种计算机程序产品,包括计算机程序,计算机程序被处理器执行时实现下述步骤:基于多个第一头像组件,生成初始头像;响应于对该初始头像中目标头像组件的形状调整操作,确定该目标头像组件的调整参数,该调整参数用于调整该目标头像组件的形状;基于该调整参数,对该目标头像组件进行调整,得到目标头像。
本公开所有实施例均可以单独被执行,也可以与其他实施例相结合被执行,均视为本公开要求的保护范围。

Claims (38)

  1. 一种头像生成方法,其中,所述方法包括:
    基于多个第一头像组件,生成初始头像;
    响应于对所述初始头像中目标头像组件的形状调整操作,确定所述目标头像组件的调整参数,所述调整参数用于调整所述目标头像组件的形状;
    基于所述调整参数,对所述目标头像组件进行调整,得到目标头像。
  2. 根据权利要求1所述的头像生成方法,其中,所述响应于对所述初始头像中目标头像组件的形状调整操作,确定所述目标头像组件的调整参数,包括:
    响应于对所述初始头像中目标头像组件的形状调整操作,确定所述形状调整操作的终点位置;
    将所述终点位置的位置参数确定为所述目标头像组件的调整参数。
  3. 根据权利要求1所述的头像生成方法,其中,所述方法还包括:
    根据所述形状调整操作的操作轨迹,展示所述目标头像组件随所述操作轨迹发生形状变化。
  4. 根据权利要求1所述的头像生成方法,其中,所述方法还包括:
    展示头像生成界面,所述头像生成界面包括多个头像组件,所述多个头像组件包括多个类型的头像组件,且每个类型的头像组件包括至少一个头像组件;
    基于在所述头像生成界面中的选中操作,确定所述多个第一头像组件。
  5. 根据权利要求4所述的头像生成方法,其中,所述展示头像生成界面,包括:
    在所述头像生成界面中,根据目标账号的属性信息,展示所述属性信息对应的多个头像组件。
  6. 根据权利要求4所述的头像生成方法,其中,所述展示头像生成界面,包括:
    在所述头像生成界面中,根据目标账号的历史头像的头像类型,展示所述头像类型对应的多个头像组件。
  7. 根据权利要求4所述的头像生成方法,其中,所述展示头像生成界面,包括:
    在所述头像生成界面中,以缩略图形式展示所述多个头像组件;
    所述基于在所述头像生成界面中的选中操作,确定所述多个第一头像组件包括:
    基于对所述头像生成界面中多个缩略图的选中操作,确定所述多个第一头像组件。
  8. 根据权利要求4所述的头像生成方法,其中,所述方法还包括:
    向服务器发送获取请求,所述获取请求用于获取所述多个头像组件;
    接收所述服务器基于所述获取请求返回的所述多个头像组件。
  9. 根据权利要求4所述的头像生成方法,其中,所述方法还包括:
    若所述选中操作对应的头像组件与已选中的至少一个第一头像组件匹配,则展示所述选中操作对应的头像组件。
  10. 根据权利要求9所述的头像生成方法,其中,所述方法还包括:
    若所述选中操作对应的头像组件与所述至少一个第一头像组件不匹配,则展示至少一个第二头像组件中的一个第二头像组件,所述至少一个第二头像组件与所述至少一个第一头像组件对应。
  11. 根据权利要求1所述的头像生成方法,其中,所述基于多个第一头像组件,生成初始头像包括:
    基于所述多个第一头像组件的类型,确定所述多个第一头像组件在目标画布中的绘制位置;
    基于所述多个第一头像组件以及对应的绘制位置,在所述目标画布中进行绘制,得到所述初始头像。
  12. 根据权利要求1所述的头像生成方法,其中,所述方法还包括:
    生成所述目标头像的二进制文件;
    向服务器发送携带所述二进制文件的存储请求,所述存储请求用于指示所述服务器存储所述二进制文件。
  13. 一种头像生成装置,其中,所述装置包括:
    生成单元,被配置为基于多个第一头像组件,生成初始头像;
    确定单元,被配置为响应于对所述初始头像中目标头像组件的形状调整操作,确定所述目标头像组件的调整参数,所述调整参数用于调整所述目标头像组件的形状;
    调整单元,被配置为基于所述调整参数,对所述目标头像组件进行调整,得到目标头像。
  14. 根据权利要求13所述的头像生成装置,其中,所述确定单元包括:
    位置确定子单元,被配置为响应于对所述初始头像中目标头像组件的形状调整操作,确定所述形状调整操作的终点位置;
    参数确定子单元,被配置为将所述终点位置的位置参数确定为所述目标头像组件的调整参数。
  15. 根据权利要求13所述的头像生成装置,其中,所述装置还包括展示单元,被配置为根据所述形状调整操作的操作轨迹,展示所述目标头像组件随所述操作轨迹发生形状变化。
  16. 根据权利要求13所述的头像生成装置,其中,所述装置还包括:
    界面展示单元,被配置为展示头像生成界面,所述头像生成界面包括多个头像组件,所述多个头像组件包括多个类型的头像组件,且每个类型的头像组件包括至少一个头像组件;
    组件确定单元,被配置为基于在所述头像生成界面中的选中操作,确定所述多个第一头像组件。
  17. 根据权利要求16所述的头像生成装置,其中,所述界面展示单元包括:
    第一展示子单元,被配置为在所述头像生成界面中,根据目标账号的属性信息,展示所述属性信息对应的多个头像组件。
  18. 根据权利要求16所述的头像生成装置,其中,所述界面展示单元包括:
    第二展示子单元,被配置为在所述头像生成界面中,根据目标账号的历史头像的头像类型,展示所述头像类型对应的多个头像组件。
  19. 根据权利要求16所述的头像生成装置,其中,所述界面展示单元,被配置为在所述头像生成界面中,以缩略图形式展示所述多个头像组件;
    所述组件确定单元,被配置为基于对所述头像生成界面中多个缩略图的选中操作,确定所述多个第一头像组件。
  20. 根据权利要求16所述的头像生成装置,其中,所述装置还包括:
    第一发送单元,被配置为向服务器发送获取请求,所述获取请求用于获取所述多个头像组件;
    接收单元,被配置为接收所述服务器基于所述获取请求返回的所述多个头像组件。
  21. 根据权利要求16所述的头像生成装置,其中,所述装置还包括组件展示单元,被配置为:
    若所述选中操作对应的头像组件与已选中的至少一个第一头像组件匹配,则展示所述选中操作对应的头像组件。
  22. 根据权利要求21所述的头像生成装置,其中,所述组件展示单元,还被配置为若所述选中操作对应的头像组件与所述至少一个第一头像组件不匹配,则展示至少一个第二头像组件中的一个第二头像组件,所述至少一个第二头像组件与所述至少一个第一头像组件对应。
  23. 根据权利要求13所述的头像生成装置,其中,所述生成单元包括:
    绘制位置确定子单元,被配置为基于所述多个第一头像组件的类型,确定所述多个第一头像组件在目标画布中的绘制位置;
    绘制子单元,被配置为基于所述多个第一头像组件以及对应的绘制位置,在所述目标画布中进行绘制,得到所述初始头像。
  24. 根据权利要求13所述的头像生成装置,其中,所述装置还包括:
    文件生成单元,被配置为生成所述目标头像的二进制文件;
    第二发送单元,被配置为向服务器发送携带所述二进制文件的存储请求,所述存储请求用于指示所述服务器存储所述二进制文件。
  25. 一种电子设备,其中,所述电子设备包括:
    一个或多个处理器;
    用于存储所述处理器可执行程序代码的存储器;
    其中,所述处理器被配置为执行所述程序代码,以实现下述步骤:
    基于多个第一头像组件,生成初始头像;
    响应于对所述初始头像中目标头像组件的形状调整操作,确定所述目标头像组件的调整参数,所述调整参数用于调整所述目标头像组件的形状;
    基于所述调整参数,对所述目标头像组件进行调整,得到目标头像。
  26. 根据权利要求25所述的电子设备,其中,所述处理器被配置为执行所述程序代码,还用于实现下述步骤:
    响应于对所述初始头像中目标头像组件的形状调整操作,确定所述形状调整操作的终点位置;
    将所述终点位置的位置参数确定为所述目标头像组件的调整参数。
  27. 根据权利要求25所述的电子设备,其中,所述处理器被配置为执行所述程序代码,还用于实现下述步骤:
    根据所述形状调整操作的操作轨迹,展示所述目标头像组件随所述操作轨迹发生形状变化。
  28. 根据权利要求25所述的电子设备,其中,所述处理器被配置为执行所述程序代码,还用于实现下述步骤:
    展示头像生成界面,所述头像生成界面包括多个头像组件,所述多个头像组件包括多个类型的头像组件,且每个类型的头像组件包括至少一个头像组件;
    基于在所述头像生成界面中的选中操作,确定所述多个第一头像组件。
  29. 根据权利要求28所述的电子设备,其中,所述处理器被配置为执行所述程序代码,还用于实现下述步骤:
    在所述头像生成界面中,根据目标账号的属性信息,展示所述属性信息对应的多个头像组件。
  30. 根据权利要求28所述的电子设备,其中,所述处理器被配置为执行所述程序代码,还用于实现下述步骤:
    在所述头像生成界面中,根据目标账号的历史头像的头像类型,展示所述头像类型对应的多个头像组件。
  31. 根据权利要求28所述的电子设备,其中,所述处理器被配置为执行所述程序代码,还用于实现下述步骤:
    在所述头像生成界面中,以缩略图形式展示所述多个头像组件;
    基于对所述头像生成界面中多个缩略图的选中操作,确定所述多个第一头像组件。
  32. 根据权利要求28所述的电子设备,其中,所述处理器被配置为执行所述程序代码,还用于实现下述步骤:
    向服务器发送获取请求,所述获取请求用于获取所述多个头像组件;
    接收所述服务器基于所述获取请求返回的所述多个头像组件。
  33. 根据权利要求28所述的电子设备,其中,所述处理器被配置为执行所述程序代码,还用于实现下述步骤:
    若所述选中操作对应的头像组件与已选中的至少一个第一头像组件匹配,则展示所述选中操作对应的头像组件。
  34. 根据权利要求33所述的电子设备,其中,所述处理器被配置为执行所述程序代码,还用于实现下述步骤:
    若所述选中操作对应的头像组件与所述至少一个第一头像组件不匹配,则展示至少一个第二头像组件中的一个第二头像组件,所述至少一个第二头像组件与所述至少一个第一头像组件对应。
  35. 根据权利要求25所述的电子设备,其中,所述处理器被配置为执行所述程序代码,还用于实现下述步骤:
    基于所述多个第一头像组件的类型,确定所述多个第一头像组件在目标画布中的绘制位置;
    基于所述多个第一头像组件以及对应的绘制位置,在所述目标画布中进行绘制,得到所述初始头像。
  36. 根据权利要求25所述的电子设备,其中,所述处理器被配置为执行所述程序代码,还用于实现下述步骤:
    生成所述目标头像的二进制文件;
    向服务器发送携带所述二进制文件的存储请求,所述存储请求用于指示所述服务器存储所述二进制文件。
  37. 一种非易失性计算机可读存储介质,其中,当所述非易失性计算机可读存储介质中的程序代码由电子设备的处理器执行时,使得所述电子设备能够实现下述步骤:
    基于多个第一头像组件,生成初始头像;
    响应于对所述初始头像中目标头像组件的形状调整操作,确定所述目标头像组件的调整参数,所述调整参数用于调整所述目标头像组件的形状;
    基于所述调整参数,对所述目标头像组件进行调整,得到目标头像。
  38. 一种计算机程序产品,包括计算机程序,其中,所述计算机程序被处理器执行时实现下述步骤:
    基于多个第一头像组件,生成初始头像;
    响应于对所述初始头像中目标头像组件的形状调整操作,确定所述目标头像组件的调整参数,所述调整参数用于调整所述目标头像组件的形状;
    基于所述调整参数,对所述目标头像组件进行调整,得到目标头像。
PCT/CN2021/114362 2020-09-24 2021-08-24 头像生成方法及设备 WO2022062808A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011016905.X 2020-09-24
CN202011016905.XA CN112148404B (zh) 2020-09-24 2020-09-24 头像生成方法、装置、设备以及存储介质

Publications (1)

Publication Number Publication Date
WO2022062808A1 true WO2022062808A1 (zh) 2022-03-31

Family

ID=73896726

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/114362 WO2022062808A1 (zh) 2020-09-24 2021-08-24 头像生成方法及设备

Country Status (2)

Country Link
CN (1) CN112148404B (zh)
WO (1) WO2022062808A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114998478A (zh) * 2022-07-19 2022-09-02 深圳市信润富联数字科技有限公司 数据处理方法、装置、设备及计算机可读存储介质

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112148404B (zh) * 2020-09-24 2024-03-19 游艺星际(北京)科技有限公司 头像生成方法、装置、设备以及存储介质
CN113064981A (zh) * 2021-03-26 2021-07-02 北京达佳互联信息技术有限公司 群组头像生成方法、装置、设备及存储介质
CN116542846B (zh) * 2023-07-05 2024-04-26 深圳兔展智能科技有限公司 用户账号图标生成方法、装置、计算机设备及存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010066789A (ja) * 2008-09-08 2010-03-25 Taito Corp アバター編集サーバ及びアバター編集プログラム
CN101692681A (zh) * 2009-09-17 2010-04-07 杭州聚贝软件科技有限公司 一种在话机终端上实现虚拟形象互动界面的方法和系统
CN109791702A (zh) * 2016-09-23 2019-05-21 苹果公司 头像创建和编辑
CN112148404A (zh) * 2020-09-24 2020-12-29 游艺星际(北京)科技有限公司 头像生成方法、装置、设备以及存储介质

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI439960B (zh) * 2010-04-07 2014-06-01 Apple Inc 虛擬使用者編輯環境
US9542038B2 (en) * 2010-04-07 2017-01-10 Apple Inc. Personalizing colors of user interfaces
CN108897597B (zh) * 2018-07-20 2021-07-13 广州方硅信息技术有限公司 指导配置直播模板的方法和装置
CN109361852A (zh) * 2018-10-18 2019-02-19 维沃移动通信有限公司 一种图像处理方法及装置
CN110189348B (zh) * 2019-05-29 2020-12-25 北京达佳互联信息技术有限公司 头像处理方法、装置、计算机设备及存储介质

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ANONYMOUS: "How does soul pinch its face, it's enough to know these skills - tianqing experience network", 2 April 2020 (2020-04-02), pages 1 - 16, XP055915794, Retrieved from the Internet <URL:https://www.tianqing123.cn/jy/452869.html> [retrieved on 20220426] *
ANONYMOUS: "How to Pinch Your Face in The Sims 4", 25 September 2017 (2017-09-25), pages 1 - 3, XP055915791, Retrieved from the Internet <URL:https://jingyan.baidu.com/article/39810a23be5fdeb636fda6ed.html> [retrieved on 20220426] *

Also Published As

Publication number Publication date
CN112148404B (zh) 2024-03-19
CN112148404A (zh) 2020-12-29

Similar Documents

Publication Publication Date Title
WO2022062808A1 (zh) 头像生成方法及设备
US11809633B2 (en) Mirroring device with pointing based navigation
WO2022048398A1 (zh) 多媒体数据拍摄方法及终端
US20220197393A1 (en) Gesture control on an eyewear device
WO2020233403A1 (zh) 三维角色的个性化脸部显示方法、装置、设备及存储介质
US11797162B2 (en) 3D painting on an eyewear device
US20220198603A1 (en) Recentering ar/vr content on an eyewear device
US11886673B2 (en) Trackpad on back portion of a device
US20220317774A1 (en) Real-time communication interface with haptic and audio feedback response
US11989348B2 (en) Media content items with haptic feedback augmentations
WO2022147158A1 (en) Communication interface with haptic feedback response
US20220206584A1 (en) Communication interface with haptic feedback response
WO2022140129A1 (en) Gesture control on an eyewear device
WO2022212175A1 (en) Interface with haptic and audio feedback response
WO2022140117A1 (en) 3d painting on an eyewear device
WO2022212174A1 (en) Interface with haptic and audio feedback response
WO2022147151A1 (en) Real-time video communication interface with haptic feedback
CN113609358B (zh) 内容分享方法、装置、电子设备以及存储介质
WO2020083178A1 (zh) 数字图像展示方法、装置、电子设备及存储介质
CN116320721A (zh) 一种拍摄方法、装置、终端及存储介质
US11861801B2 (en) Enhanced reading with AR glasses
CN114004922B (zh) 骨骼动画显示方法、装置、设备、介质及计算机程序产品
US20240127550A1 (en) Remote annotation and navigation using an ar wearable device
US20240077936A1 (en) Selecting ar buttons on a hand
US20220317773A1 (en) Real-time communication interface with haptic and audio feedback response

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21871181

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 29/06/2023)

122 Ep: pct application non-entry in european phase

Ref document number: 21871181

Country of ref document: EP

Kind code of ref document: A1