CN109215007B - Image generation method and terminal equipment - Google Patents


Info

Publication number
CN109215007B
CN109215007B (application CN201811110761.7A)
Authority
CN
China
Prior art keywords
image
information
target
terminal device
processed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811110761.7A
Other languages
Chinese (zh)
Other versions
CN109215007A (en)
Inventor
李巧 (Li Qiao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201811110761.7A
Publication of CN109215007A
Application granted
Publication of CN109215007B

Classifications

    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T5/00 Image enhancement or restoration
            • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
          • G06T11/00 2D [Two Dimensional] image generation
            • G06T11/60 Editing figures and text; Combining figures or text
          • G06T2207/00 Indexing scheme for image analysis or image enhancement
            • G06T2207/20 Special algorithmic details
              • G06T2207/20212 Image combination
                • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

The embodiment of the invention provides an image generation method and a terminal device, applied in the technical field of communications, to solve the problem that the process by which a terminal device generates a user-defined emoticon is cumbersome and time-consuming. Specifically, the scheme is applied to a terminal device and includes: determining a target template image; acquiring first information from an image to be processed; and generating a target image according to the first information by using an image generation model corresponding to the target template image, wherein the target image is synthesized from the first information and second information in the target template image, and the first information and the second information are different types of information. The scheme is particularly applicable when a user controls a terminal device to generate a user-defined emoticon image.

Description

Image generation method and terminal equipment
Technical Field
The embodiment of the invention relates to the technical field of communication, in particular to an image generation method and terminal equipment.
Background
With the popularization of social media, people are no longer satisfied with communicating through plain text, voice, and the like, but want more entertaining media to enrich their social activities, which has given rise to a wide variety of emoticons. Consequently, users increasingly prefer to use custom emoticons in their social activities.
Specifically, the current process for producing a custom emoticon may include: the user controls the terminal device to extract a facial expression image of the user from a face image, and then superimposes that facial expression image on a processed emoticon template (for example, a template from which the original expression image has been removed) to generate the custom emoticon.
To ensure the display effect of the custom emoticon, the user generally has to manually adjust, in separate software, the tone style of the selected facial expression image so that it matches the tone style of the emoticon template, and then manually superimpose the adjusted facial expression image on the template. The process of generating a user-defined emoticon on the terminal device is therefore tedious and time-consuming.
Disclosure of Invention
The embodiment of the invention provides an image generation method and terminal equipment, and aims to solve the problems that the process of generating a user-defined expression package by the terminal equipment is complicated and time-consuming.
In order to solve the above technical problem, the embodiment of the present invention is implemented as follows:
in a first aspect, an embodiment of the present invention provides an image generation method, which is applied to a terminal device, and the image generation method includes: determining a target template image; acquiring first information in an image to be processed; and generating a target image by adopting an image generation model corresponding to the target template image according to the first information, wherein the target image is an image formed by synthesizing the first information and second information in the target template image, and the first information and the second information are different types of information.
In a second aspect, an embodiment of the present invention further provides a terminal device, where the terminal device includes: the device comprises a determining module, an obtaining module and a generating module; a determining module for determining a target template image; the acquisition module is used for acquiring first information in an image to be processed; and the generating module is used for generating a target image by adopting an image generating model corresponding to the target template image determined by the determining module according to the first information acquired by the acquiring module, wherein the target image is an image synthesized by the first information and second information in the target template image, and the first information and the second information are different types of information.
In a third aspect, an embodiment of the present invention provides a terminal device, which includes a processor, a memory, and a computer program stored on the memory and operable on the processor, and when executed by the processor, the computer program implements the steps of the image generation method according to the first aspect.
In a fourth aspect, the present invention provides a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the steps of the image generation method according to the first aspect.
In the embodiment of the invention, the terminal device can determine a target template image; acquire first information from an image to be processed; and generate a target image according to the first information by using an image generation model corresponding to the target template image, where the target image is synthesized from the first information and second information in the target template image, and the first information and the second information are different types of information. With this scheme, the first information in the image to be processed and the second information in the target template image are integrated by the image generation model, without the user manually superimposing the acquired information on the target template image. The process by which the terminal device generates the target image from the image to be processed and the target template image is therefore simplified, and the display effect of the generated target image is improved. In particular, the terminal device can simplify its process of generating a user-defined emoticon and improve the display effect of that emoticon.
Drawings
Fig. 1 is a schematic diagram of an architecture of a possible android operating system according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating an image generating method according to an embodiment of the present invention;
fig. 3 is one of schematic diagrams of an interface displayed by a terminal device according to an embodiment of the present invention;
FIG. 4 is a second flowchart illustrating an image generating method according to an embodiment of the present invention;
fig. 4a is a second schematic diagram of an interface displayed by the terminal device according to the embodiment of the present invention;
fig. 5 is a third schematic flowchart of an image generating method according to an embodiment of the present invention;
fig. 6 is a third schematic diagram of an interface displayed by the terminal device according to the embodiment of the present invention;
fig. 7 is a schematic structural diagram of a possible terminal device according to an embodiment of the present invention;
fig. 8 is a schematic diagram of a hardware structure of a terminal device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that "/" herein means "or"; for example, A/B may mean A or B. "And/or" herein merely describes an association between objects and indicates that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone. "Plurality" means two or more.
It should be noted that, in the embodiments of the present invention, words such as "exemplary" or "for example" are used to indicate examples, illustrations, or explanations. Any embodiment or design described as "exemplary" or "for example" is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of these words is intended to present related concepts in a concrete fashion.
The terms "first" and "second," and the like, in the description and in the claims of the present invention are used for distinguishing between different objects and not for describing a particular order of the objects. For example, the first information, the second information, and the like are for distinguishing different information, not for describing a specific order of information.
According to the image generation method provided by the embodiment of the invention, the terminal device can determine a target template image, acquire some or all of the information in an image to be processed, and obtain the image generation model corresponding to the target template image. Specifically, the terminal device may generate the target image by using the image generation model according to the acquired information in the image to be processed. The information in the image to be processed and the information in the target template image can thus be integrated by the image generation model, without the user manually superimposing the acquired information on the target template image. This simplifies the image generation process on the terminal device, such as the process of generating a user-defined emoticon.
It should be noted that, in the image generation method provided in the embodiment of the present invention, the execution subject may be the terminal device, a Central Processing Unit (CPU) of the terminal device, or a control module in the terminal device for executing the image generation method. In the embodiment of the present invention, the image generation method is described taking the terminal device as the execution subject.
The image described in the embodiment of the present invention may be a picture, and the image and the picture are not specifically distinguished for the sake of uniform description in the embodiment of the present invention.
The terminal device in the embodiment of the present invention may be a terminal device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system; the embodiments of the present invention are not specifically limited in this respect.
The following describes a software environment to which the image generation method provided by the embodiment of the present invention is applied, by taking an android operating system as an example.
Fig. 1 is a schematic diagram of an architecture of a possible android operating system according to an embodiment of the present invention. In fig. 1, the architecture of the android operating system includes 4 layers, which are respectively: an application layer, an application framework layer, a system runtime layer, and a kernel layer (specifically, a Linux kernel layer).
The application program layer comprises various application programs (including system application programs and third-party application programs) in an android operating system.
The application framework layer is the framework of the applications; developers can develop applications based on the application framework layer while complying with its development principles, for example system applications such as a system settings application, a system chat application, and a system camera application, as well as third-party applications such as a third-party settings application, a third-party camera application, and a third-party chat application.
The system runtime layer includes libraries (also called system libraries) and android operating system runtime environments. The library mainly provides various resources required by the android operating system. The android operating system running environment is used for providing a software environment for the android operating system.
The kernel layer is an operating system layer of an android operating system and belongs to the bottommost layer of an android operating system software layer. The kernel layer provides kernel system services and hardware-related drivers for the android operating system based on the Linux kernel.
Taking an android operating system as an example, in the embodiment of the present invention, a developer may develop a software program for implementing the image generation method provided in the embodiment of the present invention based on the system architecture of the android operating system shown in fig. 1, so that the image generation method may operate based on the android operating system shown in fig. 1. Namely, the processor or the terminal device can implement the image generation method provided by the embodiment of the invention by running the software program in the android operating system.
The following describes the image generation method provided by the embodiment of the present invention in detail with reference to the flowchart of the image generation method shown in fig. 2. Wherein, although the logical order of the image generation methods provided by embodiments of the present invention is shown in a method flow diagram, in some cases, the steps shown or described may be performed in an order different than here. For example, the image generation method illustrated in fig. 2 may include S201 to S203:
s201, the terminal device determines a target template image.
For example, the target template image provided by the embodiment of the present invention may be an emoticon image (or emoticon) or an image with a specific style.
Wherein an emoticon image generally includes a facial image portion and a non-facial image portion. The facial image mentioned in the embodiment of the present invention may be a facial image or a facial feature image in one person image, and the non-facial image may be an image excluding the facial image or the facial feature image in one person image. The character image may be a real character image or a virtual character image (e.g., an animated character image).
In general, when the target template image is an emoticon image, the original face image in the target template image may be replaced by another face image, while the non-face image in the target template image remains unchanged.
Illustratively, in the embodiment of the present invention, the styles of images may include Chinese style, cubism, impressionism, realism, surrealism, expressionism, ink-wash painting style, oil painting style, or colored-pencil style. The information of an image includes information indicating the style of the image, and the target template image may be an image of a certain style.
In addition, in the embodiment of the present invention, the style of an image may also be indicated by image parameters such as contrast and chromaticity.
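Where style reduces to such numeric parameters, aligning a face image's tone with a template's can be automated. The sketch below treats style as a single contrast value and rescales pixels to match a target contrast; this simplification, and the function names, are assumptions for illustration, not part of the patented scheme.

```python
def contrast(pixels):
    # Rough contrast measure: mean absolute deviation from the mean.
    mean = sum(pixels) / len(pixels)
    return sum(abs(p - mean) for p in pixels) / len(pixels)

def match_contrast(pixels, target_contrast):
    # Rescale deviations around the mean so the image's contrast equals
    # the template's, sketching an automatic tone-style alignment.
    mean = sum(pixels) / len(pixels)
    current = contrast(pixels)
    if current == 0:
        return list(pixels)
    scale = target_contrast / current
    return [mean + (p - mean) * scale for p in pixels]

face_pixels = [0.0, 50.0, 100.0]             # toy grayscale face image
matched = match_contrast(face_pixels, 50.0)  # align to the template's contrast
```

In a real terminal device the same idea would be applied jointly to contrast and chromaticity, per colour channel.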
It can be understood that, in the embodiment of the present invention, the terminal device may receive a selection input from a user to determine that an image indicated by the selection input is a target template image.
Exemplarily, as shown in fig. 3, a schematic diagram of an interface displayed by a terminal device according to an embodiment of the present invention is provided. Wherein, the area P1 in the interface 31 shown in fig. 3 includes the image a 1; the region P2 includes an emoticon image B1 to an emoticon image B6. For example, the emoticon image B3 shown in fig. 3 may be the target template image described above.
S202, the terminal equipment acquires first information in the image to be processed.
Optionally, the first information in the image to be processed is partial information or all information in the image to be processed. For example, the first information is an image in a region of the image to be processed, such as an image of a face region, i.e., a face image.
Illustratively, the first information in the image to be processed is a foreground image in the image to be processed.
It is understood that the terminal device may identify the face image in the image to be processed by using a face recognition technique, and then extract that face image (i.e., the first information in the image to be processed) by using a face segmentation technique. For example, the image to be processed in the embodiment of the present invention may be the image a1 shown in the region P1 of the interface 31 in fig. 3, and the first information of the image a1 acquired by the terminal device may be the face image framed by the line L1 on the image a1 shown in fig. 3.
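The extraction step can be sketched as follows. `detect_face` here is a stand-in stub (a real implementation would call a face recognition/segmentation library); it is assumed purely for illustration.

```python
def detect_face(image):
    # Hypothetical detector stub: returns (top, left, height, width) of
    # the face region; here simply the centre quarter of the image.
    h, w = len(image), len(image[0])
    return h // 4, w // 4, h // 2, w // 2

def extract_first_information(image):
    # Crop the detected face region -- the "first information" of the
    # image to be processed (step S202).
    top, left, fh, fw = detect_face(image)
    return [row[left:left + fw] for row in image[top:top + fh]]

img = [[x + 8 * y for x in range(8)] for y in range(8)]  # 8x8 dummy image
face = extract_first_information(img)
```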
It should be emphasized that the image to be processed may be a static image or a dynamic image, and may be a single image or one frame of a video. The video may be captured by the terminal device in real time, obtained by the terminal device from the network side in real time, or prestored on the terminal device.
It should be noted that, in the embodiment of the present invention, the execution sequence of S201 and S202 is not limited, for example, the terminal device may execute S202 first and then execute S201.
And S203, the terminal equipment generates a target image by adopting an image generation model corresponding to the target template image according to the first information.
The target image is an image synthesized by first information in the image to be processed and second information in the target template image, and the first information and the second information are different types of information.
And the second information in the target template image is part of or all of the information in the target image.
Optionally, in this embodiment of the present invention, the type of information in one image may include: facial image type, non-facial image type, style of image, and content of image.
Illustratively, the first information in the image to be processed is the content in the image to be processed, and the second information in the target template image is the style in the target template image. At this time, the target image generated by the terminal device may be an image in which the content of the image to be processed and the style of the target template image are synthesized.
Alternatively, the first information in the image to be processed is a face image in the image to be processed, and the second information in the target template image is a non-face image in the target template image. The image to be processed may be a real person image, and the target template image may be a virtual emoticon image. In this case, the target image generated by the terminal device may be obtained by synthesizing the face image of the image to be processed with the non-face image of the target template image, that is, a new emoticon image is generated. The target image may then be a user-defined emoticon image.
Optionally, the image generation model corresponding to the target template image provided in the embodiment of the present invention may be obtained by training in advance according to the target template image by using a Deep Learning (Deep Learning) algorithm.
Specifically, because the image generation model provided in the embodiment of the present invention, such as the image generation model corresponding to the target template image, is trained with a deep learning algorithm, the terminal device can use it to synthesize the first information in the image to be processed and the second information in the target template image well, so that the synthesized target image is more natural or more interesting. That is, the display effect of the target image generated by the terminal device can be improved.
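Putting S201 to S203 together, the control flow can be sketched as below. The `generate_target_image` stub simply splices the two kinds of information; the real embodiment replaces it with the trained deep-learning model, so every function here is illustrative only.

```python
def determine_target_template(templates, selection_index):
    # S201: the user's selection input picks one template image.
    return templates[selection_index]

def acquire_first_information(image):
    # S202: stubbed "first information" -- the upper half of the rows,
    # standing in for the extracted face image.
    return image[: len(image) // 2]

def generate_target_image(first_info, template):
    # S203: stand-in for the image generation model; splices the first
    # information with the template's "second information" (lower half).
    second_info = template[len(template) // 2:]
    return first_info + second_info

templates = [["t0_face", "t0_body"], ["t1_face", "t1_body"]]
tpl = determine_target_template(templates, 1)
target = generate_target_image(
    acquire_first_information(["user_face", "user_body"]), tpl)
```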
It should be noted that, with the image generation method and terminal device provided by the embodiment of the present invention, a target template image can be determined; first information can be acquired from an image to be processed; and a target image can be generated according to the first information by using an image generation model corresponding to the target template image, where the target image is synthesized from the first information and second information in the target template image, and the first information and the second information are different types of information. With this scheme, the first information in the image to be processed and the second information in the target template image are integrated by the image generation model, without the user manually superimposing the acquired information on the target template image. The process by which the terminal device generates the target image from the image to be processed and the target template image is therefore simplified, and the display effect of the generated target image is improved. In particular, the terminal device can simplify its process of generating a user-defined emoticon and improve the display effect of that emoticon.
In a possible implementation manner of the image generation method provided in the embodiment of the present invention, the image generation model is a Conditional Generative Adversarial Network (CGAN) or an image style transfer model.
Specifically, in the embodiment of the present invention, in a scenario in which the image generation model is a CGAN network:
before the terminal device executes the image generation method provided by the embodiment of the invention, a training device may train on the target template image, a set of face images, and a set of result images for the target template image, to obtain the image generation model corresponding to the target template image. When the target template image is an emoticon image, the set of result images for the target template image may be images in which the initial face image in the target template image has been replaced by other face images.
Optionally, the display effect between the face image and the non-face image in each result image is natural, for example, the position where the target face image is superimposed on the non-face image is accurate (for example, the angle and the position of the area where the face image is located in the non-face image are accurate), and the chromaticity or the contrast of the target face image and the non-face image are consistent.
The image generation model is specifically used for inputting the first information in the image to be processed into the G network of the CGAN, so as to integrate the second information in the target template image with the first information in the image to be processed.
The training device provided in the embodiment of the present invention may be the terminal device, or may also be another device other than the terminal device, such as a server, which is not specifically limited in the embodiment of the present invention.
Illustratively, the CGAN may include a generator (G) network and a discriminator (D) network. The G network is an image-generating network: it receives a random noise z and generates an image from it, denoted G(z). The D network is used to determine whether an image is "real", i.e., whether the image was generated by the G network. The input of the D network is x, where x represents an image; the D network outputs D(x), representing the probability that x is a real image. Specifically, D(x) = 1 means x is certainly a real image, while D(x) = 0 means x is not a real image, i.e., x is an image generated by the G network.
During training, the goal of the G network is to generate images realistic enough to fool the D network, while the goal of the D network is to distinguish the images generated by the G network from real images. The G network and the D network thus constitute a dynamic "game". Eventually the G network can generate images G(z) that pass for real: when such an image is fed to the D network, the output D(G(z)) is about 0.5, i.e., the D network can hardly determine whether the image G(z) generated by the G network is real. The training device can train the D network and the G network alternately, and the trained G network can then be used to generate a user-defined emoticon image, i.e., the target image.
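The adversarial game above can be illustrated with toy scalar "networks": both G and D are reduced to one-variable functions with fixed weights (an assumption purely for illustration), so the D(G(z)) = 0.5 undecided point mentioned in the text can be checked directly.

```python
import math

def G(z):
    # Toy generator: maps a noise scalar z to a fake "image" (a scalar).
    return 2.0 * z + 1.0

def D(x):
    # Toy discriminator: logistic unit giving the probability that x is
    # a real image (1.0 = certainly real, 0.0 = certainly generated).
    return 1.0 / (1.0 + math.exp(-(0.5 * x - 1.0)))

# With these fixed weights, D is undecided (outputs 0.5) exactly when
# 0.5 * x - 1.0 == 0, i.e. x == 2.0, which the generator reaches at z = 0.5.
p_equilibrium = D(G(0.5))
```

In a real CGAN, the two networks are trained alternately until D(G(z)) hovers around this undecided point.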
Specifically, in the embodiment of the present invention, the first information in the image to be processed may be used as the input of the G network, i.e., as the noise input z above.
Specifically, in the embodiment of the present invention, in a scene in which the image generation model is the image style conversion model:
before the terminal device executes the image generation method provided by the embodiment of the invention, the training device may train the target template image and a set of preset images to obtain an image generation model corresponding to the target template image.
Illustratively, the training device learns the style of the target template image (i.e., the style image) and the content in a preset image (i.e., the content image), and realizes the fusion of the style image and the content image in a certain proportion, so as to obtain the final style effect image.
The training device can use an existing mature network (such as a VGG network) to obtain a feature representation of the target template image. Specifically, a certain layer of the VGG network is selected as the feature description of the target template image, and the image style conversion model is then trained so that it learns the style of the style image on the one hand and retains the content of the content image on the other. The terminal device can therefore generate the target image by using the image generation model corresponding to the target template image according to the first information in the image to be processed, so as to integrate the first information in the image to be processed with the second information in the target template image.
The training device may be the terminal device.
Illustratively, the terminal device may generate the target image by using an image generation model corresponding to the target template image according to the first information in the image to be processed through the following formula. Here, the first information in the image to be processed may be all information of the image to be processed, that is, the first information is the image to be processed itself.
L_total(a, p, x) = α · L_style(a, x) + β · L_content(p, x)
where a represents the target template image, p represents the image to be processed, and x represents the target image. This formula is the cost function L_total used in training the style conversion model. The cost function contains two parts: the cost of the style model, L_style, and the cost of the content model, L_content; the ratio between the two is adjusted by the coefficients α and β. The larger α is, the closer the final output (i.e., the target image x) is to the target template image; the larger β is, the more content of the image to be processed is kept in the final output.
For example, the image to be processed may be the image a1 in the above embodiment, and the target template image may be an ink-wash-painting-style image.
It should be noted that, in the image generation method provided in the embodiment of the present invention, since the image generation model may take multiple forms, such as a CGAN or an image style conversion model, the generated target image may also take multiple forms; the display effect of the target image generated by the terminal device can thus be further improved, and the interest of the generated target image increased.
Further, as shown in fig. 4, a schematic flow chart of another image generation method provided in the embodiment of the present invention is shown. In fig. 4, S204 may be further included after S203 shown in fig. 2:
S204, the terminal device displays the target image on a first interface of the terminal device.
Exemplarily, as shown in fig. 4a, a schematic diagram of an interface displayed by another terminal device provided in the embodiment of the present invention is shown. The image a2 is included in the area P3 in the interface 32 of the terminal device shown in fig. 4a. The interface 32 may be the first interface, and the image a2 may be the target image.
Further, the first interface further includes at least one function control; each function control is for indicating a function to be performed with respect to the target image.
For example, the at least one function control may include: an exit control, an add-text control and a save control.
The exit control is used for triggering the terminal device to exit from displaying the target image; the add-text control is used for triggering the terminal device to add text on the target image; and the save control is used for triggering the terminal device to save the target image.
For example, an exit control 321, an add-text control 322, and a save control 323 may also be included in region P4 of the interface 32 shown in fig. 4a. The user can perform an input on any one of the exit control 321, the add-text control 322 and the save control 323 to trigger the terminal device to execute the function corresponding to that control. This improves the interest of the process in which the terminal device generates and displays the target image.
In the embodiment of the invention, the terminal device generates the target image from the image to be processed by using the image generation model corresponding to the target template image, and displays the generated target image on an interface of the device, so that a user can conveniently view the target image and perform further operations on the target image, such as manual adjustment.
In a possible implementation manner, in the image generation method provided by the embodiment of the invention, the image to be processed includes N face images, the first information is information of a target face image in the N face images, and N is a positive integer.
It is understood that the terminal device may use a face recognition technology to recognize a face image in the image to be processed.
Exemplarily, as shown in fig. 5, a schematic flow chart of another image generation method provided by the embodiment of the present invention is shown. The method shown in fig. 5 may further include, before S202 shown in fig. 4, S205 and S206:
S205, the terminal device displays the image to be processed on a second interface of the terminal device.
For example, the second interface of the terminal device provided in the embodiment of the present invention may be the interface 31 shown in fig. 3.
Optionally, the first interface and the second interface are different areas on one screen (i.e., a display screen) of the terminal device.
Optionally, the terminal device provided by the embodiment of the invention includes at least two screens, and the first interface and the second interface are interfaces on different screens of the at least two screens.
Optionally, the terminal device including at least two screens provided in the embodiment of the present invention may be a folding screen type terminal device or a non-folding screen type terminal device. At least two screens in the folding screen type terminal equipment can be folded, and the folding angle between two adjacent screens in the at least two screens can be an angle between 0 degree and 360 degrees. At least two screens in the non-folding screen type terminal device may be arranged on different surfaces in the terminal device, for example, when the non-folding screen type terminal device is a mobile phone, the at least two screens (for example, two screens) may be arranged on the front surface and the back surface of the mobile phone, respectively.
Exemplarily, as shown in fig. 6, a schematic diagram of a display interface of another terminal device provided in the embodiment of the present invention is shown. In conjunction with fig. 3, region P1 in interface 61 in fig. 6 includes image a1; region P2 includes the emoticon image B1 to the emoticon image B6. Fig. 6 shows interface 62 having image a2 in region P3 and having the exit control 321, the add-text control 322 and the save control 323 in region P4. For example, the image a2 is the target image. In this case, the first interface may be the interface 62, and the second interface may be the interface 61.
S206, the terminal equipment receives a first input of the target face image from the user.
It is understood that the number N of face images included in the image to be processed may be one or more, that is, N is an integer greater than or equal to 1.
It should be noted that the screen of the terminal device provided in the embodiment of the present invention may be a touch screen, and the touch screen may be configured to receive an input from a user and display a content corresponding to the input to the user in response to the input. The first input may be a touch screen input, a fingerprint input, a gravity input, a key input, or the like. The touch screen input is input such as press input, long press input, slide input, click input, and hover input (input by a user near the touch screen) of a touch screen of the terminal device by the user. The fingerprint input is input by a user to a sliding fingerprint, a long-press fingerprint, a single-click fingerprint, a double-click fingerprint and the like of a fingerprint identifier of the terminal equipment. The gravity input is input such as shaking of the terminal equipment in a specific direction, shaking of the terminal equipment for a specific number of times and the like. The key input corresponds to a single-click input, a double-click input, a long-press input, a combination key input, and the like of the user for a key such as a power key, a volume key, a Home key, and the like of the terminal device. Specifically, the embodiment of the present invention does not specifically limit the manner of the first input, and may be any realizable manner.
For example, the first input in the embodiment of the present invention may be a selection input by the user for the target face image included in the image to be processed, such as a click input at the position where the target face image is located on the second interface. For example, in conjunction with fig. 6, the first input may be a click input by the user at the position of the face image framed by the box L1 in the interface 61 of fig. 6.
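A hypothetical sketch of how such a click input could resolve to the target face image among the N detected faces is a hit test of the tap coordinates against the face bounding boxes. The box format and the smallest-box tie-break for overlapping faces are assumptions, not details from the patent.

```python
from typing import List, Optional, Tuple

Box = Tuple[int, int, int, int]  # (x, y, width, height) of a detected face

def pick_target_face(tap: Tuple[int, int], faces: List[Box]) -> Optional[int]:
    """Return the index of the face box containing the tap point, or None.

    If the tap lands inside several overlapping boxes, prefer the smallest
    one, on the assumption that it is the most specific selection.
    """
    x, y = tap
    hits = [
        (w * h, i)
        for i, (bx, by, w, h) in enumerate(faces)
        if bx <= x < bx + w and by <= y < by + h
    ]
    return min(hits)[1] if hits else None
```

In this sketch the returned index identifies the target face image whose information then serves as the first information.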
Accordingly, in the image generation method shown in fig. 5, S202 shown in fig. 4 may be replaced with S202 a:
S202a, in response to the first input, the terminal device acquires the first information.
In addition, the terminal device may also receive another input (i.e., a second input) by the user for determining the target template image. For example, the second input may be a user selection input of a target template image displayed by the terminal device, such as a click input of a position on the second interface where the target template image is located. For example, in conjunction with FIG. 6, the second input may be a user click input to the location of the emoticon image B3 in the interface 61 of FIG. 6.
For example, for the implementation manner of the second input, reference may be made to the above description of the implementation manner of the first input, which is not repeated herein in the embodiment of the present invention.
It should be noted that, in the image generation method provided in the embodiment of the present invention, the terminal device may display the image to be processed on the second interface and display the generated target image on the first interface, where the first interface and the second interface are interfaces on different screens of the terminal device, and the area of a display region (e.g., the display region where the first interface is located) on one screen of the terminal device is usually large. Therefore, the terminal device can simultaneously display the image to be processed and the target image, and the display effect of both is good.
In addition, since the terminal device can recognize N face images in the image to be processed, the user can be enabled to select a desired target face image from the N face images, that is, the first information in the image to be processed can be acquired. Furthermore, the interestingness of the terminal equipment in the process of generating the target image is improved.
In a possible implementation manner, in the image generation method provided in the embodiment of the present invention, the second interface further includes M template images, the first input is a drag input for dragging the target face image to a target position on the second interface, and M is a positive integer.
For example, the emoticon image B1 to the emoticon image B6 shown in fig. 3 or fig. 6 in the above embodiments may be the above M template images, where M is an integer greater than or equal to 6.
Specifically, S201 in the above embodiment may be replaced with S201a:
S201a, in response to the first input, the terminal device determines, from the M template images, the template image corresponding to the target position as the target template image.
For example, in conjunction with fig. 6, the first input may be a drag input by the user to drag the face image framed by the box L1 in the interface 61 in fig. 6 to the position (i.e., the target position) where the emoticon image B3 is located.
In this way, the first input can trigger the terminal device to acquire the first information in the image to be processed and determine the target template image, so that the operation of the user in controlling the terminal device to generate the target image is simplified.
In a possible implementation manner, in the image generating method provided in the embodiment of the present invention, before the step S205, S207 or S208 may further be included:
S207, the terminal device acquires K first images, each of which includes the target face image, where the image to be processed is one of the K first images, and K is a positive integer.
For example, the target face image may be a face image of the user a.
It is understood that the terminal device may determine, by using a face clustering technique, images each including a target face image, that is, the K first images described above, from among the plurality of images. At this time, the K first images are all images including the face image of the user a.
S208, the terminal device acquires P second images, each of which includes a face image, where the image to be processed is one of the P second images, and P is a positive integer.
It is understood that the terminal device may determine, by using a face clustering technique, the images each including a face image, that is, the P second images, where P is greater than or equal to N. Specifically, the terminal device may perform S208 first, and then perform S207.
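The face clustering in S207/S208 can be sketched by comparing face embeddings against an embedding of the target face. The embedding vectors, the distance metric and the threshold value are all assumptions for illustration; a real system would obtain embeddings from a face-recognition network.

```python
import math
from typing import Dict, List, Sequence

def _distance(a: Sequence[float], b: Sequence[float]) -> float:
    # Euclidean distance between two face embeddings.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def images_with_target_face(
    gallery: Dict[str, List[Sequence[float]]],
    target_embedding: Sequence[float],
    threshold: float = 0.6,
) -> List[str]:
    """Return the names of images containing a face close to the target.

    `gallery` maps an image name to the embeddings of the faces detected
    in it; an image matches (belongs to the K first images) when any of
    its faces lies within `threshold` of the target embedding.
    """
    return [
        name
        for name, faces in gallery.items()
        if any(_distance(f, target_embedding) <= threshold for f in faces)
    ]
```

Calling the same helper with a threshold that accepts any detected face would correspond to selecting the P second images.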
Optionally, the user may, through a sliding input on the image to be processed displayed on the second interface of the terminal device, trigger the terminal device to switch the displayed image to an image adjacent to the image to be processed in the P second images or the K first images. For example, a leftward sliding input may trigger the terminal device to switch the image displayed on the second interface to the previous image adjacent to the image to be processed; a rightward sliding input may trigger the terminal device to switch the image displayed on the second interface to the next image adjacent to the image to be processed.
Alternatively, as in the interface 61 shown in fig. 6, the control 611 displayed in the region P1 may trigger the terminal device to switch the image displayed on its second interface to the previous image adjacent to the image to be processed; the control 621 may trigger the terminal device to switch the image displayed on the second interface to a next image adjacent to the image to be processed.
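The gesture mapping described above can be sketched as a small helper. The function name and the no-wrap behaviour at either end of the list are assumptions; the patent does not specify what happens at the first or last image.

```python
from typing import List

def switch_image(images: List[str], current: str, direction: str) -> str:
    """Return the image adjacent to `current` after a slide input.

    A "left" slide shows the previous image and a "right" slide shows the
    next one, matching the gesture mapping described above; at either end
    of the list the current image is kept (no wrap-around, an assumption).
    """
    i = images.index(current)
    if direction == "left":
        return images[i - 1] if i > 0 else current
    if direction == "right":
        return images[i + 1] if i < len(images) - 1 else current
    return current
```

The same helper could back the controls 611/621, each invoking one fixed direction.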
It should be noted that, in the image generating method provided by the embodiment of the present invention, the terminal device may provide an image meeting the user requirement, such as an image including a facial image, to the user, which is favorable for the user to quickly select a required image to be processed.
Fig. 7 is a schematic diagram of a possible structure of a terminal device according to an embodiment of the present invention. The terminal device shown in fig. 7 includes a determining module 701, an obtaining module 702, and a generating module 703; a determining module 701, configured to determine a target template image; an obtaining module 702, configured to obtain first information in an image to be processed; the generating module 703 is configured to generate a target image according to the first information acquired by the acquiring module 702 by using an image generation model corresponding to the target template image determined by the determining module 701, where the target image is an image obtained by synthesizing the first information and second information in the target template image, and the first information and the second information are different types of information.
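Under stated assumptions, the three modules of fig. 7 can be sketched as one small class. The method names, the index-based template lookup, and the callable standing in for the image generation model are all illustrative, not details from the patent.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ImageGenerationDevice:
    """Sketch of the determining/obtaining/generating modules of fig. 7.

    `model` stands in for the image generation model corresponding to the
    target template image; here it is any callable that synthesizes the
    first information with the template's second information.
    """
    model: Callable[[str, str], str]

    def determine_template(self, templates: List[str], position: int) -> str:
        # Determining module: pick the template at the selected position.
        return templates[position]

    def obtain_first_info(self, image: str) -> str:
        # Obtaining module: in this sketch the first information is the
        # image itself (the "whole image" case described earlier).
        return image

    def generate(self, template: str, first_info: str) -> str:
        # Generating module: synthesize first and second information.
        return self.model(first_info, template)
```

A usage example: with a toy `model` that concatenates its inputs, the pipeline runs determine → obtain → generate end to end.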
Optionally, the terminal device 70 further includes: a display module; a display module, configured to display the target image on the first interface of the terminal device 70 after the target image is generated by the generation module 703.
Optionally, the image to be processed includes N face images, the first information is information of a target face image in the N face images, and N is a positive integer; the display module is further configured to display the image to be processed on the second interface of the terminal device 70 before the obtaining module 702 obtains the first information of the image to be processed; the terminal device 70 further includes: a receiving module; the receiving module is used for receiving a first input of a target face image from a user; the obtaining module 702 is specifically configured to obtain the first information in response to the first input.
Optionally, the second interface further includes M template images, the first input is a drag input for dragging the target face image to a target position on the second interface, and M is a positive integer; the determining module 701 is specifically configured to determine, in response to the first input, a target template image from the template images corresponding to the target position in the M template images.
Optionally, the terminal device 70 includes at least two screens, and the first interface and the second interface are interfaces on different screens of the at least two screens.
Optionally, the image generation model is a conditional generation type countermeasure network CGAN or an image style conversion model.
The terminal device 70 provided in the embodiment of the present invention can implement each process implemented by the terminal device in the foregoing method embodiments, and for avoiding repetition, details are not described here again.
The terminal equipment provided by the embodiment of the invention can determine the target template image; acquiring first information in an image to be processed; and generating a target image by adopting an image generation model corresponding to the target template image according to the first information, wherein the target image is an image formed by synthesizing the first information and second information in the target template image, and the first information and the second information are different types of information. Based on the scheme, the integration of the first information in the image to be processed and the second information in the target template image through the image generation model can be realized, and the user does not need to manually overlap the acquired information in the image to be processed and the target template image. Therefore, the process that the terminal equipment generates the image to be processed and the target template image into the target image can be simplified, and the display effect of the generated target image is improved. That is, the terminal device can simplify the process of generating the user-defined emoticon by the terminal device, and improve the display effect of the generated user-defined emoticon.
Fig. 8 is a schematic diagram of a hardware structure of a terminal device according to an embodiment of the present invention, where the terminal device 100 includes, but is not limited to: radio frequency unit 101, network module 102, audio output unit 103, input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111. Those skilled in the art will appreciate that the terminal device configuration shown in fig. 8 does not constitute a limitation of the terminal device, and that the terminal device may include more or fewer components than shown, or combine certain components, or a different arrangement of components. In the embodiment of the present invention, the terminal device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal device, a wearable device, a pedometer, and the like.
Wherein, the processor 110 is configured to determine a target template image; acquiring first information in an image to be processed; and generating a target image by adopting an image generation model corresponding to the target template image according to the first information, wherein the target image is an image formed by synthesizing the first information and second information in the target template image, and the first information and the second information are different types of information.
The terminal equipment provided by the embodiment of the invention can determine the target template image; acquiring first information in an image to be processed; and generating a target image by adopting an image generation model corresponding to the target template image according to the first information, wherein the target image is an image formed by synthesizing the first information and second information in the target template image, and the first information and the second information are different types of information. Based on the scheme, the integration of the first information in the image to be processed and the second information in the target template image through the image generation model can be realized, and the user does not need to manually overlap the acquired information in the image to be processed and the target template image. Therefore, the process that the terminal equipment generates the image to be processed and the target template image into the target image can be simplified, and the display effect of the generated target image is improved. That is, the terminal device can simplify the process of generating the user-defined emoticon by the terminal device, and improve the display effect of the generated user-defined emoticon.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 101 may be used for receiving and sending signals during a message transmission or call process, and specifically, after receiving downlink data from a base station, the downlink data is processed by the processor 110; in addition, the uplink data is transmitted to the base station. Typically, radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through a wireless communication system.
The terminal device provides wireless broadband internet access to the user through the network module 102, such as helping the user send and receive e-mails, browse webpages, access streaming media, and the like.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the network module 102 or stored in the memory 109 into an audio signal and output as sound. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the terminal device 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
The input unit 104 is used to receive an audio or video signal. The input unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042, and the graphics processor 1041 processes image data of a still picture or video obtained by an image capture device (e.g., a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processor 1041 may be stored in the memory 109 (or other storage medium) or transmitted via the radio frequency unit 101 or the network module 102. The microphone 1042 may receive sound and may be capable of processing such sound into audio data. In the case of a phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 101 and output.
The terminal device 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or the backlight when the terminal device 100 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the terminal device posture (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration identification related functions (such as pedometer, tapping), and the like; the sensors 105 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the terminal device. Specifically, the user input unit 107 includes a touch panel 1071 and other input devices 1072. Touch panel 1071, also referred to as a touch screen, may collect touch operations by a user on or near the touch panel 1071 (e.g., operations by a user on or near touch panel 1071 using a finger, stylus, or any suitable object or attachment). The touch panel 1071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 110, and receives and executes commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072. Specifically, other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 1071 may be overlaid on the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in fig. 8, the touch panel 1071 and the display panel 1061 are two independent components to implement the input and output functions of the terminal device, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the terminal device, and is not limited herein.
The interface unit 108 is an interface for connecting an external device to the terminal apparatus 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the terminal apparatus 100 or may be used to transmit data between the terminal apparatus 100 and the external device.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 109 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 110 is a control center of the terminal device, connects various parts of the entire terminal device by using various interfaces and lines, and performs various functions of the terminal device and processes data by running or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the terminal device. Processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The terminal device 100 may further include a power supply 111 (such as a battery) for supplying power to each component, and preferably, the power supply 111 may be logically connected to the processor 110 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
In addition, the terminal device 100 includes some functional modules that are not shown, and are not described in detail here.
Preferably, an embodiment of the present invention further provides a terminal device, which includes a processor 110, a memory 109, and a computer program stored in the memory 109 and capable of running on the processor 110, where the computer program is executed by the processor 110 to implement each process of the foregoing method embodiment, and can achieve the same technical effect, and for avoiding repetition, details are not described here again.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements the processes of the method embodiments, and can achieve the same technical effects, and in order to avoid repetition, the details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (12)

1. An image generation method applied to a terminal device includes:
determining a target template image;
acquiring first information in an image to be processed;
generating a target image by adopting an image generation model corresponding to the target template image according to the first information, wherein the target image is an image synthesized by the first information and second information in the target template image, and the first information and the second information are different types of information; the information types are facial image types, non-facial image types and image styles;
the image generation model is a conditional generation type confrontation network CGAN or an image style conversion model; the image generation model is based on training the target template image, a set of face images, and a set of result images for the target template image, the image generation model being used to synthesize the first information and the second information.
2. The image generation method according to claim 1, further comprising, after generating the target image:
displaying the target image on a first interface of the terminal device.
3. The image generation method according to claim 2, wherein the image to be processed includes N face images, the first information is information of a target face image among the N face images, and N is a positive integer;
before the acquiring of the first information in the image to be processed, the method further comprises:
displaying the image to be processed on a second interface of the terminal device; and
receiving a first input of the target face image from a user;
wherein the acquiring of the first information in the image to be processed comprises:
acquiring the first information in response to the first input.
4. The image generation method according to claim 3, wherein the second interface further includes M template images, the first input is a drag input dragging the target face image to a target position on the second interface, and M is a positive integer;
wherein the determining of the target template image comprises:
in response to the first input, determining the template image corresponding to the target position among the M template images as the target template image.
5. The image generation method according to claim 3 or 4, wherein the terminal device includes at least two screens, and the first interface and the second interface are interfaces on different screens of the at least two screens.
6. A terminal device, comprising: a determining module, an acquiring module, and a generating module;
the determining module is configured to determine a target template image;
the acquiring module is configured to acquire first information in an image to be processed;
the generating module is configured to generate a target image according to the first information acquired by the acquiring module, by using an image generation model corresponding to the target template image determined by the determining module, wherein the target image is an image synthesized from the first information and second information in the target template image, and the first information and the second information are different types of information, the information types being a facial image type, a non-facial image type, and an image style;
wherein the image generation model is a conditional generative adversarial network (CGAN) or an image style transfer model, the image generation model is trained on the target template image, a set of face images, and a set of result images for the target template image, and the image generation model is used to synthesize the first information and the second information.
7. The terminal device according to claim 6, further comprising: a display module;
the display module is configured to display the target image on a first interface of the terminal device after the target image is generated by the generating module.
8. The terminal device according to claim 7, wherein the image to be processed includes N face images, the first information is information of a target face image among the N face images, and N is a positive integer;
the display module is further configured to display the image to be processed on a second interface of the terminal device before the acquiring module acquires the first information in the image to be processed;
the terminal device further comprises a receiving module;
the receiving module is configured to receive a first input of the target face image from a user; and
the acquiring module is specifically configured to acquire the first information in response to the first input.
9. The terminal device according to claim 8, wherein the second interface further includes M template images, the first input is a drag input dragging the target face image to a target position on the second interface, and M is a positive integer;
the determining module is specifically configured to determine, in response to the first input, the template image corresponding to the target position among the M template images as the target template image.
10. The terminal device according to claim 8 or 9, wherein the terminal device comprises at least two screens, and the first interface and the second interface are interfaces on different screens of the at least two screens.
11. A terminal device, comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the image generation method according to any one of claims 1 to 5.
12. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the image generation method according to any one of claims 1 to 5.
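Claim 1 conditions the image generation model (a CGAN) on the first information extracted from a face image. The following is an illustrative sketch only, not the patented model: the network shape, the dimensions, and every name here are assumptions, and the weights are randomly initialized rather than trained on template/face/result images as the claim describes. It shows the core idea of a conditional generator G(z, c), where the condition vector is concatenated with the noise input so the synthesized image depends on both:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_generator(noise_dim=16, cond_dim=8, hidden=32, out_pixels=64):
    # Randomly initialized weights for a tiny MLP generator; a real CGAN
    # would learn these from (template image, face image, result image) data.
    return {
        "W1": rng.standard_normal((noise_dim + cond_dim, hidden)) * 0.1,
        "b1": np.zeros(hidden),
        "W2": rng.standard_normal((hidden, out_pixels)) * 0.1,
        "b2": np.zeros(out_pixels),
    }

def generate(params, noise, condition):
    # Conditional generation G(z, c): the condition vector (a stand-in for
    # the "first information", e.g. face features) is concatenated with the
    # noise vector, so the output image depends on both inputs.
    x = np.concatenate([noise, condition])
    h = np.tanh(x @ params["W1"] + params["b1"])
    out = np.tanh(h @ params["W2"] + params["b2"])  # pixel values in [-1, 1]
    return out.reshape(8, 8)

params = init_generator()
z = rng.standard_normal(16)             # noise input
face_features = rng.standard_normal(8)  # hypothetical extracted first information
img = generate(params, z, face_features)
print(img.shape)
```

In a trained CGAN the discriminator would likewise receive the condition, so the generator learns to produce images that both look realistic and match the conditioning information, which is how the synthesized target image can combine the first information with the template's second information.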
CN201811110761.7A 2018-09-21 2018-09-21 Image generation method and terminal equipment Active CN109215007B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811110761.7A CN109215007B (en) 2018-09-21 2018-09-21 Image generation method and terminal equipment

Publications (2)

Publication Number Publication Date
CN109215007A CN109215007A (en) 2019-01-15
CN109215007B true CN109215007B (en) 2022-04-12

Family

ID=64985158

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811110761.7A Active CN109215007B (en) 2018-09-21 2018-09-21 Image generation method and terminal equipment

Country Status (1)

Country Link
CN (1) CN109215007B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109903363A (en) * 2019-01-31 2019-06-18 Tianjin University Conditional generative adversarial network based synthesis method for three-dimensional facial expression action units
CN109949213B (en) * 2019-03-15 2023-06-16 Baidu Online Network Technology (Beijing) Co., Ltd. Method and apparatus for generating image
CN110021051B (en) * 2019-04-01 2020-12-15 Zhejiang University Human image generation method based on text-guided generative adversarial network
CN110705652B (en) * 2019-10-17 2020-10-23 Beijing RealAI Technology Co., Ltd. Adversarial example and generation method, medium, apparatus and computing device therefor
CN111541950B (en) * 2020-05-07 2023-11-03 Tencent Technology (Shenzhen) Co., Ltd. Expression generation method and apparatus, electronic device, and storage medium
CN112288861B (en) * 2020-11-02 2022-11-25 Hubei University Single-photo-based automatic construction method and system for three-dimensional face model
CN114816599B (en) * 2021-01-22 2024-02-27 Beijing Zitiao Network Technology Co., Ltd. Image display method, device, equipment and medium
CN112861805B (en) * 2021-03-17 2023-07-18 Sun Yat-sen University Face image generation method based on content features and style features

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101159064A (en) * 2007-11-29 2008-04-09 Tencent Technology (Shenzhen) Co., Ltd. Image generation system and method for generating image
CN103179341A (en) * 2011-12-21 2013-06-26 Sony Corporation Image processing device, image processing method, and program
CN103778376A (en) * 2012-10-23 2014-05-07 Sony Corporation Information processing device and storage medium
CN104753766A (en) * 2015-03-02 2015-07-01 Xiaomi Technology Co., Ltd. Expression sending method and device
CN106791347A (en) * 2015-11-20 2017-05-31 BYD Co., Ltd. Image processing method and device, and mobile terminal using the same
CN107851299A (en) * 2015-07-21 2018-03-27 Sony Corporation Information processing apparatus, information processing method, and program
CN107977928A (en) * 2017-12-21 2018-05-01 Guangdong OPPO Mobile Telecommunications Co., Ltd. Expression generation method, apparatus, terminal and storage medium
CN108401112A (en) * 2018-04-23 2018-08-14 Guangdong OPPO Mobile Telecommunications Co., Ltd. Image processing method, device, terminal and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107392974A (en) * 2017-07-13 2017-11-24 Beijing Kingsoft Internet Security Software Co., Ltd. Picture generation method and device, and terminal device
CN107404577B (en) * 2017-07-20 2019-05-17 Vivo Mobile Communication Co., Ltd. Image processing method, mobile terminal, and computer-readable storage medium
CN107680069B (en) * 2017-08-30 2020-09-11 Goertek Inc. Image processing method and device, and terminal device
CN107578459A (en) * 2017-08-31 2018-01-12 Beijing Qilin Hesheng Network Technology Co., Ltd. Method and device for embedding emoticons into input method candidates


Similar Documents

Publication Publication Date Title
CN109215007B (en) Image generation method and terminal equipment
CN108184050B (en) Photographing method and mobile terminal
CN109218648B (en) Display control method and terminal equipment
CN108762634B (en) Control method and terminal
CN110658971B (en) Screen capturing method and terminal equipment
CN109857494B (en) Message prompting method and terminal equipment
CN110245246B (en) Image display method and terminal equipment
CN108683850B (en) Shooting prompting method and mobile terminal
CN109495616B (en) Photographing method and terminal equipment
CN111026316A (en) Image display method and electronic equipment
CN109815462B (en) Text generation method and terminal equipment
CN109901761B (en) Content display method and mobile terminal
CN111158817A (en) Information processing method and electronic equipment
CN109448069B (en) Template generation method and mobile terminal
CN111127595A (en) Image processing method and electronic device
US20220286622A1 (en) Object display method and electronic device
CN111010523A (en) Video recording method and electronic equipment
CN108600079B (en) Chat record display method and mobile terminal
CN110930410A (en) Image processing method, server and terminal equipment
WO2021082772A1 (en) Screenshot method and electronic device
CN110209324B (en) Display method and terminal equipment
CN109117037B (en) Image processing method and terminal equipment
CN110866465A (en) Control method of electronic equipment and electronic equipment
CN110012151B (en) Information display method and terminal equipment
CN109166164B (en) Expression picture generation method and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant